Fields: id, title, abstract, authors, published_date, link, markdown
2309.08695
Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents
Resolving the scope of a negation within a sentence is a challenging NLP task. The complexity of legal texts and the lack of annotated in-domain negation corpora pose challenges for state-of-the-art (SotA) models when performing negation scope resolution on multilingual legal data. Our experiments demonstrate that models pre-trained without legal data underperform in the task of negation scope resolution. Our experiments, using language models exclusively fine-tuned on domains like literary texts and medical data, yield inferior results compared to the outcomes documented in prior cross-domain experiments. We release a new set of annotated court decisions in German, French, and Italian and use it to improve negation scope resolution in both zero-shot and multilingual settings. We achieve token-level F1-scores of up to 86.7% in our zero-shot cross-lingual experiments, where the models are trained on two languages of our legal datasets and evaluated on the third. Our multilingual experiments, where the models were trained on all available negation data and evaluated on our legal datasets, resulted in F1-scores of up to 91.1%.
Ramona Christen, Anastassia Shaitarova, Matthias Stürmer, Joel Niklaus
2023-09-15T18:38:06Z
http://arxiv.org/abs/2309.08695v1
# Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents ###### Abstract Resolving the scope of a negation within a sentence is a challenging NLP task. The complexity of legal texts and the lack of annotated in-domain negation corpora pose challenges for state-of-the-art (SotA) models when performing negation scope resolution on multilingual legal data. Our experiments demonstrate that models pre-trained without legal data under-perform in the task of negation scope resolution. Our experiments, using language models exclusively fine-tuned on domains like literary texts and medical data, yield inferior results compared to the outcomes documented in prior cross-domain experiments. We release a new set of annotated court decisions in German, French, and Italian and use it to improve negation scope resolution in both zero-shot and multilingual settings. We achieve token-level F1-scores of up to 86.7% in our zero-shot cross-lingual experiments, where the models are trained on two languages of our legal datasets and evaluated on the third. Our multilingual experiments, where the models were trained on all available negation data and evaluated on our legal datasets, resulted in F1-scores of up to 91.1%. ## 1 Introduction Negation scope resolution is an important research problem in the field of Natural Language Processing (NLP). It describes the detection of words that are affected by a negation cue (e.g. no or not) in a sentence, which is important for understanding its true meaning. Although this task is far from trivial, deep learning approaches have shown promising results [1, 13, 14, 15]. As with many NLP tasks, the largest amount of annotated data is available in English.1 multilingual datasets are less common and often not easily accessible. For example, on the huggingface hub, hosting most important open-source datasets, 4559 datasets are tagged as English. The next most common language is Chinese with 10 times fewer datasets for a total of 469.2 In addition, much of the work conducted in the area of negation scope resolution has been done in the medical domain in order to automatically process clinical reports and discharge summaries [20]. Other datasets consist of literary texts [16] or more informal data such as online reviews [17]. The legal domain differs from all of the above in that it is often very complex (i.e., legalese) and uses highly specific vocabulary and knowledge that is not common outside the legal domain [18, 19]. This poses a challenge to any model tackling tasks in the legal domain. While a large amount of legal data is publicly available and has been annotated for various tasks [1, 13, 16, 15, 14, 17, 18, 19, 20], _inter alia_, to the best of our Figure 1: Results over main experiments from select models. For all results see Appendix B. knowledge there exists no legal negation corpus. We annotate four new datasets containing legal judgments from Swiss and German courts in German, French and Italian for negation cues and scopes. We find that these legal documents contain on average longer sentences as well as longer annotated negation scopes, compared to existing datasets. Our experiments show that the legal domain poses a significant challenge to models attempting negation scope resolution. The results achieved by models pre-trained in different domains and evaluated on legal data are lower than those seen in other cross-corpus experiments (Khandelwal and Sawant, 2020; Shaitarova and Rinaldi, 2021). 
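To make the task concrete, the sketch below shows one common way to encode a negated sentence for token-level scope resolution, with one tag per token for the negation cue, the tokens inside its scope, and everything else. The tag names and the toy sentence are illustrative only and are not taken from the corpora or annotation files discussed here.

```python
# Illustrative only: encoding a sentence for token-level negation scope resolution.
# The tag set ("O", "CUE", "SCOPE") and the example sentence are our choices.
from typing import List

def encode_example(tokens: List[str], cue_idx: List[int], scope_idx: List[int]) -> List[str]:
    """Assign one tag per token: negation cue, token inside the cue's scope, or O."""
    tags = ["O"] * len(tokens)
    for i in scope_idx:
        tags[i] = "SCOPE"
    for i in cue_idx:
        tags[i] = "CUE"   # cue tokens get their own tag, separate from the scope
    return tags

tokens = ["It", "should", "be", "noted", "that", "the", "claim", "is", "not", "forfeited", "."]
print(list(zip(tokens, encode_example(tokens, cue_idx=[8], scope_idx=[5, 6, 7, 9]))))
```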
Using our newly annotated datasets, we can improve these results. We conduct experiments where the models are fine-tuned on two languages of the legal data and evaluated on the third. In these zero-shot cross-lingual experiments, our models achieve higher F1-scores than the models pre-trained only on different domains. By training on all available data, we are able to further improve these results, achieving F1-scores around 90% for our multilingual experiments. Our results provide an interesting insight into how even smaller datasets can make a valuable contribution to improving the performance of language models (LMs) on a specific downstream task such as negation scope resolution. ### Contributions The contributions of this paper are three-fold: * We annotate new datasets of legal documents for negation in German, French, and Italian each containing around 1000 sentences. * We train and evaluate models on the task of negation scope resolution on the newly annotated datasets to provide a reference point and achieve token-level F1-scores in the mid eighties for cross-lingual zero-shot experiments and up to 91% in multilingual experiments. * We publicly release the annotation guidelines, the data, the models and the experimentation code as resources and for reproducibility.3 Footnote 3: The annotation guidelines as well as the code to fine-tune our models can be found on GitHub: [https://github.com/RamonaChristen/Multilingual_Negation_Scope_Resolution_on_Legal_Data](https://github.com/RamonaChristen/Multilingual_Negation_Scope_Resolution_on_Legal_Data). Our best model ([https://huggingface.co/rcds/neg-xlm-roberta-base](https://huggingface.co/rcds/neg-xlm-roberta-base)) and dataset ([https://huggingface.co/rcds/MultiLegalNeg](https://huggingface.co/rcds/MultiLegalNeg)) are published on huggingface. ## 2 Related Work Different approaches have been used to address the issue of negation detection and negation scope resolution. Early research focused mainly on rule-based approaches. NegEx, a simple regular expression algorithm developed by Chapman et al. (2001), was successfully able to identify negations in the medical domain. Morante et al. (2008) first took a machine learning approach to negation scope resolution. They used two memory-based classifiers, one to identify the negation cue in a sentence, and one to identify the scope of the negation. On the negation scope resolution task, they achieved an F1-score of 81% on the BioScope corpus (Szarvas et al., 2008). These results were later surpassed by Fancellu et al. (2017), achieving an F1-score of 92% by using neural networks for scope detection. Khandelwal and Sawant (2020) achieved the best results on the BioScope corpus, as well as on two other publicly available negation corpora, the SFU Review Corpus (Konstantinova et al., 2012) and the ConanDoyle-neg corpus (Morante and Blanco, 2012). Their NegBERT model uses Bidirectional Encoder Representation from Transformers (BERT) (Devlin et al., 2019) and applies a transfer learning approach for negation detection and scope resolution. Only a limited amount of work has been conducted on negation scope resolution across different languages. Fancellu et al. (2018) developed a cross-lingual system, trained on English data and tested on a Chinese corpus. By employing cross-lingual universal dependencies in English they were able to achieve an F1-score of 72% on the Chinese data. Shaitarova et al. 
(2020) investigated cross-lingual zero-shot negation scope resolution between English, Spanish, and French. They built on NegBERT but used the multilingual BERT (mBERT) model. Shaitarova and Rinaldi (2021) built on this using NegBERT with mBERT and XLM-R\({}_{\text{Large}}\) (Conneau et al., 2020), and were able to achieve a token-level F1-score of 87% on zero-shot transfer from Spanish to Russian. The sparse amount of cross-lingual research can be explained by the lack of annotated data in languages other than English. There are few corpora annotated with negations in German and Italian (Jimenez-Zafra et al., 2020). The only German corpus annotated for negation and speculation contains medical data and clinical notes (Cotik et al., 2016). However, the corpus is not publicly available and no annotation guidelines have been published. For Italian, a framework for the annotation of negations was presented in 2017 and applied to a corpus of news articles and tweets, parts of which are publicly available. For French, a medical corpus was annotated in 2020 and is available on request. To our knowledge, no legal corpus annotated with negations currently exists. ## 3 Data ### Legal Data We use court decisions in our legal datasets, also often referred to as judgments. The judgments from German courts were collected from _Bayern.Recht_4 and include a variety of legal domains and structures (Glaser et al., 2021). The Swiss court decisions in French, Italian, and German (CH) were collected from the Federal Supreme Court of Switzerland (FSCS). The FSCS is the highest legal authority in Switzerland and oversees federal criminal, administrative, patent, and cantonal courts. Footnote 4: [https://www.gesetze-bayern.de/](https://www.gesetze-bayern.de/) Judgments published by the FSCS usually consist of four sections: 1) The introduction gives information about the date, chamber, involved judge(s) and parties, and the topic of the court decision. 2) The facts outline the important case information. 3) The considerations form the basis for the final ruling by providing relevant case law and other cited rulings. 4) The ruling gives the final decision made by the court. ### Datasets We annotated four new datasets in three languages for negation cues and scopes, and standardized the existing French and English datasets to make them more accessible. Our datasets consist of publicly available legal judgments from Swiss and German courts. Since negation scope resolution is a sentence-level task, we first split the data into sentences using sentence boundary annotations. The French (fr) and Italian (it) datasets consist of a subset of Swiss court decisions from the Swiss-Judgment-Prediction (SJP) dataset (Niklaus et al., 2022) and the Multi-Legal-Pile (Niklaus et al., 2023), which were annotated for sentence spans by Brugger et al. (2023). The main German data (de (DE)) is a subset of judgments from German courts collected by Glaser et al. (2021). Only judgments were included in our dataset because they cover a variety of sources and legal areas; they also have a higher density of negation cues compared to other legal texts. To validate that negation scope prediction also works on German court data from Switzerland, we curated a small dataset of German-Swiss court decisions (de (CH)) that is also a subset of the SJP corpus. We separated each dataset into a train (70%), test (20%), and validation (10%) split. 
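A minimal sketch of such a 70%/20%/10% split is shown below; it assumes the unit being split is a list of document identifiers and uses an arbitrary fixed seed, both of which are our assumptions rather than details from the paper.

```python
# A minimal 70/20/10 train/test/validation split over document identifiers.
# The grouping unit and the random seed are assumptions made for illustration.
from sklearn.model_selection import train_test_split

def split_documents(doc_ids, seed=42):
    train, rest = train_test_split(doc_ids, test_size=0.30, random_state=seed)
    test, valid = train_test_split(rest, test_size=1 / 3, random_state=seed)  # 20% / 10% overall
    return train, test, valid

train, test, valid = split_documents(list(range(1000)))
print(len(train), len(test), len(valid))  # roughly 700 / 200 / 100
```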
To ensure that sufficient negation data is available in each dataset, a negation score was assigned to each document based on a simple word search for the most common negation words in each language. The documents with the highest negation scores were then selected to be annotated. Table 1 shows the amount of data and the distribution of negations for the newly created datasets in comparison to the existing datasets in English and French. Our datasets contain a slightly higher ratio of negated sentences compared to the other datasets. This can be attributed to the nature of legal data and our pre-selection procedure. Because we annotated only a subset of an existing corpus we were able to exclude documents without or only few negations while other corpora like ConanDoyle-neg and SFU annotated complete existing datasets or stories. Annotations were done by native-language human annotators using the tool Prodigy. All annotators are university students but not part of a legal study program. The annotations were cross-checked by one annotator, who has a linguistic background, with the help of an online translator to ensure that they adhere to the annotation guidelines and are consistent across all three languages. The annotation guidelines are based on existing guidelines for the English datasets, and have been extended to cover all three languages included in our data, as well as the characteristics of the legal domain. Key guidelines are summarized below. \begin{table} \begin{tabular}{l l c c c} \hline \hline & **Dataset** & **Total** & **Negated** & \%neg \\ \hline \multirow{4}{*}{\begin{tabular}{l} **English** \\ **FSCS** \\ **end**} & fr & 1059 & 382 & 36.07 \\ & it & 1001 & 418 & 41.76 \\ & de (DE) & 1098 & 454 & 41.35 \\ & de (CH) & 208 & 112 & 53.85 \\ \hline \multirow{4}{*}{ \begin{tabular}{l} **English** \\ **FSCS** \\ **end**} & SFU & 17672 & 3528 & 19.96 \\ & BioScope & 14700 & 2095 & 14.25 \\ \cline{1-1} & ConanDoyle-neg & 5714 & 1421 & 24.87 \\ \cline{1-1} & Dalloux & 11032 & 1817 & 16.47 \\ \hline \hline \end{tabular} \end{table} Table 1: Total number of sentences, and number and percentage of sentences containing at least one negation. Negation CuesCues were not annotated as part of the negation scope following the annotation guidelines for the ConanDoyle-neg corpus Morante et al. (2011). We excluded affixal cues5 in our annotations and kept all annotations to the word as the level of the minimal syntactic unit. Footnote 5: Affixal cues are cues within a word such as impossible Multiple negationsAnnotators were instructed to annotate one negation per sentence. Sentences with multiple negations were duplicated before annotation based on the most common negation cues. To ensure that the same cue was not annotated twice, duplicates were displayed next to each other in the annotation tool to allow annotators to see which clues had yet to be annotated. Maxiumum scope strategyAs with BioScope, we used a maximum scope strategy. This means that the scope extends to the largest possible unit. If a negated clause has subordinate clauses providing additional information to the clause, the scope extends over the negated clause and all of its subordinate clauses, as illustrated in example 1. This sentence structure is very common in our set of legal data. In all following examples we mark the cue in **bold** and underline the scope. We provide an English translation for clarity. [MISSING_PAGE_POST] multilingual LMs outlined in Table 3. 
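As a concrete illustration of the pre-selection step described at the beginning of this section (assigning each document a negation score via a simple word search for common negation words and keeping the highest-scoring documents), here is a minimal sketch; the cue lists are illustrative examples, not the authors' exact lists.

```python
# Sketch of the document pre-selection: score documents by counting common
# negation cue words, then keep the top-scoring ones. Cue lists are illustrative.
import re

CUES = {
    "de": ["nicht", "kein", "keine", "nie"],
    "fr": ["ne", "pas", "aucun", "aucune", "jamais"],
    "it": ["non", "nessun", "nessuna", "mai"],
}

def negation_score(text: str, lang: str) -> int:
    tokens = re.findall(r"\w+", text.lower())
    cues = set(CUES[lang])
    return sum(tok in cues for tok in tokens)

def select_top_documents(docs, lang, k):
    """Return the k documents with the highest negation score."""
    return sorted(docs, key=lambda d: negation_score(d, lang), reverse=True)[:k]

docs = ["Der Anspruch ist nicht verjährt.", "Das Gericht heisst die Beschwerde gut."]
print([negation_score(d, "de") for d in docs])  # [1, 0]
```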
We ran each experiment five times with different random seeds and report the mean token-level F1-score averaged over random seeds, together with the standard deviation. All experiments were conducted with the same hyperparameters for all models, optimized with a search over learning rate (5e-7, 1e-6, 3e-6, 1e-5, 3e-5, 5e-5) and batch size (4, 8, 16, 32, 64, and 128). We optimized the hyperparameters for mBERT and XLM-R and concluded that the best results can be achieved with an initial learning rate of 1e-5 and a batch size of 16. To avoid overfitting, we used early stopping with patience set to 8 as a compromise between the patience of 6 used in the original NegBERT experiments (Khandelwal and Sawant, 2020) and 9 used in the multilingual experiments of Shaitarova and Rinaldi (2021). We extended the maximum input length to 252 tokens for our data. Experiments ran on an NVIDIA A100 GPU via Google Colab, totaling around 105 hours of training time. Firstly, we evaluated ChatGPT in zero- and few-shot experiments to interpret the results of a non-fine-tuned model in the negation scope resolution task. For all subsequent experiments, we used the NegBERT architecture. In the first NegBERT experiment, models were fine-tuned on all existing French and English datasets and evaluated on our new legal datasets, representing a Zero-shot cross-domain transfer. For a second series of zero-shot experiments, we attempted a Zero-shot cross-lingual transfer within our legal datasets. In each cross-lingual experiment, models were trained on two dataset languages and evaluated on the third. We also executed Multilingual experiments using our datasets and all available data. experiments conducted with only our legal data. Although these datasets are considerably smaller than the existing English and French datasets, we were able to increase the F1-score by an average of 15.6% across all models and datasets. The legal models still performed well in these experiments, but they no longer showed an advantage over the other LMs. XLM-R\({}_{\text{Base}}\) achieved the best results. All models, except for DistilmBERT, performed significantly better than in the previous experiment across all datasets. DistilmBERT performed worse on the German datasets than in the previous experiment. One explanation for this might be that DistilmBERT is the only cased model used in our experiments. While cased models usually outperform uncased models, this does not seem to apply to cross-lingual experiments. Similar results were found by Mackova and Straka (2020), who conducted cross-lingual reading comprehension experiments from English to Czech and found that the uncased models outperformed the cased models in these experiments. They theorized that the overlap of sub-words is larger between English and Czech for uncased models because they disregard diacritical marks, which are common in Czech. A similar argument could be made for the cross-lingual transfer between Italian, French, and German because German includes a lot of casing information while Italian and French do not. Multilingual experimentsThe best results for negation scope resolution on our legal datasets were achieved by training our models on the entirety of the available data (Table 7). This multilingual approach achieved an average F1-score of 90% across all models and datasets and outperformed all of the previous setups. 
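For orientation, the sketch below shows how a single fine-tuning run with the reported hyperparameters (learning rate 1e-5, batch size 16, early stopping patience 8, one run per random seed) could be configured with the Hugging Face Trainer. The checkpoint name, epoch budget, label set, and metric wiring are our assumptions, and the datasets are assumed to be tokenized elsewhere with a maximum input length of 252 tokens; this is not the exact NegBERT training code.

```python
# Configuration sketch with the hyperparameters reported above. train_ds / eval_ds
# are assumed to be token-classification datasets prepared elsewhere (max length 252).
from transformers import (AutoModelForTokenClassification, TrainingArguments,
                          Trainer, EarlyStoppingCallback)

def make_trainer(train_ds, eval_ds, compute_metrics, seed):
    model = AutoModelForTokenClassification.from_pretrained(
        "xlm-roberta-base", num_labels=3)        # e.g. O / CUE / SCOPE (our assumption)
    args = TrainingArguments(
        output_dir=f"out/seed_{seed}",
        learning_rate=1e-5,
        per_device_train_batch_size=16,
        num_train_epochs=60,                      # early stopping ends runs sooner
        eval_strategy="epoch",                    # "evaluation_strategy" in older versions
        save_strategy="epoch",
        load_best_model_at_end=True,
        metric_for_best_model="f1",               # compute_metrics must return {"f1": ...}
        seed=seed,
    )
    return Trainer(model=model, args=args,
                   train_dataset=train_ds, eval_dataset=eval_ds,
                   compute_metrics=compute_metrics,
                   callbacks=[EarlyStoppingCallback(early_stopping_patience=8)])

# One run per seed; the paper reports the mean and standard deviation over 5 seeds.
# for seed in (0, 1, 2, 3, 4):
#     make_trainer(train_ds, eval_ds, compute_metrics, seed).train()
```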
This indicates that a relatively small amount of training data in the domain and language of the test dataset can significantly improve the performance of a LM. It is also notable that there seems to be no substantial difference in the performance of the different LMs in this experiment, with a standard deviation of only \(\pm\) 3.6 over all models and datasets. Although DistilmBERT obtained the lowest scores in this experiment, its performance is not significantly inferior to that of the mBERT model. This could be attributed to the fact that the training data also included German examples which might have mitigated the advantage of the uncased models with regard to shared vocabulary. We also conducted multilingual experiments only using our new datasets which achieved very similar results with an overall F1-score of \(89.1_{\pm 4}\) (see Appendix C). ### Error analysis We investigated the length of the predicted negation scopes as well as random samples of the predictions on the French and German test data to identify some common error cases. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Test Dataset**} & \multirow{2}{*}{**0-shot**} & \multirow{2}{*}{**1-shot**} & \multirow{2}{*}{**5-shot**} & \multirow{2}{*}{**10-shot**} & Mean F1 \\ & & & & & by Dataset \\ \hline fr & \(13.00_{\pm 2.1}\) & \(16.63_{\pm 10.3}\) & \(14.90_{\pm 7.5}\) & \(22.53_{\pm 10.7}\) & \(16.77_{\pm 8.5}\) \\ it & \(25.11_{\pm 1.5}\) & \(18.22_{\pm 6.5}\) & \(31.07_{\pm 7.1}\) & \(26.10_{\pm 3.8}\) & \(25.12_{\pm 6.7}\) \\ de (DE) & \(16.47_{\pm 2.6}\) & \(22.45_{\pm 9.1}\) & \(17.34_{\pm 2.7}\) & \(24.48_{\pm 10.7}\) & \(20.18_{\pm 7.5}\) \\ de (CH) & \(32.91_{\pm 7.9}\) & \(21.20_{\pm 5.8}\) & \(36.89_{\pm 18.6}\) & \(19.83_{\pm 10.3}\) & \(27.71_{\pm 13.1}\) \\ \hline **Mean F1 by experiment** & \(21.87_{\pm 8.9}\) & \(19.62_{\pm 7.8}\) & \(25.05_{\pm 13.6}\) & \(23.23_{\pm 8.9}\) & \\ \hline \hline \end{tabular} \end{table} Table 4: Results for zero- and few-shot experiments conducted over the ChatGPT API. \begin{table} \begin{tabular}{l l r r r r r r} \hline \hline **Model** & **Source** & **InLen** & **Params** & **Vocab** & **NumTokens** & **Corpus** & **Longs** \\ \hline DistilmBERT & Sanh et al. (2020) & 512 & 134M & 120K & n/a & Wikipedia & 104 \\ mBERT & Devlin et al. (2019) & 512 & 177K & 120K & n/a & Wikipedia & 104 \\ XLM-R\({}_{\text{Base}}\)Large & Conneau et al. (2020) & 512 & 278M/560M & 250K & 6’291B & 2.5TB CC100 & 100 \\ \hline Glo500-m & ImaniGooghari et al. (2023) & 512 & 395M & 401K & 94B & glot500-c & 511 \\ \hline Legal-Swiss-R\({}_{\text{Base}}\)Large & Rasiah et al. (2023) & 512 & 184M/435M & 128K & 262B/131B & CH Rulings/Legislation & 3 \\ Legal-XLM-R\({}_{\text{Base}}\)Large & Niklaus et al. (2023b) & 512 & 184M/435M & 128K & 262B/131B & CH Rulings/Legislation & 3 \\ \hline \hline \end{tabular} \end{table} Table 3: Model stats. InLen: max input length during pre-training. Params: total parameter count. NumTokens: Batch size \(\times\) Steps \(\times\) InLen Predicted scope lengthAs expected, our cross-domain zero-shot experiments without legal training data achieved the lowest F1-scores overall. This can mostly be attributed to the differences in annotation for each dataset, as well as the different domains. Although the external corpora included French data, this did not improve the performance on the French dataset compared to the other legal datasets. 
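Before turning to the error analysis, the sketch below illustrates the token-level F1 metric used in the result tables, assuming binary in-scope/out-of-scope token labels and reporting the mean and standard deviation over one score per random seed; the label encoding is our assumption.

```python
# Token-level F1 over scope predictions, aggregated over all tokens, then
# averaged over random seeds (mean ± std), as reported in the result tables.
import numpy as np
from sklearn.metrics import f1_score

def scope_token_f1(gold: list[list[int]], pred: list[list[int]]) -> float:
    """Flatten sentences and score the positive (in-scope) class, in percent."""
    y_true = [t for sent in gold for t in sent]
    y_pred = [t for sent in pred for t in sent]
    return f1_score(y_true, y_pred, pos_label=1) * 100

# Hypothetical per-seed scores -> mean and standard deviation, as in the tables.
per_seed = [86.4, 87.1, 85.9, 86.8, 87.3]
print(f"{np.mean(per_seed):.1f} ± {np.std(per_seed):.1f}")
```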
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Model / Test Dataset & fr & it & de (DE) & de (CH) & \begin{tabular}{c} Mean F1 \\ by Model \\ \end{tabular} \\ \hline DistilmBERT & \(61.43_{\pm 1.9}\) & \(63.40_{\pm 2.6}\) & \(63.50_{\pm 4.3}\) & \(58.78_{\pm 4.5}\) & \(61.78_{\pm 3.8}\) \\ mBERT & \(66.39_{\pm 2.1}\) & \(68.49_{\pm 0.8}\) & \(64.17_{\pm 3.1}\) & \(54.31_{\pm 4.8}\) & \(63.34_{\pm 6.2}\) \\ XLM-R\({}_{\text{Base}}\) & \(66.80_{\pm 1.9}\) & \(71.40_{\pm 0.8}\) & \(67.29_{\pm 3.7}\) & \(62.44_{\pm 2.9}\) & \(66.98_{\pm 4.0}\) \\ XLM-R\({}_{\text{Large}}\) & \(72.30_{\pm 2.0}\) & \(70.30_{\pm 0.9}\) & \(73.81_{\pm 4.2}\) & \(\mathbf{63.72}_{\pm 4.6}\) & \(70.03_{\pm 5.0}\) \\ \hline Glot500-m & \(63.78_{\pm 0.8}\) & \(65.54_{\pm 1.1}\) & \(61.38_{\pm 4.0}\) & \(54.51_{\pm 2.5}\) & \(61.30_{\pm 4.9}\) \\ \hline Legal-Swiss-R\({}_{\text{Base}}\) & \(69.48_{\pm 2.3}\) & \(68.64_{\pm 1.0}\) & \(71.81_{\pm 3.8}\) & \(54.26_{\pm 4.9}\) & \(66.05_{\pm 7.7}\) \\ Legal-Swiss-R\({}_{\text{Large}}\) & \(\mathbf{74.66}_{\pm 2.4}\) & \(72.68_{\pm 1.5}\) & \(\mathbf{76.5}_{\pm 1.6}\) & \(51.75_{\pm 6.6}\) & \(68.89_{\pm 10.8}\) \\ Legal-XLM-R\({}_{\text{Base}}\) & \(71.50_{\pm 3.1}\) & \(71.48_{\pm 2.2}\) & \(71.35_{\pm 5.4}\) & \(51.93_{\pm 3.5}\) & \(66.57_{\pm 9.3}\) \\ Legal-XLM-R\({}_{\text{Large}}\) & \(74.52_{\pm 2.1}\) & \(\mathbf{74.48}_{\pm 3.3}\) & \(76.06_{\pm 3.3}\) & \(61.30_{\pm 8.9}\) & \(\mathbf{71.59}_{\pm 7.7}\) \\ \hline _ChatGPT_ & \(13.00_{\pm 2.1}\) & \(25.11_{\pm 1.5}\) & \(16.47_{\pm 2.6}\) & \(32.91_{\pm 7.9}\) & \(21.87_{\pm 8.9}\) \\ \hline Mean F1 by Dataset & \(68.99_{\pm 4.9}\) & \(\mathbf{69.60}_{\pm 3.7}\) & \(69.54_{\pm 6.4}\) & \(57.00_{\pm 6.4}\) & \(66.28_{\pm 7.6}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Cross-domain zero-shot results from existing datasets to our new legal datasets. All models except for ChatGPT were pre-trained on all external datasets; ChatGPT did not receive any training data. The bottom right entry shows the average across all datasets and models except ChatGPT.

A possible reason is that the subject was not annotated as part of the scope in the Dalloux dataset, as opposed to the French legal dataset. Analyzing the predicted scope length compared to the actual scope length reveals one main issue with the zero-shot transfer from the external datasets of different domains to our legal datasets. Figure 2 shows the analysis of the predicted scopes by the Legal-XLM-R\({}_{\text{Large}}\) model.

Figure 2: Actual scope length and scope length predicted by Legal-XLM-R\({}_{\text{Large}}\) for each experiment. X marks the scope length of the train data.

In our cross-domain zero-shot experiment, the predicted scope length is significantly shorter than the actual annotated scope length. This is clarified by Table 2, which reveals that the external datasets have a shorter annotated scope length (24%) than our legal datasets (38.6%). Sample predictions confirm that the model often omits the subject from the annotated scope.

Annotation: Es sei festzustellen, dass der Rückerstattungsanspruch **nicht** verjährt sei. _It should be noted that the claim for restitution is **not** forfeited._

Prediction: Es sei festzustellen, dass der Rückerstattungsanspruch **nicht** verjährt sei. _It should be noted that the claim for restitution is **not** forfeited._

As soon as some legal data is added to our training sets, both the predicted scope length and the F1-score increase. An inspection of the predictions made by the legal and multilingual models shows that the additional training data helps to predict the subject as part of the scope. One exception where the subject was not annotated in the prediction is for subjects represented by an initial instead of a pronoun or a full name, which is common in legal documents for anonymization reasons. We suspect that in these cases the models were not able to identify the initial as the subject because these kinds of subjects might be more uncommon outside of the legal domain.

Annotation: E._ **ne** disposait d'**aucune** autonomie budgétaire; _E._ had no budgetary autonomy_

Prediction: E._ **ne** disposait d'**aucune** autonomie budgétaire; _E._ had no budgetary autonomy_

Non-continuous scopes: Another error case is sentences where the scope is not continuous because it is interrupted by an interjection or contrasting statement. These kinds of sentences are more complex than the average sentence and not very common in the training data. A larger amount of training data containing similar sentence structures could improve accuracy.

Annotation: Eine ordentliche Kündigung ist während der vereinbarten Laufzeit beiderseits nur zum Vertragsende und **nicht** zu einem früheren Zeitpunkt zulässig. _An ordinary termination during the agreed term is only permissible on both sides at the end of the contract and not at an earlier time._

Prediction: Eine ordentliche Kündigung ist während der vereinbarten Laufzeit beiderseits nur zum Vertragsende und **nicht** zu einem früheren Zeitpunkt zulässig. 
_An ordinary termination during the agreed term is only permissible on both sides at the end of the contract and not at an earlier time._

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Model / Test Dataset & fr & it & de (DE) & de (CH) & \begin{tabular}{c} Mean F1 \\ by Model \\ \end{tabular} \\ \hline DistilmBERT & \(79.56_{\pm 1.0}\) & \(74.94_{\pm 1.7}\) & \(58.74_{\pm 9.6}\) & \(52.59_{\pm 11.3}\) & \(66.46_{\pm 13.3}\) \\ mBERT & \(87.22_{\pm 1.6}\) & \(81.94_{\pm 1.3}\) & \(81.39_{\pm 3.6}\) & \(70.78_{\pm 6.7}\) & \(80.33_{\pm 7.1}\) \\ XLM-R\({}_{\text{Base}}\) & \(88.70_{\pm 0.8}\) & \(\textbf{86.43}_{\pm 2.2}\) & \(88.00_{\pm 1.9}\) & \(\textbf{83.71}_{\pm 4.8}\) & \(\textbf{86.71}_{\pm 3.3}\) \\ XLM-R\({}_{\text{Large}}\) & \(\textbf{90.55}_{\pm 0.9}\) & \(84.93_{\pm 1.7}\) & \(\textbf{91.36}_{\pm 0.8}\) & \(76.65_{\pm 4.5}\) & \(85.87_{\pm 6.4}\) \\ \hline Glot500-m & \(86.77_{\pm 2.3}\) & \(83.41_{\pm 1.3}\) & \(90.10_{\pm 2.0}\) & \(77.73_{\pm 4.6}\) & \(84.50_{\pm 5.4}\) \\ \hline Legal-Swiss-R\({}_{\text{Base}}\) & \(87.42_{\pm 1.2}\) & \(84.54_{\pm 1.6}\) & \(88.24_{\pm 1.0}\) & \(70.95_{\pm 3.6}\) & \(82.79_{\pm 7.4}\) \\ Legal-Swiss-R\({}_{\text{Large}}\) & \(84.63_{\pm 1.0}\) & \(83.88_{\pm 1.9}\) & \(88.47_{\pm 3.9}\) & \(70.33_{\pm 6.0}\) & \(81.83_{\pm 7.8}\) \\ Legal-XLM-R\({}_{\text{Base}}\) & \(86.40_{\pm 2.1}\) & \(83.28_{\pm 1.4}\) & \(89.56_{\pm 2.5}\) & \(74.52_{\pm 8.0}\) & \(83.44_{\pm 7.0}\) \\ Legal-XLM-R\({}_{\text{Large}}\) & \(85.51_{\pm 1.7}\) & \(85.76_{\pm 0.3}\) & \(89.58_{\pm 1.8}\) & \(80.16_{\pm 4.0}\) & \(85.25_{\pm 4.1}\) \\ \hline Mean F1 by dataset & \(\textbf{86.31}_{\pm 3.2}\) & \(83.23_{\pm 3.5}\) & \(85.05_{\pm 10.4}\) & \(73.05_{\pm 10.3}\) & \(81.91_{\pm 9.3}\) \\ \hline \hline \end{tabular} \end{table} Table 6: Multilingual zero-shot experiments within our legal datasets. Each column represents a different set of test and train data where the train data includes all legal datasets in languages that are not the language of the test dataset, i.e. models evaluated on fr were trained with it and de (DE, CH).

## 6 Conclusions and Future Work ### Conclusion We released new legal datasets in German, French, and Italian, annotated for negation cues and scopes, and showed that the legal domain does pose a challenge for models in negation scope resolution. Cross-domain zero-shot experiments showed that models without legal training data do not perform as well on multilingual legal datasets as they do on other domains. The task is also too complex for ChatGPT, which was not able to reach F1-scores above 37%. Using our new datasets we fine-tuned different models on the legal domain, significantly improving the results and showing that even relatively small amounts of training data in a specific domain and language can improve the performance of multilingual LMs for negation scope resolution. ### Future Work Negation scope resolution models in the legal domain could benefit from more training data to increase the accuracy of predictions for more complex sentence structures such as non-continuous scopes. More diverse data from different legal fields could further improve the performance of negation scope models in the legal domain. With our new datasets we were able to show that existing systems performing well on datasets across different domains are not necessarily able to perform as well on legal data. This should motivate future work to focus on this complex domain and evaluate the performance of existing systems in diverse NLP tasks. ### Limitations Due to resource constraints, our datasets are relatively small compared to other publicly available corpora. A larger set of legal data across a diverse set of sources, annotated with negations, could further improve the performance of LMs for negation scope resolution in this field. We also did not investigate the potential of cross-lingual cue detection, since this is the more trivial part of negation research and can easily be replaced by a list of negation cues for each language. ### Ethics Statement The goal of our work was to improve the performance of negation scope resolution systems in the legal domain. These improved systems could be used to support legal professionals in processing and analysing legal texts. They should only be used as an assistance to human experts, with consideration of their limitations and possible biases. To the best of our knowledge, there is currently no real-world application of a negation scope resolution system in the legal domain. The legal data that we annotated and used to train our models is all publicly available and has been anonymized. It should therefore not include any sensitive information.
2309.14294
AspGap: Augmented Stellar Parameters and Abundances for 23 million RGB stars from Gaia XP low-resolution spectra
We present AspGap, a new approach to infer stellar labels from low-resolution Gaia XP spectra, including precise [$\alpha$/M] estimates for the first time. AspGap is a neural-network based regression model trained on APOGEE spectra. In the training step, AspGap learns to use XP spectra not only to predict stellar labels but also the high-resolution APOGEE spectra that lead to the same stellar labels. The inclusion of this last model component -- dubbed the hallucinator -- creates a more physically motivated mapping and significantly improves the prediction of stellar labels in the validation, particularly of [$\alpha$/M]. For giant stars, we find cross-validated rms accuracies for Teff, log g, [M/H], [$\alpha$/M] of ~1%, 0.12 dex, 0.07 dex, 0.03 dex, respectively. We also validate our labels through comparison with external datasets and through a range of astrophysical tests that demonstrate that we are indeed determining [$\alpha$/M] from the XP spectra, rather than just inferring it indirectly from correlations with other labels. We publicly release the AspGap codebase, along with our stellar parameter catalog for all giants observed by Gaia XP. AspGap enables new insights into the formation and chemo-dynamics of our Galaxy by providing precise [$\alpha$/M] estimates for 23 million giant stars, including 12 million with radial velocities from Gaia.
Jiadong Li, Kaze W. K. Wong, David W. Hogg, Hans-Walter Rix, Vedant Chandra
2023-09-25T17:06:01Z
http://arxiv.org/abs/2309.14294v1
AspGap: Augmented Stellar Parameters and Abundances for 23 million RGB stars from Gaia XP low-resolution spectra ###### Abstract We present AspGap, a new approach to infer stellar labels from low-resolution Gaia XP spectra, including precise [\(\alpha\)/M] estimates for the first time. AspGap is a neural-network based regression model trained on APOGEE spectra. In the training step, AspGap learns to use XP spectra not only to predict stellar labels but also the high-resolution APOGEE spectra that lead to the same stellar labels. The inclusion of this last model component -- dubbed the _hallucinator_ -- creates a more physically motivated mapping and significantly improves the prediction of stellar labels in the validation, particularly of [\(\alpha\)/M]. For giant stars, we find cross-validated _rms_ accuracies for \(T_{\rm eff}\), log \(g\), [M/H], [\(\alpha\)/M] of \(\sim\) 1%, 0.12 dex, 0.07 dex, 0.03 dex, respectively. We also validate our labels through comparison with external datasets and through a range of astrophysical tests that demonstrate that we are indeed determining [\(\alpha\)/M] from the XP spectra, rather than just inferring it indirectly from correlations with other labels. We publicly release the AspGap codebase, along with our stellar parameter catalog for all giants observed by Gaia XP. AspGap enables new insights into the formation and chemo-dynamics of our Galaxy by providing precise [\(\alpha\)/M] estimates for 23 million giant stars, including 12 million with radial velocities from Gaia. 0000-0001-8000-0001-8000]J.J. L. ([A]) 0000-0002-3870-7000]K. K. Wong 0000-0002-8001-8000]David W. Hogg 0000-0002-1881-7000]H.-Walter Rix 0000-0002-8001-8000]Vedant Chandra ## 1 Introduction Understanding the star formation and Galactic enrichment history of the Milky way is essential for gaining insights into the broader context of galaxy evolution and cosmology. For decades, the field of _Galactic Archaeology_ has been dedicated to unraveling the formation history of our own Galaxy. This endeavor has been greatly aided by large-scale spectroscopic surveys such as the Sloan Digital Sky Survey (SDSS)/The Apache Point Observatory Galactic Evolution Experiment (APOGEE) (Majewski et al., 2017), the Milky Way Mapper of SDSS-V (Kollmeier et al., 2017; Almeida et al., 2023), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) (Cui et al., 2012; Deng et al., 2012; Luo et al., 2012), GALactic Archaeology with HERMES (GALAH) (Buder et al., 2021), and the 4-metre Multi-Object Spectroscopic Telescope (4MOST) (de Jong et al., 2019). These surveys have played a crucial role in providing extensive spectroscopic data and enabling detailed investigations into the chemical and dynamical properties of stars in the Milky Way. In addition to the spectroscopic data, the European Space Agency's (ESA) Gaia mission (Gaia Collaboration et al., 2016) has played a pivotal role in this field by observing billions of stars with unprecedented precision. The Gaia mission has provided measurements of parallax and proper motion, enabling the construction of a comprehensive six-dimensional phase-space information of our Galaxy, revolutionizing our knowledge of the structure of the Milky Way (MW). By combining astrometric information, ongoing and future spectroscopic surveys have the potential to significantly expand our understanding of fundamental galactic astronomy. 
These surveys can broaden the distribution range of atmospheric parameters and chemical abundances, leading to valuable insights into various aspects such as the formation history of the Milky Way (Xiang & Rix, 2022), the variation in the stellar initial mass function across different chemical environments and star formation histories (Li et al., 2023), and the discovery of the existence of very massive stars in the early universe (Xing et al., 2023). In the recent Gaia Data Release 3 (DR3) (Gaia Collaboration et al., 2022), a substantial number of approximately 220 million low-resolution spectra of stars have been made available (Gaia Collaboration et al., 2022; De Angeli et al., 2022; Montegriffo et al., 2022). These spectra, obtained through the combined observations of the blue photometer (BP) and red photometer (RP) in the Gaia mission, provide essential information on stellar parameters, including chemical abundance measurements. The wavelength range covered by the combined BP/RP (XP) spectra spans from 3,300 to 10,500, with a resolution \(\mathcal{R}\approx 15-85\)(Andrae et al., 2022). These spectra serve as valuable resources for inferring stellar parameters, distances, and extinctions for stars within the Milky Way (Andrae et al., 2022). The potential of using the Gaia XP spectra for stellar parameter estimation was initially explored by Liu et al. (2012). Subsequently, the Gaia General Stellar Parameterizer from Photometry, known as GSP-phot(Andrae et al., 2022), applied a Bayesian forward-modeling approach to fit the XP spectra and produced a homogeneous catalog containing effective temperature (\(T_{\rm eff}\)), surface gravity, and metallicity estimates for approximately 471 million stars with \(G\)-band magnitudes brighter than 19. Although GSP-phot provides valuable information on fundamental stellar parameters, the estimation of metal abundance ([M/H]) is not without its limitations due to the characteristics of the Gaia XP system and the limited information it provides about [M/H] (Andrae et al., 2022; Andrae et al., 2023). A theoretical study by Ting et al. (2017) has demonstrated that valuable information on element abundances, including [Fe/H] and \(\alpha\)-abundance ([\(\alpha\)/M]), can be gleaned from low-resolution spectra, although stellar labels become degenerate at \(\mathcal{R}\leq 100\). Remarkably, the precision of these element abundances remains largely unaffected by the resolution of the spectra as long as the exposure time and number of detector pixels are held constant. In fact, low-resolution spectra such as Gaia XP offer the advantage of higher signal-to-noise ratio (S/N) per pixel and a broader wavelength coverage within a single observation. These characteristics highlight the potential of low-resolution spectra to deliver accurate measurements of element abundances. Accurately determining chemical abundances from XP spectra using traditional model-driven methods, which rely on comparing observed spectra to stellar spectral libraries, presents challenges due to the inherent systematic differences in flux calibration between observations and synthetic spectra. Theoretical spectra and observed spectra often exhibit separate distributions in the high-dimensional flux space because of the imperfections in the theoretical spectra and errors introduced by observation conditions and instrument effects (Wang et al., 2023). 
This calibration issue becomes particularly problematic when attempting to identify \(\alpha\)-sensitive spectral lines (Gavel et al., 2021). Given the advantages of low-resolution spectra and the challenges associated with traditional model-driven methods, employing a data-driven approach for estimating chemical abundances from low-resolution spectra becomes a natural and promising choice (Ting et al., 2017). By leveraging the information contained in the data itself, data-driven methods can overcome the limitations of model-driven approaches and provide more robust and accurate estimates of chemical abundances. Data-driven methods and machine learning techniques have become widely adopted for deriving stellar labels (parameters) from large volumes of low-resolution spectra. These methods offer an alternative approach to traditional model-driven methods and have shown great success in accurately estimating stellar properties in recent years (e.g., Ness et al., 2015; Ting et al., 2017; Ting et al., 2019; Zhang et al., 2020; Xiang et al., 2022). There are two main categories of data-driven methods: empirical forward models and discriminative models. In the empirical forward model approach, models are built to predict the spectrum based on the stellar parameters (Ness et al., 2015; Casey et al., 2016; Ho et al., 2017; Ting et al., 2017; Ting et al., 2019; Zhang et al., 2020; Li et al., 2021; Xiang et al., 2022; Zhang et al., 2023). These models utilize a large training dataset with known stellar parameters to establish the relationship between the spectra and the stellar labels. By applying these models to new spectra, the stellar parameters can be inferred. On the other hand, discriminative models take spectra as input and output stellar labels (Leung and Bovy, 2019; Rix et al., 2022; Andrae et al., 2023; Yao et al., 2023). These models are trained using a labeled dataset, where both the spectra and the corresponding stellar parameters are known. The models learn the complex mapping between the input spectra and the desired output labels, allowing them to predict stellar parameters for unseen spectra. Both empirical forward models and discriminative models have their strengths and applications. Empirical forward models directly predict the spectra based on the stellar parameters, which can be useful for studying the physical processes shaping the spectra and for generating synthetic spectra for stellar population synthesis. The discriminative models, on the other hand, provide a more direct approach to infer stellar labels from spectra, which is beneficial when the focus is on estimating stellar parameters for large datasets efficiently. Recently, Leung and Bovy (2023) demonstrated that a sin gle Transformer-based neural network trained on heterogeneous spectroscopic and photometric datasets could perform both discriminative tasks like predicting stellar parameters from spectra as well as generative tasks such as generating spectra from parameters, which opens up some new ideas on how to use spectra and stellar labels. The Cannon(Ness et al., 2015) is a data-driven generative model that was introduced for spectroscopic data analysis. This approach involves establishing mappings between known stellar labels and spectra using a training dataset. Once the data-driven model is trained, it can be applied to infer labels for observed spectra. 
The Cannon has demonstrated its effectiveness in deriving labels for high-resolution APOGEE spectra (Ness et al., 2015; Casey et al., 2016), as well as low-resolution spectra from surveys like LAMOST (Ho et al., 2017). Data-driven methods offer several advantages, including the ability to learn patterns or features in spectra and make predictions based on them, as well as improved performance and efficiency compared to other methods (Casey et al., 2016). One advantage of the forward modeling approach is its interpretability, allowing for the examination of residuals and the identification of new systematics and explanatory variables. It also has the capability to handle missing data (Andrae et al., 2023). However, both generative and discriminative models have their limitations, including the risk of learning physically implausible relationships (Hogg et al., 2019). Additionally, they may not offer novel insights beyond our current understanding of the underlying physics. In contrast, the direct discriminative approach can identify features in spectra that are correlated with stellar parameters. However, it may not capture all the relevant parameters and can be susceptible to systematic errors. These supervised-learning models are generally easier to train and better at avoiding overfitting, where the model becomes overly complex and fails to generalize to new data. Forward modeling approaches can address overfitting by incorporating a regularization term (Casey et al., 2016). O'Briain et al. (2021) introduced a hybrid generative domain-adaptation method that utilizes unsupervised learning on large spectroscopic surveys to transform simulated stellar spectra into realistic spectra. This method successfully calibrates synthetic data to match observations (Wang et al., 2023), bridging the gap between theoretical models and practical observations. It also enables the identification of missing spectral lines in synthetic modeling (O'Briain et al., 2021). This innovative approach has the potential to enhance data analysis techniques in stellar spectroscopy and other fields that rely on large datasets. Notably, the methodology employed in this study offers a balanced approach to stellar parameterization, neither relying solely on forward models nor discriminative models. In this paper, we present a novel data-driven method called AspGap, which enables the simultaneous estimation of stellar labels (\(T_{\mathrm{eff}}\), \(\log\ g\), [M/H], and [\(\alpha\)/M]) for red giant branch (RGB) stars using Gaia XP spectra combined with APOGEE labels. Our approach lies between the forward modeling and direct supervised methods, leveraging the benefits of both. The architecture of our model functions as a mapping from XP spectra to APOGEE labels, but we incorporate APOGEE spectra during training to enhance the model's performance. This combination allows us to exploit the rich information in both datasets effectively. We demonstrate that Gaia XP spectra can accurately predict \(T_{\mathrm{eff}}\) and \(\log\ g\) due to their clear reflection in the overall spectrum profile. However, the estimation of metal abundance ([M/H]) and \(\alpha\) abundance ([\(\alpha\)/M]) from XP spectra is more challenging. Despite this challenge, our model can still achieve comparable precision in determining [M/H] and [\(\alpha\)/M] as with LAMOST spectra, which typically have higher resolution of \(\mathcal{R}\approx 1800\). 
For [M/H], the expected median absolute error (MAE) was estimated to be 0.1-0.2 dex for Gaia XP spectra (Liu et al., 2012). Although Andrae et al. (2022) reported a slightly higher MAE of 0.21 dex for [M/H] derived from Gaia XP spectra compared to APOGEE, this information is still valuable, albeit at a qualitative level. Recently, Rix et al. (2022) and Andrae et al. (2023) (henceforth A23) developed an extreme gradient boosting ensemble model trained on APOGEE [M/H] values and achieved a significantly improved median absolute error (MAE) of 0.06 dex when using only XP information. However, the estimation of \(\alpha\)-abundance ([\(\alpha\)/M]) from XP spectra has been found to be a remapping between [\(\alpha\)/M] and other parameters rather than a direct causal effect of \(\alpha\) information (Gavel et al., 2021). In our study, we demonstrate that our approach successfully derives meaningful [\(\alpha\)/M] values for a large sample of approximately 23 million stars. This sample size is more than an order of magnitude larger than the current largest sample of [\(\alpha\)/M] measurements, which consists of around 2 million stars observed with a resolution of \(\mathcal{R}\sim 1800\) in LAMOST DR8. This substantial increase in sample size provides unprecedented statistical power for studying the Galactic enrichment history of the Milky Way. An additional advantage of our data product, presented in this paper, is its independence from complicated selection effects introduced by cross-matching with multiple catalogs. Our published catalog is based solely on the selection function of Gaia, ensuring a homogeneous and all-sky dataset for studying the Galactic enrichment history of the Milky Way. The subsequent sections of the paper are organized as follows: Section 2 focuses on the dataset utilized for training AspGap and provides insights into its composition and characteristics. In Section 3, we provide a comprehensive explanation of the AspGap method, including a detailed description of the model architecture and the loss function. In Section 4, we evaluate the performance of AspGap and present a catalog containing the labels and uncertainties of approximately 23 million red-giant stars obtained using AspGap. Finally, in Section 5, we conclude the paper by discussing the implications of our results and addressing potential limitations and considerations associated with the use of our data product. The resulting catalogs generated in this study have been published online (Li, 2023) and can be accessed at the following link: [https://zenodo.org/record/8002699](https://zenodo.org/record/8002699). ## 2 Data In this Section we briefly introduce the Gaia _XP_ data and their preprocessing, and the APOGEE training set. ### XP: the Gaia BP/RP low-resolution spectra The third data release of the Gaia mission (DR3) (Gaia Collaboration et al., 2022) offers low-resolution aperture prism spectra (De Angeli et al., 2022; Montegriffo et al., 2022) for approximately 220 million stars. These spectra are obtained using the blue photometer (BP, 330-680 nm) and red photometer (RP, 640-1050 nm) instruments. The Gaia observation coverage includes a staggering 78 billion transits, and the processing pipeline of BP/RP spectra generates calibrations for each individual transit spectrum out of a total of 65 billion epoch spectra (De Angeli et al., 2022). The final data product consists of more than two billion sources obtained by averaging the epoch spectra. 
It's important to note that the XP spectra differ from classical spectra in terms of their representation. Instead of providing flux values corresponding to specific sampled wavelengths, the XP spectra are represented as a continuous function using a set of basis functions (De Angeli et al., 2022). The continuous spectra are then encoded as an array of coefficients, with the first coefficients capturing the majority of the flux and the higher-order coefficients storing narrow spectral features (De Angeli et al., 2022). This representation maximizes the information content of the XP spectroscopy data by efficiently representing the spectra with a reduced number of coefficients (De Angeli et al., 2022). ### Training datatset For training, we use stars in common between the Gaia XP data and APOGEE targets in the SDSS Data Release 17(Abdurro'uf et al., 2022) and Gaia DR3 of interest, after some data cleaning. First, we rule out stars with problematic flags (ASPCAPFLAG) from the ASPCAP. We remove stars with WARNING or RAD flags in the data model defined in ASPCAP for \(T_{\rm eff}\), log \(g\), [M/H], and [\(\alpha\)/M], as well as stars with flags with CHI2 and NO_GRID. The reason we did not use a strict flag cut (i.e., ASPCAPFLAG \(\neq 0\)) to obtain a clean RGB training sample is that labels on parameter boundaries tend to be harder to estimate, and expanding the boundaries to main-sequence stars would move the labels of the giant stars of interest away from the boundaries. Second, samples are selected using the following criteria as displayed in Table 2 to obtain both reliable stellar labels from APOGEE and to eliminate stars without meaningful astrophysical parameters: The condition \(5\). identifies stars that are either on the main-sequence or on the RGB. Third, we further apply the following conditions to select 142,130 stars with good APOGEE labels, and we display the S/N distributions of coefficients in Figure 1. Here, the S/N is defined as the L2 norm of the given coefficients divided by the L2 norm of the corresponding errors. We find: 1. Most of the stars have high global S/N (\(>100\)) values for the coefficients. 2. Although the S/N of high-order coefficients (11-55), which contains abundance information, is lower than the first-order coefficients (1-10), most of them are larger than 1, indicating that there is still valuable information present. For the balance of [M/H] and [\(\alpha\)/M], we uniformly weight [\(\alpha\)/M] and [M/H] in two dimensions. We re-sample stars into various [M/H]-[\(\alpha\)/M] bins with specific bin edges. The bin edges for [M/H] are as follows: 2.0, -1.8, -1.6, -1.4, -1.2, -1.0, -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.4, and 0.6. The bin edges for [\(\alpha\)/M] are -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, and 0.4. In each bin, we perform random sampling of 1,000 stars with replacement from the selected sample, resulting in a total of 111,000 training data points. This re-sampling process ensures a flat distribution of stars in the [M/H]_vs_ [\(\alpha\)/M] space, allowing for better representation and balance in the training set. ### Preprocessing Then we pre-processing the data before we train the AspGap. We first concatenate the BP and RP coefficients to a 110-element array. Then we use the value of \(10^{0.5(15-G)}\) (\(G\) is the Gaia \(G\)-band magnitude) as the normalization value, and divided by the 110-element XP array. 
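A minimal sketch of the preprocessing just described: the 55 BP and 55 RP coefficients are concatenated into a 110-element array and divided by \(10^{0.5(15-G)}\), and the S/N of a block of coefficients is the L2 norm of the coefficients divided by the L2 norm of their errors. The array layout and the example values are assumptions made for illustration.

```python
# Sketch of the XP preprocessing described above; array layout is assumed.
import numpy as np

def normalize_xp(bp_coeffs, rp_coeffs, g_mag):
    """Return the 110-element, G-band-normalized XP coefficient array."""
    coeffs = np.concatenate([bp_coeffs, rp_coeffs])       # shape (110,)
    return coeffs / 10.0 ** (0.5 * (15.0 - g_mag))

def coefficient_snr(coeffs, errors):
    """S/N = ||coefficients||_2 / ||errors||_2 for a chosen coefficient range."""
    return np.linalg.norm(coeffs) / np.linalg.norm(errors)

rng = np.random.default_rng(0)
bp, rp = rng.normal(size=55), rng.normal(size=55)
x = normalize_xp(bp, rp, g_mag=14.2)
print(x.shape, coefficient_snr(bp[:10], np.full(10, 0.05)))
```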
\begin{table} \begin{tabular}{l l} **Criterion** & **Range/Condition** \\ \hline 1. & \(3000\leq\mathrm{T_{eff}}\leq 7000\) K \\ 2. & \(0\leq\log g\leq 5.5\) dex \\ 3. & \(-2.5\leq\mathrm{[M/H]}\leq 0.6\) dex \\ 4. & \(-0.1\leq\mathrm{[\alpha/M]}\leq 0.6\) dex \\ 5. & \(\log g>4.2-1.5\log(\mathrm{T_{eff}}/5777)\) and \(\log g<4.8\) dex \(\vee\) \\ & \(\log g<(5/3000)(\mathrm{T_{eff}}-3000)+0.5\) and \(\log g<(5/3000)(\mathrm{T_{eff}}-3000)-1.9\) \\ 6. & \(\sigma_{\mathrm{T_{eff}}}\leq 50\) K \\ 7. & \(\sigma_{\log g}\leq 0.05\) dex \\ 8. & \(\sigma_{\mathrm{[M/H]}}\leq 0.02\) dex \\ 9. & \(\sigma_{\mathrm{[\alpha/M]}}\leq 0.02\) dex \\ 10. & S/N of APOGEE \(>200\) \\ \end{tabular} \end{table} Table 1: Criteria to deem stellar labels from SDSS/APOGEE reliable, and eliminate stars without meaningful astrophysical parameters.

Figure 1: Distribution of the signal-to-noise (S/N) of the Gaia XP coefficients for the sample, shown for the BP portion of the spectrum on the left and for the RP portion of the spectrum on the right. The black dashed histograms represent the sample's distribution when the S/N is averaged across all 55 coefficients in BP and RP, respectively. The blue and red histograms show the analogous distributions considering only the first 10 coefficients (blue) and the remaining higher-order coefficients (red), respectively. The vertical lines of different colors in the figure represent the S/N thresholds employed in our training sample. Specifically, we consider spectra with S/N values greater than 100 for both the BP and RP coefficients (1-10), while for coefficients 11-55, a S/N threshold of 1 is applied.

In cases where the Gaia XP coefficients exhibit a wide range of values, spanning four orders of magnitude from the first to the high-order coefficients, it is found that the higher-order coefficients tend to cluster around similar values. In such scenarios, using the median and interquartile range as normalization measures can yield more robust results. Some data-driven methods pre-process the spectra with standard normalization, performed by removing the mean and scaling to unit variance (e.g., Zhang et al., 2020). However, outliers can often negatively affect the sample statistics, especially for noisy data. Hence we adopt the median value of each \(j\)th coefficient, \(\mu_{j}\), as the centering value, and the coefficient range (the difference between the 25th and 75th quantiles) as the scale. Let \(x_{i,j}\) be the \(j\)th coefficient of the \(i\)th spectrum; then we define the scaled coefficient \(\hat{x}_{i,j}\) as \[\hat{x}_{i,j}=\frac{x_{i,j}-x_{j,50}}{x_{j,75}-x_{j,25}}, \tag{1}\] where \(x_{j,m}\) is the \(m\)-th percentile of the \(j\)th coefficient over the sample. Stellar labels are also scaled in the same way. Because the label measurements are more robust than the high-order XP coefficients, we choose the label scale as \(y_{97.5}-y_{2.5}\), where \(y_{m}\) is the \(m\)-th percentile value of label \(y\). ## 3 Method: Building the AspGap Model Our goal is to create a data-driven model that estimates stellar labels from XP spectra. To achieve this, we have developed a neural-network-based model, AspGap, which leverages the rich abundance information of high-resolution APOGEE spectra. The AspGap model is constructed in four blocks as shown in Figure 3: a pre-trained APOGEE decoder, an XP encoder, a component dubbed the _hallucinator_, and an XP decoder. 
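The following is a schematic sketch of the four blocks just listed, written as plain multilayer perceptrons (the paper's stated choice of feature extractor). The 110 XP inputs and the 1024-dimensional latent space follow the text and the Figure 3 description; the hidden-layer sizes, the APOGEE pixel count, and the four-label output dimension are assumptions made only for illustration, not the authors' exact architecture.

```python
# Schematic sketch of the four AspGap blocks as MLPs; sizes other than the
# 110 XP inputs and the 1024-dim latent space are illustrative assumptions.
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class AspGapSketch(nn.Module):
    def __init__(self, n_xp=110, n_latent=1024, n_apogee_pix=7000, n_labels=4):
        super().__init__()
        self.xp_encoder = mlp([n_xp, 512, n_latent])             # XP coefficients -> latent
        self.xp_decoder = mlp([n_latent, 256, n_labels])          # latent -> labels (used at test time)
        self.hallucinator = mlp([n_latent, 2048, n_apogee_pix])   # latent -> APOGEE-like spectrum
        self.apogee_decoder = mlp([n_apogee_pix, 512, n_labels])  # pre-trained on real APOGEE spectra in the paper

    def forward(self, xp):
        z = self.xp_encoder(xp)
        labels_direct = self.xp_decoder(z)
        labels_via_apogee = self.apogee_decoder(self.hallucinator(z))
        return labels_direct, labels_via_apogee

out = AspGapSketch()(torch.randn(8, 110))
print(out[0].shape, out[1].shape)  # torch.Size([8, 4]) twice
```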
The key idea behind the AspGap architecture is to fully exploit the rich abundance information contained in APOGEE spectra to improve stellar parameter estimates from Gaia XP spectra. In a vanilla approach, we might simply use the XP spectra as input to predict the stellar labels with an encoder and a decoder. While this approach is valid, it does not impose any constraints on how the prediction is made, e.g. that the prediction draws on physically meaningful parts of the XP spectra. Training a neural network involves determining the weights and biases of a flexible model that fits the training data. Within the set of possible parameter combinations that fit the data well, some may result in overfitting, exhibiting unphysical behavior outside the training data. Our model therefore introduces extra constraints that entail matching the APOGEE spectra, resulting in more robust behavior on unseen data1. This forces the model to learn a representation of the XP spectra that is consistent with how stellar parameters are derived from APOGEE spectra.

Footnote 1: The authors are aware that a formal distinction between a method that "predicts" and one that determines or "measures" stellar labels is a subtle and far-reaching issue, beyond the scope of the paper.

Figure 2: AspGap training set for this application, taken from SDSS APOGEE DR17. The left panel shows the \(T_{\rm eff}\) – log \(g\) Kiel diagram of the training data, color-coded by [M/H]. The right panel shows the number density distribution of the [M/H]-[\(\alpha\)/M] abundance diagnostics in the training set. AspGap is designed to estimate \(T_{\rm eff}\), log \(g\), [M/H], and [\(\alpha\)/M] values from XP spectra that are consistent with the SDSS APOGEE training data.

By incorporating the hallucinator component, we effectively modify the XP encoder component (which is common to both subsequent branches), as it must allow the prediction of APOGEE-like spectra, where stellar labels are determined from physically well-defined spectral features. We anticipate that this discourages the model from overfitting when mapping XP spectra to stellar labels alone, thereby improving test-time performance when applied to new XP spectra. By leveraging both XP and APOGEE spectra simultaneously during training, AspGap explicitly exploits the rich information in APOGEE spectra. As we show below, this indeed substantially improves the stellar label estimates, in particular the [\(\alpha\)/M] abundance measurements, from the low-resolution XP spectra. As Figure 3 illustrates, AspGap starts with the XP encoder that maps the low-resolution XP spectra to a latent space. Then, the _hallucinator_ network generates an APOGEE-like spectrum from this latent space representation. This spectrum then gets mapped to stellar labels, using a decoder that has been pre-trained on real APOGEE spectra. In a second branch of AspGap, the initial latent embedding from the encoder gets mapped to the stellar labels directly via an XP decoder. Note that we do not explicitly require that the spectral features in the hallucinated APOGEE-like spectra vary with changes in the labels exactly as expected "from physics". Instead, the hallucinated spectra must only generate accurate stellar labels when fed into the pre-trained APOGEE pipeline decoder, originally trained on genuine APOGEE spectra. Nonetheless, the hallucinated gradient spectra (i.e.
the spectra's derivatives with respect to some stellar label) show that the hallucinated APOGEE spectra vary, say with [M/H], at the wavelengths where physics-based spectral models expect them to. This is shown in Fig. 4, and demonstrates the hallucinator's ability to generate realistic synthetic spectra containing meaningful spectral information related to the stellar parameters.

Figure 3: The architecture of the AspGap model, consisting of four blocks: a pre-trained APOGEE decoder that takes APOGEE-like spectra to generate predictions for stellar labels; an XP encoder that generates 1024 shared latent variables from the \(2\times 55\) XP coefficients; the _hallucinator_, which generates APOGEE-like spectra from these embedded variables to be fed into the pre-trained APOGEE decoder; and the XP decoder that generates stellar label predictions from the XP encoder's 1024 latent variables. During the training process, both sets of stellar label predictions, obtained from the XP decoder and the _hallucinator_/APOGEE decoder, are utilized in the objective function of AspGap. The fundamental concept behind the design of AspGap is the prospect that incorporating the stellar label generation through the _hallucinator_ route will lead to a more robust and physically meaningful XP encoder following the training phase. In the testing phase and for inference on other data, we only use the labels generated by the XP decoder. This is because, once the model is well trained, the mean-square error between the _hallucinator_'s predictions and the ground truth, as well as between the XP decoder's predictions and the ground truth, becomes approximately the same.

In each block, we chose to use a multilayer perceptron (MLP) as the feature extractor, rather than other powerful feature extraction techniques like convolutional neural networks (CNNs). We did this mainly because an MLP is both versatile and simple. An MLP is capable of learning non-linear relationships between input features, which makes it well-suited for problems where the relationships between the features are complex and difficult to capture using linear models. While other feature extraction techniques, such as CNNs, can be very powerful, they may not always be necessary for every problem. In our case, we found that the MLP provided a good balance of performance and simplicity, allowing us to achieve good results without adding unnecessary complexity to our model. While we may have qualitative reason to expect that the _hallucinator_ enforces a better latent embedding in the XP encoder in training, any improvements in the prediction of stellar test labels can only be estimated empirically, as we do below.

### Objective function

During the training process, we optimize an objective function and determine the model parameters \(\theta\). We denote the \(i^{th}\) target's APOGEE labels as \(\hat{y}_{i}\), and their uncertainties as \(\sigma_{i}\); and we denote the predicted labels from the XP decoder as \(y_{\text{model}1,i}\), and those from the hallucinator and pre-trained APOGEE decoder as \(y_{\text{model}2,i}\). The overall data-model variance term \(s_{i}\) is defined as \[s_{i}^{2}=\max{(\epsilon,{\sigma_{i}}^{2}+s_{\text{model}1,i}^{2})}. \tag{2}\] For stability in the training, we introduce a floor \(\epsilon\) (\(10^{-6}\)) to prevent excessively large values of \(1/s_{i}\).
With these definitions, we can spell out the loss function, which penalizes deviations of predicted labels from true labels, and incorporates regularization terms to encourage model parameter sparsity: \[L(\hat{y},y_{\text{model}1},y_{\text{model}2}) =\frac{1}{N}\biggl{[}\sum_{i=1}^{N}\sum_{k=1}^{2}\frac{\alpha_{k }}{2}\frac{[\hat{y}_{i}-y_{\text{model}k,i}]^{2}}{s_{i}^{2}}\] \[\quad+\frac{1}{2}\sum_{i=1}^{N}\ln{s_{i}^{2}}+\Lambda_{1}\| \boldsymbol{\theta}\|_{\text{L}1}\] \[\quad+\Lambda_{2}\sum_{l=1}^{4}D_{\text{KL}}(\chi_{l}\|\mathcal{ N}(0,1))\biggr{]}. \tag{3}\] Here, \(N\) represents the size of the training data. \(\alpha_{1}\) and \(\alpha_{2}\) denote the relative weights of the loss contributed by the XP decoder and the APOGEE decoder. A good representation requires that the same stellar labels be recovered both directly from the XP spectra and via the route from the XP spectra to APOGEE-like spectra and then to labels; hence both terms appear in our loss. The loss function comprises a weighted total of the losses computed by both decoders, with the weights determined by the relative importance of each decoder for the task at hand. In practice, these weights are obtained through a hyperparameter grid search, resulting in assigned values of 0.4 for the _hallucinator_ and 0.6 for the XP decoder. The term with \(\ln{s_{i}}\) ensures that the model does not simply inflate the variance to reduce the loss. \(\Lambda_{1}\) is a regularization parameter, \(\|\boldsymbol{\theta}\|_{\text{L}1}\) is the L1-norm, i.e., the sum of the absolute values of the components of the model parameters \(\theta\), and the regularization term \(\Lambda_{1}\|\boldsymbol{\theta}\|_{\text{L}1}\) encourages parameters to take on zero values. To further encourage the model to estimate accurate uncertainties, we add a Kullback-Leibler (KL) divergence penalty to the loss function. Specifically, we measure the divergence between the distribution of the uncertainty-normalized difference \(\chi\equiv(\hat{y}-y_{\text{model}1})/\sqrt{\sigma^{2}+s_{\text{model}}^{2}}\) and the standard normal distribution \(\mathcal{N}(0,1)\). Ideally, the probability density distribution of \(\chi\) should follow \(\mathcal{N}(0,1)\) if the predicted \(s_{\text{model}}\) are accurate. We assume that the uncertainty-normalized difference \(\chi\) follows a Gaussian distribution \(\mathcal{N}(\mu_{\text{res}},\sigma_{\text{res}})\), with \(\mu_{\text{res}}\) and \(\sigma_{\text{res}}\) as the mean and standard deviation of that Gaussian. In the case at hand, the KL divergence is then given by: \[D_{\text{KL}}(\chi\|\mathcal{N}(0,1))=\ln{\frac{1}{\sigma_{\text{res}}}}+\frac {\sigma_{\text{res}}^{2}+\mu_{\text{res}}^{2}}{2}-\frac{1}{2}, \tag{4}\] By adding the KL divergence term to the loss function, the model is penalized when the predicted distribution of uncertainty-normalized residuals deviates from the standard normal distribution. This encourages the model to produce more accurate uncertainties, as it must minimize the KL divergence in addition to minimizing the deviation between the predicted and true values. Overall, adding a KL divergence penalty to the loss function helps AspGap estimate accurate uncertainties by ensuring that the predicted uncertainties are as close as possible to the true uncertainties, and that the predicted distribution of uncertainty-normalized residuals follows a standard normal distribution; we show the error analysis in Section 4.
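A minimal PyTorch sketch of this objective is given below, assuming that y_true, y_xp, y_hal, sigma_true, and s_model are (N, 4) tensors holding the APOGEE labels, the two branch predictions, and the corresponding uncertainties. The averaging over labels, the hyperparameter values, and the variable names are illustrative assumptions rather than the exact released implementation.

```python
import torch

def aspgap_loss(y_true, sigma_true, y_xp, y_hal, s_model, params,
                alpha=(0.6, 0.4), lam1=1e-4, lam2=1e-2, eps=1e-6):
    # Eq. (2): data-model variance with a floor for numerical stability
    s2 = torch.clamp(sigma_true**2 + s_model**2, min=eps)

    # Eq. (3): weighted chi^2 terms for the XP decoder (alpha[0]) and the
    # hallucinator/APOGEE-decoder branch (alpha[1]), plus the ln s^2 term
    chi2 = alpha[0] * (y_true - y_xp)**2 / s2 + alpha[1] * (y_true - y_hal)**2 / s2
    loss = 0.5 * chi2.mean() + 0.5 * torch.log(s2).mean()

    # L1 penalty that pushes model parameters towards zero
    loss = loss + lam1 * sum(p.abs().sum() for p in params)

    # Eq. (4): KL divergence between the empirical distribution of the
    # uncertainty-normalized residuals chi and N(0, 1), evaluated per label
    chi = (y_true - y_xp) / torch.sqrt(s2)
    mu, sig = chi.mean(dim=0), chi.std(dim=0)
    kl = torch.log(1.0 / sig) + (sig**2 + mu**2) / 2.0 - 0.5
    return loss + lam2 * kl.sum()
```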
The implementation of AspGap is numerically straightforward and stable. The core AspGap model is built using PyTorch and can be trained on a single NVIDIA V100 GPU. Prediction of stellar labels on new XP spectra during inference is efficient, taking approximately 0.2 milliseconds per star on a single GPU. We have open-sourced the full AspGap code at [https://github.com/jiadonglee/aspgap](https://github.com/jiadonglee/aspgap) to allow others to replicate our results, extend the model for new applications, and deploy it for making predictions on large datasets. During training, we use the Adam optimizer with an initial learning rate of 0.001. Training is run for 1,000 epochs with early stopping based on the validation loss. We do not find any significant numerical instabilities during training or inference. The model implementation and training/inference procedures are encapsulated in Python scripts, making it easy to apply AspGap to new datasets from a programmatic interface or script.

## 4 Results and Validation

In this Section we present the results of the AspGap training and application just described. After an initial internal validation, we present a stellar label catalog for a large sample of RGB stars, where we deem our results to be particularly robust and pertinent to astrophysical applications. We then present an extensive comparison with external data sets, followed by some "astrophysical" plausibility tests that imply that the determination of \([\alpha/M]\) (presented here for the first time for XP spectra) for giants is meaningful.

### Self-validation and error analysis

For internal validation, we evaluate the performance of our AspGap method by repeated two-fold cross-validation (CV) on the APOGEE training sample. This involves dividing the training data into two equal subsets, and then iteratively swapping the subsets as training and validation sets.

Figure 4: Example of a predicted APOGEE-like gradient spectrum produced inside the hallucinator, compared to an actual APOGEE gradient spectrum. During training, the hallucinator predictions are not directly compared to real APOGEE spectra. Rather, the hallucinated spectra only need to produce the correct stellar labels when fed into the pre-trained APOGEE pipeline decoder. This allows the hallucinator to learn a mapping that captures relevant spectral features correlated with the stellar labels.

Fig. 5 displays these CV results for our four stellar labels (\(T_{\rm eff,XP}\), \(\log~{}g_{\rm XP}\), [M/H]\({}_{\rm XP}\), and [\(\alpha\)/M]\({}_{\rm XP}\)) inferred by AspGap from XP coefficients, plotted against the corresponding APOGEE labels, taken to be the ground truth. The _rms_ scatters of the four labels are 48 K for \(T_{\rm eff}\), 0.12 dex for \(\log~{}g\), 0.07 dex for [M/H], and 0.03 dex for [\(\alpha\)/M], respectively. The comparison immediately affirms that the performance of our stellar label prediction from the low-resolution Gaia spectra is comparable to the quality of data-driven labels from e.g. LAMOST (Ho et al., 2017). For bright stars (\(G<14\)) the scatter is even slightly lower for all four labels (in particular for [\(\alpha\)/M]\({}_{\rm XP}\)): 46 K, 0.10 dex, 0.06 dex, and 0.021 dex, respectively.
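The cross-validation bookkeeping is straightforward; a sketch of one two-fold pass with the per-label scatter and bias computation is shown below. Here train_fn and predict_fn stand in for the (unspecified here) AspGap training and inference calls, and repeating the pass with different random seeds yields the repeated two-fold CV quoted in the text.

```python
import numpy as np
from sklearn.model_selection import KFold

LABELS = ["teff", "logg", "moh", "aom"]

def twofold_scatter(x, y, train_fn, predict_fn, seed=0):
    """One pass of two-fold CV: train on one half, predict the held-out half,
    then report the per-label scatter (std of residuals) and bias (median residual)."""
    y_pred = np.full_like(y, np.nan, dtype=float)
    for train_idx, test_idx in KFold(n_splits=2, shuffle=True, random_state=seed).split(x):
        model = train_fn(x[train_idx], y[train_idx])
        y_pred[test_idx] = predict_fn(model, x[test_idx])
    res = y_pred - y
    return {name: {"scatter": np.std(res[:, k]), "bias": np.median(res[:, k])}
            for k, name in enumerate(LABELS)}
```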
For all labels, we define an outlier rate \(\eta\) over the full sample of size \(N\) via \[\eta=\frac{1}{N}~{}\times\mid\{\rm deviation_{i}:\mid\rm deviation_{i}\mid>1\}\mid \tag{5}\] where deviation\({}_{i}\) is the difference between the ground truth and the predicted values for the \(i^{\rm th}\) data point divided by a specified threshold for determining outliers, out_bound. The determination of the threshold for \(T_{\rm eff}\) and \(\log~{}g\) is based on residuals exceeding 15% of the ground truth values. Concerning [M/H], outliers are identified when the residuals deviate by 0.15 dex from the ground truth. Similarly, for [\(\alpha\)/M], outliers are detected when the residuals deviate by 0.05 dex from the ground truth. Remarkably, the outlier rate for the predicted \(T_{\rm eff}\) stands impressively low at 0.05%, signifying that only a minute fraction of the predicted \(T_{\rm eff}\) values deviate significantly from the ground truth values. On the other hand, the outlier rate for the predicted \(\log~{}g\) is higher at 6.67%, suggesting a relatively larger proportion of predicted \(\log~{}g\) values that fall outside the specified threshold for determining outliers. We also show the comparison color-coded by AspGap-derived uncertainties in Figure 6. We find that the AspGap-inferred labels that differ more from the true Figure 5: Validation of AspGap-derived labels from Gaia XP spectra (XP) _vs._ APOGEE labels (AP). From left to right, the panels show the results of two-fold cross-validation drawn from the training set for \(T_{\rm eff}\), \(\log~{}g\), [M/H], and [\(\alpha\)/M], respectively. The (small) scatter and (negligible) bias are indicated in each panel, and \(\eta\) is defined as the outlier rate given be Eq. 5. The pairs of the dashed lines indicate the one-sigma deviation in the difference between the compared labels. The color denotes the logarithm of the number density. Figure 6: Validation of AspGap-derived labels from Gaia XP spectra (XP) _vs._ APOGEE labels (AP). This Figure is analogous to Fig. 5, except that the color coding now denotes AspGap’s estimate of the label prediction’s uncertainty. Most objects with more discrepant AspGap _vs._ APOGEE label estimates have correctly identified larger AspGap label uncertainties. Figure 8: Fidelity of the label uncertainty estimates. The panels show the probability density distribution of the uncertainty-normalized differences (\(\chi\)) between \(\mathrm{A}\mathrm{p}\mathrm{G}\mathrm{a}\mathrm{p}\) and \(\mathrm{A}\mathrm{p}\mathrm{G}\mathrm{e}\mathrm{E}\) labels. In each panel, the blue dashed curve represents the standard normal distribution \(\mathcal{N}(0,1)\), and the orange solid line denotes the best-fitting normal distribution of \(\chi\). The formal uncertainties are meaningful – also for [\(\alpha\)/M] – but about 30% underestimated. By conducting these validations, we have access to the ground truth labels, which allows us to evaluate various performance metrics such as MAE (\(\chi\)), and others. Figure 7: Quality of the labels generated by \(\mathrm{A}\mathrm{p}\mathrm{G}\mathrm{a}\mathrm{p}\), as a function of the input XP spectras’ S/N, shown for \(T_{\mathrm{eff}}\), \(\log\ g\), [M/H], and [\(\alpha\)/M]from left to right. In all panels, the solid black curves denote the formal errors generated by \(\mathrm{A}\mathrm{p}\mathrm{G}\mathrm{a}\mathrm{p}\). 
The teal, green, orange and red lines denote the cross-validation values for different measures of precision and accuracy: the root-mean-square deviation (RMSE), the mean absolute deviation (MAE), the scatter (standard deviation), and the bias, respectively. The thin dashed line indicates the naive 1/(S/N) expectation. The top row shows this as a function of the BP S/N, the bottom row as a function of the RP S/N. The cross-validation results approximately follow the S/N-scaling expectations for \(T_{\mathrm{eff}}\), \(\log\ g\), [M/H], but less so for [\(\alpha\)/M].

Figure 9: Dependence of the uncertainties from AspGap for [M/H] (right) and [\(\alpha\)/M] (left), shown as a function of the sources' G magnitude and S/N of the RP coefficients (see Fig. 1). The color indicates the mean variance in [\(\alpha\)/M] and [M/H], while the contours represent the number density of the sample, derived using a Gaussian kernel density estimate.

APOGEE have been assigned larger AspGap uncertainties, affirming that these error estimates are meaningful. Fig. 7 further illustrates how the uncertainties change with signal-to-noise ratio (S/N), showing the formal AspGap uncertainties, as well as RMSE, MAE, scatter, and bias at different S/N. As the S/N increases, the AspGap uncertainties as well as the RMSE, MAE and scatter decrease rapidly, as expected. The scatters are \(\sim 40\) K, \(0.10\) dex, \(0.05\) dex, and \(0.02\) dex for \(T_{\rm eff}\), \(\log~{}g\), [M/H], and [\(\alpha\)/M] for an (RP-coefficient) S/N of \(\sim 800\). For the best S/N (\(>\)1500) in the RP coefficients, the scatters are \(30\) K, \(0.06\) dex, \(0.04\) dex, \(0.01\) dex for \(T_{\rm eff}\), \(\log~{}g\), [M/H] and [\(\alpha\)/M], respectively. These scatters are comparable to those published for data-driven stellar labels derived from high-resolution APOGEE spectra (Ness et al., 2015; Casey et al., 2016; Ting et al., 2019). The label uncertainties derived by AspGap (formal errors) as a function of S/N are also shown in Fig. 7. The formal errors from AspGap decrease with S/N, similar to the MAE. Beyond an S/N of \(\gtrsim 500\) the uncertainties of all labels reach a floor, indicating that they are dominated by systematic errors rather than random uncertainties. We further validated the uncertainty estimates by looking at the \(\chi^{2}\) statistic. Fig. 8 presents the uncertainty-normalized difference (\(\chi\)) between the true labels and those predicted by the model. Compared to the standard normal distribution \(\mathcal{N}(0,1)\), we find that the AspGap uncertainty for \(T_{\rm eff}\), \(\log~{}g\), and [M/H] is underestimated by about 30% if we assume that the APOGEE labels are exact. The uncertainties of the AspGap predictions are influenced by both the signal-to-noise ratio (S/N) and the \(G\)-band magnitude of the stars, as shown in Figure 9. For stars with high S/N and bright magnitudes (\(G<12\)), the uncertainties are generally low. However, it is important to note that label precision is best predicted by the spectral S/N rather than by the magnitude itself. This implies that the quality of the spectra, as indicated by the S/N, plays a crucial role in determining the accuracy of the AspGap predictions. Therefore, when aiming to select samples with small uncertainties in their AspGap predictions, it is essential to focus on ensuring a sufficient S/N rather than relying on the \(G\)-band magnitude alone.
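The two diagnostics used above, the outlier rate of Eq. (5) and the uncertainty-normalized residuals \(\chi\) of Fig. 8, reduce to a few lines of numpy; the threshold values in the usage comment follow the text, while the array names are hypothetical.

```python
import numpy as np

def outlier_rate(y_true, y_pred, out_bound):
    """Eq. (5): fraction of stars whose residual exceeds the chosen threshold."""
    deviation = (y_pred - y_true) / out_bound
    return np.mean(np.abs(deviation) > 1.0)

def chi_calibration(y_true, y_pred, sigma_true, s_model):
    """Mean and std of the uncertainty-normalized residuals; values near (0, 1)
    indicate well-calibrated uncertainties (cf. Fig. 8)."""
    chi = (y_pred - y_true) / np.sqrt(sigma_true**2 + s_model**2)
    return chi.mean(), chi.std()

# Usage sketch: 15% relative thresholds for Teff and log g, 0.15 dex for [M/H],
# and 0.05 dex for [alpha/M], e.g.
#   outlier_rate(teff_ap, teff_xp, 0.15 * teff_ap)
#   outlier_rate(moh_ap,  moh_xp,  0.15)
```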
### The Catalog of RGB Stars We now present stellar labels (\(T_{\rm eff}\), \(\log~{}g\), [M/H], and [\(\alpha\)/M]) derived in this study, and summarized a catalog (Table 2). We first use the ADQL query as presented in Appendix A to obtain \(\sim 220\) million stars with available XP data. Then we employ one of the trained models from the 2-fold training set as the inference model. The labels and their corresponding uncertainties generated by the XP decoder serve as the estimated labels provided by XP. Additionally, we exclude hot stars by selecting those with \(G_{\rm BP}-G_{\rm RP}>0.8\). Second, we follow the conditions 1-5 described in subsection 2.2 to select physically reasonable stellar labels. Third, we apply the following conditions to select a high-quality sample of approximately 28 million RGB stars, defined by: 1. \(T_{\rm eff}/\sigma_{T_{\rm eff}}>10\); 2. \(\log~{}g/\sigma_{\log~{}g}>10\); 3. \(\sigma_{[\alpha/{\rm M}]}<0.5\); 4. \(\sigma_{[{\rm M}/{\rm H}]}<1\); We note that such S/N-based sample cuts produce clean and easy to work with sub-samples, but may cause some difficulties if these cuts become part of a modelling selection function (Rix et al., 2021). As the last steps we make simple cuts in the HR (or 'Kiel') diagram, as shown in Fig. 10, to distill a relatively pure RGB sample. We adopt a pseudo-luminosity (\(L_{\rm pseudo}\)) (Anderson et al., 2018) to select giant stars in HRD, instead of absolute magnitude \(M_{G}\) to avoid losing sample members with negative parallax measurements (about \(\sim 2\%\)): \[L_{\rm pseudo}=\varpi\cdot 10^{G/5}<10^{(-0.003T_{\rm eff}+19)/5+10}, \tag{6}\] where \(L_{\rm pseudo}\) is a scaling value of luminosity that equals to \(10^{M_{G}/5+2}\). This cut is effectively equal to \(M_{G}>-0.003T_{\rm eff}+19\) for high-quality parallax measurements. This final cut leaves \(\sim 23\) million stars, which we illustrate in Fig. 11. We deem this RGB sample to be most suitable for astrophysical analyses where [\(\alpha\)/M] may play a role, and we present it as a catalog in Table.4. It can be accessed via [https://zenodo.org/record/8002699](https://zenodo.org/record/8002699). Figure 12 illustrates the larger coverage of the galactocentric Cartesian X-Y plane by the RGB catalog compared to APOGEE. The all-sky nature and sample size of our RGB catalog from Gaia XP enhances our capabilities in Galactic Archaeology, particularly when it comes to obtaining alpha abundance estimates. ### Accuracy evaluated by external stellar labels #### 4.3.1 Comparison with A23 We compared the labels provided by AspGap with those derived by Andrae et al. (2023) (A23) for 13,300,628 stars in the RGB catalog. To start, we note that both AspGap and A23 utilize XP spectra and train on APOGEE data. However, they use quite different approaches, as A23 employs a direct discriminative machine learning trained on APOGEE. As shown in Fig. 13, the scatter between AspGap and A23 for \(T_{\rm eff}\), \(\log~{}g\), and [M/H] were found to be 73 K, 0.2 dex, and 0.1 dex, respectively, for all the cross-matched stars, with nearly no biases for the labels from the two catalogs, only 7 K, 0.02 dex, and 0 dex. The higher quality labels given by AspGap, with smaller than 50 K, \(\sigma_{\rm log~{}g}\) less than 0.1 dex, and \(\sigma_{\rm[M/H]}<0.1\) dex, were also shown in Fig.13. We find that the scatter in \(T_{\rm eff}\), \(\log~{}g\), and [M/H]decreases to 45 K, 0.12 dex, and 0.07 dex. 
The [M/H]\({}_{\rm XP}\) values are systematically higher than those of [A23] in the metal-poor regime ([M/H]\(<-2\)). This is due to a limitation of our training sample: the majority of the APOGEE [M/H] labels used for training are larger than -2 dex, whereas [A23] add a very-metal-poor training sample [Li23] to improve the estimation in the metal-poor regime. To calculate the combined uncertainty when comparing the two catalogs, we can simply use \(\delta_{\rm comb}=\sqrt{\delta_{1}^{2}+\delta_{2}^{2}}\), where \(\delta_{1}\) and \(\delta_{2}\) are the uncertainties of the two catalogs. Given the typical errors of 50 K, 0.12 dex, and 0.07 dex for \(T_{\rm eff}\), \(\log~{}g\), and [M/H] for AspGap, and 50 K, 0.08 dex and 0.1 dex for [A23], the combined uncertainties for the three labels (\(T_{\rm eff}\), \(\log~{}g\), and [M/H]) are 70 K, 0.14 dex and 0.12 dex, assuming the errors are uncorrelated. These rough estimates are consistent with the comparison displayed in Fig. 13 for \(T_{\rm eff}\) and [M/H]; the scatter in \(\log~{}g\) between [A23] and AspGap is larger by 0.06 dex. After the AspGap quality cut, the scatters are nearly the same as the self-validation values reported in subsection 4.1. We find that 88% of the [A23] stars are contained in our catalog, which implies that much of the _RGB_ sample of [A23] also overlaps. However, it should be noted that our catalog is based only on Gaia DR3 data, while the parent catalog of [A23] includes the Gaia DR2, 2MASS, and ALLWISE surveys, resulting in a smaller parent catalog size compared to the AspGap catalog of approximately 23 million stars, mostly caused by ALLWISE incompleteness. It is plausible that the main difference in catalog content between A23 and this work is attributable to the S/N cuts we used for selecting stars.

Figure 10: Definition of the _RGB sample_, for which the AspGap label estimates are most precise and robust. The sample cuts are illustrated in the H-R diagram (color-coded by the number density of the Gaia XP sample) with two conditions \(T_{\rm eff}<5300\,K\) and \(\varpi\cdot 10^{G/5}<10^{(-0.003T_{\rm eff}+19)/5+10}\), where the pseudo-luminosity (left-hand side) in the second cut was chosen to remain well defined even for \(\varpi\leq 0\). They lead to a sample of 23 million objects.

#### 4.3.2 Comparison with LAMOST-LRS

For an independent validation with R\(\gtrsim\) 2000 spectroscopy, we use datasets beyond the APOGEE survey to verify the performance of our results. In Figure 14, we compare the labels derived by AspGap with those for red giant branch (RGB) stars from the LAMOST Low-Resolution (\(\mathcal{R}\sim 1800\)) Spectroscopic Survey (LRS) DR5 (Zhang et al., 2020), obtained using the Stellar Label Machine (SLAM). The labels of the LAMOST RGB stars are trained by transferring labels from approximately 90,000 stars in common between LAMOST DR 5 and APOGEE DR 15. We first select stars with S/N\(>\) 100 (in the SDSS \(g\)-band) from the SLAM catalog2, with the flag set to K giant, resulting in 142,608 stars. The S/N cut ensures that the errors of the SLAM labels are less than \(\sim\) 50 K, 0.1 dex, 0.037 dex, and 0.026 dex for \(T_{\rm eff}\), \(\log~{}g\), [M/H], and [\(\alpha\)/M], respectively (Zhang et al., 2020). We then cross-matched the selected LAMOST K giant stars with our catalog, resulting in 116,061 common stars. The comparisons between AspGap labels from Gaia XP and SLAM labels from LAMOST are shown in Fig. 14. Generally, the differences between the AspGap labels and LAMOST are similar to the validation results shown in Fig. 5.
For \(T_{\rm eff}\), the difference is larger at the edge of the label range, i.e., \(T_{\rm eff}<4200\) and \(T_{\rm eff}>5000\). Similarly, the scatter of \(\log~{}g\) is higher in the range of \(\log~{}g\)\(<1.5\). One reason for this could be that the K-giant stars from the LAMOST catalog are selected using a hard cut on \(T_{\rm eff}\) and \(\log~{}g\) (Fig. 6 in Zhang et al., 2020), which may erroneously include main-sequence stars mixed with RGBs. To validate our conjecture, we further cut the sample with \(T_{\rm eff}\) difference less than 200K, the number of outliers is 2,006 outliers. We find that the number of stars with a large [M/H] difference from LAMOST decreased after the cut. Our comparison with LAMOST illustrates the remarkable consistency of the AspGap labels with the literature, even outside of APOGEE observation. However, we should note that both AspGap and SLAM are limited in its truncation in the APOGEE training dataset. Footnote 2: [https://github.com/hypergravity/paperdata](https://github.com/hypergravity/paperdata) For [M/H] and [\(\alpha\)/M], we find some spurious differences in the metal-poor regime ([M/H]\(<-1\)). We suspect that the AspGap-derived [M/H] suffers from sig Figure 11: Summary of the stellar labels derived via AspGap for the \(\sim\) 23 million members of the RGB catalog (see Fig. 10 Left: \(T_{\rm eff}\)–\(\log~{}g\) diagram color coded by [M/H]. Right: [M/H]-[\(\alpha\)/M] abundance diagnostic diagram color coded by logarithmic density. nificant errors for metal-poor stars, which is consistent with our findings in the validation results shown in Fig. 11. Additionally, we find large discrepancies for [\(\alpha\)/M]\(>\) 0.2, but that might be a result of large errors in [M/H] for metal-poor stars, where [M/H] and [\(\alpha\)/M] are strongly degenerate in this challenging regime for abundance estimation. Although the scatters of [M/H] between AspGap and SLAM are only 0.07 dex, however, there is a 0.06 dex bias between AspGap and SLAM. The bias might come from the updated pipeline between APOGEE DR 17 and APOGEE DR 14. We found similar bias for [\(\alpha\)/M], the bias between AspGap and SLAM is -0.03 dex. #### 4.3.3 Comparison with LAMOST-MRS We further compare our results with Cycle-StarNet from Wang et al. (2023), which utilized MARCS model atmospheric theoretical synthetic spectra combined with a domain-adaptation method to estimate the fundamental stellar parameters (\(T_{\rm eff}\), \(\log\ g\), [Fe/H]) and 11 chemical abundances for 1.38 million stars from the Medium-Resolution (\(\mathcal{R}\)\(\sim\) 6500) Spectroscopic Survey (MRS) in LAMOST-II DR8. Footnote 8: [https://nadc.china-vo.org/res/r101242](https://nadc.china-vo.org/res/r101242) To perform our comparison, we cross-match our RGB catalog with the dataset from Wang et al. (2023) and obtain a sample of 301,478 stars. We then apply specific selection criteria based on the flags provided by Wang et al. (2023), namely Flag_Teff = 0, Flag_logg = 0, Flag_FeH = 0, Flag_MgFe = 0, SN_blue > 100, SN_red > 100, and constrain the errors of [Fe/H] and [Mg/Fe] to be smaller than 0.05 dex and 0.04 dex, respectively. After applying these data quality cuts, we obtain a final sample of 40,035 stars for our comparative analysis as shown in Fig. 14. As depicted in Figure 14, the comparison results with LAMOST MRS exhibit similarities to those obtained from LAMOST LRS, despite the application of different methods and spectra. 
Regarding the effective temperature (\(T_{\rm eff}\)), the scatter shows a marginal difference, attributed to a slightly larger bias ranging from 0 K to 83 K. In terms of \(\log\ g\), the scatter is slightly larger, measuring up to 0.17 dex. For metallicity ([M/H]), the scatter remains comparable to that found in the LAMOST LRS comparison; however, there appears to be a smaller representation of metal-poor stars ([M/H]\(<-1\)) in the LAMOST MRS sample. Since Wang et al. (2023) do not provide an overall \(\alpha\) abundance, we compare the AspGap-derived [\(\alpha\)/M] with [Mg/Fe] as a reference. We find that the scatter is similar to that obtained for SLAM. In summary, we find good agreement between \(T_{\rm eff}\), \(\log\ g\), [M/H], and [\(\alpha\)/M] derived from AspGap and those obtained from LAMOST MRS.

#### 4.3.4 Comparison with GALAH

We further conducted a comparison with the results obtained by Buder et al. (2021). GALAH DR3 consists of 768,423 high-resolution (R \(\sim\) 28,000) optical spectra obtained from 342,682 stars.

Figure 12: The left panel shows the 2D distribution of the median galactocentric Cartesian X position _vs_ Cartesian Y position for Gaia XP. On the right panel, a similar 2D distribution is displayed for APOGEE data. By utilizing the sample provided by XP, we can assess whether the larger and all-sky Gaia XP sample covers the same physical extent as the APOGEE data.

The stellar parameters in GALAH DR3 were estimated using the Spectroscopy Made Easy (SME) model-driven approach in combination with 1D MARCS model atmospheres. Additionally, Buder et al. (2021) incorporated astrometric information from Gaia DR2 and photometric data from 2MASS to mitigate spectroscopic degeneracies, accounting for LTE/non-LTE effects in their computations. To perform a comparative analysis, we cross-matched our results with GALAH DR3 and identified 13,504 stars with corresponding stellar parameters. Quality flags were applied to ensure the reliability of the GALAH labels, including flag_sp=0, flag_fe_h=0, flag_guess=0, and red_flag=0. Moreover, we imposed constraints on the errors of \(T_{\rm eff}\), \(\log~{}g\), [Fe/H], and [\(\alpha\)/Fe] given by GALAH DR3, setting them to be smaller than 200 K, 0.25 dex, 0.2 dex, and 0.05 dex, respectively. After applying these quality criteria, we obtained a subset of 13,504 stars for the comparative analysis as displayed in Fig. 14. The comparison between AspGap and GALAH reveals a consistent pattern, although the level of consistency is relatively weaker compared to A23, LAMOST LRS, and LAMOST MRS. The scatter values for \(T_{\rm eff}\), \(\log~{}g\), [M/H], and [\(\alpha\)/M] amount to 77 K, 0.18 dex, 0.13 dex, and 0.07 dex, respectively. In terms of bias, the four labels show a slight deviation, with a bias of -30 K for effective temperature (\(T_{\rm eff}\)). A comparative analysis of the label consistency among various surveys is presented in Wang et al. (2023). LAMOST MRS and APOGEE exhibit the highest level of consistency in their labels, followed by GALAH. The SLAM labels (derived from LAMOST LRS) are trained using labels from APOGEE DR14, hence the expected consistency between them. In conclusion, we performed independent validation by comparing the labels derived by AspGap with the LAMOST Low-Resolution Spectroscopic Survey (LRS), the Medium-Resolution Spectroscopic Survey (MRS) datasets, and GALAH DR3.
The comparison results demonstrate the remarkable consistency of the AspGap labels with the literature, indicating the accuracy and reliability of the AspGap model in providing stellar labels. However, it is important to note that AspGap is limited by its training dataset, which is restricted to the truncated APOGEE label range.

\begin{table} \begin{tabular}{l l} \hline **Column Name** & **Description (Units)** \\ \hline source\_id & Unique identifier for star \\ ra & Right Ascension (deg) \\ dec & Declination (deg) \\ teff\_xp & Effective temperature (K) from AspGap \\ logg\_xp & Surface gravity (dex) from AspGap \\ moh\_xp & Metallicity (dex) from AspGap \\ aom\_xp & \(\alpha\)-element abundance (dex) from AspGap \\ e\_teff\_xp & Error in effective temperature (K) from AspGap \\ e\_logg\_xp & Error in surface gravity (dex) from AspGap \\ e\_moh\_xp & Error in metallicity (dex) from AspGap \\ e\_aom\_xp & Error in \(\alpha\)-element abundance (dex) from AspGap \\ snr\_rp & Signal-to-noise ratio in \(G_{RP}\) \\ l & Galactic longitude (deg) \\ b & Galactic latitude (deg) \\ parallax & Parallax (mas) \\ parallax\_error & Parallax uncertainty (mas) \\ pmra & Proper motion in RA (mas/yr) \\ pmra\_error & Proper motion uncertainty in RA (mas/yr) \\ pmdec & Proper motion in Dec (mas/yr) \\ pmdec\_error & Proper motion uncertainty in Dec (mas/yr) \\ ruwe & Renormalized unit weight error \\ phot\_g\_mean\_mag & Mean apparent magnitude in \(G\) band \\ phot\_bp\_mean\_mag & Mean apparent magnitude in \(BP\) band \\ phot\_rp\_mean\_mag & Mean apparent magnitude in \(RP\) band \\ bp\_rp & Color index \\ radial\_velocity & Radial velocity (km s\({}^{-1}\)) \\ radial\_velocity\_error & Radial velocity uncertainty (km s\({}^{-1}\)) \\ \hline \end{tabular} \end{table} Table 2: Column descriptions for the catalog of \(\sim 23\) million RGB stars predicted by AspGap.

### The accuracy of metallicity assessed from clusters

To assess the accuracy of the abundances estimated in our work, we explore the abundances derived with AspGap for stars in open clusters. Stars in open clusters serve as benchmark stars because each cluster is approximately chemically homogeneous. In Figure 15, we compare [M/H] from AspGap with the literature values for 67 known open clusters (Donor et al., 2020). The differences between the AspGap and the literature values, as well as between the APOGEE [M/H] and the literature values, are shown in Fig. 15. We find that although the deviations of the AspGap estimates are larger than those of APOGEE, 66 out of 67 clusters have AspGap [M/H] within 0.2 dex. However, for the 12 selected clusters with more than three member stars, the deviation of the AspGap-derived [M/H] from the literature values is within 0.07 dex, as shown in Figure 15. The deviations of the AspGap estimates show no metallicity dependence. Testing AspGap on open clusters shows that the error of the [M/H] estimates is dominated by random error when compared to APOGEE abundances.

### Verification of the [\(\alpha\)/M]-abundances

The task of determining the [\(\alpha\)/\(M\)]-abundance is challenging for low-resolution spectra like Gaia XP. This is due to the potential degeneracy between the [\(\alpha\)/M]-abundance and other parameters, such as the "metallicity" ([M/H]), as shown in studies by Ting et al. (2017); Gavel et al. (2021). In this section, we evaluate the precision and accuracy of our [\(\alpha\)/\(M\)]-abundance estimates, thereby validating the [\(\alpha\)/M] derived from AspGap.
We conduct three main tests to validate the [\(\alpha\)/\(M\)]-abundance prediction. These tests in particular include circumstances where we know how completely independent properties of the stars (such as their positions or velocities) correlate with \([\alpha/M]\); we then check whether these correlations are seen at the expected level.

Figure 13: Comparison of the stellar labels \(T_{\rm eff}\), \(\log\ g\), and [M/H] (left to right), derived from essentially the same XP data but with two different approaches, AspGap and XGBoost as implemented by A23 (Andrae et al., 2023). (Note that A23 also used band-pass magnitudes derived from spectra and additional external photometry, such as WISE.) The figures are color-coded by the logarithmic sample number density. The first row of figures represents the comparison for the entire cross-matched RGB sample, consisting of 10,825,736 stars. In the second row, the comparison is restricted to the subset of stars with a signal-to-noise ratio (S/N) of the RP coefficients larger than 1,000, comprising 917,921 stars.

First, we examine how well AspGap can distinguish between different chemical components of the disk, specifically the so-called low-\(\alpha\) and high-\(\alpha\) disk, within the same range of [M/H]. This analysis is discussed in detail in subsection 4.5.1. Then we explore the relationship between the orbit dynamics and the [\(\alpha\)/M]-abundance for disk stars: do high-\(\alpha\) stars (at a given [M/H]) form a hotter disk than low-\(\alpha\) stars of the same [M/H]? For a more detailed discussion, please refer to Subsection 4.5.2. While our comparison with LAMOST and GALAH data demonstrates agreement between the AspGap-derived \([\alpha/M]\)-abundance and these surveys, it is important to note that the majority of the LAMOST and GALAH samples consist of disk stars with [M/H] \(>-0.8\), leaving open the question of whether our [\(\alpha\)/M] estimates can differentiate stellar populations also in the metal-poor regime. To offer further independent validation, we specifically chose stars from halo substructures and the Large Magellanic Cloud (LMC), which exhibit distinct star formation histories in comparison to the disk stars within the Milky Way. The validation process and the corresponding results will be thoroughly discussed in Subsection 4.5.3.

#### 4.5.1 Assessing accuracy through the bimodality of disk stars

The well-established bimodality of the \([\alpha/M]\)-abundance in disk stars serves as the initial validation of our ability to separate chemically low-\(\alpha\) (or "thin disk") from high-\(\alpha\) (or "thick disk") stars. To accomplish this, we employ a two-fold validation approach using AspGap labels.

Figure 14: Comparison between AspGap-derived labels and those from three different ground-based survey datasets: LAMOST-LRS, LAMOST-MRS, and GALAH. Each row corresponds to a specific parameter: \(T_{\rm eff}\), \(\log\ g\), [M/H], and [\(\alpha\)/M]. The top panel represents the comparison with Gaia XP (AspGap) and LAMOST-LRS (SLAM), color-coded by the number density. The middle panel showcases the comparison with the LAMOST-MRS dataset (Cycle-StarNet), and the bottom panel depicts the comparison with GALAH DR3. In each panel, the scatter and the median bias for each stellar label are marked. The black dotted line represents the one-to-one line, reflecting perfect agreement between the surveys. The pairs of grey dashed lines in the plot represent the deviation of one sigma in the difference between the compared labels.
Specifically, we compare a selected sample of disk stars with \(-0.9<\)[M/H]\(<0\) against the APOGEE training labels to evaluate the effectiveness of differentiating between low-alpha and high-alpha disk populations, the details can be found in Appendix B. As shown in Table 3, for the low-alpha disk, we find that 97% of the XP-identified group corresponds to the low-alpha class according to APOGEE labels, while 96% of the entire low-alpha disk sample identified by APOGEE is correctly classified by XP. Similarly, for the high-alpha disk, 93% of the XP-identified group represents the high-alpha class, and 94% of the complete high-alpha sample identified by APOGEE is accurately recognized by XP. These validation results demonstrate that the AspGap-[\(\alpha\)/M] exhibits a high level of precision and recall in distinguishing between low-alpha and high-alpha disk populations. Further details can be found in Appendix B. #### 4.5.2 [\(\alpha\)/M] validation via orbit dynamics From the RGB catalog compiled from GaiaXP, we focused on stars within two fixed [M/H] groups: \(-0.7<\) [M/H] \(<-0.5\) and \(-0.5<\) [M/H] \(<-0.3\). We cross-matched these groups with the sample from Kordopatis et al. (2023) to obtain orbital parameters based on Gaia DR3. We obtained a total of 960,661 and 1,644,704 stars in the two [M/H] groups, respectively. Figure 16 illustrates a clear trend: within the range \(-0.8<\) [M/H] \(<-0.3\), the "high-\(\alpha\) disk" is vertically hotter (higher vertical action, \(J_{z}\)), has older age, and lower angular momentum (\(L_{z}\)), while the "low-\(\alpha\) disk" is vertically cooler (lower \(J_{z}\)), has younger ages, and higher angular momentum. For reference, we overlaid the analogous APOGEE results in Figure 16. In general, Gaia XP and APOGEE exhibit similar trends on the [\(\alpha\)/M]-\(J_{z}\) diagram, indicating a consistent relationship between \(\alpha\)-abundance and vertical motion. Gaia XP also allows for a distinction between different [M/H] groups. On the \(\alpha\)-abundance - \(L_{z}\) panel, APOGEE consistently yields higher values compared to Gaia XP. This offset may be attributed to a selection effect, as Gaia XP contains a larger number of stars in the inner disk with lower values of angular momentum (\(L_{z}\)). Furthermore, we compared the distributions of \(\alpha\)-abundance and Galactocentric Cartesian X positions Figure 15: Comparison between literature [Fe/H], AspGap, and APOGEE DR16 [M/H] for open clusters. We compare the difference between AspGap [M/H] and [Fe/H] in Donor et al. (2020). The blue circles represent the AspGap [M/H], and the red triangles represent the APOGEE [M/H], with 1\(\sigma\) as the error bar. The metallicity difference \(-0.2<\Delta\)[M/H]\(<0.2\) is shown by the gray shaded areas. All cluster samples cross-matched with literature are shown in the left panel, and 12 selected clusters with more than five identified members in APOGEE are shown in the right panel. The marker size indicates the number of members in each cluster. \begin{table} \begin{tabular}{c c c c} \hline \hline & Class & Precision & Recall \\ \hline low-[\(\alpha\)/M] & disk & 0.97 & 0.96 \\ high-[\(\alpha\)/M] & disk & 0.93 & 0.94 \\ \hline \hline \end{tabular} \end{table} Table 3: Precision and recall for low-[\(\alpha\)/M] disk and high-[\(\alpha\)/M] disk between Gaia XP and APOGEE for the group with \(-0.5<[\mathrm{M}/\mathrm{H}]<-0.3\). 
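For reference, the precision and recall quoted in Table 3 can be computed with a few lines of numpy once the dividing line between the low-\(\alpha\) and high-\(\alpha\) sequences has been chosen; the divide callable below is a placeholder for the dividing line defined in Appendix B, and the array names are hypothetical.

```python
import numpy as np

def is_high_alpha(moh, aom, divide):
    """True for stars above the dividing line [alpha/M] = divide([M/H])."""
    return aom > divide(moh)

def precision_recall(pred_high, true_high):
    """Precision and recall of the XP-based class against the APOGEE class."""
    tp = np.sum(pred_high & true_high)
    precision = tp / max(np.sum(pred_high), 1)
    recall = tp / max(np.sum(true_high), 1)
    return precision, recall

# Usage sketch, for the -0.5 < [M/H] < -0.3 disk sample:
#   xp_high = is_high_alpha(moh_xp, aom_xp, divide)
#   ap_high = is_high_alpha(moh_ap, aom_ap, divide)
#   p_high, r_high = precision_recall(xp_high, ap_high)    # high-alpha row of Table 3
#   p_low,  r_low  = precision_recall(~xp_high, ~ap_high)  # low-alpha row of Table 3
```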
We observed that the sampling in Gaia XP is more uniform across galactocentric X positions, particularly for stars in the inner disk, which have lower angular momentum (\(L_{z}\)). This finding further supports our interpretation of the \(L_{z}\) offset as a selection effect. Overall, the results confirm _astrophysically_ that our \(\alpha\)-abundance estimates from Gaia XP are meaningful: they can clearly identify the different dynamical properties of stars with different [\(\alpha\)/M] at a given [M/H], revealing known structural properties of the disk.

#### 4.5.3 [\(\alpha\)/M] validation using the "Gaia-Enceladus/Sausage" and LMC stellar population properties

As noted above, the LAMOST and GALAH samples used for comparison consist mostly of disk stars with [Fe/H] \(>-0.8\). To offer further independent validation, we therefore choose stars from halo substructures and the Large Magellanic Cloud (LMC), which exhibit distinct star formation histories compared to the disk stars within the Milky Way. Verifying the accuracy of the [\(\alpha\)/M] measurements in such extreme environments, the Gaia-Enceladus/Sausage (GES) and the LMC, holds significant importance. These regions are unique testing grounds for our [\(\alpha\)/M] estimates, as they host stellar populations distinct from the disk that dominates our training set. GES stars are uncommon halo stars beyond the range of the thin- and thick-disk stars that dominate the training sample. Similarly, LMC stars are positioned at significantly greater distances than most training examples. Moreover, general associations observed in typical Milky Way stars, such as a tendency for more distant stars to exhibit higher [\(\alpha\)/M], cannot be presumed to apply to these populations. The GES and LMC stars therefore present novel challenges to the model and take it beyond the standard training data. If AspGap is still able to recover accurate [\(\alpha\)/M] values for these outliers, it presents compelling evidence that the actual abundance is being measured rather than merely inferred from bulk correlations in the training set. These tests are therefore pivotal in demonstrating that the XP spectra constrain elemental abundances rather than simply correlated labels.

Figure 16: Astrophysical validation of AspGap's [\(\alpha\)/M] determinations, based on the fact that – at a given [M/H] – the \(\alpha\)-enhanced (or 'thick') disk has more vertical motions (e.g. Bovy et al., 2012). The left panel illustrates the relationship between the mean root-square vertical action \(J_{z}\) (quantifying the vertical kinematics) and the AspGap-derived [\(\alpha\)/M] in two narrow bins of [M/H], \(-0.7<\)[M/H]\(<-0.5\) (red) and \(-0.5<\)[M/H]\(<-0.3\) (blue). The solid lines with circles show the results for the AspGap labels, while the dashed lines with triangles represent those from APOGEE for reference. The AspGap results show the expected trend that agrees quite closely with that seen in APOGEE data; subtle offsets may simply reflect the different spatial selection function of the sample. The right panel shows a related astrophysical validation test, based on the known fact that – again at a given [M/H] – the \(\alpha\)-enhanced disk is much more centrally concentrated (e.g. Bovy et al., 2016).
The panel displays the mean angular momentum \(L_{z}\) as a function of [\(\alpha\)/M] in the same two [M/H]-bins, again showing the expected trend and good agreement with APOGEE. Figure 17: Three astrophysical validation tests for the quality of the AspGap-based [\(\alpha\)/M] estimates in the metal-poor regime. All three validations are based on the idea that in some regimes the spatial or kinematic selection of stars leads to clear (externally derived) expectations for \(p\)([\(\alpha\)/M]) in a given [M/H]-regime. _Top left_: Validation using the _Poor Old Heart_ of the Milky Way (Rix et al., 2022), showing the fraction of high-[\(\alpha\)/M]\({}_{\rm XP}\) stars as a function of \(R_{\rm apo}\) and eccentricity for stars with [M/H]\({}_{\rm XP}<-1.1\). At high eccentricities (\(>\)0.75)and large \(R_{\rm apo}\) (\(>\) 10 kpc) the population should be dominated by the Gaia-Enceladus/Sausage (GSE) population, known to be low-\(\alpha\)(Helmi et al., 2018; Hasselquist et al., 2021) ; this is exactly what our [\(\alpha\)/M]\({}_{\rm XP}\) values show. _Top right_: Distribution of stars in the [M/H]-[\(\alpha\)/M]plane that have been selected (_purely kinematically_) as likely GSE members, expected to lie below the (black dashed) high-\(\alpha\)_vs._ low-\(\alpha\) dividing line in the diagram. The stars with AspGap-determined [M/H] and [\(\alpha\)/M] labels (colored density) lie in the expected position, and also agree with APOGEE(green points); this latter coincidence is not a trivial consequence of the training, given that these stars are a tiny and unusual subsample. _Bottom_: Validation in LMC, using a similar [M/H]-[\(\alpha\)/M]diagonal diagram as the top left panel. The stars with AspGap-determined labels (colored density) lie at low [\(\alpha\)/M], as expected for the LMC (Russell & Dopita, 1992); they again agree with the APOGEE labels for this peculiar subset. First, we select high-confidence GES member stars on the basis of their radial orbits. Specifically, we select stars with \(E_{\rm tot}>-1.2\times 10^{5}\,\rm km^{2}\,s^{-2}\) and \(|L_{Z}|<0.5\times 10^{3}\,\rm kpc\,km\,s^{-1}\), which should confidently exclude the disk and select the most energetic and radial GES stars. This selection is motivated by the expectation that GES stars, which were formed in the smaller potential of a former dwarf galaxy, would exhibit lower alpha abundances at fixed [M/H] (e.g., Hasselquist et al., 2021.) We apply further cuts to the sample, requiring \(\sigma_{\rm[M/H]}\)\(<0.1\), \(\sigma_{\rm[\alpha/M]}\)\(<0.05\), and \(\log~{}g\)\(<2\) to ensure good quality labels. The results for this sample obtained from AspGap is depicted in Fig. 17, with APOGEE labels overplotted as reference. This Figure shows that the vast majority of the stars kinematically selected as GES members, indeed lie below the diagonal line, where high-resolution studies and APOGEE analyses expect them to reside.This represents a confident recognition of the [\(\alpha\)/M] values derived by AspGap for these stars. We also find that there are a few stars that exhibit chemical characteristics similar to those of the high-alpha disk (above the diagonal line) in both the APOGEE and Gaia XP. This phenomenon can be attributed to the dynamical selection of GES stars based on the energy-angular momentum plane, which results in a purity of approximately 24% and completeness of 41% as demonstrated in simulations (Carrillo et al., 2023). I.e. 
the stars' location in the [\(\alpha\)/M]–[M/H] plane may reflect merely the limitations of the kinematic selections. Second, we cross-matched our AspGap catalog with the catalog of Rix et al. (2022) to analyze the [\(\alpha\)/M] abundances of stars in the _Poor Old Heart_ of the Milky Way. This cross-matching revealed a common sample of 1,144,026 stars. For the analysis of metal-poor stars in the inner disk, we specifically selected 92,975 stars with [M/H]\(<-1.1\) based on the results of Rix et al. (2022). We anticipated observing two distinct regimes within this metal-poor sample (Rix et al., 2022). The first regime consists of the tightly bound metal-poor stars that have a broad eccentricity distribution ranging from 0.1 to 0.8 (i.e. are approximately isotropic). These stars are members of the _Poor Old Heart_. Our AspGap estimates show them to be predominantly high-[\(\alpha\)/M] stars, as expected. The other regime is that of the loosely bound, radially anisotropic stars (with eccentricities greater than 0.75 and apocenter radii \(R_{\rm apo}\) exceeding 10 kpc): these represent the pericenter members of GSE. Consequently, we expect them to be metal-poor, low-[\(\alpha\)/M] stars. Both aspects are affirmed by the top-left panel of Figure 17, where we plot eccentricity versus \(R_{\rm apo}\) with the color-coding representing the AspGap-derived [\(\alpha\)/M]\({}_{\rm XP}\) values. Notably, we observed that the stars classified as non-isotropic displayed slightly lower [\(\alpha\)/M]\({}_{\rm XP}\) values compared to the isotropic stars, as shown in the top-left panel of Figure 17. The color bar accompanying the plot indicates the number ratio of high-[\(\alpha\)/M]\({}_{\rm XP}\) stars to low-[\(\alpha\)/M]\({}_{\rm XP}\) stars, with high-[\(\alpha\)/M]\({}_{\rm XP}\) stars defined as those above the diagonal dashed line and low-[\(\alpha\)/M]\({}_{\rm XP}\) stars below it. The results depicted in Figure 17 provide additional evidence supporting the reliability of the AspGap-derived [\(\alpha\)/M]\({}_{\rm XP}\) values. We observe that the non-isotropic stars, which are likely associated with the debris from the Gaia-Enceladus/Sausage (GES) merger, exhibit relatively lower [\(\alpha\)/M]\({}_{\rm XP}\) values. This chemical behavior is consistent with the expected signature of the GES debris, further strengthening confidence that [\(\alpha\)/M]\({}_{\rm XP}\) accurately determines \(\alpha\)-element abundances. Finally, we validated the [\(\alpha\)/M] predictions in the Large Magellanic Cloud (LMC), whose population is fairly metal-poor with exceptionally low [\(\alpha\)/M]. We cross-matched our sample with the catalog of Jimenez-Arranz et al. (2023), who employed a supervised neural network classifier to distinguish LMC stars from foreground Milky Way stars based on Gaia DR3 kinematics data. From the common sample of 92,063 stars with high member probability \(>0.9\), we selected a subset of 77,277 stars with precise label estimates (\(\sigma_{\rm[M/H]}\)\(<0.1\), \(\sigma_{\rm[\alpha/M]}\)\(<0.05\), and \(\log~{}g\)\(<2\)). We also made sure to include 1,822 stars from APOGEE DR17. The comparison of high-probability LMC member stars with AspGap [\(\alpha\)/M]\({}_{\rm XP}\) predictions and APOGEE [\(\alpha\)/M] measurements in Figure 17 yields consistent results. The [\(\alpha\)/M] predictions from AspGap align with the expected values for the LMC (Russell & Dopita, 1992) and are in agreement with the measurements obtained by APOGEE.
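The kinematic and quality cuts used in these halo and LMC tests are simple boolean masks; a sketch is given below, with the numerical values taken from the text and the array names assumed.

```python
import numpy as np

def ges_kinematic_members(e_tot, l_z):
    """Likely Gaia-Enceladus/Sausage members: highly energetic, radial orbits
    (e_tot in km^2 s^-2, l_z in kpc km s^-1)."""
    return (e_tot > -1.2e5) & (np.abs(l_z) < 0.5e3)

def good_giant_labels(e_moh, e_aom, logg):
    """Label-quality cuts applied before the abundance comparisons."""
    return (e_moh < 0.1) & (e_aom < 0.05) & (logg < 2.0)
```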
These agreements provide further confirmation of the accuracy of the [\(\alpha\)/M] predictions, even in challenging validation scenarios.

## 5 Discussion and Conclusions

### Caveats

We have demonstrated how well the low-resolution XP spectra can be used to predict stellar labels, now also including [\(\alpha\)/M]. However, our model is based, inevitably, on a series of assumptions, and we remind the reader here of the caveats when using our catalog.

* Our approach is based on the assumption that the low-resolution spectra are single-star spectra set by only four labels, \(T_{\rm eff}\), \(\log~{}g\), [M/H], and [\(\alpha\)/M]. We presume stellar rotation and detailed chemical abundance variations to be negligible, and we treat interstellar extinction as a nuisance parameter. To break the degeneracy between temperature and extinction, additional infrared photometry, such as 2MASS and ALLWISE (Andrae et al., 2023), can be incorporated. However, our main objective is to create an RGB catalog solely from Gaia data. Consequently, in regions with high extinction, we may encounter challenges in accurately estimating stellar labels. Although Rix et al. (2022) have shown that the [M/H] estimation remains unbiased even in the presence of significant extinction (e.g., \(A_{V}=3\)), we acknowledge the potential limitations of our stellar label estimates in high-extinction regions. In Appendix C, we present a comparison with the results of Andrae et al. (2023) in relatively low-extinction regions characterized by Galactic latitudes \(|b|>30\), as well as high-extinction regions with \(|b|<10\). We find no systematic offset between the two Galactic latitude groups, indicating that any discrepancies in label estimation primarily arise from the increased scatter caused by extinction. Extinction introduces a latent variable that is intertwined with the spectra and that we neglect. Unlike approaches that derive stellar labels through direct forward modeling, where extinction must be treated as an explicit parameter, AspGap simplifies the model by ignoring extinction, since it does not significantly influence our conclusions, as demonstrated in the validation process. We refer users whose scientific focus revolves around the extinction of Gaia XP sources to the work of Zhang et al. (2023) and related references there.

* We make the assumption that the training data, obtained from the overlap of the Gaia and APOGEE datasets, constitutes a representative and sufficiently diverse sample of stars. Our main focus is on deriving accurate stellar parameters and abundances for red giant branch (RGB) stars, which are well-covered by APOGEE observations. To ensure the purity of the RGB catalog, we exclude white dwarfs and hot stars from our training set. Although this exclusion is expected to have minimal impact on the training process, there is a possibility that some hot stars may be inadvertently included in the RGB catalog if they fall within the predefined boundaries. However, we have taken precautionary measures by examining the \(G_{\rm BP}-G_{\rm RP}\) color index for the RGB catalog, and the fraction of misclassified hot stars is found to be negligible.

* We restrict our analysis to stars with [M/H] values larger than -2. This decision is based on the limited coverage of the APOGEE training sample in terms of [M/H], which excludes very metal-poor stars.
The fraction of stars in the inner disk with [M/H] values below -2 is very small, approximately 0.003% (Rix et al., 2022). If the analysis requires a focus on very metal-poor stars with [M/H]\(<-2\), we recommend referring to works such as (Li et al., 2022; Yao et al., 2023), which specifically address the very metal-poor population. * Our assumption of all sources being single stars overlooks the presence of a significant fraction of stars in binary systems. While our model can accurately predict the labels for the primary star in binaries with large mass (and light) ratios, the performance may be impacted for systems with close to equal mass ratios. This limitation is inherent in many data-driven methods. A possible strategy to tackle this issue is to explicitly consider each spectrum as a binary system. This approach involves comparing the goodness-of-fit, such as the reduced \(\chi^{2}\), between single-star and binary solutions for each spectrum (El-Badry et al., 2018). By doing so, it becomes feasible to identify potential binary systems (Niu et al. _in preparation_). This avenue holds promise for further refining our understanding of binaries within the XP sample. * For metal-poor ([M/H]\(<-1\)) stars, the [\(\alpha\)/M] uncertainties are relatively higher for stars with higher surface gravity (\(\log~{}g\)\(>2.5\)) compared to metal-rich stars. This is especially true for \(\alpha\)-poor stars, as observed in the validation process described in Subsection 4.5.3. This could be attributed to the fact that high \(\log~{}g\) values may result in pressure-broadened wings of strong metal lines (e.g., Mg I and Ca I, see Gray 2008 for details). In addition to line broadening effects, the inherent weak metal features in metal-poor stars can pose challenges in accurately estimating [\(\alpha\)/M], given the existing correlation between [\(\alpha\)/M] and [M/H]. For studies focusing on metal-poor stars, we recommend selecting stars with \(\log~{}g\) values lower than 2.5 and carefully considering the error estimation provided by AspGap to ensure the accuracy and validity of the derived parameters. ### Conclusion This paper presents the AspGap model, a data-driven approach for performing non-linear regressions that estimate stellar labels from the low-resolution spectroscopic data provided by the BP/RP (XP) spectra from Gaia DR3. Our approach has two new aspects compared to published analyses: it also yields precise estimates for [\(\alpha\)/M], and it employs the _hallucinator_. By utilizing a pre-trained model based on high-resolution APOGEE spectra, AspGap with the _hallucinator_ component, achieves remarkable accuracy in predicting the effective temperature, surface gravity, metallicity, and \(\alpha\)-abundance of stars. Through 2-fold cross-validation, the model demonstrates accuracies of approximately 50 K, 0.12 dex, 0.07 dex, and 0.02 dex, respectively. Our study results in a comprehensive catalog containing fundamental parameters (\(T_{\rm eff}\) and \(\log~{}g\)) and abundance prediction ([M/H] and [\(\alpha\)/M]) for approximately 23 million RGB stars. This extensive dataset, accompanied by the open-source code, is publicly accessible at [https://zenodo.org/record/8002699](https://zenodo.org/record/8002699). The extensive catalog of stellar labels for \(\sim\)23 million RGB stars generated in this study provides the astronomical community with a valuable multi-purpose data resource. 
The unprecedented scale of all-sky [\(\alpha\)/M]measurements will facilitate novel insights into the formation history and chemodynamics of the Milky Way. Additionally, the public release of the open-source AspGap code will promote further methodological advancements in analyzing low-resolution spectra. This work demonstrates the meaningful [\(\alpha\)/M] abundance information that can be extracted from Gaia's low-resolution spectroscopic data. By developing innovative techniques tailored to these spectra, as with AspGap, we can unleash the full potential of the vast XP dataset to illuminate our Galaxy's chemical evolution. The catalog and methods presented here will enable a diverse range of Galactic archaeology research and upcoming spectroscopic surveys. It is a pleasure to thank Chao Liu (NAOC), Haibo Yuan (BNU), Rene Andrae (MPIA), Dongwei Fan (NAOC), Bo Zhang (NAOC), Hao Tian (NAOC) for help with the project. This project was developed in part at the 2023 Gaia XPloration, hosted by the Institute of Astronomy, Cambridge University. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. Funding for the Sloan Digital Sky Survey IV and V has been provided by the Alfred P. Sloan Foundation, the U.S Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV and SDSS-V have been managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe/University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam, Max-Planck-Institut fur Astronomie (Heidelberg), Max-Planck-Institut fur Astrophysik (Garching), Max-Planck-Institut fur Extraterrestrische Physik, National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional/MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. 
LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work made use of the Third Data Release of the GALAH Survey. The GALAH Survey is based on data acquired through the Australian Astronomical Observatory, under programs: A/2013B/13 (The GALAH pilot survey); A/2014A/25, A/2015A/19, A2017A/18 (The GALAH survey phase 1); A2018A/18 (Open clusters with HERMES); A2019A/1 (Hierarchical star for mation in Ori OB1); A2019A/15 (The GALAH survey phase 2); A/2015B/19, A/2016A/22, A/2016B/10, A/2017B/16, A/2018B/15 (The HERMES-TESS program); and A/2015A/3, A/2015B/1, A/2015B/19, A/2016A/22, A/2016B/12, A/2017A/14 (The HERMES K2-follow-up program). We acknowledge the traditional owners of the land on which the AAT stands, the Gamilararay people, and pay our respects to elders past and present. This paper includes data that has been provided by AAO Data Central (datacentral.org.au). This work has made use of the Python package GaiaXPy, developed and maintained by members of the Gaia Data Processing and Analysis Consortium (DPAC), and in particular, Coordination Unit 5 (CU5), and the Data Processing Centre located at the Institute of Astronomy, Cambridge, UK (DPCI). We conduct an analysis to evaluate the discriminative power of AspGap in distinguishing between different chemical components of the disk, namely the low-alpha and high-alpha disk, while considering stars within the same range of metallicity. By comparing the AspGap classifications with the given APOGEE labels, we can assess how effectively AspGap can differentiate between these two disk components based on their [\(\alpha\)/M]. To select high-alpha and low-alpha disk stars based on APOGEE labels, we use a specific metallicity range (\(-0.9<\mathrm{[M/H]}<0\)) as shown in Fig 18. All of the selected stars belonging to the low and high disk groups, as classified by APOGEE, have undergone 2-fold cross-validated estimations of [M/H] and [\(\alpha\)/M] using AspGap. To evaluate the performance of AspGap in distinguishing between high-alpha and low-alpha disk stars, we employ precision and recall metrics. Precision measures the proportion of correctly classified stars out of all stars classified as a particular category, while recall measures the proportion of correctly classified stars out of all stars belonging to that category. As illustrated in Fig 18, our validation results highlight the remarkable performance of AspGap in differentiating between the low-alpha and high-alpha disk populations. For the low-alpha disk stars, 97% of the stars identified by AspGap correspond to the low-alpha class as defined by APOGEE labels. Furthermore, AspGap accurately classifies Figure 18: Left: Selected stars classified into low-[\(\alpha\)/M] and high-[\(\alpha\)/M] groups with \(-0.9<\)[M/H]\(<0\), representing the ground truth from APOGEE. Right: Confusion matrix showing the Gaia XP-predicted low/high [\(\alpha\)/M] group of APOGEE group. 96% of the entire low-alpha disk sample identified by APOGEE. Similarly, for the high-alpha disk, 93% (8,808 out of 9,434) of the stars identified by AspGap represent the high-alpha class, and 94% (8,808 out of 9,343) of the complete high-alpha sample identified by APOGEE are correctly recognized by AspGap. ## Appendix C Validation by Different Galactic latitudes We present a detailed comparison of the \(T_{\rm eff}\), \(\log\ g\) and [M/H] divided into two different Galactic latitudes (\(b\)) (\(|b|<10\) and \(|b|>30\)) as illustrated by Fig.19.
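The precision and recall figures quoted in the disk-component validation above (Fig 18) are simple ratios of the confusion-matrix counts. A minimal sketch of that arithmetic, using the high-[\(\alpha\)/M] counts quoted above (the function and variable names are ours):

```python
# Precision/recall bookkeeping for the high-[alpha/M] disk classification.
def precision_recall(true_positives, predicted_positives, actual_positives):
    precision = true_positives / predicted_positives
    recall = true_positives / actual_positives
    return precision, recall

# 8,808 stars correctly recovered, out of 9,434 AspGap identifications and
# 9,343 APOGEE-labelled high-[alpha/M] stars (counts quoted in the text).
prec, rec = precision_recall(8808, 9434, 9343)
print(f"precision = {prec:.0%}, recall = {rec:.0%}")  # ~93% and ~94%
```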
2309.06848
Hopf Galois structures, skew braces for groups of size $p^nq$: The cyclic Sylow subgroup case
Let $n\geq 1$ be an integer, $p$, $q$ be distinct odd primes. Let ${G}$, $N$ be two groups of order $p^nq$ with their Sylow-$p$-subgroups being cyclic. We enumerate the Hopf-Galois structures on a Galois ${G}$-extension, with type $N$. This also computes the number of skew braces with additive group isomorphic to $G$ and multiplicative group isomorphic to $N$. Further when $q<p$, we give a complete classification of the Hopf-Galois structures on Galois-$G$-extensions.
Namrata Arvind, Saikat Panja
2023-09-13T09:51:36Z
http://arxiv.org/abs/2309.06848v1
# Hopf Galois structures, skew braces for groups of size \(p^{n}q\): the cyclic Sylow subgroup case ###### Abstract. Let \(n\geq 1\) be an integer and let \(p\), \(q\) be distinct odd primes. Let \(G\), \(N\) be two groups of order \(p^{n}q\) with their Sylow-\(p\)-subgroups being cyclic. We enumerate the Hopf-Galois structures on a Galois \(G\)-extension with type \(N\). This also computes the number of skew braces with additive group isomorphic to \(G\) and multiplicative group isomorphic to \(N\). Further, when \(q<p\), we give a complete classification of the Hopf-Galois structures on Galois \(G\)-extensions. Key words and phrases:Hopf-Galois structures; Field extensions; Holomorph 2020 Mathematics Subject Classification: 12F10, 16T05 The first named author is partially supported by the IMSc postdoctoral fellowship and the second author has been partially supported by the HRI postdoctoral fellowship. Given a group \(G\), the _holomorph_ of \(G\) is defined as \(G\rtimes\operatorname{Aut}(G)\), via the identity map. It is denoted by \(\operatorname{Hol}(G)\). Let \(G\) and \(N\) be two finite groups of the same order. By \(e(G,N)\) we mean the number of Hopf-Galois structures on a finite Galois field extension \(L/K\) with Galois group isomorphic to \(G\), and the type isomorphic to \(N\). In [12], the authors gave a bijection between Hopf-Galois structures on a finite Galois extension with Galois group \(G\) and regular subgroups in \(\operatorname{Perm}(G)\) which are normalised by \(\lambda(G)\). Further in [7], N. Byott showed that \[e(G,N)=\frac{|\operatorname{Aut}(G)|}{|\operatorname{Aut}(N)|}\cdot e^{\prime}(G,N), \tag{1.1}\] where \(e^{\prime}(G,N)\) is the number of regular subgroups of \(\operatorname{Hol}(N)\) isomorphic to \(G\). Here a subgroup \(\Gamma\) of \(\operatorname{Hol}(N)\) of order \(|N|\) is called regular if the only element of \(\Gamma\) of the form \((e_{N},\zeta)\) is the identity, that is, \((e_{N},\zeta)\in\Gamma\) forces \(\zeta=I\), the identity automorphism. We will use this condition to check regular embeddings of the concerned groups in the article. It turns out that \(e^{\prime}(G,N)\) also gives the number of skew braces with the additive group isomorphic to \(N\) and the multiplicative group isomorphic to \(G\). The number \(e(G,N)\) has been computed for several groups. For example, N. Byott determined \(e(G,N)\) when \(G\) is isomorphic to a cyclic group [6]; C. Tsang determined \(e(G,N)\) when \(G\) is a quasisimple group [17]; N. K. Zenouz considered the groups of order \(p^{3}\) [22] to determine \(e(G,N)\); T. Kohl determined \(e(G,N)\) when \(G\) is a dihedral group [13]. Previously in [2], the authors computed \(e(G,N)\) whenever \(G\) and \(N\) are isomorphic to \(\mathbb{Z}_{n}\rtimes\mathbb{Z}_{2}\), where \(n\) is odd and its radical is a Burnside number. Groups of order \(p^{2}q\) with cyclic Sylow subgroups have been considered in [8]. We can show that any group of order \(p^{n}q\) with cyclic Sylow subgroups, when \(p\) and \(q\) are distinct primes, is a semidirect product of two cyclic groups (see Section 2). In this article, we compute \(e(G,N)\) (and \(e^{\prime}(G,N)\)) whenever \(G\) and \(N\) are groups of order \(p^{n}q\) with cyclic Sylow-\(p\) subgroup, where \(p\) and \(q\) are distinct odd primes. We do this by looking at the number of regular subgroups of \(\operatorname{Hol}(N)\) which are isomorphic to \(G\). Finally, whenever \(q<p\) we give a necessary and sufficient condition on when the pair \((G,N)\) is realizable. We now fix some notation.
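Before doing so, the regularity criterion above can be made concrete with a small toy computation. The sketch below takes \(N=\mathbb{Z}_{5}\), which is not one of the groups studied in this article, builds \(\operatorname{Hol}(\mathbb{Z}_{5})\) as pairs (translation, unit), and checks that the translation subgroup \(\lambda(\mathbb{Z}_{5})\) is regular while the point stabiliser \(\operatorname{Aut}(\mathbb{Z}_{5})\) is not; the code and its names are purely illustrative.

```python
from itertools import product

N = 5                       # toy example: N = Z_5
UNITS = list(range(1, N))   # all of 1..4 are units modulo 5

def mult(g, h):
    """Group law in Hol(Z_N): (a, u) * (b, v) = (a + u*b mod N, u*v mod N)."""
    (a, u), (b, v) = g, h
    return ((a + u * b) % N, (u * v) % N)

def is_regular(subgroup):
    """Criterion from the text: |Gamma| = |N| and the only element of the
    form (e_N, zeta) in Gamma is the identity."""
    return len(subgroup) == N and all(z == 1 for (n, z) in subgroup if n == 0)

translations = {(a, 1) for a in range(N)}   # image of the left translations
stabiliser = {(0, u) for u in UNITS}        # Aut(Z_5), the stabiliser of 0

# Both sets are closed under the group law, i.e. genuine subgroups of Hol(Z_5).
assert all(mult(g, h) in translations for g, h in product(translations, repeat=2))
assert all(mult(g, h) in stabiliser for g, h in product(stabiliser, repeat=2))

print(is_regular(translations))  # True: a regular subgroup
print(is_regular(stabiliser))    # False: wrong order, and it fixes 0
```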
For a ring \(R\), we will use \(R^{\times}\) to denote the set of multiplicative units of \(R\). For a group \(G\), the identity element will sometimes be denoted by \(e_{G}\) and mostly by \(1\), when the context is clear. The automorphism group of a group \(G\) will be denoted by \(\operatorname{Aut}(G)\), and the holomorph \(G\rtimes_{\operatorname{id}}\operatorname{Aut}(G)\) will be denoted by \(\operatorname{Hol}(G)\). The binomial coefficients will be denoted by \(\binom{l}{m}\). The Euler totient function will be denoted by \(\varphi\). We will use \(\mathbb{Z}_{m}\) to denote the cyclic group of order \(m\). We will use \(\mathbb{Z}_{m}\) as a group as well as a ring, which will be clear from the context. Now, we state the two main results of this article. To state the second result we use notations from Section 2. **Theorem 1.1**.: _Let \(p>q\) be odd primes and \(q|p-1\). Let \(G\) denote the nonabelian group of the form \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\) and \(C\) denote the cyclic group of order \(p^{n}q\). Then the following are true:_ 1. \(e^{\prime}(G,G)=e(G,G)=2+2p^{n}(q-2)\)_,_ 2. \(e^{\prime}(G,C)=q-1\)_, and_ \(e(G,C)=p^{n}\)_,_ 3. \(e^{\prime}(C,G)=p^{2n-1}\)_, and_ \(e(C,G)=2p^{n-1}(q-1)\)_._ **Theorem 1.2**.: _Let \(p<q\) be odd primes and \(p^{a}||q-1\). For \(1\leq b\leq\min\{n,a\}\), let \(G_{b}\) denote the unique nonabelian group of the form \(\mathbb{Z}_{q}\rtimes\mathbb{Z}_{p^{n}}\) determined by \(b\), and \(C\) denote the cyclic group of order \(p^{n}q\). Then the following results hold:_ 1. \(e^{\prime}(G_{b},G_{b})=e(G_{b},G_{b})=2\left(p^{n-b}+q\left(\varphi(p^{n})-p^{n-b}\right)\right)\)_,_ 2. \(e^{\prime}(G_{b_{1}},G_{b_{2}})=2qp^{n+b_{1}-b_{2}-1}(p-1)\)_, and_ \(e(G_{b_{1}},G_{b_{2}})=2qp^{n-1}(p-1)\) _for_ \(b_{1}\neq b_{2}\)_,_ 3. \(e^{\prime}(C,G_{b})=2p^{n-b}q\)_, and_ \(e(C,G_{b})=2(p-1)p^{n-1}\)_,_ 4. \(e^{\prime}(G_{b},C)=p^{n+b-2}(p-1)\)_, and_ \(e(G_{b},C)=p^{n-1}q\)_._ The rest of the article is organised as follows. In Section 2, we give a detailed description of the groups under consideration and determine their automorphism groups. Next, in Section 3 and Section 4 we will prove Theorem 1.1 and Theorem 1.2 respectively. Lastly, in Section 5 we discuss the realizability problem and solve it for some of the groups mentioned in this article. ## 2. Preliminaries ### The groups under consideration In this subsection we will describe the groups under consideration and fix some notation. Let \(p\) and \(q\) be distinct odd primes. We look at groups of order \(p^{n}q\) whose Sylow-\(p\)-subgroups are cyclic. These come under two families, depending on whether \(p>q\) or \(p<q\). In case \(p>q\), the groups are isomorphic to \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\), since \(\mathbb{Z}_{p^{n}}\) is normal. Indeed all these groups \(G\) fit into the short exact sequence \(1\longrightarrow\mathbb{Z}_{p^{n}}\longrightarrow G\longrightarrow\mathbb{Z}_{q}\longrightarrow 1\). Thus by the well-known Schur-Zassenhaus theorem \(G\) is isomorphic to \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\). Since \(\operatorname{Aut}(\mathbb{Z}_{p^{n}})\) is cyclic, the semidirect product is either trivial (in this case the group is cyclic) or uniquely nontrivial. Let \(G\cong\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\). If \(q\nmid p-1\) then \(G\) is cyclic. In case \(q|p-1\), let \(\phi:\mathbb{Z}_{q}\rightarrow\operatorname{Aut}(\mathbb{Z}_{p^{n}})\) be the homomorphism defined as \(\phi(1)=k\).
Here \(k\) is an element of \(\operatorname{Aut}(\mathbb{Z}_{p^{n}})\) of order \(q\). Hereafter, we denote \(\mathbb{Z}_{p^{n}}\rtimes_{\phi}\mathbb{Z}_{q}\) by \(\mathbb{Z}_{p^{n}}\rtimes_{k}\mathbb{Z}_{q}\). Let \[\{x,y|x^{p^{n}}=y^{q}=1,yxy^{-1}=x^{k}\}\] be a presentation of \(\mathbb{Z}_{p^{n}}\rtimes_{k}\mathbb{Z}_{q}\). Note that since \(e(G,G)\) is already known whenever \(G\) is cyclic, we will assume \(q|p-1\) for our calculations. Now if \(p<q\) we need to use a result of W. Burnside from [5], which states that for a finite group \(G\), all the Sylow subgroups are cyclic if and only if \(G\) is a semidirect product of two cyclic groups of coprime order. Applying this to our situation, we get that \(G\) is either a cyclic group or a non-trivial semidirect product of the form \(\mathbb{Z}_{q}\rtimes\mathbb{Z}_{p^{n}}\). Next, we elaborate on different possible semidirect products of the form \(\mathbb{Z}_{q}\rtimes\mathbb{Z}_{p^{n}}\). Once again in this case we assume that \(p|q-1\). Let \(p^{a}||q-1\) and for \(b\leq\min\{n,a\}\) fix \(\psi_{b}:\mathbb{Z}_{p^{n}}\longrightarrow\operatorname{Aut}(\mathbb{Z}_{q})\) to be a homomorphism, such that \(|\text{Im }\psi_{b}|=p^{b}\). Take \(G_{b}=\mathbb{Z}_{q}\rtimes_{\psi_{b}}\mathbb{Z}_{p^{n}}\). The group \(G_{b}\) is unique up to isomorphism. The presentation of this group can be taken to be \[\langle x,y|x^{p^{n}}=1,y^{q}=1,xyx^{-1}=y^{k}\rangle,\] where \(k\) is an element of order \(p^{b}\) in \(\operatorname{Aut}(\mathbb{Z}_{q})=\mathbb{Z}_{q}^{\times}\). From now on we denote \(\mathbb{Z}_{q}\rtimes_{\psi_{b}}\mathbb{Z}_{p^{n}}\) by \(\mathbb{Z}_{q}\rtimes_{k}\mathbb{Z}_{p^{n}}\). ### The basic lemmas In this subsection we note down the basic group-theoretic results, which will be used throughout the article. **Lemma 2.1**.: _Let \(p\) be a positive odd integer. Take \(a=bp^{c}\) where \(p\nmid b\). Then we have that \((1+p)^{a}\equiv 1+dp^{c+1}\pmod{p^{c+2}}\) for some \(p\nmid d\), for all integer \(c\geq 0\)._ Proof.: We prove it by induction on \(c\). If \(c=0\), then \((1+p)^{a}=1+ap+a^{\prime}p^{2}\), for some \(a^{\prime}\in\mathbb{Z}\). Hence \((1+p)^{a}\equiv 1+ap\pmod{p^{2}}\) with \(d=a\). Next, assume it to be true for all \(l\leq c\) and in particular for \(l=c\). Hence \((1+p)^{bp^{c}}=1+dp^{c+1}+d^{\prime}p^{c+2}\) for some \(d^{\prime}\in\mathbb{Z}\). Then we have \[(1+p)^{bp^{c+1}}=\left(1+dp^{c+1}+d^{\prime}p^{c+2}\right)^{p}=(1+d^{\prime \prime}p^{c+1})^{p},\] for some \(d^{\prime\prime}\in Z\) and \((d^{\prime\prime},p)=1\). Hence it follows that \((1+p)^{bp^{c+1}}\equiv 1+d^{\prime\prime}p^{c+2}\pmod{p^{c+3}}\), which also finishes the induction, and hence the proof. **Lemma 2.2**.: _Let \(G\) be the non-abelian group isomorphic to \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\). We have \(\operatorname{Aut}(G)\cong\operatorname{Hol}(\mathbb{Z}_{p^{n}})\)._ Proof.: We first embed \(G\) as a normal subgroup of \(\operatorname{Hol}(\mathbb{Z}_{p^{n}})\). Take the homomorphism \(\psi\) defined as \[\psi(x)=\begin{pmatrix}1&k\\ 0&1\end{pmatrix},\ \psi(y)=\begin{pmatrix}k&1\\ 0&1\end{pmatrix}.\] This embedding can be shown to be injective. Now consider the following map \[\Phi:\operatorname{Hol}(\mathbb{Z}_{p^{n}})\longrightarrow\operatorname{Aut}(G )\text{ defined as }\Phi(z)(w)=zwz^{-1}\] for all \(z\in\operatorname{Hol}(\mathbb{Z}_{p^{n}})\) and \(y\in G\) is an injective group homomorphism, since \(\ker\Phi\) consists only of the identity matrix. 
From [21, Theorem B] we have \(|\operatorname{Aut}(G)|=|\operatorname{Hol}(\mathbb{Z}_{p^{n}})|\). Thus \(\Phi\) is an isomorphism. **Lemma 2.3**.: _Let \(p,q\) be primes such that \((p,q)=1\) and \(q|p-1\). Let \(k\) be a multiplicative unit in \(\mathbb{Z}_{p^{n}}\), of multiplicative order \(q\). Then \(k-1\) is a multiplicative unit in \(\mathbb{Z}_{p^{n}}\)._ Proof.: Suppose \(k-1\) is not a unit in \(\mathbb{Z}_{p^{n}}\), then \(k-1=mp\) for some \(m\in\mathbb{Z}_{p^{n}}\). Since \((k)^{q}\equiv 1\pmod{p^{n}}\), we get \[(mp+1)^{q}\equiv 1+\binom{q}{1}mp+\binom{q}{2}(mp)^{2}+\cdots+(mp)^{q}\equiv 1 \pmod{p^{n}},\] which in turn implies that \[mp\cdot\left(q+\binom{q}{2}mp+\binom{q}{3}(mp)^{2}+\cdots+(mp)^{q-1}\right) \equiv 0\pmod{p^{n}}.\] We note that \[t=q+\binom{q}{2}mp+\binom{q}{3}(mp)^{2}+\cdots+(mp)^{q-1}\] is a unit since \(q\) is a unit and \(t-q\) is a nilpotent element. Thus \(mp\equiv 0\pmod{p^{n}}\), which implies \(k-1\equiv 0\pmod{p^{n}}\). This is a contradiction since \(k\) is an element of order \(q\). **Lemma 2.4**.: _Let \(G_{b}\cong\mathbb{Z}_{q}\rtimes_{k}\mathbb{Z}_{p^{n}}\), where \(k\) is an element of order \(p^{b}\) in \(\mathbb{Z}_{q}^{\times}\). Assume \(p|q-1\), then for \(b>0\), we have that \(\operatorname{Aut}(G_{b})\cong\mathbb{Z}_{p^{n-b}}\times\operatorname{Hol}( \mathbb{Z}_{q})\)._ Proof.: The proof will be divided into two steps. First, we calculate the size of the automorphism group. In the next step, we will determine the group's description in terms of generators and relations, from which the result will follow. Let us take an automorphism \(\Psi\) of \(G_{b}\). Since an automorphism is determined by its value on the generator, assume that \(\Psi(x)=y^{\alpha}x^{\gamma}\) and \(\Psi(y)=y^{\beta}x^{\delta}\), where \(0\leq\alpha,\beta\leq q-1\) and \(0\leq\gamma,\delta\leq p^{n}-1\). Note that we have \(\Psi(y)^{q}=y^{\beta(1+k^{\delta}+k^{2\delta}+k^{(q-1)\delta})}x^{q\delta}\). Since \(\Psi(y)^{q}=1\), we must have \(\delta=0\). Thus \(\beta\) should be a unit in \(\mathbb{Z}_{q}\). Now consider the equation \(\Psi(x)\Psi(y)=\Psi(y)^{k}\Psi(x)\). This imposes the condition that \(y^{\alpha+\beta k^{\gamma}}x^{\gamma}=y^{\beta k+\alpha}x^{\gamma}\). Hence we should have \(\beta k^{\gamma}\equiv\beta k\pmod{q}\), whence \(k^{\gamma-1}\equiv 1\pmod{q}\), as \(\beta\) is a unit in \(\mathbb{Z}_{q}\). Since \(k\) is an element of order \(p^{b}\), we get that \(\gamma\in\{Rp^{b}+1:0\leq R<p^{n-b}\}\). Next considering the equation \(\Psi(x)^{p^{n}}=1\), we have that \(y^{\alpha(1+k^{\gamma}+k^{2\gamma}+\ldots+k^{(p^{n}-1)\gamma})}x^{p^{n}\gamma}=1\). Since \(x^{p^{n}\gamma}=1\), we have that \(\alpha(1+k^{\gamma}+k^{2\gamma}+\ldots+k^{(p^{n}-1)\gamma})=0\pmod{q}\). Regardless of the value of \(k\), any \(0\leq\alpha\leq q\) satisfies the last congruence. Hence the group is of order \(p^{n-b}q(q-1)\). Hereafter we denote \(\Psi\) by \((\gamma,\beta,\alpha)\). Consider the following three elements of the group given by \[\Psi_{1}=\left((1+p)^{p^{b-1}},1,0\right),\Psi_{2}=(1,t,0)\,,\Psi_{3}=(1,1,1),\] where \(1\leq t\leq q-1\) satisfies \(\mathbb{Z}_{q}^{\times}=\langle\overline{t}\rangle\). Since \(\overline{(1+p)}\in\mathbb{Z}_{p^{n}}^{\times}\) is of order \(p^{n-1}\), we get that \(\Psi_{1}\) is an element of order \(p^{n-b}\). Given that, \(\overline{t}\) is an element of order \(q-1\), the element \(\Psi_{2}\) is of order \(q-1\). Lastly, \(\Psi_{3}\) is an element of order \(q\). 
Note that \(\Psi_{1}\Psi_{2}=\Psi_{2}\Psi_{1}\) follows from an easy calculation. Now, \(\Psi_{1}\Psi_{3}(x)=yx^{(1+p)^{p^{b-1}}}\). Further, we have \[\Psi_{3}\Psi_{1}(x)=(yx)^{(1+p)^{p^{b-1}}}=y^{1+k+\cdots+k^{(1+p)^{p^{b-1}}-1}}x^{(1+p)^{p^{b-1}}}=y^{\frac{k^{(1+p)^{p^{b-1}}}-1}{k-1}}x^{(1+p)^{p^{b-1}}}.\] Since \((1+p)^{p^{b-1}}\equiv 1\pmod{p^{b}}\) and \(\overline{k-1}\) is a unit in \(\mathbb{Z}_{q}\), we conclude that \(\Psi_{1}\Psi_{3}(x)=\Psi_{3}\Psi_{1}(x)\). Since \(\Psi_{1}\Psi_{3}(y)=\Psi_{3}\Psi_{1}(y)\), we conclude that \(\Psi_{1}\Psi_{3}=\Psi_{3}\Psi_{1}\). We now take the subgroup generated by \(\Psi_{2}\) and \(\Psi_{3}\). In this group \(\langle\Psi_{3}\rangle\) is normal as \(\Psi_{2}\Psi_{3}\Psi_{2}^{-1}=\Psi_{3}^{t}\in\langle\Psi_{3}\rangle\). Also \(\langle\Psi_{2}\rangle\cap\langle\Psi_{3}\rangle\) contains only the identity. Hence \(|\langle\Psi_{2},\Psi_{3}\rangle|=q(q-1)\). Take the map \(T:\langle\Psi_{2},\Psi_{3}\rangle\longrightarrow\operatorname{Hol}(\mathbb{Z}_{q})\), defined as \[T(\Psi_{2})=\begin{pmatrix}t&0\\ 0&1\end{pmatrix}\text{ and }T(\Psi_{3})=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}.\] This determines a homomorphism since \(T(\Psi_{2})T(\Psi_{3})T(\Psi_{2})^{-1}=T(\Psi_{3})^{t}\). For any \(\begin{pmatrix}u&v\\ 0&1\end{pmatrix}\in\operatorname{Hol}(\mathbb{Z}_{q})\), we have that \(T(\Psi_{2}^{w_{1}}\Psi_{3}^{w_{2}})=\begin{pmatrix}u&v\\ 0&1\end{pmatrix}\), where \(w_{1}\) satisfies \(t^{w_{1}}=u\) and \(w_{2}=v/u\). Since the orders of the groups are the same, we conclude that \(\langle\Psi_{2},\Psi_{3}\rangle\cong\operatorname{Hol}(\mathbb{Z}_{q})\). Now we will show that \(\langle\Psi_{1}\rangle\cap\langle\Psi_{2},\Psi_{3}\rangle\) has only the identity element. Indeed, if \(\Psi_{1}^{d}=\Psi_{2}^{e}\Psi_{3}^{f}\) (for some \(0\leq d<p^{n-b}\), \(0\leq e<q-1\) and \(0\leq f<q\)), then \(e=0\), comparing the evaluation of both the functions at \(y\). Finally, if we consider \(\Psi_{1}^{d}(x)=\Psi_{3}^{f}(x)\), we get that \(x^{(p^{\prime})^{d}}=y^{f}x\), where \(p^{\prime}=(1+p)^{p^{b-1}}\). This forces us to have \(f=0\), and consequently \(d=0\). Thus \(\langle\Psi_{1},\Psi_{2},\Psi_{3}\rangle\cong\langle\Psi_{1}\rangle\times\langle\Psi_{2},\Psi_{3}\rangle\) and is of order \(p^{n-b}q(q-1)\). Hence we have proved that \(\operatorname{Aut}(G_{b})\cong\mathbb{Z}_{p^{n-b}}\times\operatorname{Hol}(\mathbb{Z}_{q})\). We denote the elements of \(\operatorname{Aut}(G_{b})\) by \(\left(\gamma,\begin{pmatrix}\beta&\alpha\\ 0&1\end{pmatrix}\right)\in\mathbb{Z}_{p^{n}}^{\times}\times\operatorname{Hol}(\mathbb{Z}_{q})\), such that \(\gamma^{p^{n-b}}=1\). **Remark 2.5**.: We note down the action of the automorphism group of \(G_{b}\) on the group \(G_{b}\), by means of generators. This will be necessary for counting the Hopf-Galois structures concerning the \(G_{b}\)'s. For \(b>0\), the action is as follows. \[\left(\gamma,\begin{pmatrix}\beta&\alpha\\ 0&1\end{pmatrix}\right)\cdot x=y^{\alpha}x^{\gamma}\text{ and }\left(\gamma,\begin{pmatrix}\beta&\alpha\\ 0&1\end{pmatrix}\right)\cdot y=y^{\beta}.\] **Remark 2.6**.: For \(b=0\), the group \(G_{b}\cong\mathbb{Z}_{p^{n}}\times\mathbb{Z}_{q}\). Since \((p,q)=1\) and both factors are abelian groups, it follows from [4, Theorem 3.2] that \(\operatorname{Aut}(G_{b})\cong\mathbb{Z}_{p^{n-1}(p-1)}\times\mathbb{Z}_{q-1}\) in this case. The action is defined to be component-wise. ## 3. The case \(p>q\) This section is devoted to the proof of Theorem 1.1.
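The computations below repeatedly use the identification \(\operatorname{Aut}(\mathbb{Z}_{p^{n}}\rtimes_{k}\mathbb{Z}_{q})\cong\operatorname{Hol}(\mathbb{Z}_{p^{n}})\) from Lemma 2.2. As a small numerical sanity check of that matrix model (for the illustrative parameters \(p=7\), \(n=1\), \(q=3\) and \(k=2\); this is not part of the proof), one can verify the defining relations of the embedding \(\psi\) directly:

```python
P, Q, K = 7, 3, 2          # p^n = 7, q = 3, and k = 2 has order 3 in Z_7^x
I2 = ((1, 0), (0, 1))

def mat_mult(A, B, m=P):
    """Multiply 2x2 matrices, entries reduced modulo m."""
    return tuple(
        tuple(sum(A[i][t] * B[t][j] for t in range(2)) % m for j in range(2))
        for i in range(2)
    )

def mat_pow(A, e):
    R = I2
    for _ in range(e):
        R = mat_mult(R, A)
    return R

# psi(x) and psi(y) as in the proof of Lemma 2.2
X = ((1, K), (0, 1))
Y = ((K, 1), (0, 1))

assert mat_pow(X, P) == I2                               # x^{p^n} = 1
assert mat_pow(Y, Q) == I2                               # y^q = 1
Y_inv = mat_pow(Y, Q - 1)                                # y^{-1} = y^{q-1}
assert mat_mult(mat_mult(Y, X), Y_inv) == mat_pow(X, K)  # y x y^{-1} = x^k
```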
As discussed in Section 2, upto isomorphism there are precisely two groups of order \(p^{n}q\) whenever their Sylow subgroups are cyclic. Counting the number of skew braces with multiplicative group \(G\) and additive group \(N\) is equivalent to (up to multiplication by a constant; see [2, Proof of Proposition 3.2]) counting the number of regular embedding of \(G\) in \(\operatorname{Hol}(N)\). Then using Eq. (1.1), we are able to conclude about the number of Hopf-Galois structures on \(G\)-extensions of type \(N\). We will use the regularity criterion given in Section 1. This section will be divided into three subsections, depending on the isomorphism types of \(G\) and \(N\). From Lemma 2.2, we have that \(\operatorname{Aut}(\mathbb{Z}_{p^{n}}\rtimes_{k}\mathbb{Z}_{q})\cong \operatorname{Hol}(\mathbb{Z}_{p^{n}})\), where the action is given by, \[\begin{pmatrix}\beta&\alpha\\ 0&1\end{pmatrix}\cdot x^{i}y^{j}=x^{\beta i+\alpha k^{-1}-\alpha k^{j-1}}y^{j}.\] Embedding of \(\mathbb{Z}_{p^{n}}\rtimes_{k}\mathbb{Z}_{q}\) into \(\operatorname{Hol}(\mathbb{Z}_{p^{n}}\rtimes_{k}\mathbb{Z}_{q})\) Let \(\Phi:\mathbb{Z}_{p^{n}}\rtimes_{k}\mathbb{Z}_{q}\longrightarrow\operatorname{ Hol}(\mathbb{Z}_{p^{n}}\rtimes_{k}\mathbb{Z}_{q})\) be a regular embedding. Let \[\Phi(x)=\left(x^{i_{1}}y^{j_{1}},\begin{pmatrix}\beta_{1}&\alpha_{1}\\ 0&1\end{pmatrix}\right),\Phi(y)=\left(x^{i_{2}}y^{j_{2}},\begin{pmatrix}\beta_ {2}&\alpha_{2}\\ 0&1\end{pmatrix}\right).\] From \((\Phi(x))^{p^{n}}=1\) we get \[j_{1}\equiv 0\pmod{q}, \tag{3.1}\] since \(p^{n}j_{1}\equiv 0\pmod{q}\) and \((p,q)=1\), \[\beta_{1}^{p^{n}} \equiv 1\pmod{p^{n}}, \tag{3.2}\] \[i_{1}(1+\beta_{1}+\beta_{1}^{2}+\ldots+\beta_{1}^{p^{n}-1}) \equiv 0\pmod{p^{n}},\] (3.3) \[\alpha_{1}(1+\beta_{1}+\beta_{1}^{2}+\ldots+\beta_{1}^{p^{n}-1}) \equiv 0\pmod{p^{n}}. \tag{3.4}\] Similarly from \(\Phi(yxy^{-1})=\Phi(x^{k})\) we get \[\beta_{1}^{k-1}\equiv 1\pmod{p^{n}}, \tag{3.5}\] which implies \(\beta_{1}=1\) from Eq. (3.2), Eq. (3.5) and using Lemma 2.3; furthermore, \[\beta_{2}\alpha_{1}+\alpha_{2} \equiv\beta_{1}^{k}\alpha_{2}+\alpha_{1}\pmod{p^{n}}, \tag{3.6}\] \[ki_{1}\left(k^{j_{2}-1}\beta_{2}-1\right) \equiv\alpha_{1}\left(1-k^{j_{2}}\right)\pmod{p^{n}}. \tag{3.7}\] Further taking \(\beta_{1}=1\) in Eq. (3.6) and Eq. (3.7) we get that, \[\alpha_{1}\cdot(k-\beta_{2})\equiv 0\pmod{p^{n}}, \tag{3.8}\] \[ki_{1}\cdot(k^{j_{2}-1}\beta_{2}-1)\equiv\alpha_{1}\cdot(1-k^{j_{ 2}})\pmod{p^{n}}. \tag{3.9}\] We note that in general, \[\Phi(y)^{k}=\left(x^{\ell_{k}}y^{kj_{2}},\begin{pmatrix}\beta_{2}^{k}&\alpha_{2}(1 +\beta_{2}+\beta_{2}^{2}+\cdots+\beta_{2}^{k-1})\\ 1\end{pmatrix}\right),\] where \[\ell_{k}=i_{2}\left(\sum_{t=0}^{k-1}\left(\beta_{2}k^{j_{2}}\right)^{t}\right) +\left(\alpha_{2}k^{j_{2}-1}-\alpha_{2}k^{2j_{2}-1}\right)\left(1+\sum_{u=1}^{ k-2}\left(\sum_{v=0}^{u}\beta_{2}^{v}\right)k^{uj_{2}}\right). \tag{3.10}\] Using \(\Phi(y)^{q}=1\) we get \[\beta_{2}^{q} \equiv 1\pmod{p^{n}}, \tag{3.11}\] \[\alpha_{2}(1+\beta_{2}+\beta_{2}^{2}+\ldots+\beta_{2}^{q-1}) \equiv 0\pmod{p^{n}} j_{2}\neq 0,\] (3.12) \[\ell_{q} \equiv 0\pmod{p^{n}}. \tag{3.13}\] From Eq. (3.11) we get \(\beta_{2}=k^{a}\), for some \(0\leq a\leq q-1\), since \(\mathbb{Z}_{p^{n}}^{*}\) has a unique subgroup of order \(q\) and is generated by \(k\). First let us show that, in any regular embedding \(j_{2}\neq 0\). If possible let \(j_{2}=0\). Then we get that \(\beta_{2}=k\). 
This forces that for any \(0\leq\omega_{1}\leq p^{n}-1\) and \(0\leq\omega_{2}\leq q-1\) \[\Phi(x)^{\omega_{1}}\Phi(y)^{\omega_{2}}=\left(x^{\omega_{1}i_{1}+i_{2}\left(1+k+\cdots+k^{\omega_{2}-1}\right)},\begin{pmatrix}k^{\omega_{2}}&\star\\ 0&1\end{pmatrix}\right). \tag{3.14}\] Since \(i_{1}\) is a unit, making a suitable choice of \(\omega_{1}\) and \(\omega_{2}\) we get that this embedding will not be regular. Indeed note that \(1-k\) and \(1-k^{\omega_{2}}\) are both units and so is \(1+k+\cdots+k^{\omega_{2}-1}\). We now divide the possibilities of \(a\) into \(3\) cases. #### 3.1.1. **Case I: \(a=0\)** Using Eq. (3.7) and Eq. (3.8), we conclude that \(\alpha_{1}\equiv 0\pmod{p^{n}}\), \(j_{2}\equiv 1\pmod{q}\), and \(\alpha_{2}\equiv 0\pmod{p^{n}}\). Since \(i_{1}\) is a unit in \(\mathbb{Z}_{p^{n}}\) and \(i_{2}\in\mathbb{Z}_{p^{n}}\) can take any value, the total number of embeddings in this case is given by \(p^{n}\varphi(p^{n})\). Moreover, all of these embeddings are regular. We remark that all of the above embeddings correspond to the canonical Hopf-Galois structure. #### 3.1.2. **Case II: \(a=1\)** Note that using Eq. (3.9) we get that \(ki_{1}\equiv-\alpha_{1}\pmod{p^{n}}\). We deal with this in two subcases depending on the value of \(j_{2}\). First, we consider the case \(j_{2}\) being equal to \(q-1\). In this case using \(\ell_{q}=0\), we get that \(i_{2}\) gets determined by the value of \(\alpha_{2}\) since \(\left(\sum\limits_{t=0}^{q-1}\left(\beta_{2}k^{j_{2}}\right)^{t}\right)=q\) is a unit in \(\mathbb{Z}_{p^{n}}\). Hence the number of embeddings in this subcase is given by \(p^{n}\varphi(p^{n})\). For the other case, since the element \(k^{j_{2}}(1-k^{a})\) is a unit and \(j_{2}+a\neq 0\pmod{q}\) we get \[1+\sum_{s=1}^{q-2}\left(\sum_{t=0}^{s}k^{ta}\right)k^{sj_{2}}=\frac{1}{k^{j_{2}}(1-k^{a})}\left\{\sum_{t=1}^{q-1}\left(1-k^{ta}\right)k^{tj_{2}}\right\}=\frac{1}{k^{j_{2}}(1-k^{a})}\cdot(1-1)=0.\] Thus \(\Phi(y)^{q}=1\) does not impose any conditions on \(i_{2}\) and \(\alpha_{2}\). Hence, in this subcase, the total number of possibilities is \(p^{2n}\varphi(p^{n})(q-2)\). Since \(j_{2}\neq 0\), we conclude that all the embeddings are regular. #### 3.1.3. **Case III: \(a\geq 2\)** This condition, together with Eq. (3.8) and Eq. (3.9), implies that \(\alpha_{1}=0\) and \(j_{2}\equiv 1-a\pmod{q}\). Since \(a+j_{2}\not\equiv 0\pmod{q}\), arguing mutatis mutandis as in Case II gives that \(i_{2}\) and \(\alpha_{2}\) can be chosen independently, whence each of them has \(p^{n}\) possibilities. Thus, in this case, the total number of possibilities is given by \(p^{2n}\varphi(p^{n})(q-2)\). Similar to the previous case, all the embeddings are regular. Summarizing the above cases we get the following result. **Lemma 3.1**.: _The total number of regular embeddings of \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\) inside \(\operatorname{Hol}(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q})\) is given by \(2p^{n}\varphi(p^{n})+2p^{2n}\varphi(p^{n})(q-2)\)._ **Proposition 3.2**.: _Let \(G\) be the non-abelian group of the form \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\), where \(p\) and \(q\) are primes satisfying \(q|p-1\). Then \(e(G,G)\) is given by \(2+2p^{n}(q-2)\)._ Proof.: From Lemma 3.1 we get the total number of regular embeddings. Dividing this number by \(|\operatorname{Aut}(G)|\) gives the total number of Hopf-Galois structures.
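As a quick arithmetic sanity check of Proposition 3.2 (an illustrative computation only, not part of the proof), one can evaluate the regular-embedding count of Lemma 3.1 and divide by \(|\operatorname{Aut}(G)|=p^{n}\varphi(p^{n})\) for a few parameter choices with \(q\mid p-1\):

```python
def phi_pp(p, n):
    """Euler totient of p^n for a prime p."""
    return p**(n - 1) * (p - 1)

def embeddings(p, q, n):
    """Regular-embedding count from Lemma 3.1."""
    return 2 * p**n * phi_pp(p, n) + 2 * p**(2 * n) * phi_pp(p, n) * (q - 2)

def e_GG(p, q, n):
    """Closed form from Proposition 3.2."""
    return 2 + 2 * p**n * (q - 2)

for (p, q, n) in [(7, 3, 1), (7, 3, 2), (11, 5, 1), (31, 5, 2)]:
    assert (p - 1) % q == 0                    # the case q | p - 1 treated here
    aut_G = p**n * phi_pp(p, n)                # |Aut(G)| = |Hol(Z_{p^n})|
    assert embeddings(p, q, n) == aut_G * e_GG(p, q, n)
    print((p, q, n), e_GG(p, q, n))
```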
Embedding of \(G=\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\) in the \(\operatorname{Hol}(\mathbb{Z}_{p^{n}}\times\mathbb{Z}_{q})\) Next we consider the case of regular embedding of \(G=\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\) in the \(\operatorname{Hol}(\mathbb{Z}_{p^{n}}\times\mathbb{Z}_{q})\). Let us fix the presentation of \(C=\mathbb{Z}_{p^{n}}\times\mathbb{Z}_{q}\) to be \(\langle r,s|r^{p^{n}}=s^{q}=1,rs=sr\rangle.\) Then it can be shown that \(\operatorname{Hol}(C)\equiv\operatorname{Hol}(\mathbb{Z}_{p^{n}})\times \operatorname{Hol}(\mathbb{Z}_{q})\). We take a typical element of \(\operatorname{Hol}(C)\) to be \(\left(\begin{pmatrix}b&a\\ 0&1\end{pmatrix},\begin{pmatrix}d&c\\ 0&1\end{pmatrix}\right)\), where \(a\), \(c\) are elements of \(\mathbb{Z}_{p^{n}}\), \(\mathbb{Z}_{q}\) respectively and \(b\), \(d\) are elements of \(\mathbb{Z}_{p^{n}}^{\times}\), \(\mathbb{Z}_{q}^{\times}\) respectively. Starting with an embedding \(\Phi\) of \(G\) inside \(\operatorname{Hol}(C)\) and assuming that \[\Phi(x)=\left(\begin{pmatrix}b_{1}&a_{1}\\ 0&1\end{pmatrix},\begin{pmatrix}d_{1}&c_{1}\\ 0&1\end{pmatrix}\right),\Phi(y)=\left(\begin{pmatrix}b_{2}&a_{2}\\ 0&1\end{pmatrix},\begin{pmatrix}d_{2}&c_{2}\\ 0&1\end{pmatrix}\right).\] From \(\Phi(x)^{p^{n}}=e_{\operatorname{Hol}(C)}\) we get the equations \[b_{1}^{p^{n}} \equiv 1\pmod{p^{n}}, \tag{3.15}\] \[a_{1}\left(1+b_{1}+\cdots+b_{1}^{p^{n}-1}\right) \equiv 0\pmod{p^{n}},\] (3.16) \[d_{1}^{p^{n}} \equiv 1\pmod{q},\] (3.17) \[c_{1}\left(1+d_{1}+\cdots+d_{1}^{p^{n}-1}\right) \equiv 0\pmod{q}. \tag{3.18}\] Note that \(d_{1}^{q-1}\equiv 1\pmod{q}\) and \((q-1,p^{n})=1\). Combining this with Eq. (3.17), we get that \(d_{1}=1\). Then plugging \(d_{1}=1\) in Eq. (3.18), conclude that \(c_{1}=0\). For ensuring regularity, we need to take \(a_{1}\) is a unit in \(\mathbb{Z}_{p^{n}}\). Using the equation \(\Phi(y)^{q}=1\) we get the equations \[b_{2}^{q} \equiv 1\pmod{p^{n}}, \tag{3.19}\] \[a_{2}\left(1+b_{2}+\cdots+b_{2}^{q-1}\right) \equiv 0\pmod{p^{n}},\] (3.20) \[d_{2}^{q} \equiv 1\pmod{q},\] (3.21) \[c_{2}\left(1+d_{2}+\cdots+d_{2}^{q-1}\right) \equiv 0\pmod{q}. \tag{3.22}\] Since the order of \(d_{2}\) divides \(q-1\), we get \(d_{2}=1\) from Eq. (3.21). Finally comparing both sides of the equation \(\Phi(x)^{k}\Phi(y)=\Phi(y)\Phi(x)\) we get (using the conclusions of the preceding discussions) \[b_{1}^{k-1}\equiv 1\pmod{p^{n}} \tag{3.23}\] \[b_{2}a_{1}+a_{2}\equiv b_{1}^{k}a_{2}+a_{1}\left(1+b_{1}+\cdots+ b_{1}^{k-1}\right)\pmod{p^{n}}. \tag{3.24}\] Using Lemma 2.3, Eq. (3.15) and Eq. (3.23) we conclude that \(b_{1}=1\). Putting the value of \(b_{1}\) in Eq. (3.24) we get that \(b_{2}=k\). Further to ensure regularity we need to impose \(c_{2}\neq 0\) (using a similar argument in the discussion after Eq. (3.14)). Thus the total number of regular embeddings in this case is given by \(\varphi(p^{n})p^{n}(q-1)\). **Proposition 3.3**.: _Let \(C\) be the cyclic group of order \(p^{n}q\) and \(G\) be the nonabelian group isomorphic to \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\), where \(p\) and \(q\) are primes. Then \(e(G,C)=p^{n}\) and \(e^{\prime}(G,C)=q-1\)._ Embedding of \(C=\mathbb{Z}_{p^{n}}\times\mathbb{Z}_{q}\) in the \(\operatorname{\mathbf{Hol}}(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q})\) Recall the description of \(\operatorname{\mathrm{Hol}}(G)\) from Section 3.1 and the presentation for \(C\) from Section 3.2. 
Consider a homomorphism \(\Phi:C\longrightarrow\operatorname{Hol}(G)\) determined by \[\Phi(r)=\left(x^{i_{1}}y^{j_{1}},\begin{pmatrix}\beta_{1}&\alpha_{1}\\ 0&1\end{pmatrix}\right),\Phi(s)=\left(x^{i_{2}}y^{j_{2}},\begin{pmatrix}\beta_{2}&\alpha_{2}\\ 0&1\end{pmatrix}\right).\] Given that \(\Phi(r)\) has to be an element of order \(p^{n}\) and the embedding is regular, using a similar argument as in Section 3.1 we conclude that \(j_{1}=0\), \(i_{1}\) is a unit in \(\mathbb{Z}_{p^{n}}\), and \(j_{2}\) is a unit in \(\mathbb{Z}_{q}\). From \(\Phi(r)^{p^{n}}=1\), we get that \[i_{1}\left(1+\beta_{1}+\cdots+\beta_{1}^{p^{n}-1}\right) \equiv 0\pmod{p^{n}},\] \[\alpha_{1}\left(1+\beta_{1}+\cdots+\beta_{1}^{p^{n}-1}\right) \equiv 0\pmod{p^{n}},\] \[\beta_{1}^{p^{n}} \equiv 1\pmod{p^{n}}.\] From the last equation above and [2, Corollary 2.2] we get that \(\beta_{1}\equiv 1\pmod{p}\). Hence the first two equations will always be satisfied irrespective of the choices of \(i_{1}\) and \(\alpha_{1}\). From the equation \(\Phi(s)^{q}=1\), we get \[\beta_{2}^{q} \equiv 1\pmod{p^{n}}, \tag{3.25}\] \[\alpha_{2}(1+\beta_{2}+\beta_{2}^{2}+\ldots+\beta_{2}^{q-1}) \equiv 0\pmod{p^{n}}, \tag{3.26}\] \[\ell_{q} \equiv 0\pmod{p^{n}}, \tag{3.27}\] where \(\ell_{q}\) is as defined in Section 3.1. Furthermore \(\Phi(r)\Phi(s)=\Phi(s)\Phi(r)\) gives that \[\alpha_{2}(\beta_{1}-1) \equiv\alpha_{1}(\beta_{2}-1)\pmod{p^{n}}, \tag{3.28}\] \[i_{1}+\beta_{1}i_{2}+\alpha_{1}k^{-1}\left(1-k^{j_{2}}\right) \equiv i_{2}+k^{j_{2}}\beta_{2}i_{1}\pmod{p^{n}}. \tag{3.29}\] Let \(\beta_{2}=k^{a}\) for some \(a\geq 0\). We divide this into two cases \(a=0\) and \(a\neq 0\). #### 3.3.1. Case I: \(a=0\) In this case we get \(\alpha_{2}=0\) from Eq. (3.26). Hence Eq. (3.28) is always satisfied. Note that Eq. (3.27) holds true, since \(j_{2}+a\neq q\), by using similar arguments as in Section 3.1. Putting \(\beta_{2}=1\) in Eq. (3.29) we get \(\left(i_{1}+\alpha_{1}k^{-1}\right)\left(1-k^{j_{2}}\right)\equiv i_{2}\left(1-\beta_{1}\right)\pmod{p^{n}}\). Hence the choice of \(\alpha_{1}\) gets determined by those of \(i_{1}\), \(i_{2}\), \(\beta_{1}\), and \(j_{2}\). Hence the total number of embeddings in this case becomes \(\varphi(p^{n})p^{2n-1}(q-1)\). #### 3.3.2. Case II: \(a\neq 0\) From Eq. (3.28), substituting \(\alpha_{1}=\alpha_{2}(\beta_{1}-1)(k^{a}-1)^{-1}\) in Eq. (3.29) we get \[i_{1}\left(k^{a}-1\right)\left(1-k^{j_{2}+a}\right)\equiv(1-\beta_{1})\left(i_{2}\left(k^{a}-1\right)+\alpha_{2}k^{-1}\left(1-k^{j_{2}}\right)\right)\pmod{p^{n}}. \tag{3.30}\] We claim that \(j_{2}+a=q\). Indeed, if \(j_{2}+a\neq q\), we have that the LHS of Eq. (3.30) is a unit in \(\mathbb{Z}_{p^{n}}\), whereas \((1-\beta_{1})\) is never a unit (since \(\beta_{1}\equiv 1\pmod{p}\)). Next, putting \(j_{2}+a=q\) in Eq. (3.30), the LHS becomes 0. Substituting \(j_{2}+a=q\) in Eq. (3.27) we get \(i_{2}\equiv-\alpha_{2}k^{-1}(1-k^{j_{2}})(k^{j_{2}}q^{-1})(1+(1+k^{a})k^{j_{2}}+\cdots+(1+k^{a}+\cdots+k^{(q-2)a})k^{(q-2)j_{2}})\pmod{p^{n}}\). Further substituting this value of \(i_{2}\) into Eq. (3.30), we get that both sides of the equation become zero. Hence we get that in this case, the total number of regular embeddings of \(C\) in \(\operatorname{Hol}(G)\) is given by \(\varphi(p^{n})p^{2n-1}(q-1)\). **Proposition 3.4**.: _Let \(C\) be the cyclic group of order \(p^{n}q\) and \(G\) be the nonabelian group isomorphic to \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\).
Then \(e(C,G)=2p^{n-1}(q-1)\) and \(e^{\prime}(C,G)=p^{2n-1}\)._ Now Theorem 1.1 follows from Proposition 3.2, Proposition 3.3, and Proposition 3.4. ## 4. The case \(p<q\) In this section, we prove Theorem 1.2. We use the methods described in the beginning of Section 3. In this case, there are exactly \(b+1\) types of groups up to isomorphism, where \(b=\min\{a,n\}\) with \(p^{a}||q-1\). This section will be divided into four subsections, depending on the isomorphism types of \(G=G_{b_{1}}\) and \(N=G_{b_{2}}\), where \(0\leq b_{1},b_{2}\leq n\). ### Isomorphic type First, we consider the isomorphic case. Let \(G=\mathbb{Z}_{q}\rtimes_{k}\mathbb{Z}_{p^{n}}\), where \(k\) is an element of order \(p^{b}\). We are looking at \(e(G,G)\). #### 4.1.1. The case \(b=0\) In this case, the groups are cyclic and \(e^{\prime}(G,G)\), \(e(G,G)\) have been enumerated in [6, Theorem 2]. #### 4.1.2. The case when \(0<b\leq n\) Let us take a group homomorphism \(\Phi:G_{b}\longrightarrow\operatorname{Hol}(G_{b})\) defined by \[\Phi(x)=\left(y^{j_{1}}x^{i_{1}},\left(\gamma_{1},\begin{pmatrix}\beta_{1}&\alpha_{1}\\ 0&1\end{pmatrix}\right)\right),\text{ and }\Phi(y)=\left(y^{j_{2}}x^{i_{2}},\left(\gamma_{2},\begin{pmatrix}\beta_{2}&\alpha_{2}\\ 0&1\end{pmatrix}\right)\right).\] From \(\Phi(y)^{q}=1\) and from \(\Phi(xy)=\Phi(y^{k}x)\), we get the relations \(i_{2}=0\), \(\beta_{2}=1\), \(\gamma_{2}=1\) and \[\alpha_{2}(k-\beta_{1})\equiv 0\pmod{q}, \tag{4.1}\] \[j_{2}(k^{i_{1}-1}\beta_{1}-1)\equiv\alpha_{2}(1+k+k^{2}+\cdots+k^{i_{1}-1})\pmod{q}. \tag{4.2}\] Thus if \(\alpha_{2}=0\), then \(\beta_{1}=k^{1-i_{1}}\). If \(\alpha_{2}\neq 0\), then \(\beta_{1}=k\) and \(\alpha_{2}=j_{2}(k-1)\). From \(\Phi(x)^{p^{n}}=1\), we get the following equivalences in \(\mathbb{Z}_{q}\). \[\beta_{1}^{p^{n}}=1 \tag{4.3}\] \[\alpha_{1}(1+\beta_{1}+\beta_{1}^{2}+\cdots+\beta_{1}^{p^{n}-1})=0. \tag{4.4}\] By explicit calculations, we can show that the exponent of \(y\) in \(\Phi(x)^{p^{n}}\) is given by \[\operatorname{Exp}_{y}\left(\Phi(x)^{p^{n}}\right)=j_{1}\left(\sum_{u=0}^{p^{n}-1}m^{u}\right)+\frac{\alpha_{1}}{m(k^{\gamma_{1}}-1)}\left\{\sum_{v=1}^{p^{n}-1}m^{p^{n}-v}\left(k^{i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}\right)}-k^{i_{1}}\right)\right\},\] where \(m=\beta_{1}k^{i_{1}}\). Using Eq. (4.1) and Eq. (4.2), we can show that \(m\in\{k,k^{i_{1}+1}\}\). First, let us take \(m=k\). Then \(\sum\limits_{u=0}^{p^{n}-1}m^{u}\equiv 0\pmod{q}\). We aim to show that the other summand is also zero in \(\mathbb{Z}_{q}\). We have in \(\mathbb{Z}_{q}\) \[\sum_{v=1}^{p^{n}-1}m^{p^{n}-v}\left(k^{i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}\right)}-k^{i_{1}}\right)=\sum_{v=1}^{p^{n}}k^{i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}\right)-v}.\] Note that here \(i_{1}\) and \(\gamma_{1}\) are fixed. Denote by \(\Gamma(v)=i_{1}(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1})-v\pmod{p^{n}}\). Suppose for \(1\leq v_{1}\neq v_{2}\leq p^{n}\) we have \(\Gamma(v_{1})\equiv\Gamma(v_{2})\pmod{p^{n}}\). Then we have \(i_{1}(\gamma_{1}^{v_{1}}-\gamma_{1}^{v_{2}})\equiv(v_{1}-v_{2})(\gamma_{1}-1)\pmod{p^{n}}\). The Sylow-\(p\)-subgroup of \(\mathbb{Z}_{p^{n}}^{\times}\) is generated by \((1+p)\), and \(\gamma_{1}\) is an element of \(p\)-power order, say of order \(p^{g}\). Then \(p^{n-g}||\gamma_{1}-1\). Thus \(v_{1}-v_{2}\equiv 0\pmod{p^{g}}\), using Lemma 2.1.
Conversely if \(v_{1}-v_{2}\equiv 0\pmod{\operatorname{ord}\gamma_{1}}\), then \(i_{1}(\gamma_{1}^{v_{1}}-\gamma_{1}^{v_{2}})\equiv(v_{1}-v_{2})(\gamma_{1}-1)\pmod{p^{n}}\). Thus \(\Gamma\) gives rise to a function from \(\mathbb{Z}_{p^{n}}\) to the subset \(\{p^{g},2p^{g},3p^{g},\ldots,p^{n}\}\). Thus the sum is reduced to \(p^{g}\sum\limits_{t=1}^{p^{n-g}}k^{tp^{g}}\). If \(k^{p^{g}}=1\), we get the sum to be zero. Otherwise this sum is \(p^{g}\dfrac{k^{p^{n}}-1}{k^{p^{g}}-1}=0\). This finishes the proof. Now, take the case when \(m=k^{i_{1}+1}\). Then again the multiplier of \(j_{1}\) is zero in \(\mathbb{Z}_{q}\). We claim that the other summand is also zero in the above expression. We have in this case \[\sum\limits_{v=1}^{p^{n}-1}m^{p^{n}-v}\left(k^{i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}\right)}-k^{i_{1}}\right)=\sum\limits_{v=1}^{p^{n}}k^{i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}-v\right)-v}-\sum\limits_{v=1}^{p^{n}}k^{i_{1}(1-v)-v}\] \[=\begin{cases}\sum\limits_{v=1}^{p^{n}}k^{i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}\right)-(i_{1}+1)v}&\text{when $i_{1}+1\neq 0\pmod{p}$}\\ \sum\limits_{v=1}^{p^{n}}k^{i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}-v\right)-v}-\sum\limits_{v=1}^{p^{n}}k^{i_{1}(1-v)-v}&\text{otherwise.}\end{cases}\] We start by considering the first subcase, i.e. \(i_{1}+1\) being a unit in \(\mathbb{Z}_{p^{n}}\). Again denote by \(\Gamma(v)=i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}\right)-(i_{1}+1)v\). Then \(\Gamma(v_{1})\equiv\Gamma(v_{2})\pmod{p^{n}}\) implies that \(i_{1}(\gamma_{1}^{v_{1}}-\gamma_{1}^{v_{2}})\equiv(i_{1}+1)(\gamma_{1}-1)(v_{1}-v_{2})\pmod{p^{n}}\). Then proceeding as before, we get the result. Next, consider the second subcase. In this case, we show that both of the sums are zero. Take \(\Gamma_{1}(v)=i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}-v\right)-v\) and \(\Gamma_{2}(v)=i_{1}(1-v)-v\). Assume \(p^{h}||i_{1}+1\); then \(\Gamma_{2}(v^{\prime})=\Gamma_{2}(v^{\prime\prime})\) iff \(v^{\prime}\equiv v^{\prime\prime}\pmod{p^{n-h}}\), using Lemma 2.1. Thus \(\Gamma_{2}\) determines a function to the subset \(\{p^{n-h},2p^{n-h},\ldots,p^{n}\}\) and hence the second term of the expression above vanishes. An argument similar to the previous cases of \(\Gamma(v)\) shows that the first term is \(0\) as well in \(\mathbb{Z}_{q}\). Thus we have proved the following lemma. **Lemma 4.1**.: _In \(\operatorname{Exp}_{y}\left(\Phi(x)^{p^{n}}\right)\), if the coefficient of \(j_{1}\) is zero in \(\mathbb{Z}_{q}\), then so is the coefficient of \(\alpha_{1}\)._ We claim that \(i_{1}\) is a unit. Suppose \(i_{1}\) is not a unit. We note that \(\Phi(x)^{p^{n-1}}=\left(y^{J},\left(1,\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\right)\right)\), for some \(J\). Note that if \(\beta_{1}=1\) then \(\alpha_{1}=0\), and otherwise \(1+\beta_{1}+\ldots+\beta_{1}^{p^{n}-1}=0\), whence the matrix entry is justified. Now, if \(J=0\) then this map is not regular. Otherwise when \(J\neq 0\), we get that \(J\) is a unit in \(\mathbb{Z}_{q}\). Since \(p\) is a unit in \(\mathbb{Z}_{q}\), we get that \(\Phi(x)^{p^{n}}\) is not the identity element. This proves the claim. Now we are ready to count the number of Hopf-Galois structures on extensions whose group is of the form \(G_{b}\) for some \(0<b\leq n\). This will be divided into four cases. Before proceeding, we note that none of the cases imposes any condition on \(j_{2}\) and \(\gamma_{1}\). _Case \(1\): The case \(\beta_{1}=1\)._ This implies \(\alpha_{2}=0\).
Since if \(\alpha_{2}\neq 0\), then \(\beta_{1}=k\neq 1\). From \(\alpha_{2}=0\) we get that \(i_{1}\equiv 1\pmod{p^{b}}\), from which we get that \(i_{1}\) has \(p^{n-b}\) possibilities. Further \(\alpha_{1}=0\) from Eq. (4.4). In this case, \(j_{1}\) has \(q\) possibilities since \(m\neq 1\), using Lemma 4.1. Thus in this case we get \(\varphi(q)qp^{2(n-b)}\) embeddings. _Case \(2\): The case \(\beta_{1}\neq 1\), and \(\alpha_{2}=0\)._ Note that \(\alpha_{2}=0\) implies that \(\beta_{1}=k^{1-i_{1}}\). Also, \(\beta_{1}\neq 1\) imposes the condition that \(i_{1}\) has \(\varphi(p^{n})-p^{n-b}\) possibilities. In this case, \(j_{1}\) and \(\alpha_{1}\) have \(q\) possibilities each. Thus in this case we have \(\varphi(q)\left(\varphi(p^{n})-p^{n-b}\right)q^{2}p^{n-b}\) embeddings. _Case \(3\): The case \(\beta_{1}\neq 1\), \(\alpha_{2}\neq 0\), and \(1+i_{1}\equiv 0\pmod{p^{b}}\)._ Since \(1+i_{1}\equiv 0\pmod{p^{b}}\), we get \(m=1\). Hence the value of \(j_{1}\) gets fixed. Thus in this case, we have \(\varphi(q)qp^{2(n-b)}\) embeddings. _Case \(4\): The case \(\beta_{1}\neq 1\), \(\alpha_{2}\neq 0\), and \(1+i_{1}\not\equiv 0\pmod{p^{b}}\)._ In this case \(i_{1}\) has \(\varphi(p^{n})-p^{n-b}\) possibilities. Similar to Case \(2\), \(j_{1}\) has \(q\) possible values. Thus in this case, we have \(\varphi(q)\left(\varphi(p^{n})-p^{n-b}\right)q^{2}p^{n-b}\) embeddings. In all of the cases above, the embeddings are regular, which is guaranteed by the conditions that \(i_{1}\) and \(j_{2}\) are units. In conclusion, we have proved the following result. **Proposition 4.2**.: _Let \(G_{b}=\mathbb{Z}_{q}\rtimes_{k}\mathbb{Z}_{p^{n}}\), where \(k\in\mathbb{Z}_{q}\) is of order \(p^{b}\) for some \(0<b\leq n\). Then \(e^{\prime}\left(G_{b},G_{b}\right)=e\left(G_{b},G_{b}\right)=2\left(p^{n-b}+q\left(\varphi(p^{n})-p^{n-b}\right)\right)\)._ ### Non-isomorphic type This case will be divided into three cases, depending on the values of \(b_{1}\) and \(b_{2}\). #### 4.2.1. The case \(1\leq b_{1}\neq b_{2}\leq n\) We will need a variation of Lemma 4.1 for dealing with this case. We start with a presentation of these two groups. For \(t=1\) and \(2\), let us fix \[G_{b_{t}}=\left\langle x_{t},y_{t}\Big{|}x_{t}^{p^{n}}=y_{t}^{q}=1,x_{t}y_{t}x_{t}^{-1}=y_{t}^{k_{t}}\right\rangle,\] where \(k_{t}\) is an element of order \(p^{b_{t}}\). Now let \(\Phi:G_{b_{1}}\longrightarrow\operatorname{Hol}\left(G_{b_{2}}\right)\) be a regular embedding. If \(\Phi(x_{1})=\left(y_{2}^{j_{1}}x_{2}^{i_{1}},\left(\gamma_{1},\begin{pmatrix}\beta_{1}&\alpha_{1}\\ 0&1\end{pmatrix}\right)\right)\), then it can be proved that \[\operatorname{Exp}_{y}\left(\Phi(x_{1})^{p^{n}}\right)=j_{1}\left(\sum_{u=0}^{p^{n}-1}m^{u}\right)+\frac{\alpha_{1}}{m(k_{2}^{\gamma_{1}}-1)}\left\{\sum_{v=1}^{p^{n}-1}m^{p^{n}-v}\left(k_{2}^{i_{1}\left(1+\gamma_{1}+\ldots+\gamma_{1}^{v-1}\right)}-k_{2}^{i_{1}}\right)\right\},\] where \(m=\beta_{1}k_{2}^{i_{1}}\). It can be shown that \(m\in\left\{k_{1},k_{1}k_{2}^{i_{1}}\right\}\) modulo \(q\), using the analogues of Eq. (4.1) and Eq. (4.2) for this case. Note that in any of the cases \(b_{1}<b_{2}\) or \(b_{2}<b_{1}\), \(m\) is purely a power of \(k_{1}\) or \(k_{2}\), since \(\mathbb{Z}_{q}^{\times}\) is cyclic. Then a variation of the argument before Lemma 4.1 proves the following result.
**Lemma 4.3**.: _In \(\operatorname{Exp}_{y}\left(\Phi(x_{1})^{p^{n}}\right)\), if the coefficient of \(j_{1}\) is \(0\) in \(\mathbb{Z}_{q}\), then so is the coefficient of \(\alpha_{1}\)._ Hoping that the reader is now familiar with the flow of arguments, without loss of generality in this case we will assume that the embedding is given by \[\Phi(x_{1})=\left(y_{2}^{j_{1}}x_{2}^{i_{1}},\left(\gamma_{1},\begin{pmatrix}\beta_{1}&\alpha_{1}\\ 0&1\end{pmatrix}\right)\right),\Phi(y_{1})=\left(y_{2}^{j_{2}},\left(1,\begin{pmatrix}1&\alpha_{2}\\ 0&1\end{pmatrix}\right)\right),\] where \(i_{1}\) is a unit in \(\mathbb{Z}_{p^{n}}\) (using the same argument as in Section 4.1), \(\gamma_{1}\) is a unit in \(\mathbb{Z}_{p^{n}}\) satisfying \(\gamma_{1}^{p^{n-b_{2}}}=1\), and \(j_{2}\) is a unit in \(\mathbb{Z}_{q}\). Comparing both sides of the equation \(\Phi(x_{1})\Phi(y_{1})=\Phi(y_{1})^{k_{1}}\Phi(x_{1})\), we get \[\alpha_{2}(k_{1}-\beta_{1})\equiv 0\pmod{q}, \tag{4.5}\] \[k_{2}^{i_{1}}\beta_{1}j_{2}\equiv j_{2}k_{1}+k_{1}\alpha_{2}\left(1+k_{2}+\ldots+k_{2}^{i_{1}-1}\right)\pmod{q}. \tag{4.6}\] From Eq. (4.5) either \(\alpha_{2}=0\) or \(\beta_{1}=k_{1}\). Irrespective of the case, \(\beta_{1}k_{2}^{i_{1}}\neq 1\). Thus from Lemma 4.3, \(j_{1}\) can take any value from \(\mathbb{Z}_{q}\). Now, in the first case, \(\beta_{1}=k_{1}k_{2}^{-i_{1}}\) (from Eq. (4.6)). Also \(\gamma_{1}\) and \(\alpha_{1}\) have \(p^{n-b_{2}}\) and \(q\) many choices respectively. This gives that the total number of embeddings in this case is given by \(\varphi(q)\varphi(p^{n})q^{2}p^{n-b_{2}}\). In the second case, \(\alpha_{2}=(k_{2}-1)j_{2}\) and \(\gamma_{1}\), \(\alpha_{1}\) have \(p^{n-b_{2}}\), \(q\) many choices respectively. Thus the total number of embeddings arising from this case is given by \(\varphi(q)\varphi(p^{n})q^{2}p^{n-b_{2}}\). Given that \(i_{1}\) and \(j_{2}\) are units, we get that the constructed map is regular. We now have the following result. **Proposition 4.4**.: _Let \(G_{b_{t}}=\mathbb{Z}_{q}\rtimes_{k_{t}}\mathbb{Z}_{p^{n}}\), where \(k_{t}\) is an element of \(\mathbb{Z}_{q}\) of order \(p^{b_{t}}\), for \(t=1\), \(2\). Let \(0<b_{1}\neq b_{2}\leq n\). Then_ \[e^{\prime}\left(G_{b_{1}},G_{b_{2}}\right)=2qp^{n+b_{1}-b_{2}-1}(p-1),\ e\left(G_{b_{1}},G_{b_{2}}\right)=2qp^{n-1}(p-1).\] #### 4.2.2. The case \(0=b_{1}<b_{2}\leq n\) In this case \(G_{b_{1}}\) is cyclic and hence the presentations of the groups \(G_{b_{1}}\) and \(G_{b_{2}}\) are chosen to be \[G_{b_{1}}=\left\langle x_{1},y_{1}\Big{|}x_{1}^{p^{n}}=y_{1}^{q}=1,x_{1}y_{1}x_{1}^{-1}=y_{1}\right\rangle,G_{b_{2}}=\left\langle x_{2},y_{2}\Big{|}x_{2}^{p^{n}}=y_{2}^{q}=1,x_{2}y_{2}x_{2}^{-1}=y_{2}^{k_{2}}\right\rangle,\] with \(k_{2}\in\mathbb{Z}_{q}\) being an element of multiplicative order \(p^{b_{2}}\). Fix a homomorphism \(\Phi:G_{b_{1}}\longrightarrow\operatorname{Hol}(G_{b_{2}})\) given by \[\Phi(x_{1})=\left(y_{2}^{j_{1}}x_{2}^{i_{1}},\left(\gamma_{1},\begin{pmatrix}\beta_{1}&\alpha_{1}\\ 0&1\end{pmatrix}\right)\right),\Phi(y_{1})=\left(y_{2}^{j_{2}}x_{2}^{i_{2}},\left(\gamma_{2},\begin{pmatrix}\beta_{2}&\alpha_{2}\\ 0&1\end{pmatrix}\right)\right).\] From the condition \(\Phi(y_{1})^{q}=1\), we get that \(i_{2}=0\), \(\gamma_{2}=1\) and \(\beta_{2}=1\). To ensure the regularity of the maps, we will need \(i_{1}\) and \(j_{2}\) to be units in \(\mathbb{Z}_{p^{n}}\) and \(\mathbb{Z}_{q}\) respectively (see Section 4.1).
Equating the two sides of the equality \(\Phi(x_{1})\Phi(y_{1})=\Phi(y_{1})\Phi(x_{1})\), we get that \[\alpha_{2}(1-\beta_{1})\equiv 0\pmod{q}, \tag{4.7}\] \[\beta_{1}k_{2}^{i_{1}}j_{2}\equiv j_{2}+\alpha_{2}\left(1+k_{2}+\ldots+k_{2}^{i_{1}-1}\right)\pmod{q}. \tag{4.8}\] Hence from Eq. (4.7) we have either \(\alpha_{2}=0\) or \(\beta_{1}=1\). In case \(\alpha_{2}=0\), plugging the value in Eq. (4.8) we get that \(\beta_{1}k_{2}^{i_{1}}=1\), whence \(j_{1}\) has a fixed choice, once \(\alpha_{1}\) is fixed. Furthermore \(\alpha_{1}\), \(\gamma_{1}\) have \(q\), \(p^{n-b_{2}}\) choices. In the case \(\beta_{1}=1\), from Eq. (4.8) we get that \(\alpha_{2}=j_{2}(k_{2}-1)\) and \(\beta_{1}k_{2}^{i_{1}}\neq 1\). Hence Lemma 4.3 applies. Thus \(j_{1}\) and \(\gamma_{1}\) have \(q\) and \(p^{n-b_{2}}\) possibilities. We conclude that in each of the two cases the number of regular embeddings of the cyclic group of order \(p^{n}q\) in \(\operatorname{Hol}\left(G_{b_{2}}\right)\) is given by \(q\varphi(q)p^{n-b_{2}}\varphi(p^{n})\). We have the following result. **Proposition 4.5**.: _Let \(C\) denote the cyclic group of order \(p^{n}q\) and \(G_{b}\cong\mathbb{Z}_{q}\rtimes_{k_{b}}\mathbb{Z}_{p^{n}}\), where \(k_{b}\in\mathbb{Z}_{q}\) is an element of multiplicative order \(p^{b}\). Then_ \[e^{\prime}(C,G_{b})=2p^{n-b}q,\text{ and }e(C,G_{b})=2(p-1)p^{n-1}\] #### 4.2.3. The case \(0=b_{2}<b_{1}\leq n\) Here we count the number \(e^{\prime}(G_{b_{1}},G_{b_{2}})\) (equivalently \(e(G_{b_{1}},G_{b_{2}})\)), where \(G_{b_{2}}\) is a cyclic group of order \(p^{n}q\). In this case, we have \[\operatorname{Hol}(G_{b_{2}})\cong\left\{\left(y_{2}^{j}x_{2}^{i},(\omega,\delta)\right)\ \Big{|}\ (j,i)\in\mathbb{Z}_{q}\times\mathbb{Z}_{p^{n}},\ (\omega,\delta)\in\mathbb{Z}_{q}^{\times}\times\mathbb{Z}_{p^{n}}^{\times}\right\}.\] We fix an embedding \(\Phi:G_{b_{1}}\longrightarrow\operatorname{Hol}\left(G_{b_{2}}\right)\) determined by \[\Phi(x_{1})=\left(y_{2}^{j_{1}}x_{2}^{i_{1}},(\omega_{1},\delta_{1})\right),\text{ }\Phi(y_{1})=\left(y_{2}^{j_{2}}x_{2}^{i_{2}},(\omega_{2},\delta_{2})\right).\] From \(\Phi(y_{1})^{q}=1\), we get \(\omega_{2}=1,\delta_{2}=1\) and \(i_{2}=0\). Considering \(\Phi(x_{1})^{p^{n}}=1\) we get that \(\omega_{1}^{p^{n}}=1\), \(\delta_{1}^{p^{n}}=1\), and \[j_{1}\left(1+\omega_{1}+\ldots+\omega_{1}^{p^{n}-1}\right) \equiv 0\pmod{q}, \tag{4.9}\] \[i_{1}\left(1+\delta_{1}+\ldots+\delta_{1}^{p^{n}-1}\right) \equiv 0\pmod{p^{n}}. \tag{4.10}\] Finally comparing both sides of the equation \(\Phi(x_{1})\Phi(y_{1})=\Phi(y_{1})^{k_{1}}\Phi(x_{1})\), we get that \(\omega_{1}=k_{1}\) and hence Eq. (4.9) gets satisfied automatically. To ensure that the embedding is regular, we will need that \(i_{1}\) and \(j_{2}\) are units. Any choice of \(\delta_{1}\) satisfies Eq. (4.10). Thus \(j_{1}\), \(j_{2}\), \(i_{1}\), and \(\delta_{1}\) have \(q\), \(\varphi(q)\), \(\varphi(p^{n})\), and \(p^{n-1}\) possibilities respectively. We conclude with the following result. **Proposition 4.6**.: _Let \(G_{b}\cong\mathbb{Z}_{q}\rtimes_{k_{b}}\mathbb{Z}_{p^{n}}\), where \(k_{b}\) is an element of \(\mathbb{Z}_{q}\) of order \(p^{b}\), \(1\leq b\leq n\), and \(C\) denote the cyclic group of order \(p^{n}q\). Then we have_ \[e^{\prime}\left(G_{b},C\right)=p^{n+b-2}(p-1),\text{ }e\left(G_{b},C\right)=p^{n-1}q.\] Theorem 1.2 follows from Proposition 4.2, Proposition 4.4, Proposition 4.5, and Proposition 4.6. ## 5.
Given two finite groups \(G\) and \(N\) of the same order, we say that the pair \((G,N)\) is _realizable_ if there exists a Hopf-Galois structure on a Galois \(G\)-extension of type \(N\). In other words, a pair \((G,N)\) is realizable if \(e(G,N)\neq 0\). This is equivalent to saying that there exists a skew brace with its multiplicative group isomorphic to \(G\) and its additive group isomorphic to \(N\). This problem is not well understood in general, since for a given integer \(n\) the classification of all groups of order \(n\) is not known. However, realizability has been studied for a variety of groups. When \(G\) is a cyclic group of odd order and the pair \((G,N)\) is realizable, the author of [6] showed that if \(N\) is abelian then it is cyclic. If \(N\) is a non-abelian simple group and \(G\) is a solvable group with the pair \((G,N)\) being realizable, then \(N\) was completely classified in [18]. The realizability of pairs in which \(N\) or \(G\) is isomorphic to \(\mathbb{Z}_{n}\rtimes\mathbb{Z}_{2}\) for odd \(n\) was studied in [1]. Among the few available techniques, the notion of a bijective crossed homomorphism for studying realizability problems for a pair of groups of the same order was introduced by Tsang in [19]. Given an element \(\mathfrak{f}\in\operatorname{Hom}(G,\operatorname{Aut}(N))\), a map \(\mathfrak{g}\in\operatorname{Map}(G,N)\) is said to be a _crossed homomorphism_ with respect to \(\mathfrak{f}\) if \(\mathfrak{g}(ab)=\mathfrak{g}(a)\mathfrak{f}(a)(\mathfrak{g}(b))\) for all \(a,b\in G\). Setting \(Z_{\mathfrak{f}}^{1}(G,N)=\{\mathfrak{g}:\mathfrak{g}\) is a bijective crossed homomorphism with respect to \(\mathfrak{f}\}\), we have the following two results. **Proposition 5.1**.: _[_19_, Proposition 2.1]_ _The regular subgroups of \(\operatorname{Hol}(N)\) which are isomorphic to \(G\) are precisely the subsets of \(\operatorname{Hol}(N)\) of the form \(\{(\mathfrak{g}(a),\mathfrak{f}(a)):a\in G\},\) where \(\mathfrak{f}\in\operatorname{Hom}(G,\operatorname{Aut}(N)),\mathfrak{g}\in Z_ {\mathfrak{f}}^{1}(G,N)\)._ **Proposition 5.2**.: _[_20_, Proposition 3.3]_ _Let \(G,N\) be two groups such that \(|G|=|N|\). Let \(\mathfrak{f}\in\operatorname{Hom}(G,\operatorname{Aut}(N))\) and \(\mathfrak{g}\in Z_{\mathfrak{f}}^{1}(G,N)\) be a bijective crossed homomorphism (i.e. \((G,N)\) is realizable). Then if \(M\) is a characteristic subgroup of \(N\) and \(H=\mathfrak{g}^{-1}(M)\), we have that the pair \((H,M)\) is realizable._ We will need the following two results, where the realizability of cyclic groups has been characterized. We will use modifications of these characterizations towards proving the realizability of groups of the form \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\). **Proposition 5.3**.: _[_16_, Theorem 3.1]_ _Let \(N\) be a group of odd order \(n\) such that the pair \((\mathbb{Z}_{n},N)\) is realizable. Then \(N\) is a \(C\)-group (i.e. all the Sylow subgroups are cyclic)._ **Proposition 5.4**.: _[_14_, Theorem 1]_ _Let \(G\) be a group of order \(n\) such that \((G,\mathbb{Z}_{n})\) is realizable. Then \(G\) is solvable and almost Sylow-cyclic (i.e. its Sylow subgroups of odd order are cyclic, and every Sylow-\(2\) subgroup of \(G\) has a cyclic subgroup of index at most \(2\))._ **Theorem 5.5**.: _Let \(N\) be a group of order \(qp^{n}\), where \(q\) is a prime, \(q<p\) and \((q,p)=1\).
Then the pair \((\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q},N)\) (or \((N,\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q})\)) is realizable if and only if \(N\cong\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\)._ Proof.: Let \((\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q},N)\) be realizable. By Proposition 5.1 there exists a bijective crossed homomorphism \(\mathfrak{g}\in Z^{1}_{\mathfrak{f}}(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q},N)\) for some \(\mathfrak{f}\in\operatorname{Hom}(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}, \operatorname{Aut}(N))\). Let \(H_{p}\) be the Sylow-\(p\) subgroup of \(N\) (it is unique since \(q<p\)). Then using Proposition 5.2 the pair \((\mathfrak{g}^{-1}H_{p},H_{p})\) is realizable. Note that \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\) has a unique subgroup of order \(p^{n}\), which is cyclic. This implies that \((\mathbb{Z}_{p^{n}},H_{p})\) is realizable. Hence by Proposition 5.3 we get that \(H_{p}\) is isomorphic to \(\mathbb{Z}_{p^{n}}\) and therefore \(N\cong\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\). Conversely, if \(N\cong\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\) then the pair \((\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q},N)\) is realizable since \(e(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q},N)\) is non-zero from Section 3. Now if the pair \((N,\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q})\) is realizable, by Proposition 5.1 there exists a bijective crossed homomorphism \(\mathfrak{g}\in Z^{1}_{\mathfrak{f}}(N,\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q})\) for some \(\mathfrak{f}\in\operatorname{Hom}(N,\operatorname{Aut}(\mathbb{Z}_{p^{n}}\rtimes \mathbb{Z}_{q}))\). Since \(\mathbb{Z}_{p^{n}}\) is a characteristic subgroup of \(\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\), we get that \(\mathfrak{g}^{-1}(\mathbb{Z}_{p^{n}})\) is a subgroup of \(N\) and \((\mathfrak{g}^{-1}(\mathbb{Z}_{p^{n}}),\mathbb{Z}_{p^{n}})\) is realizable. Then by Proposition 5.4, we have that \(\mathfrak{g}^{-1}(\mathbb{Z}_{p^{n}})\) is almost Sylow-cyclic and therefore isomorphic to \(\mathbb{Z}_{p^{n}}\). Hence \(N\cong\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\). Conversely, if \(N\cong\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q}\), then by Section 3 we have that the pair \((N,\mathbb{Z}_{p^{n}}\rtimes\mathbb{Z}_{q})\) is realizable.
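To make the criterion of Proposition 5.1 concrete, here is a small brute-force sketch, in Python, of counting bijective crossed homomorphisms in a toy case. The choice \(G=N=\mathbb{Z}_{m}\) (written additively) and the particular action \(\mathfrak{f}(a)=\) multiplication by \(u^{a}\) are illustrative assumptions only, not the groups studied above; the point is simply to exhibit the defining identity \(\mathfrak{g}(a+b)=\mathfrak{g}(a)+\mathfrak{f}(a)(\mathfrak{g}(b))\) in executable form.

```python
from itertools import product
from math import gcd

def count_bijective_crossed_homs(m, u):
    """Brute-force count of bijective crossed homomorphisms g: Z_m -> Z_m
    with respect to f(a): x -> (u**a) * x (all arithmetic mod m).
    Requires u to be a unit with u**m == 1 (mod m) so that f really is a
    homomorphism Z_m -> Aut(Z_m)."""
    assert m > 1 and gcd(u, m) == 1 and pow(u, m, m) == 1
    count = 0
    for g in product(range(m), repeat=m):       # g[a] plays the role of g(a)
        if len(set(g)) != m:                    # keep only bijective maps
            continue
        if all((g[(a + b) % m] - g[a] - pow(u, a, m) * g[b]) % m == 0
               for a in range(m) for b in range(m)):
            count += 1
    return count

if __name__ == "__main__":
    # With u = 1 the crossed homomorphisms are ordinary homomorphisms, so the
    # bijective ones are the automorphisms of Z_4: phi(4) = 2 of them.
    print(count_bijective_crossed_homs(4, 1))   # -> 2
    # A non-trivial action: f(a) = multiplication by 3**a on Z_4.
    print(count_bijective_crossed_homs(4, 3))
```

The same exhaustive search carried out inside \(\operatorname{Hol}(N)\) would, in principle, recover the counts \(e^{\prime}(G,N)\) above, but only for very small parameter values.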
2307.16891
Foundational Models for Fault Diagnosis of Electrical Motors
A majority of recent advancements related to the fault diagnosis of electrical motors are based on the assumption that training and testing data are drawn from the same distribution. However, the data distribution can vary across different operating conditions during real-world operating scenarios of electrical motors. Consequently, this assumption limits the practical implementation of existing studies for fault diagnosis, as they rely on fully labelled training data spanning all operating conditions and assume a consistent distribution. This is because obtaining a large number of labelled samples for several machines across different fault cases and operating scenarios may be unfeasible. In order to overcome the aforementioned limitations, this work proposes a framework to develop a foundational model for fault diagnosis of electrical motors. It involves building a neural network-based backbone to learn high-level features using self-supervised learning, and then fine-tuning the backbone to achieve specific objectives. The primary advantage of such an approach is that the backbone can be fine-tuned to achieve a wide variety of target tasks using very less amount of training data as compared to traditional supervised learning methodologies. The empirical evaluation demonstrates the effectiveness of the proposed approach by obtaining more than 90\% classification accuracy by fine-tuning the backbone not only across different types of fault scenarios or operating conditions, but also across different machines. This illustrates the promising potential of the proposed approach for cross-machine fault diagnosis tasks in real-world applications.
Sriram Anbalagan, Deepesh Agarwal, Balasubramaniam Natarajan, Babji Srinivasan
2023-07-31T17:58:16Z
http://arxiv.org/abs/2307.16891v1
# Foundational Models for Fault Diagnosis of Electrical Motors ###### Abstract A majority of recent advancements related to the fault diagnosis of electrical motors are based on the assumption that training and testing data are drawn from the same distribution. However, the data distribution can vary across different operating conditions during real-world operating scenarios of electrical motors. Consequently, this assumption limits the practical implementation of existing studies for fault diagnosis, as they rely on fully labelled training data spanning all operating conditions and assume a consistent distribution. This is because obtaining a large number of labelled samples for several machines across different fault cases and operating scenarios may be unfeasible. In order to overcome the aforementioned limitations, this work proposes a framework to develop a foundational model for fault diagnosis of electrical motors. It involves building a neural network-based backbone to learn high-level features using self-supervised learning, and then fine-tuning the backbone to achieve specific objectives. The primary advantage of such an approach is that the backbone can be fine-tuned to achieve a wide variety of target tasks using very less amount of training data as compared to traditional supervised learning methodologies. The empirical evaluation demonstrates the effectiveness of the proposed approach by obtaining more than 90% classification accuracy by fine-tuning the backbone not only across different types of fault scenarios or operating conditions, but also across different machines. This illustrates the promising potential of the proposed approach for cross-machine fault diagnosis tasks in real-world applications. 1 Footnote 1: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Foundational Models, Fault Diagnosis, Electrical Motors, Convolutional Neural Networks, Predictive Maintenance. ## I Introduction Electrical motors are considered to be the workhorses of industries. They deliver reliable and efficient operation across diverse range of applications including commercial, industrial, aerospace, computer systems, robotics and defense [1]. However, these motors are susceptible to a variety of faults that can disrupt their performance, compromise system safety, and lead to unexpected downtime [2]. Efficient and timely detection of faults in electrical motors is crucial for ensuring their optimal operation, minimizing maintenance costs, and preventing catastrophic failures [3]. Robust fault diagnosis and degradation analysis methods for electrical motors have been developed over years of study and engineering work. The goal of fault diagnostics is to recognize and categorize various fault types, from electrical faults like inter-turn fault, stator winding failures to mechanical defects like bearing wear, shaft misalignment and broken rotor bars. The availability of machine condition monitoring data enhanced the development of many fault diagnosis methods. However, most of the existing approaches assume that the distribution of data is the same across training and testing datasets [4], and the model performs poorly when encountered with different fault types and varying severity levels. Based on the application domains, the electrical motors are designed to operate in wide variety of dynamic environments. 
Consequently, the distributions of the data in subsequent testing scenarios are expected to change from those of the data used in training the model [5]. Even though a large amount of condition monitoring data is collected, the manual annotation of data is costly, error-prone and labour-intensive [4]. In order to overcome the aforementioned drawbacks related to differences in distributions of training and testing subsets and limited availability of the labeled samples, this work proposes a framework to develop foundational model for fault diagnosis of electrical motors. The proposed framework comprises of two key steps: (i) building the backbone model; (ii) fine-tuning the backbone to achieve specific objectives. A neural network-based backbone is trained to extract high-level features via self-supervised learning approach, after which it is fine-tuned to capture finer details using only a limited amount of labelled data. The key advantage of this approach is that fine-tuning the backbone enables the model to adapt and perform effectively across a diverse range of target tasks, requiring significantly less training data than conventional supervised learning methodologies. This efficiency makes the proposed approach particularly valuable for real-world applications where labelled data may be scarce or expensive to acquire. The proposed foundational model addresses key challenges in building a real-time fault diagnosis model, such as transferability and adaptability, limited fault coverage, sensitivity to sensor noise, and computational capacity. By incorporating fine-tuning, the model overcomes the limitations of traditional transfer learning, making it versatile and effective in handling diverse fault diagnosis tasks across various industrial applications. Moreover, the ability of the model to work with minimal labeled samples enhances its practicality and usability in real-world scenarios, where data acquisition and labeling can be resource-intensive. ### _Related Work_ In the context of modern industry, machines and equipment are continuously evolving, aiming for higher precision, efficiency, automation, and complexity. However, these advancements also come with an increased risk of breakdowns and accidents. As a result, electrical motor fault diagnosis becomes paramount in ensuring the safety and reliability of industrial equipment. Over the last decade, significant efforts have been dedicated to developing efficient algorithms and innovative approaches to achieve superior diagnostic performance. The existing approaches used for electrical motor fault diagnosis can be broadly grouped into three categories. Firstly, advanced signal processing techniques [6, 7] are used to identify the types and locations of faults in machines. However, these methods heavily rely on specialized knowledge that is often lacking among maintenance personnel in engineering scenarios. Moreover, the diagnostic outcomes produced by signal processing techniques can be highly specialized and challenging for machine users to comprehend. As a result, contemporary industrial applications seek fault diagnosis methods that can automatically recognize the health status of machines. 
Secondly, with the help of probabilistic machine-learning and neural network-based techniques such as ANN, support vector machine (SVM), random forest (RF), [8], convolutional neural network (CNN), stacked autoencoder (SAE), deep belief network (DBN), and deep neural network (DNN) are the most popular approaches that have been widely deployed in electrical motor fault diagnosis [9][10]. However, these methods assume that the training and testing datasets are from the same distribution, making these diagnostic models less robust in varying working conditions of motors. Thirdly, transfer learning, which is a branch of machine learning, emphasizes acquiring common knowledge from one or more related but different application scenarios. It aids AI algorithms in achieving enhanced performance for a specific application of interest, making it a promising electrical motor fault diagnosis methodology [5, 11]. However, the effectiveness of transfer learning in generalizing to the target domain and maintaining diagnostic accuracy may reduce when there is a substantial difference between the source and target domains. The quality and size of the source domain dataset play a crucial role in transfer learning, and inadequate source data can lead to decreased performance. Existing fault diagnosis models have trouble adjusting to new motor configurations or operating circumstances that weren't covered by the training dataset [5]. Model transferability to various motor kinds, sizes, or manufacturers is still difficult and necessitates complex retraining or fine-tuning work. This article presents a novel approach to develop a foundational model-based electrical motor fault diagnosis, aiming to diagnose various faults in electrical motors under different working conditions and across different motor types. ### _Contributions_ This work, for the first time, presents a framework for developing foundational models for fault diagnosis of electrical motors. This problem is approached as a two-step process: 1) Developing a backbone model to learn high-level features via self-supervised learning. 2) Fine-tuning approach to capture the finer details using a limited amount of labelled data. The foundational model is evaluated based on the following aspects: (a) _Expressivity_ - the capacity to acquire and assimilate real-world data efficiently; (b) _Scalability_ - to manage large volumes of high-dimensional data effectively; and (c) _Generalizability_ - the ability to work in varying environmental conditions. The experimental evaluation reveals that the proposed model demonstrates remarkable performance in diagnosing electrical motor faults by achieving the above attributes of the foundational model. Such a study based on foundational models has not been presented yet in the literature related to fault diagnosis of electrical motors and is a novel contribution of this work. The remainder of this article is structured as follows. Section II briefly introduces electrical motor faults and the concept of foundational models. The proposed framework for developing foundational model for fault diagnosis of electrical motors is presented in Section III. The empirical evaluation is conducted and the results are discussed in Section IV. The article ends with concluding remarks in Section V. ## II Preliminaries ### _Faults in Electrical Motors_ An electrical motor has mechanical components like stator, rotor, bearings, and electrical components such as windings and end rings. 
These components work together to facilitate the motor's operation. Electrical motors are engineered to operate under varying industrial and environmental conditions, subjecting them to a wide range of stresses [2]. These stresses contribute to the emergence of various faults within the electrical motor, broadly categorized as mechanical and electrical faults. Among the major faults encountered in electrical motors are bearing faults, shaft misalignment, rotor unbalance, and inter-turn short circuit faults. According to the existing literature, the majority of electrical motor failures are attributed to mechanical faults [12]. In this paper, we address a majority of the mechanical faults that occur in electrical motors, including bearing faults of different sizes considered at various locations (i.e., inner-race, ball, outer-race), rotor unbalance faults and shaft misalignment faults at multiple severity levels. ### _Foundational Models_ Foundational models are technically enabled by transfer learning and scale [13]. The principle behind transfer learning is applying the knowledge gained from one work to another. Pretraining is the prevalent method of transfer learning in deep learning. A model is trained on a surrogate task and tuned to fit the downstream task of interest. Scale makes the foundational model powerful. Foundational models have been recently implemented in both natural language processing (NLP) and computer vision tasks. For instance, the masked language modelling task in Bidirectional Encoder Representations from Transformers (BERT) [14] involves predicting a missing word within a sentence based on its context. However, the true impact of these foundational models on NLP lies not solely in their raw generation capabilities but rather in their remarkable versatility and adaptability. A single foundational model can be effectively customized in multiple ways to perform various linguistic tasks, making them invaluable tools in the field of NLP. However, to the best of authors' knowledge, there is no existing implementation of a foundational model in the field of fault diagnosis of electrical motors. The following section presents a detailed description of our proposed foundational model scheme for fault diagnosis of electrical motors. ## III Methodology The proposed framework for developing a foundational model for fault diagnosis of electrical motors is illustrated in Figure 1. It is approached as a two-step process: 1. A CNN-based backbone model is developed to learn higher-level representations encompassing all potential electrical motor faults. 2. Based on the requirements of individual target tasks, the backbone is fine-tuned to achieve specific objectives. This systematic approach equips the foundational model with the ability to effectively diagnose various electrical motor faults with superior performance in a wide range of scenarios using small fractions of data. All the elements involved in building the foundational models are discussed next. ### _Training the Backbone Model_ The backbone model serves as the fundamental architecture, which is trained using data corresponding to various mechanical faults as well as healthy scenarios over constant speed and varying speed conditions. A recently published dataset [15] is used for training the backbone model. 
It consists of vibration, current, temperature and acoustic data for different faults, including bearing faults (inner and outer races) at constant and variable speeds, shaft misalignment faults, and rotor unbalance faults at a constant speed. A 1D CNN architecture is employed to develop the backbone model, known for its superior performance with raw time series data compared to other networks [16]. The proposed architecture comprises of 15 convolutional layers, a global average pooling layer, and a dense layer. Each convolutional layer consists of 64 filters with a kernel size of 3, employing LeakyReLU as the activation function. The input samples are processed through the initial CNN layer, and this architecture is maintained throughout the subsequent 14 layers. The final layer is a global average pooling layer, followed by a dense layer serving as the output layer with the softmax activation function. We utilize the Adam optimizer for optimisation, while the loss function is chosen as sparse categorical cross-entropy. These choices contribute to the overall effectiveness and efficiency of our model. ### _Defining Target Tasks_ The target tasks considered in this work are divided into two groups, to evaluate the performance of foundational model across (a) fault cases and operating conditions within same machine; and (b) different machines. A hierarchical approach is followed to design the tasks in first group. The initial few tasks are aimed at categorizing the samples into healthy vs. faulty. Once the fault is detected, the subsequent target tasks are designed to diagnose the type, location and severity of faults. Such a hierarchical approach enables a comprehensive fault diagnosis, offering detailed insights into the nature and location of potential faults. This is applied to both constant speed and varying speed conditions. The list of target tasks are tabulated in Table I. The second group of target tasks are designed to evaluate the performance of foundational model across different machines. Specifically, the data from a different motor (i.e., other than the one used for training the backbone model) is used for fine-tuning. Here, the model is tasked to perform bearing fault diagnosis at different speeds and loading conditions. These diverse set of target tasks validates the scalability and adaptability across varying motor datasets, showcasing its versatility to diagnose faults in different scenarios. Furthermore, we also subject the vibration measurements to white Gaussian noise in order to mimic measurement uncertainties and execute the same set of target tasks enlisted in Table I. This further helps to assess the generalizability and robustness of the proposed approach. Fig. 1: Proposed framework for developing foundational models for fault diagnosis of electrical motors. ### _Fine-tuning the Backbone Model_ In transfer learning, fine-tuning refers to the process of adapting a pre-trained neural network model to a new task or domain. The information and characteristics learnt from a pre-trained model are borrowed instead of training a model from scratch, saving a significant amount of time and eliminating the need of massive training data. There are various phases involved in fine-tuning. First, a pre-trained model is chosen as the starting point. Pre-trained models are often trained on a big dataset and relevant tasks. Next, the weights of the pre-trained model are frozen to avoid any changes during the preliminary training stage. 
The model maintains the learnt features and prevents overfitting on the sparse data of the new job by freezing the weights. The pre-trained model then gets augmented by a fresh set of fully connected layers or a few top layers. The pre-trained characteristics of the model must be adjusted to the new job or domain via these newly added top layers. The new layers are fine-tuned using a smaller dataset tailored to the new task during training. By fine-tuning, the model adapts to the unique properties of the new task or domain while inheriting and transferring the information and representations learnt during the pre-training phase. It is expected to achieve stellar performance on the target job while greatly reducing the training time and data requirements. In this work, we execute fine-tuning by focusing on initial and final dense layers of the pre-trained model. In the proposed model, we unfreeze the first three layers out of the fifteen layers utilized in the 1D CNN architecture, enabling them to be trainable during fine-tuning for various target tasks. By fine-tuning these frontal layers, we observed improved performance across various target tasks that we specifically designed for the machine dataset on which the foundational model was originally pre-trained. This unique fine-tuning approach, distinct from traditional transfer learning techniques, expands the capabilities of the foundational model from intra-machine fault diagnosis to inter-machine fault diagnosis with exceptional accuracy, even when utilizing a minimal amount of labelled samples. Section IV provides a comprehensive account of the robust performance of the foundational model when applied to intra- and inter-machine fault scenarios. ## IV Results In order to demonstrate the practical utility of the proposed foundational model in electrical motor fault diagnosis, we conduct a thorough evaluation by fine-tuning the backbone for multiple target tasks involving different fault cases over constant speed and variable speed conditions for same machine as well as other machines. We emphasize that the proposed framework achieves three key attributes, i.e., expressivity, scalability, and generalizability, which are essential for the development of a robust foundational model. In this work, we use three datasets from three distinct machines to train the backbone model and fine-tune it to perform various target tasks during intra- and inter-machine scenarios. A recently published dataset [15] is utilized to develop the backbone model. It consists of two parts: the first part includes vibration, acoustic, temperature, and driving current data collected under varying load conditions. The data comprises measurements under three loading conditions: 0 Nm, 2 Nm and 4 Nm. The sampling frequency for vibration, temperature, and driving current data was set at 25.6 kHz. This dataset contains 120 seconds of vibration data in normal states and 60 seconds in faulty states. The second part of the dataset focuses on vibration and current data acquired from the bearing, faults at different locations such as inner-race, outer-race, and ball. These data were collected under continuously varying speed conditions by modifying the motor speed between 680 and 2460 RPM. In this work, we specifically consider the vibration data for bearing faults, shaft misalignment faults, rotor unbalance faults at a constant speed, and bearing faults at varying speeds. 
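To make the backbone architecture and the fine-tuning strategy (Section III-C) described above concrete, the following is a minimal Keras-style sketch; it is a hypothetical illustration, not the authors' implementation. Only the layer count, filter and kernel sizes, activation, pooling, optimizer, and loss follow the text; the input length, padding scheme, and the exact way the new output head is attached during fine-tuning are assumptions.

```python
from tensorflow.keras import layers, models

def build_backbone(input_length, num_classes):
    """15 Conv1D layers (64 filters, kernel size 3, LeakyReLU), global average
    pooling and a softmax dense head, as described in Section III; 'same'
    padding and the input length are assumptions."""
    inputs = layers.Input(shape=(input_length, 1))
    x = inputs
    for _ in range(15):
        x = layers.Conv1D(64, kernel_size=3, padding="same")(x)
        x = layers.LeakyReLU()(x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def fine_tune(backbone, num_target_classes):
    """Freeze the backbone, unfreeze only the first three Conv1D layers
    (Section III-C), and attach a fresh softmax head for the target task."""
    for layer in backbone.layers:
        layer.trainable = False
    conv_layers = [l for l in backbone.layers if isinstance(l, layers.Conv1D)]
    for l in conv_layers[:3]:
        l.trainable = True
    pooled = backbone.layers[-2].output          # global average pooling output
    head = layers.Dense(num_target_classes, activation="softmax")(pooled)
    model = models.Model(backbone.input, head)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In the experiments reported below, `fine_tune` would be invoked once per target task, using only the small labelled fractions (5-15% of samples) mentioned in Section IV.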
The backbone model is trained by combining data from normal, inner race, outer race, shaft misalignment, rotor unbalance faults at constant speed, and normal, inner race, and outer race faults at variable speed conditions. We consider two additional machine datasets for performing target tasks on different machines: Jiangnan University (JNU) [17] and Case Western Reserve University Bearing Data Center (CWRU) [18]. The JNU dataset consists of three bearing vibration datasets captured at different rotating speeds, with data collected at a sampling frequency of 50 kHz. It contains one healthy state and three fault modes: inner race fault, outer race fault, and rolling element fault. The CWRU dataset consists of vibration signals collected from both normal bearings and damaged bearings with single-point defects. The data was collected under four different motor loads, and the sampling frequency varies between 12 kHz and 48 kHz. Within each working condition, single-point faults were intentionally introduced, with fault diameters of 0.007, 0.014, and 0.021 inches, targeting the rolling element, inner ring, and outer ring of the bearings, respectively. For this particular study, we focused on the data collected from the drive end, the sampling frequency used was set at 48 kHz, and the fault diameter of 0.014 inches was considered. ### _Expressivity_ The expressivity of a foundational model is defined as its capacity to efficiently acquire and assimilate real-world data. It involves the ability of a model to recognize and understand the intricate relationships, characteristics, and patterns found in the data. A highly expressive model is better able to handle a variety of complex situations because it can more precisely reflect the nuances of real-world occurrences. An expressive model should be able to capture the minute fluctuations in vibration, acoustic, temperature, and driving current signals that indicate various fault situations in the motor system when used for fault diagnosis. To enable reliable detection and classification, it should be able to learn and comprehend the distinctive patterns linked to both normal functioning and various faults of electrical motors. To demonstrate the expressive capability of our proposed model, we fine-tuned the backbone model to target tasks outlined in Table I using a machine dataset collected in real-time operation. The results of these target tasks, as presented in Table II, indicate that our model consistently achieves higher accuracy rates for most of the tasks, even when fine-tuned with much lesser labeled samples. A classification accuracy of more than 90% is obtained for almost all the target tasks with just 15% of the labeled samples. This illustrates the ability of the model to effectively capture and understand complex patterns in the data, leading to accurate classification and successful fault diagnosis. ### _Scalability_ Scalability is a crucial aspect of the foundational model, enabling it to manage large volumes of high-dimensional data effectively. This capability is essential for accommodating growing datasets and successfully generalizing them to new and unseen instances. A scalable model possesses the capacity to efficiently process and analyze extensive datasets, facilitating real-time or near-real-time fault diagnosis in practical applications. Furthermore, it should demonstrate the ability to handle diverse machines, varying operating conditions, and different fault types while maintaining high performance and accuracy levels. 
To demonstrate the scalability of our proposed model, we designed target tasks using datasets corresponding to different machines, namely JNU and CWRU. The fine-tuning procedure is executed based on the discussion presented in Section III-C. It can be observed that a classification accuracy of more than 90% is obtained for all the target tasks by using just 5% of the labeled data for fine-tuning. This demonstrates the ability of the model to scale effectively across diverse machine datasets while maintaining its performance in terms of classification accuracy. ### _Generalizability_ Generalizability is the capacity of a model to comprehend and extrapolate from complicated combinations of smaller components or attributes. It is essential for promoting successful generalization to new contexts and surroundings in the context of defect diagnostics. The ability of the model to reason about the combinations of various aspects or components and comprehend how they affect the overall state or condition being observed determines its level of generalizability. This enables the model to generalize its understanding and produce precise forecasts in unusual settings. The generalized model can manage fluctuations in the input and adapt to various environmental conditions. To demonstrate the generalization capability of our proposed model, we fine-tuned the backbone model to target tasks listed in Table I by introducing 10% white Gaussian noise to the raw vibration signal. The performance of the foundational model with the noisy signal is reported in Table V. It can be clearly observed that a classification accuracy of more than 90% is obtained for almost all the target tasks with just 15% of the labeled samples. This demonstrates the robustness of the model and highlights its ability to effectively handle and classify fault conditions even in the presence of noise. ## V Conclusion This work introduces a novel foundational model-based approach for fault diagnosis of electrical motors. It helps to overcome the limitations of different data distributions across working conditions and machines. In contrast to the existing fault diagnosis approaches, the proposed framework involves the development of a CNN-based backbone model that extracts higher-level features from the training dataset. The backbone is then fine-tuned to achieve specific objectives based on target tasks designed to perform diagnosis of different fault types and severity levels across multiple speeds and loading conditions. The proposed approach is evaluated based on three key attributes, namely, expressivity, scalability and generalizability, which are essential for the development of a robust foundational model. The empirical evaluation reveals that a good classification performance is obtained by fine-tuning the backbone with far fewer labeled samples than supervised learning approaches require. This approach gives excellent results, even when the fine-tuning step is executed for target tasks on another machine, thereby enhancing its scalability. The proposed model is evaluated in multiple target tasks by adding white Gaussian noise to the data. The results demonstrate the remarkable generalization ability of the model to handle different operating conditions effectively. Overall, the proposed foundational model-based approach offers a promising electrical motor fault diagnosis solution, accommodating variations in data distributions and achieving robust performance even with limited labelled samples.
The scalability and generalization capabilities of the proposed model make it a valuable tool for real-world applications in fault diagnosis for electrical motors. The future extension of this work shall involve extensive testing across more target tasks and incorporating physics information within the foundational models.
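As a side note on the robustness experiments above, the following is a minimal sketch of how a raw vibration signal might be corrupted with additive white Gaussian noise; reading the paper's "10% white Gaussian noise" as noise whose standard deviation equals 10% of the signal's standard deviation is an assumption.

```python
import numpy as np

def add_white_gaussian_noise(signal, noise_level=0.10, seed=None):
    """Return a copy of a 1-D vibration signal with additive white Gaussian
    noise whose standard deviation is `noise_level` times the standard
    deviation of the clean signal (assumed reading of the 10% setting)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_level * float(np.std(signal)), size=signal.shape)
    return signal + noise
```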
2302.14680
Which One Are You Referring To? Multimodal Object Identification in Situated Dialogue
The demand for multimodal dialogue systems has been rising in various domains, emphasizing the importance of interpreting multimodal inputs from conversational and situational contexts. We explore three methods to tackle this problem and evaluate them on the largest situated dialogue dataset, SIMMC 2.1. Our best method, scene-dialogue alignment, improves the performance by ~20% F1-score compared to the SIMMC 2.1 baselines. We provide analysis and discussion regarding the limitation of our methods and the potential directions for future works. Our code is publicly available at https://github.com/holylovenia/multimodal-object-identification.
Holy Lovenia, Samuel Cahyawijaya, Pascale Fung
2023-02-28T15:45:20Z
http://arxiv.org/abs/2302.14680v2
# Which One Are You Referring To? ###### Abstract The demand for multimodal dialogue systems has been rising in various domains, emphasizing the importance of interpreting multimodal inputs from conversational and situational contexts. One main challenge in multimodal dialogue understanding is multimodal object identification, which constitutes the ability to identify objects relevant to a multimodal user-system conversation. We explore three methods to tackle this problem and evaluate them on the largest situated dialogue dataset, SIMMC 2.1. Our best method, scene-dialogue alignment, improves the performance by \(\sim\)20% F1-score compared to the SIMMC 2.1 baselines. We provide analysis and discussion regarding the limitation of our methods and the potential directions for future works. Our code is publicly available at [https://github.com/holylovenia/multimodal-object-identification](https://github.com/holylovenia/multimodal-object-identification). ## 1 Introduction Recent advancements in multimodal dialogue systems have gained more traction in various domains such as retail, travel, fashion, interior design, and many others. A real-world application of multimodal dialogue systems is situated dialogue, where a dialogue agent shares a co-observed vision or physical space with the user, and is responsible for handling user requests based on the situational context, which are often about the objects in their surroundings. This makes multimodal object identification from a dialogue (i.e., identifying objects that fit a dialogue context) an indispensable skill in multimodal dialogue understanding, built on cross-modal understanding to comprehend the relations between linguistic expressions and visual cues. Various methods have been proposed to perform multimodal object identification through different paradigms (Yu et al., 2016; Hu et al., 2016; Ilinykh et al., 2019; Kamath et al., 2021; Kuo and Kira, 2022). These efforts have established remarkable progress in solving this problem. However, aside from an observed gap between the performance of the existing works and human-level performance in multimodal object identification, prior works also rely on a presumption that the information given by the textual context will only lead to specific (i.e., unambiguous) objects, which does not conform to real-world multimodal conversations where ambiguity exists. Therefore, in this work, we explore three different solutions to enable multimodal object identification in the situated dialogue system, i.e., dialogue-contextualized object detection, object-dialogue alignment, and scene-dialogue alignment, without adopting the unambiguity assumption. Dialogue-contextualized object detection utilizes the spatial and object understanding capability of a pre-trained object detection model, to generate semantic representation containing both visual cues and the spatial understanding of the object. Object-dialogue alignment incorporates the image-text alignment capability of CLIP (Radford et al., 2021), which has been pre-trained on large image-text corpora to perform multimodal object identification from the given dialogue context. Scene-object alignment Figure 1: Multimodal object identification is the fundamental step required to enable multimodal dialogue systems to understand the object referred to by the user. Image is adapted from (Kottur et al., 2021). 
combines the spatial and object understanding capability of a pre-trained object detection model and a pre-trained textual understanding model to produce better semantic vision-language alignment. Our contributions are three-fold: * We introduce three different methods for handling multimodal object identification in situated dialogue, i.e., dialogue-contextualized object detection, object-dialogue alignment, and scene-dialogue alignment; * We show the dialogue-contextualized object detection method fails to outperform even the heuristic baselines despite having an acceptable performance on the object detection task; * We show the effectiveness of the other two methods which significantly outperform the SIMMC 2.1 baselines by \(\sim\)5% F1-score for object-dialogue alignment and \(\sim\)20% F1-score for scene-dialogue alignment; ## 2 Related Work Multimodal Dialogue SystemMultiple studies have attempted to enable the skills required for multimodal dialogue system, e.g., understanding visual (Antol et al., 2015; Das et al., 2017; Kottur et al., 2019) or visual-temporal (Alamri et al., 2019) content to answer user's questions, grounding conversations to images (Mostafazadeh et al., 2017; Shuster et al., 2020), interpreting multimodal inputs and responding with multimodal output to assist users with their goal (Saha et al., 2018) or as a means to converse (Sun et al., 2022), and perceiving the shared environment to grasp situational context to enable proper navigation, adaptation, and communication (Lukin et al., 2018; Brawer et al., 2018; Kottur et al., 2021). At the core of these efforts, the ability to understand language and vision, as well as integrate both representations to align the linguistic expressions in the dialogue with the relevant visual concepts or perceived objects, is the key to multimodal dialogue understanding (Landragin, 2006; Loaiciga et al., 2021, 2018; Kottur et al., 2018; Utescher and Zarriess, 2021; Sundar and Heck, 2022; Dai et al., 2021). Multimodal Object IdentificationIdentifying objects or visual concepts related to a linguistic expression is an incremental exploration in vision-language research. It starts with identifying simple objects in a sanitized environment (Mitchell et al., 2010) based on image descriptions or captions. Then, multimodal object identification has been gradually increasing in complexity and realism by involving visual contexts with cluttered and diverse scenes (Kazemzadeh et al., 2014; Gkatzia et al., 2015; Yu et al., 2016; Mao et al., 2016; Hu et al., 2016; Ilinykh et al., 2019; Kamath et al., 2021; Kuo and Kira, 2022). While these works base their multimodal object identification on single-turn text contexts, another line of works explores the usage of multi-turn sequences as a textual context to enable identifying objects based on implicit constraints deduced through multi-round reasoning (Seo et al., 2017; Johnson et al., 2017; Liu et al., 2019; Moon et al., 2020). However, they focus on identifying only the specific (i.e., unambiguous) objects, in which only a certain object in the scene fits the corresponding linguistic context. This is quite dissimilar from real-world multimodal object identification, where multiple objects could fit a given textual context and induce ambiguity into the conversation (Kottur et al., 2021). 
For this reason, existing works are not equipped with the ability to identify all objects that _plausibly_ fit those constraints although this skill is required to perform multimodal object identification in situated dialogue. Multimodal and Cross-Modal LearningPast works have studied multimodal and cross-modal alignment, grounding, and generation to solve various vision-language tasks, e.g., image captioning (Hossain et al., 2019; Sharma et al., 2018), generating stories from image (Min et al., 2021; Lovenia et al., 2022), as well as multimodal object identification (Li et al., 2019; Wang et al., 2022). These attempts become more substantial and extensive after the rise of pre-trained vision-language models such as CLIP (Radford et al., 2021), ALIGN (Jia et al., 2021), and FLAVA (Singh et al., 2022), which allows transfer knowledge obtained from the large-scale pre-training to downstream tasks. ## 3 Methodology In this section, we describe the preliminaries of our work (SS3.1) and extensively elaborate on each of our approaches, i.e., dialogue-contextualized object detection (SS3.2), object-dialogue alignment (SS3.3), and scene-dialogue alignment (SS3.4). ### Preliminaries The goal of multimodal object identification in situated dialogue is to identify objects from a given scene image that fulfill the user's request gathered from the user-system interactions. To identify the object(s) that could satisfy a user's request in a dialogue, it is crucial to match the objects and the implicit constraints interwoven in the dialogue, e.g., S: "_I do! Take a look at these. I have a brown coat towards the far end on the left wall, another brown coat on the left side of the front floor rack, and a black coat on the front of the same rack._", U: "_Awesome! Tell me the cost and label on that one._". Thus, it is essential for the system to understand the relation between the visual perception of the objects in the scenes and the natural language used to verbalize these constraints, which describe the target object(s) by visual attributes (e.g., color, object category or type, etc.), location (i.e., absolute or relative position), or the combination of both. We define a dialogue between a user and a system as \(D=\{u_{1},s_{1},u_{2},s_{2},\dots,u_{n},s_{n}\}\), a scene consisting of images corresponding to multiple viewpoints of the scene as \(\{I_{1}^{scene},I_{2}^{scene},\dots,I_{n}^{scene}\}\), and a set of objects in the scene as \(O^{scene}=\{(b_{1},c_{1}),(b_{2},c_{2}),\dots,(b_{n},c_{n})\}\), where \(u_{i}\) and \(s_{i}\) respectively denote the user utterance and the system utterance, and \(c_{i}\) and \(b_{i}\) denote the bounding box and the class category of an object. Given a user dialogue turn \(D_{i}^{user}=\{u_{1},s_{1},u_{2},s_{2},\dots,u_{i}\}\), \(i\leq n\), and a scene image \(I_{i}^{scene}\), the goal of the task is to select a subset of scene objects \(O^{match}\subseteq O^{scene}\) that could satisfy the referred criteria in \(D_{i}^{user}\). ### Approach 1: Dialogue-Contextualized Object Detection For dialogue-contextualized object detection, we frame the task of multimodal object identification as the contextualized object detection task. In object detection, given a scene image \(I^{scene}\), we aim to detect all objects \(O^{scene}\) in the scene by predicting their bounding box and class category. While in contextualized object detection, the aim is instead to select only a set of scene objects \(O^{match}\) that satisfy a given context. 
Our approach for dialogue-contextualized object detection extends a state-of-the-art object detection model, namely DETR (Carion et al., 2020), by injecting dialogue information as the context to guide the detection model to filter out unidentified objects. A similar solution has been proposed by Modulated DETR (MDETR) (Kamath et al., 2021). Despite its strong performance on text-contextualized object detection, MDETR requires an aligned annotation between the text phrase and the visual object for training. Such annotation is not available on SIMMC 2.1, hence we develop a new text-contextualized object detection model, namely Situational Context for Multimodal DETR (**SitCoM-DETR**). Unlike MDETR, which concatenates the textual representation along with the visual representation before feeding them into the transformer encoder of DETR (shown in Appendix A), **SitCoM-DETR** injects a dialogue-level semantic representation vector into the input query of the transformer decoder of DETR in order to guide the model to select objects that match the dialogue context. We incorporate the same loss functions as the original DETR model. The depiction of our **SitCoM-DETR** model is shown in Figure 2. Figure 2: The architecture of SitCoM-DETR. SitCoM-DETR consists of a scene encoder and a dialogue encoder to extract multimodal content, respectively. The dialogue representation is used to guide the object detector module to judiciously filter out unrelated scene objects. ### Approach 2: Object-Dialogue Alignment For object-dialogue alignment, we frame the task of multimodal object identification as the alignment between a target object \(O_{i}^{match}\) and a user dialogue turn \(D_{i}^{user}\) pair. Given a user dialogue turn \(D_{i}^{user}\) and its corresponding scene image \(I_{i}^{scene}\), we first preprocess \(I_{i}^{scene}\) to extract the object images of \(O^{match}\). Each of the object images is paired with \(D_{i}^{user}\) to form a positive pair. We obtain the visual embeddings from the image by feeding it to an image encoder, and the textual embeddings from the dialogue turn by feeding it to a text encoder. After these embeddings pass through a linear projection, we calculate the similarity using the dot product between the two resulting vectors. Utilizing the contrastive learning objective, on a batch of object-dialogue pairs, this cross-modal alignment architecture learns by maximizing the similarity of the positive pairs and minimizing the similarity of the negative pairs (Figure 3). **Object-Dialogue Similarity Learning Strategy** The original contrastive learning approaches the object-dialogue alignment task as a one-to-one function, where the positive sample of \(D_{i}\) is only \(O_{i}\) in Figure 3. This is different from the actual nature of multimodal object identification, where more than one object could be relevant to a dialogue turn. For this reason, in addition to the original contrastive learning, we explore two modifications of the learning objective, where: 1) the positive samples of \(D_{i}\) include \(O_{i}\) (image pair) and similar objects1 to \(O_{i}\); and 2) the positive samples of \(D_{i}\) include \(O_{i}\) and other supposedly identified objects in \(D_{i}\). For simplicity, we refer to these methods as **CLIPPER (v1)** and **CLIPPER (v2)**. Footnote 1: We define similar objects to \(O_{i}\) as any other objects in the corresponding scene that use the same prefabricated design as \(O_{i}\) in the SIMMC 2.1 dataset.
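To illustrate the modified objective behind CLIPPER, here is a minimal PyTorch sketch (an assumption-laden illustration, not the authors' implementation): instead of CLIP's one-to-one cross-entropy over a batch, every dialogue-object pair receives a binary label, so several objects can be positive for the same dialogue turn, matching the binary cross-entropy formulation referred to later in Section 5.3. The temperature value and the construction of the positive mask (v1: image pair plus same-prefab objects; v2: image pair plus the other identified objects) are left schematic.

```python
import torch
import torch.nn.functional as F

def clipper_loss(dialogue_emb, object_emb, positive_mask, temperature=0.07):
    """Multi-label contrastive loss sketch for CLIPPER.

    dialogue_emb:  (B, d) embeddings of user dialogue turns
    object_emb:    (B, d) embeddings of object images
    positive_mask: (B, B) float tensor, 1.0 wherever object j is a valid
                   positive for dialogue i (an identity matrix recovers
                   a plain CLIP-style one-to-one pairing), 0.0 otherwise
    """
    d = F.normalize(dialogue_emb, dim=-1)
    o = F.normalize(object_emb, dim=-1)
    logits = d @ o.t() / temperature          # (B, B) pairwise similarities
    # Binary cross-entropy over every pair allows multiple positives per row.
    return F.binary_cross_entropy_with_logits(logits, positive_mask)
```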
### Approach 3: Scene-Dialogue Alignment For scene-dialogue alignment, we aim to combine the spatial understanding learned from object detection training with the image-text matching for multimodal similarity learning to solve multimodal object identification. For this approach, we utilize a pre-trained object detection model, i.e., DETR, and two pre-trained language models, i.e., BERT and GPT2. The resulting models are referred to as **DETR-BERT** and **DETR-GPT2**, respectively. We illustrate the overview of this approach in Figure 4. In this approach, we first frame our dataset as an object detection task, where a data instance consists of a scene image \(I_{i}^{scene}\) and its object annotations \(O^{scene}=\{(b_{1},c_{1}),(b_{2},c_{2}),...,(b_{m},c_{m})\}\), and train an object detection model (DETR) on it. The resulting model is then used to extract the visual representations of all objects in the scene image \(I^{scene}\) by matching the object queries with \(O^{scene}\) using Hungarian matching (Stewart et al., 2016). For the next step, we frame our dataset as a binary classification task, where a data instance consists of a user dialogue turn \(D_{i}^{user}\), an object \(O_{j}^{scene}\) in a corresponding scene \(I_{i}^{scene}\), and a binary label (i.e., whether the object is identified by the user dialogue turn or not). We utilize a dialogue encoder to extract textual representation from a user dialogue turn \(D_{i}^{user}\). The textual representation of \(D_{i}^{user}\) and the visual representation of \(O_{j}^{scene}\) are projected into a latent space. We compute the dot product of the two and use the resulting vector as the prediction logits for training and inference. Figure 3: Learning objectives of the original CLIP (Radford et al., 2021), CLIPPER (v1), and CLIPPER (v2) for the object-dialogue alignment approach. The similarities of the positive pairs (blue) are maximized while the similarities of the negative pairs (white) are minimized. ## 4 Experiment ### Dataset For all of our experiments, we utilize the ambiguous candidate identification task from the SIMMC 2.1 dataset Kottur et al. (2021). The dataset studies conversational scenarios where the system shares a co-observed vision (i.e., the same scene) with the user. The dataset focuses on improving the shopping experience in two domains: fashion and furniture. In the setting of SIMMC 2.1, the system is able to access the ground truth meta information of all objects (e.g., object price, size, material, brand, etc.) in the scene \(O^{scene}\), while the user observes objects only through the scene viewpoints \(\{I_{1}^{scene},I_{2}^{scene},\dots,I_{n}^{scene}\}\) to describe a request. Each dialogue in the dataset can utilize different scene viewpoints at different dialogue turns throughout the session. This represents scenarios where the user navigates the scene during the interaction in a real physical store. Therefore, the multimodal dialogue system needs to understand user requests using both the dialogue history and the scene image as a unified multimodal context. The statistics of the ambiguous candidate identification of SIMMC 2.1 dataset are presented in Table 1.2 Footnote 2: We use the devtest split of SIMMC 2.1 dataset as the test set in our experiment.
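Returning to the scene-dialogue alignment approach (Approach 3) described above, the following is a minimal PyTorch sketch of its binary-classification head: pre-extracted DETR object embeddings and a dialogue-turn embedding are projected into a shared latent space and scored with a dot product. All dimensions are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn as nn

class SceneDialogueAligner(nn.Module):
    """Scores every scene object against a dialogue turn (Approach 3 sketch)."""

    def __init__(self, obj_dim=256, txt_dim=768, proj_dim=512):
        super().__init__()
        self.obj_proj = nn.Linear(obj_dim, proj_dim)   # DETR object embeddings
        self.txt_proj = nn.Linear(txt_dim, proj_dim)   # BERT/GPT2 dialogue embedding

    def forward(self, object_emb, dialogue_emb):
        # object_emb:   (num_objects, obj_dim), matched to O^scene via Hungarian matching
        # dialogue_emb: (txt_dim,), e.g. a pooled encoding of the last three utterances
        o = self.obj_proj(object_emb)                  # (num_objects, proj_dim)
        t = self.txt_proj(dialogue_emb)                # (proj_dim,)
        logits = o @ t                                 # (num_objects,) identification logits
        return logits                                  # train with nn.BCEWithLogitsLoss
```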
### Baselines We incorporate various baselines including simple heuristics and deep learning based multimodal matching methods from SIMMC 2.1.3 For the heuristic methods, we incorporate uniform random prediction (**Random**), empty prediction (**No object**), and all objects prediction (**All objects**) as our baselines. For the deep learning approaches (**ResNet50-BERT** and **ResNet50-GPT2**), we apply cosine similarity between the feature extracted from ResNet-50 He et al. (2016)4 and two widely-used pre-trained LMs, i.e., BERT Devlin et al. (2019)5 and GPT2 Radford et al. (2019)6. Footnote 3: SIMMC 2.1 repository: [https://github.com/facebookresearch/simmc2](https://github.com/facebookresearch/simmc2). Footnote 4: We use the pre-extracted visual feature provided in the SIMMC 2.1 repository. Footnote 5: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased). Footnote 6: [https://huggingface.co/gpt2](https://huggingface.co/gpt2). In addition to these baselines, we incorporate several additional baselines: 1) pre-trained CLIP Radford et al. (2021)7, which serves as a baseline for the object-dialogue alignment approach and 2) pre-trained MDETR Kamath et al. (2021)8, which represents a text-conditioned object detection baseline trained with an explicit alignment between phrases and objects. For CLIP, we report both zero-shot (**CLIP (zero-shot)**) and direct fine-tuning (**CLIP**) performances, while for MDETR, we only use the zero-shot performance (**MDETR (zero-shot)**) due to the unavailability of the explicit alignment between objects and dialogues in the dataset. \begin{table} \begin{tabular}{c c c c} \hline \hline **Split** & **\# Sample** & **\# Dialogue** & \(\frac{O^{match}}{O^{scene}}\) \\ \hline Train & 4239 & 3983 & 28.74\% \\ Validation & 414 & 371 & 24.72\% \\ Test & 940 & 905 & 30.78\% \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the ambiguous candidates identification of the SIMMC 2.1 dataset. Figure 4: Scene-dialogue alignment. We pre-extract the visual embeddings from an object detection model trained on our dataset. The visual embeddings are used together with dialogue embeddings in the next training to perform multimodal object detection as a binary classification task. ### Models We propose three different approaches to solve the multimodal object identification task (§3). For the dialogue-contextualized object detection approach, we incorporate one model, namely **SitCoM-DETR**, which will be compared to the MDETR baseline. For the object-dialogue alignment approach, we incorporate two model variants, i.e., **CLIPPER (v1)** and **CLIPPER (v2)**. For the scene-dialogue alignment approach, we incorporate two model variants, i.e., **DETR-BERT** and **DETR-GPT2**. ### Evaluation Given a label set \(L\) and a prediction set \(P\), we define the number of true positives \(N^{correct}\) as the objects that appear in both the prediction and the label sets. Using this definition, we evaluate the models' performance on the multimodal object identification task using three evaluation metrics, i.e., recall, precision, and F1-score.
Each metric is defined as: \[Recall=\frac{N^{correct}}{\|L\|} \tag{1}\] \[Precision=\frac{N^{correct}}{\|P\|} \tag{2}\] \[F1=\frac{2*Precision*Recall}{Precision+Recall} \tag{3}\] ### Implementation Details **Dialogue Preprocessing** In all of our experiments, following prior works in end-to-end task-oriented dialogue systems, we encode the last three utterances from the dialogue into a single text. For example, a user dialogue turn \(D_{i}^{user}=\{u_{1},s_{1},u_{2},s_{2},\ldots,u_{i}\}\) is encoded into a text "U: <\(u_{i-1}\)> S: <\(s_{i-1}\)> U: <\(u_{i}\)>" to be further processed by the dialogue encoder. **Inference strategy for object-dialogue alignment** For the proposed CLIPPER model in the object-dialogue alignment approach, we simply apply sigmoid to the logits and use a threshold value of 0.5 (denoted as _Sigmoid_), since it has a built-in capability to perform multi-label classification. The CLIP model, which serves as a baseline, does not have the same capability; hence we use the mean value of the logits as the threshold (denoted as _Mean_). Additionally, we also evaluate the performance of the model if the top-\(k\) objects with the highest logits are considered valid predictions, where \(k\) denotes the correct number of objects in the ground-truth label (denoted as _Oracle_). **Inference strategy for dialogue-contextualized object detection** For the dialogue-contextualized object detection, since the model is originally for the object detection task, we develop our own inference strategy to allow it to perform multi-label classification for object identification. This is done through several steps: 1) we perform Hungarian matching using all objects, 2) we compute intersection over union (IoU) of all pairs of matched prediction and ground-truth bounding boxes9, and 3) we take all objects having IoU score \(\geq\)10%10. Footnote 9: We do not consider the class label in the scoring to have a fairer comparison with the zero-shot MDETR approach. Footnote 10: We align this with MDETR’s class probability setting during inference. **Hyperparameter Details** For the dialogue-contextualized object detection, we fine-tune the SitCoM-DETR model for a maximum of 200 epochs with AdamW optimizer using a linear learning rate decay, a learning rate between [1e-4..1e-5], and an early stopping of 10 epochs. For the scene-dialogue alignment, we fine-tune the DETR-BERT and DETR-GPT2 models for a maximum of 200 epochs with AdamW optimizer using a linear learning rate decay, a learning rate between [1e-4..1e-5], and an early stopping of 10 epochs. ## 5 Result and Analysis ### Result Overview The results of our experiments are shown in Table 2. The best baseline performance is achieved by **CLIP (fine-tuned)** with 45.09% F1-score, outperforming the baselines provided by SIMMC 2.1 (i.e., **ResNet50-GPT2** and **ResNet50-BERT**) and showing the superiority of image-text alignment pre-training over separate unimodal pre-trainings for multimodal object identification. For the dialogue-contextualized object detection methods, the proposed **SitCoM-DETR** outperforms **MDETR (zero-shot)**. Nevertheless, its performance for multimodal object identification is low despite having an acceptable object detection quality.
## 5 Result and Analysis ### Result Overview The results of our experiments are shown in Table 2. The best baseline performance is achieved by **CLIP (fine-tuned)** with a 45.09% F1-score, outperforming the baselines provided by SIMMC 2.1 (i.e., **ResNet50-GPT2** and **ResNet50-BERT**) and showing the superiority of image-text alignment pre-training over separate unimodal pre-training for multimodal object identification. For the dialogue-contextualized object detection methods, the proposed **SitCoM-DETR** outperforms **MDETR (zero-shot)**. Nevertheless, its performance for multimodal object identification is low despite having an acceptable object detection quality. We conjecture that a better method for adapting an object detection model to multimodal object identification is required, which is also shown by our _scene-dialogue alignment_ approach in §3.4. For the object-dialogue alignment, our **CLIPPER (v1)** marginally outperforms the **CLIP (fine-tuned)** baseline. This shows the effectiveness of modifying the CLIP objective, which is explained in more detail in §5.3. For the scene-dialogue alignment (i.e., **DETR-BERT** and **DETR-GPT2**), where we combine the object detection and the image-text contrastive objectives, we show a significant improvement of \(\sim\)10-15% F1-score over **CLIP (fine-tuned)**, the highest-performing baseline. This suggests the importance of combining object detection representations and image-text contrastive learning to fulfill the need for both visual and spatial matching in multimodal object identification. ### Pitfalls of the Best Performing Models We manually analyze the incorrect predictions made by our scene-dialogue alignment approaches, i.e., **DETR-BERT** and **DETR-GPT2**. Based on our analysis in Table 5, our models encounter two main issues. First, our models have difficulties in identifying objects when faced with a sudden object shift in the dialogue, e.g., the sudden shift from beds to a chair in the user dialogue turn U: "_I need a new bed too. Any suggestions?_", S: "_Both of these grey beds are in stock._", U: "_What's the rating on that chair?_". The second issue is the ineffectiveness in handling textual coreferences. For instance, in the user dialogue turn U: "_How about a hat, but cheap and in a small?_", S: "_I have the black hat third from the front, the white hat at the front, and the black hat between them._", U: "_What's the brand and reviews for the black hat?_", the models fail to recognize that "the black hat" in the user utterance is anaphoric to either "the black hat third from the front" or "the black hat between them" in the system utterance, which leads to the system's failure to identify both black hats as \(O^{match}\). \begin{table} \begin{tabular}{l l c c c} \hline \hline **Method Type** & **Approach** & **Recall** & **Precision** & **F1-score** \\ \hline \multicolumn{5}{c}{_Baselines_} \\ \multirow{5}{*}{_Heuristic_} & No object & 0.00\% & 0.00\% & 0.00\% \\ & Random & 49.90\% & 22.43\% & 30.95\% \\ & All objects & **100.00**\% & 22.34\% & 36.52\% \\ \hline \multirow{2}{*}{_SIMMC 2.1_} & ResNet50-GPT2 & 36.40\% & 42.26\% & 39.11\% \\ & ResNet50-BERT & 36.70\% & **43.39\%** & 39.76\% \\ \hline \multicolumn{5}{l}{_Dialogue-Contextualized_} \\ \multicolumn{5}{l}{_Object Detection_} \\ \multicolumn{5}{l}{_Object-Dialogue_} \\ \multicolumn{5}{l}{_Alignment_} \\ \multicolumn{5}{l}{_Proposed Methods_} \\ \multicolumn{5}{l}{_Dialogue-Contextualized_} \\ \multicolumn{5}{l}{_Object Detection_} \\ \multicolumn{5}{l}{_Object Detection_} \\ \multicolumn{5}{l}{_Object-Dialogue_} \\ \multicolumn{5}{l}{CLIPPER (v1)} \\ \multicolumn{5}{l}{CLIPPER (v2)} \\ \multicolumn{5}{l}{CLIPPER (v2)} \\ \multicolumn{5}{l}{_Scene-Dialogue_} \\ \multicolumn{5}{l}{_Alignment_} \\ \multicolumn{5}{l}{DETR-BERT_} \\ \multicolumn{5}{l}{_Alignment_} \\ \multicolumn{5}{l}{DETR-GPT2} \\ \multicolumn{5}{l}{63.81\%} \\ \hline \hline \end{tabular} \end{table} Table 2: Experimental results of multimodal object identification on the SIMMC 2.1 dataset (Kottur et al., 2021). **Bold** denotes the best performances of baselines and proposed methods.
_Underline_ denotes the best performances within a method type. This shortcoming also becomes more pronounced if the coreference chains are longer. These issues show the limitations of pre-trained LMs for discourse understanding and analysis, especially in terms of coreference and entity linking Jurafsky and Martin (2019); Pandia et al. (2021); Koto et al. (2021). Additionally, some other cases require the model to process long-term dialogue history dependencies, which existing LMs are not able to handle because of the quadratic cost bottleneck of the attention mechanism of the transformer architecture Vaswani et al. (2017). Adapting an efficient attention mechanism with linear complexity might be beneficial to mitigate this problem. ### Impact of Changing CLIP Objective As shown in Table 3, the CLIPPER models with the binary cross-entropy objective have a built-in capability for multi-label classification with **Sigmoid**, which consistently performs better than the **Mean** thresholding. In addition, **CLIPPER (v1)** outperforms the original CLIP model, which is trained with the cross-entropy loss. These facts suggest that changing the CLIP objective is beneficial for multi-label classification tasks such as multimodal object identification. When using **Oracle**, we observe a significant improvement in F1-score, which mainly comes from an improvement in precision with only a minor degradation in recall. This suggests that there is a very sensitive range of logits that consists of many negative samples and only a few positive ones. To better segregate these few positive samples from the negative ones, hard negative mining techniques such as the focal loss Lin et al. (2020) might be beneficial. ## 6 Discussion Based on the results and analysis, we show that the _scene-object alignment_ approach is the best performing approach, achieving \(\sim\)55-60% F1-score on the multimodal object identification task of SIMMC 2.1. We analyze the behavior of the model and conjecture that existing LMs have limitations in understanding discourse. Additionally, we show the potential benefit of modeling the long-term dependency of the dialogue history to further improve the quality of the multimodal object identification task (§5.2). Lastly, we analyze the limitations of existing image-text contrastive approaches for multimodal object identification and propose an alternative objective to alleviate this limitation (§5.3). For future work, we aim to focus on the scene-dialogue alignment methods to further improve performance on multimodal object identification. We note five potential points of improvement that can be explored further: 1) incorporating cross-object attention in the modality fusion phase to enable a better understanding of the relative positions of objects, 2) incorporating a linear attention mechanism to handle the long-term dependency of the dialogue history, 3) exploring better contrastive objectives for multimodal object identification, 4) improving discourse understanding in LMs to better handle coreference and sudden object shifts, and 5) synthetic scene-dialogue data augmentation, utilizing other publicly available object detection datasets, to handle the in-domain data scarcity problem. Figure 5: Frequency of error types of 100 misclassified samples from **DETR-BERT** and **DETR-GPT2**.
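To make the thresholding strategies reported in Table 3 concrete, the following is a minimal sketch of how each strategy turns a vector of per-object logits into a prediction set; the logits and the ground-truth object count are made-up values for illustration.

```python
# Sketch of the Sigmoid, Mean, and Oracle inference strategies from Table 3.
# The logits and the ground-truth object count k are made-up illustration values.
import torch

logits = torch.tensor([2.3, -0.7, 0.1, -1.5, 1.1])  # hypothetical per-object logits

# Sigmoid: built-in multi-label decision with a fixed 0.5 probability threshold.
sigmoid_pred = (torch.sigmoid(logits) > 0.5).nonzero(as_tuple=True)[0].tolist()

# Mean: threshold at the mean logit (used for the CLIP baseline).
mean_pred = (logits > logits.mean()).nonzero(as_tuple=True)[0].tolist()

# Oracle: keep the top-k logits, where k is the number of ground-truth objects.
k = 2
oracle_pred = logits.topk(k).indices.tolist()
```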
\begin{table} \begin{tabular}{c c c c} \hline \hline **Approach** & **Rec.** & **Prec.** & **F1** \\ \hline \hline **CLIP — Cross-Entropy** & & & \\ Mean & 73.00\% & 32.62\% & 45.09\% \\ Oracle & 74.99\% & 74.96\% & **74.98\%** \\ \hline \hline **CLIPPER (v1) — Binary Cross-Entropy** & & & \\ Sigmoid & 73.41\% & 33.00\% & 45.53\% \\ Mean & 73.08\% & 31.97\% & 44.48\% \\ Oracle & 73.37\% & 73.34\% & **73.36\%** \\ \hline \hline **CLIPPER (v2) — Binary Cross-Entropy** & & & \\ Sigmoid & 59.95\% & 25.60\% & 35.88\% \\ Mean & 53.90\% & 23.42\% & 32.65\% \\ Oracle & 54.92\% & 54.89\% & **54.91\%** \\ \hline \hline \end{tabular} \end{table} Table 3: Results for object-dialogue alignment models with different thresholding strategies. ## 7 Conclusion In this paper, we explore three methods to tackle multimodal object identification and evaluate them on SIMMC 2.1. Our best method, scene-dialogue alignment, improves performance by \(\sim\)20% F1-score compared to the SIMMC 2.1 baselines. We provide an analysis of the incorrect predictions made by our best approach and of the impact of changing the CLIP learning objective. We further discuss the limitations of our methods and potential directions for future work. ## Acknowledgement We appreciate the guidance that Prof. Dan Xu has provided for this research. This work has been supported by the School of Engineering PhD Fellowship Award, the Hong Kong University of Science and Technology, and the PF20-43679 Hong Kong PhD Fellowship Scheme, Research Grants Council, Hong Kong.
2309.04939
Ergodic averages for sparse sequences along primes
We investigate the limiting behavior of multiple ergodic averages along sparse sequences evaluated at prime numbers. Our sequences arise from smooth and well-behaved functions that have polynomial growth. Central to this topic is a comparison result between standard Ces\'{a}ro averages along positive integers and averages weighted by the (modified) von Mangoldt function. The main ingredients are a recent result of Matom\"{a}ki, Shao, Tao and Ter\"{a}v\"{a}inen on the Gowers uniformity of the latter function in short intervals, a lifting argument that allows one to pass from actions of integers to flows, a simultaneous (variable) polynomial approximation in appropriate short intervals, and some quantitative equidistribution results for the former polynomials. We derive numerous applications in multiple recurrence, additive combinatorics, and equidistribution in nilmanifolds along primes. In particular, we deduce that any set of positive density contains arithmetic progressions with step $\lfloor p^c \rfloor$, where $c$ is a positive non-integer and $p$ denotes a prime, establishing a conjecture of Frantzikinakis.
Andreas Koutsogiannis, Konstantinos Tsinas
2023-09-10T06:08:34Z
http://arxiv.org/abs/2309.04939v1
# Ergodic averages for sparse sequences along primes ###### Abstract. We investigate the limiting behavior of multiple ergodic averages along sparse sequences evaluated at prime numbers. Our sequences arise from smooth and well-behaved functions that have polynomial growth. Central to this topic is a comparison result between standard Cesaro averages along positive integers and averages weighted by the (modified) von Mangoldt function. The main ingredients are a recent result of Matomaki, Shao, Tao and Teravainen on the Gowers uniformity of the latter function in short intervals, a lifting argument that allows one to pass from actions of integers to flows, a simultaneous (variable) polynomial approximation in appropriate short intervals, and some quantitative equidistribution results for the former polynomials. We derive numerous applications in multiple recurrence, additive combinatorics, and equidistribution in nilmanifolds along primes. In particular, we deduce that any set of positive density contains arithmetic progressions with step \(\lfloor p^{c}\rfloor\), where \(c\) is a positive non-integer and \(p\) denotes a prime, establishing a conjecture of Frantzikinakis. Key words and phrases:Ergodic averages, recurrence, prime numbers, Hardy fields 2020 Mathematics Subject Classification: Primary: 37A44; Secondary: 28D05, 11B30 The second author was supported by ELIDEK-Fellowship number 5367 (3rd Call for HFRI Ph.D. Fellowships). \(X\), and we examine the limiting behavior of the multiple averages \[\frac{1}{N}\sum_{n=1}^{N}T_{1}^{a_{1}(n)}f_{1}\cdot\ldots\cdot T_{k}^{a_{k}(n)}f_ {k}. \tag{1}\] Throughout the article, these assumptions on the transformations will be implicit; we call the tuple \((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\) a _measure-preserving system_ (or just _system_). Here \(f_{1},\ldots,f_{k}\) are functions in \(L^{\infty}(\mu)\) and we concern ourselves with their convergence mainly in the \(L^{2}\)-sense. In view of Furstenberg's correspondence principle, a satisfactory answer to this problem typically ensures that sets with positive density possess patterns of the form \((m,m+a_{1}(n),\ldots,m+a_{k}(n))\), where \(m,n\in\mathbb{N}\). Specializing to the case where all the sequences are equal and \(T_{i}=T^{i}\), we arrive at the averages \[\frac{1}{N}\sum_{n=1}^{N}T^{a(n)}f_{1}\cdot T^{2a(n)}f_{2}\cdot\ldots\cdot T^{ ka(n)}f_{k}, \tag{2}\] which relate to patterns of arithmetic progressions, whose common difference belongs to the set \(\{a(n)\colon n\in\mathbb{N}\}\). Furthermore, it is particularly tempting to conjecture that results pertaining to mean convergence of the averages in (1) should still be valid, if we restrict the range of summation to a sparse set such as the primes. Normalizing appropriately, we contemplate whether or not the averages \[\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}\colon p\leq N}T_{1}^{a_{1}(p)}f_{1}\cdot \ldots\cdot T_{k}^{a_{k}(p)}f_{k} \tag{3}\] converge in \(L^{2}(\mu)\) and what is the corresponding limit of these averages. Here, \(\pi(N)\) denotes the number of primes less than or equal to \(N\) and \(\mathbb{P}\) is the set of primes. The first results in this direction were established in the case \(k=1\). 
Namely, Sarkozy [40] used methods from analytic number theory to show that sets of positive density contain patterns of the form \((m,m+p-1)\), where \(p\) is a prime.1 Additionally, Wierdl [45] established the even stronger pointwise convergence result for the averages (3) in the case \(k=1\) and \(a_{1}(n)=n\), while Nair generalized this theorem to polynomials evaluated at primes [38]. Footnote 1: Throughout this article, it will be a reoccurring theme that in combinatorial applications, certain arithmetic obstructions force one to consider the set of shifted primes \(\mathbb{P}-1\) (or \(\mathbb{P}+1\)) in place of \(\mathbb{P}\), when dealing with polynomials. This is a necessary assumption, as in such cases the corresponding results for the set \(\mathbb{P}\) are easily seen to be incorrect (see, for example, [42, Remark 1.4]). In the setting of several iterates, the first results were provided by Frantzikinakis, Host, and Kra [16], who established that sets of positive density contain 3-term arithmetic progressions whose common difference is a shifted prime. Furthermore, they demonstrated that the averages in (3) converge in the case \(k=2\), \(T_{1}=T_{2}\) and \(a_{i}(n)=in,\ i\in\{1,2\}\). This was generalized significantly by Wooley and Ziegler [47] to hold in the case that the sequences \(a_{i}(n),\ i\in\{1,\ldots,k\}\) are polynomials with integer coefficients and the transformations \(T_{1},\ldots,T_{k}\) are the same. Following that, Frantzikinakis, Host, and Kra confirmed the validity of the Bergelson-Leibman theorem in [17] along the shifted primes. In addition, they showed that the averages in (3) converge in norm when \(a_{i}(n)\) are integer polynomials. Furthermore, Sun obtained convergence and recurrence results in [42] for a single transformation and iterates of the form \(i\lfloor an\rfloor,\ i\in\{1,\ldots,k\}\) or \(\lfloor jan\rfloor,\ j\in\{1,\ldots,k\}\), with \(a\) irrational. Finally, using the convergence results in [30] along \(\mathbb{N}\) for integer parts of real polynomials and several transformations, the first author extended the convergence result of [17] to real polynomials in [29], obtaining recurrence for polynomials with real coefficients rounded to the closest integer. In all of the previous cases, combinatorial applications along the shifted primes were derived as well. In the case of multiple iterates, a shared theme in the methods used has been the close reliance on the deep results provided by the work of Green and Tao in their effort to show that primes contain arbitrarily long arithmetic progressions [20]. For instance, all results2 relied on the Gowers uniformity of the (modified) von Mangoldt function that was established in [21] conditional to two deep conjectures, which were subsequently verified in [24] and [22]. Footnote 2: The methods in [47] do not invoke the full power of this theorem, although their approach draws heavily from the work of Green and Tao. It was conjectured by Frantzikinakis that the polynomial theorems along primes should hold for more general sequences involving fractional powers \(n^{c}\), such as \(\left\lfloor n^{3/2}\right\rfloor\), \(\left\lfloor n^{\sqrt{2}}\right\rfloor\) or even linear combinations thereof. Indeed, it was conjectured in [10] that the sequence \(\left\lfloor p_{n}^{c}\right\rfloor\), where \(c\) is a positive non-integer and \(p_{n}\) is the sequence of primes is good for multiple recurrence and convergence. 
To be more precise, he conjectured that the averages \[\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}T^{\left\lfloor p^{c}\right\rfloor }f_{1}\cdot\ldots\cdot T^{k\left\lfloor p^{c}\right\rfloor}f_{k} \tag{4}\] converge in \(L^{2}(\mu)\) for all positive integers \(k\) and all positive non-integers \(c\). Analogously, we have the associated multiple recurrence conjecture, namely that all sets of positive upper density contain \(k\)-term arithmetic progressions with common difference of the form \(\left\lfloor p^{c}\right\rfloor\). When \(0<c<1\), one can leverage the fact that the range of \(\left\lfloor p_{n}^{c}\right\rfloor\) contains all sufficiently large integers to establish the multiple recurrence result. Additionally, the convergence of the previous averages is known in the case \(k=1\) since one can use the spectral theorem and the fact that the sequence \(\{p_{n}^{c}a\}\) is equidistributed mod \(1\) for all non-zero \(a\in\mathbb{R}\). This last assertion follows from [41] or [46] when \(c<1\) and [34] in the case \(c>1\). There were significant obstructions to the solution of this problem. One approach would be to modify the comparison method from [17] (concerning polynomials), but the Gowers uniformity of the von Mangoldt functions is insufficient to establish this claim. The other approach would be to use the method of characteristic factors, which is based on the structure theorem of Host-Kra [26]. Informally, this reduces the task of proving convergence to a specific class of systems with special algebraic structure called nilmanifolds. However, this required some equidistribution results on nilmanifolds for the sequence \(\left\lfloor p_{n}^{c}\right\rfloor\), which were very difficult to establish. A similar conjecture by Frantzikinakis was made for more general averages of the form \[\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}T^{\left\lfloor p^{c_{1}} \right\rfloor}f_{1}\cdot\ldots\cdot T^{\left\lfloor p^{c_{k}}\right\rfloor}f _{k}\] for distinct positive non-integer \(c_{1},\ldots,c_{k}\). The recent result of Frantzikinakis [13] verifies that these averages converge in \(L^{2}(\mu)\) to the product of the integrals of the functions \(f_{1},\ldots,f_{k}\) in any ergodic system, even in the more general case where the sequences in the iterates are linearly independent fractional polynomials. The number theoretic input required is a sieve-theoretic upper bound for the number of tuples of primes of a specific form, as well as an equidistribution result on fractional powers of primes in the torus that was already known. These methods relied heavily on the use of the joint ergodicity results in [14] and, thus, the linear independence assumption on the fractional polynomials was absolutely essential. In the same paper, it was conjectured [13, Problem] that the case of fractional polynomials can be generalized to a significantly larger class of functions of polynomial growth, called Hardy field functions, which we consider below. The conjecture asks for necessary and sufficient conditions so that the averages along primes converge to the product of the integrals in ergodic systems. The arguments in [13] cannot cover this larger class of functions,3 as it was remarked in Subsection 1.3 of that article. Footnote 3: A more fundamental obstruction in this more general setting was that the necessary seminorm estimates were unavailable even in the simplest case of averages along \(\mathbb{N}\), apart from some known special cases. 
This was established a few months later by the second author [44]. In this article, our objective is to strengthen the convergence results in [17] and [13] and resolve the convergence problem of the averages in (4). Actually, there is no advantage in confining ourselves to sequences of the form \(\lfloor p^{c}\rfloor\), so we consider the more general class of sequences arising from Hardy field functions of polynomial growth (see Section 2 for the general definition), which, loosely speaking, are functions with pleasant behavior (such as smoothness, for instance). The prototypical example of a Hardy field is the field \(\mathcal{LE}\) of logarithmico-exponential functions, which are defined by a finite combination of the operations \(+,-,\times,\div\) and the functions \(\exp,\log\) acting on a real variable \(t\) and real constants. For instance, the field \(\mathcal{LE}\) contains the functions \(\log^{3/2}t,\,t^{\pi}\), \(t^{17}\log t+\exp(\sqrt{t^{\log t}+\log\log t}).\) The fact that \(\mathcal{LE}\) is a Hardy field was established in [25] and the reader can keep this in mind as a model case throughout this article. We resolve several conjectures involving the convergence of the averages in (3) along Hardy sequences. Consequently, we derive several applications in recurrence and combinatorics that expand the known results in the literature. Finally, we also establish an equidistribution result in nilmanifolds for sequences evaluated at primes. ### Main results We present here our main theorems. We start by stating our mean convergence results, followed by their applications to multiple recurrence and combinatorics, and conclude our presentation with the equidistribution results in nilmanifolds. We will assume below that we are working with a Hardy field \(\mathcal{H}\) that contains the polynomial functions. This assumption is not necessary, but it simplifies the proofs of our main theorems. Besides, this restriction is very mild and the most interesting Hardy fields contain the polynomials. A few results impose additional assumptions on \(\mathcal{H}\) and we state those when necessary. These extra assumptions are a byproduct of convergence results along \(\mathbb{N}\) in the literature that were proved under these hypotheses and we will not need to use the implied additional structure on \(\mathcal{H}\) in any of our arguments. #### 1.2.1. Comparison between averaging schemes For many number-theoretic problems, a suitable proxy for capturing the distribution of the prime numbers is the von-Mangoldt function, which is defined on \(\mathbb{N}\) by \[\Lambda(n)=\begin{cases}\log p&\text{, if }n=p^{k}\text{ for some prime }p\text{ and }k\in\mathbb{N}\\ 0&\text{, otherwise}\end{cases}. \tag{5}\] The function \(\Lambda\) has mean value \(1\) by the prime number theorem. Usually, the prime powers with exponents at least \(2\) contribute a term of significantly lower order in asymptotics, so one can think of \(\Lambda\) as being supported on primes. However, due to the irregularity of the distribution of \(\Lambda\) in residue classes to small moduli, one typically considers a modified version of \(\Lambda\), called the W-tricked version. To define this, let \(w\) be a positive integer and let \(W=\prod_{p\leq w,p\in\mathbb{P}}p\). Then, for any integer \(1\leq b\leq W\) with \((b,W)=1\), we define the \(W\)-tricked von Mangoldt function \(\Lambda_{w,b}\) by \[\Lambda_{w,b}(n)=\frac{\phi(W)}{W}\Lambda(Wn+b), \tag{6}\] where \(\phi\) denotes the Euler totient function. 
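For instance, taking \(w=3\) we have \(W=2\cdot 3=6\) and the admissible residues are \(b\in\{1,5\}\), so that for \(b=1\) the definition in (6) reads \[\Lambda_{3,1}(n)=\frac{\phi(6)}{6}\,\Lambda(6n+1)=\frac{1}{3}\,\Lambda(6n+1),\] a weight supported on those \(n\) for which \(6n+1\) is a prime power, and which again has mean value \(1\) by the prime number theorem in arithmetic progressions.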
Our main result provides a comparison between ergodic averages along primes and averages along natural numbers. This will allow us to transfer mean convergence results for Cesaro averages to the prime setting, answering numerous conjectures regarding norm convergence of averages as those in (3) followed by applications in multiple recurrence and combinatorics. We explain the choice of the conditions on the functions \(a_{ij}\) in Subsection 1.3. Roughly speaking, the first condition implies that the sequence \(a_{ij}\) is equidistributed mod \(1\) due to a theorem of Boshernitzan (see Theorem D in Section 2). **Theorem 1.1**.: _Let \(\ell,k\) be positive integers and, for all \(1\leq i\leq k,\ 1\leq j\leq\ell\), let \(a_{ij}\in\mathcal{H}\) be functions of polynomial growth such that_ \[\lim_{t\to+\infty}\left|\frac{a_{ij}(t)-q(t)}{\log t}\right|=+\infty\ \ \text{for every polynomial}\ q(t)\in\mathbb{Q}[t], \tag{7}\] _or_ \[\lim_{t\to+\infty}|a_{ij}(t)-q(t)|=0\ \ \text{for some polynomial}\ q(t)\in\mathbb{Q}[t]+\mathbb{R}. \tag{8}\] _Then, for any measure-preserving system \((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\) and functions \(f_{1},\ldots,f_{\ell}\in L^{\infty}(\mu)\), we have_ \[\lim_{w\to+\infty}\ \limsup_{N\to+\infty}\ \max_{\begin{subarray}{c}1\leq b \leq W\\ (b,W)=1\end{subarray}}\ \left\|\frac{1}{N}\sum_{n=1}^{N}\big{(}\Lambda_{w,b}(n)-1 \big{)}\prod_{j=1}^{\ell}\big{(}\prod_{i=1}^{k}T_{i}^{\lfloor a_{ij}(Wn+b) \rfloor}\big{)}f_{j}\right\|_{L^{2}(\mu)}=0.\] **Remark 1**.: _We can easily verify that each of the integer parts can be individually replaced by other rounding functions, such as the ceiling function (which we denote by \(\lceil\cdot\rceil\)) or the closest integer function (denoted by \([[\cdot]]\)). This is an immediate consequence of the identities \(\lceil x\rceil=-\lfloor-x\rfloor\) and \([[x]]=\lfloor x+1/2\rfloor,\) for all \(x\in\mathbb{R}\) and the fact that the affine shifts (by rationals) \(q_{1}a_{ij}+q_{2},q_{1},q_{2}\in\mathbb{Q}\), still satisfy (7) or (8) if \(a_{ij}\) does._ Theorem 1.1 is the main tool that we use to derive all of our applications. The bulk of the article is aimed towards establishing it and everything else is practically a corollary (in combination with known norm convergence theorems for Cesaro averages). We remark that unlike several of the theorems below, there are no "independence" assumptions between the functions \(a_{ij}\), although, in applications, we will need to impose analogous assumptions to ensure convergence of the averages, firstly along \(\mathbb{N}\), and then along \(\mathbb{P}\). In order to clarify how the comparison works, we present the following theorem, which is effectively a corollary of Theorem 1.1 and which shall be proven in Section 6. **Theorem 1.2**.: _Let \(\ell,k\) be positive integers, \((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\) be a measure-preserving system and \(f_{1},\ldots,f_{k}\in L^{\infty}(\mu)\). Assume that for all \(1\leq i\leq k,\ 1\leq j\leq\ell\), \(a_{ij}\in\mathcal{H}\) are functions of polynomial growth such that the following conditions are satisfied: (a) Each one of the functions \(a_{ij}(t)\) satisfies either (7) or (8). 
(b) For all positive integers \(W,b\), the averages_ \[\frac{1}{N}\sum_{n=1}^{N}\big{(}\prod_{i=1}^{k}T_{i}^{\lfloor a_{i1}(Wn+b) \rfloor}\big{)}f_{1}\cdot\ldots\cdot\big{(}\prod_{i=1}^{k}T_{i}^{\lfloor a_{ i\ell}(Wn+b)\rfloor}\big{)}f_{\ell} \tag{9}\] _converge in \(L^{2}(\mu)\)._ _Then, the averages_ \[\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\ p\leq N}\big{(}\prod_{i=1}^{k}T_{i}^{ \lfloor a_{i1}(p)\rfloor}\big{)}f_{1}\cdot\ldots\cdot\big{(}\prod_{i=1}^{k}T_ {i}^{\lfloor a_{i\ell}(p)\rfloor}\big{)}f_{\ell} \tag{10}\] _converge in \(L^{2}(\mu)\)._ _Furthermore, if the averages in (9) converge to the function \(F\in L^{\infty}(\mu)\) for all positive integers \(W,b\), then the limit in \(L^{2}(\mu)\) of the averages (10) is equal to \(F\)._ In the setting of Hardy field functions, the fact that we require convergence for sequences along arithmetic progressions is typically harmless. Indeed, convergence results along \(\mathbb{N}\) typically follow from a growth condition on the implicit functions \(a_{ij}\) (such as (7)) and it is straightforward to check that the function \(a_{ij}(Wt+b)\) satisfies a similar growth condition as well. Therefore, one can think of the second condition morally as asking to establish convergence in the case \(W=1\). The final part of Theorem 1.2 allows us to compute the limit of averages along primes in cases where we have an expression for the limit of the standard Cesaro averages. This is possible, in rough terms, whenever the linear combinations of the functions \(a_{ij}\) do not contain polynomials or functions that are approximately equal to a polynomial. The reason for that is that there is no explicit description of the limit of polynomial ergodic averages in a general measure preserving system (although one can get a simplified expression in special cases, or under some total ergodicity assumptions on the system). #### 1.2.2. Convergence of ergodic averages along primes The foremost application is that the averages in (2) converge when \(a(n)\) is a Hardy sequence and when we average along primes. This will also lead to generalizations of Szemeredi's theorem in our applications. The following theorem is a corollary of our comparison and the convergence results in [10] (specifically, Theorems 2.1 and 2.2 of that paper). In conjunction with the corresponding recurrence result of Theorem 1.6 below, we get an affirmative answer to a stronger version of [10, Problem 7] (this problem also reappeared in [12, Problem 27]), which was stated only for sequences of the form \(n^{c},c\in\mathbb{R}^{+}\setminus\mathbb{N}\). **Theorem 1.3**.: _Let \(a\in\mathcal{H}\) be a function of polynomial growth that satisfies either_ \[\lim_{t\to+\infty}\left|\frac{a(t)-cq(t)}{\log t}\right|=+\infty\text{ for every }c\in\mathbb{R}\text{ and every }q\in\mathbb{Z}[t], \tag{11}\] _or_ \[\lim_{t\to+\infty}|a(t)-cq(t)|=d\text{ for some }c,d\in\mathbb{R}\text{ and some }q\in\mathbb{Z}[t]. \tag{12}\] _Then, for any positive integer \(k\), any measure-preserving system \((X,\mathcal{X},\mu,T)\) and functions \(f_{1},\ldots,f_{k}\in L^{\infty}(\mu)\), we have that the averages_ \[\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\,p\leq N}T^{[a(p)]}f_{1}\cdot\ldots\cdot T ^{k[a(p)]}f_{k} \tag{13}\] _converge in \(L^{2}(\mu)\)._ _In particular, if \(a\) satisfies (11), the limit of the averages in (13) is equal to the limit in \(L^{2}(\mu)\) of the averages_ \[\frac{1}{N}\sum_{n=1}^{N}T^{n}f_{1}\cdot\ldots\cdot T^{kn}f_{k}.\] **Comment**. 
We can replace the floor function in (13) with either the function \(\lceil\cdot\rceil\) or the function \([[\cdot]]\). The assumption that the iterates are Hardy field functions can also be relaxed. We discuss this more in Section 7. Observe that there is only one function appearing in the statement of the previous theorem. The following convergence results concern the case where we may have several different Hardy field functions. In both cases, there are some "independence" assumptions between the functions involved, which has the advantage of providing an exact description of the limit for the averages along \(\mathbb{N}\). Thus, we can get a description for the limit along \(\mathbb{P}\) as well. The following theorem concerns the "jointly ergodic" case for one transformation, which refers to the setting when we have convergence to the product of the integrals in ergodic systems. Theorem 1.1 combines with [44, Theorem 1.2] to provide the next result. This generalizes the theorem of Frantzikinakis [13, Theorem 1.1] and gives a positive answer to [13, Problem]. Unlike the previous theorem, we have to impose here an additional assumption on \(\mathcal{H}\), since the respective convergence result along \(\mathbb{N}\) is established under this condition. The field \(\mathcal{LE}\) does not have the property appearing in the ensuing theorem, but it is contained in the Hardy field of Pfaffian functions, which does (for the definition, see [44, Section 2]). **Theorem 1.4**.: _Let \(\mathcal{H}\) be a Hardy field that contains \(\mathcal{LE}\) and is closed under composition and compositional inversion of functions, when defined.4 For a positive integer \(k,\) let \(a_{1},\ldots,a_{k}\) be functions of polynomial growth and assume that every non-trivial linear combination \(a\) of them satisfies_ Footnote 4: This means that if \(f,g\in\mathcal{H}\) are such that \(g(t)\to+\infty,\) then \(f\circ g\in\mathcal{H}\) and \(g^{-1}\in\mathcal{H}.\) \[\lim_{t\to+\infty}\Bigl{|}\frac{a(t)-q(t)}{\log t}\Bigr{|}=+\infty\text{ for every }q(t)\in\mathbb{Z}[t]. \tag{14}\] #### 1.2.3. Applications to multiple recurrence and combinatorics In this subsection, we will translate the previous convergence results to multiple recurrence results and then combine them with Furstenberg's correspondence principle to extrapolate combinatorial applications. Due to arithmetic obstructions arising from polynomials, we have to work with the set of shifted primes in some cases. In addition, it was observed in [29] that in the case of real polynomials, one needs to work with the rounding to the closest integer function instead of the floor function. Indeed, even in the case of sequences of the form \(\lfloor ap(n)+b\rfloor\), explicit conditions that describe multiple recurrence are very complicated (cf. [10, Footnote 4]). Our first application relates to averages of the form (3). We have the following theorem. **Theorem 1.6**.: _Let \(a\in\mathcal{H}\) be a function of polynomial growth._
Then, for any measure-preserving system \((X,\mathcal{X},\mu,T),\)\(k\in\mathbb{N},\) and set \(A\) with positive measure we have the following: (a) If \(a\) satisfies (11), we have_ \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\mu(A\cap T^{ -\lfloor a(p)\rfloor}A\cap\cdots\cap T^{-k\lfloor a(p)\rfloor}A)>0.\] _(b) If \(a\) satisfies (12) with \(cp(0)+d=0\),6 then for any set \(A\) with positive measure, the set_ Footnote 6: Notice here the usual necessary assumption that we have to postulate on the polynomial, i.e., to have no constant term, in order to obtain a recurrence, and, consequently, a combinatorial result. \[\left\{n\in\mathbb{N}:\;\mu\big{(}A\cap T^{-\left[\left[a(n)\right]\right]}A \cap\cdots\cap T^{-k\left[\left[a(n)\right]\right]}A\big{)}>0\right\}\] _has non-empty intersection with the sets \(\mathbb{P}-1\) or \(\mathbb{P}+1\)._ We recall that for a subset \(E\) of \(\mathbb{N},\) its upper density \(\bar{d}(E)\) is defined by \[\bar{d}(E):=\limsup_{N\to+\infty}\frac{\left|E\cap\left\{1,\ldots,N\right\} \right|}{N}.\] **Corollary 1.7**.: _For any set \(E\subseteq\mathbb{N}\) of positive upper density, \(k\in\mathbb{N},\) and function \(a\in\mathcal{H}\) of polynomial growth, the following holds: (a) If \(a\) satisfies (11), we have_ \[\liminf_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\bar{d} \big{(}E\cap\left(E-\lfloor a(p)\rfloor\right)\cap\cdots\cap\left(E-k\lfloor a (p)\rfloor\right)\big{)}>0.\] _(b) If \(a\) satisfies (12) with \(cp(0)+d=0\), then the set_ \[\big{\{}n\in\mathbb{N}:\;\bar{d}\big{(}E\cap\left(E-\left[\left[a(n)\right] \right]\right)\cap\cdots\cap\left(E-k\left[\left[a(n)\right]\right]\right) \big{)}>0\big{\}}\] _has non-empty intersection with the sets \(\mathbb{P}-1\) or \(\mathbb{P}+1\).7_ Footnote 7: In this case only, \(\bar{d}(E)\) can be replaced by \(d^{*}(E):=\limsup_{|I|\to+\infty}\frac{\left|E\cap I\right|}{|I|}\) following the arguments from [29], where the \(\limsup\) is taken along all intervals \(I\subseteq\mathbb{Z}\) with lengths tending to infinity. Specializing to the case where \(a(n)=n^{c}\) where \(c\) is a positive non-integer, Theorem 1.3 and part (a) of Theorem 1.6 provide an affirmative answer to [12, Problem 27]. **Remark 3**.: _In part (a) of both Theorem 1.6 and Corollary 1.7, one can evaluate the sequences along \(p+u\) instead of \(p\), for any \(u\in\mathbb{Z}\), or even more generally along the affine shifts \(ap+b\) for \(a,b\in\mathbb{Q}\) with \(a\neq 0\). This follows from the fact that the function \(a_{i}(at+b)\) satisfies (11) as well. However, the shifts \(p-1\) and \(p+1\) are the only correct ones in part (b) of Theorem 1.6. Notice also that the function \(\lfloor\cdot\rfloor\) can be replaced by \(\lceil\cdot\rceil\) or \(\left[\left[\cdot\right]\right]\) in part (a) of the two previous statements._ Now, we state the recurrence result obtained by Theorem 1.4. **Theorem 1.8**.: _Let \(k\in\mathbb{N},\)\(\mathcal{H}\) be a Hardy field that contains \(\mathcal{LE}\) and is closed under composition and compositional inversion of functions, when defined, and suppose \(a_{1},\ldots,a_{k}\in\mathcal{H}\) are functions of polynomial growth whose non-trivial linear combinations satisfy (14). 
Then, for any measure-preserving system \((X,\mathcal{X},\mu,T),\) and set \(A\) with positive measure, we have that_ \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\mu\big{(}A \cap T^{-\lfloor a_{1}(p)\rfloor}A\cap\cdots\cap T^{-\lfloor a_{k}(p)\rfloor}A \big{)}\geq\big{(}\mu(A)\big{)}^{k+1}.\] **Corollary 1.9**.: _For any \(k\in\mathbb{N},\) set \(E\subseteq\mathbb{N}\) of positive upper density, Hardy field \(\mathcal{H}\) and functions \(a_{1},\ldots,a_{k}\in\mathcal{H}\) as in Theorem 1.8, we have_ \[\liminf_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\bar{d} \big{(}E\cap(E-\lfloor a_{1}(p)\rfloor)\cap\cdots\cap(E-\lfloor a_{k}(p) \rfloor)\big{)}\geq\big{(}\bar{d}(E)\big{)}^{k+1}.\] In particular, we conclude that for any set \(E\subseteq\mathbb{N}\) with positive upper density and \(a_{1},\ldots,a_{k}\) as above, the set \[\{n\in\mathbb{N}:\;\;\text{there exists}\;m\in\mathbb{N}\;\text{such that}\;m,m+ \lfloor a_{1}(n)\rfloor,\ldots,m+\lfloor a_{k}(n)\rfloor\in E\}\] has non-empty intersection with the set \(\mathbb{P}\). The following is a multidimensional analog of Theorem 1.8 and relies on the convergence result of Theorem 1.5. **Theorem 1.10**.: _Let \(k\in\mathbb{N},\)\(\mathcal{H}\) be a shift-invariant Hardy field and suppose that \(a_{1},\ldots,a_{k}\in\mathcal{H}\) are functions of polynomial growth that satisfy the hypotheses of Theorem 1.5. Then, for any system \((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\) and set \(A\) with positive measure, we have that_ \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\mu\big{(}A \cap T_{1}^{-\lfloor a_{1}(p)\rfloor}A\cap\cdots\cap T_{k}^{-\lfloor a_{k}(p )\rfloor}A\big{)}\geq\big{(}\mu(A)\big{)}^{k+1}.\] Lastly, we present the corresponding combinatorial application of our last multiple recurrence result. We recall that for a set \(E\subseteq\mathbb{Z}^{d},\) its _upper density_ is given by \[\bar{d}(E):=\limsup_{N\to+\infty}\frac{|E\cap\{-N,\ldots,N\}^{d}|}{(2N)^{d}}.\] **Corollary 1.11**.: _For any \(k\in\mathbb{N},\) set \(E\subseteq\mathbb{Z}^{d}\) of positive upper density, Hardy field \(\mathcal{H}\) and functions \(a_{1},\ldots,a_{k}\in\mathcal{H}\) as in Theorem 1.10 and vectors \(\mathbf{v}_{1},\ldots,\mathbf{v}_{k}\in\mathbb{Z}^{d}\), we have_ \[\liminf_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\bar{d} \big{(}E\cap(E-\lfloor a_{1}(p)\rfloor\mathbf{v}_{1})\cap\cdots\cap(E-\lfloor a _{k}(p)\rfloor\mathbf{v}_{k})\big{)}\geq\big{(}\bar{d}(E)\big{)}^{k+1}.\] **Comment**. Once again, we remark that in the recurrence results in both Theorem 1.8 and Theorem 1.10 and the corresponding corollaries, one can replace \(p\) with any other affine shift \(ap+b\) with \(a,b\in\mathbb{Q}\) (\(a\neq 0\)), as we explained in Remark 3. In addition, one can replace the floor functions with either \(\lceil\cdot\rceil\) or \([[\cdot]]\). #### 1.2.4. Equidistribution in nilmanifolds In this part, we present some results relating to pointwise convergence in nilmanifolds along Hardy sequences evaluated at primes. We have the following theorem that is similar in spirit to Theorem 1.2. **Theorem 1.12**.: _Let \(k\) be a positive integer. Assume that \(a_{1},\ldots,a_{k}\in\mathcal{H}\) are functions of polynomial growth, such that the following conditions are satisfied: (a) For every \(1\leq i\leq k\), the function \(a_{i}(t)\) satisfies either (7) or (8). 
(b) For all positive integers \(W,b\), any nilmanifold \(Y=H/\Delta\), pairwise commuting elements \(u_{1},\ldots,u_{k}\) and points \(y_{1},\ldots,y_{k}\in Y\), the sequence_ \[\Big{(}u_{1}^{\lfloor a_{1}(Wn+b)\rfloor}y_{1},\ldots,u_{k}^{\lfloor a_{k}(Wn+b )\rfloor}y_{k}\Big{)}\] _is equidistributed on the nilmanifold \(\overline{(u_{1}^{Z}y_{1})}\times\cdots\times\overline{(u_{k}^{Z}y_{k})}.\)_ _Then, for any nilmanifold \(X=G/\Gamma\), pairwise commuting elements \(g_{1},\ldots,g_{k}\in G\) and points \(x_{1},\ldots,x_{k}\in X\), the sequence_ \[\Big{(}g_{1}^{\lfloor a_{1}(p_{n})\rfloor}x_{1},\ldots,g_{k}^{\lfloor a_{k}(p_{ n})\rfloor}x_{k}\Big{)}_{n\in\mathbb{N}},\] _where \(p_{n}\) denotes the \(n\)-th prime, is equidistributed on the nilmanifold \(\overline{(g_{1}^{\mathbb{Z}}x_{1})}\times\cdots\times\overline{(g_{k}^{ \mathbb{Z}}x_{k})}\)._ Instead of the "pointwise convergence" assumption (b), one can replace it with a weaker convergence (i.e. in the \(L^{2}\)-sense) hypothesis. However, we will not benefit from this in applications, so we opt to not state our results in that setup. In the case of a polynomial function, a convergence result along primes follows by combining [22, Theorem 7.1] (which is the case of linear polynomials) and the fact that any polynomial orbit on a nilmanifold can be lifted to a linear orbit of a unipotent affine transformation on a larger nilmanifold (an argument due to Leibman [33]). Nonetheless, in this case, we do not have a nice description for the orbit of this polynomial sequence. On the other hand, equidistribution results in higher-step nilmanifolds (along primes) for sequences such as \(\lfloor n^{c}\rfloor\), with \(c\) a non-integer (\(c>1\)), are unknown even in the simplest case of one fractional power. Theorem 1.12 will allow us to obtain the first results in this direction from the corresponding results along \(\mathbb{N}\). Equidistribution results for Hardy sequences along \(\mathbb{N}\) were obtained originally by Frantzikinakis in [9], while more recently new results were established by Richter [39] and the second author [43]. In view of the structure theory of Host-Kra [26], results of this nature are essential to demonstrate that the corresponding multiple ergodic averages along \(\mathbb{N}\) converge in \(L^{2}(\mu)\). All of the pointwise convergence theorems that we mentioned above can be transferred to the prime setting. As an application, we state the following sample corollary of Theorem 1.12. The term invariant under affine shifts refers to a Hardy field \(\mathcal{H}\) for which \(a(Wt+b)\in\mathcal{H}\) whenever \(a\in\mathcal{H}\), for all \(W,b\in\mathbb{N}\). **Corollary 1.13**.: _Let \(k\) be a positive integer, \(\mathcal{H}\) be a Hardy field invariant under affine shifts, and suppose that \(a_{1},\ldots,a_{k}\in\mathcal{H}\) are functions of polynomial growth, for which there exists an \(\varepsilon>0\), so that every non-trivial linear combination \(a\) of them satisfies_ \[\lim_{t\to+\infty}\Bigl{|}\frac{a(t)-q(t)}{t^{\varepsilon}}\Bigr{|}=+\infty \text{ for every }q(t)\in\mathbb{Z}[t]. 
\tag{16}\] _Then, for any collection of nilmanifolds \(X_{i}=G_{i}/\Gamma_{i}\)\(i=1,\ldots,k\), elements \(g_{i}\in G_{i}\) and points \(x_{i}\in X_{i}\), the sequence_ \[\big{(}g_{1}^{\lfloor a_{1}(p_{n})\rfloor}x_{1},\ldots,g_{k}^{\lfloor a_{k}(p _{n})\rfloor}x_{k}\big{)}_{n\in\mathbb{N}},\] _where \(p_{n}\) denotes the \(n\)-th prime, is equidistributed on the nilmanifold \(\overline{(g_{1}^{\mathbb{Z}}x_{1})}\times\cdots\times\overline{(g_{k}^{ \mathbb{Z}}x_{k})}\)._ The assumption in (16) is a byproduct of the corresponding equidistribution result along \(\mathbb{N}\) proven in [43]. Also, the assumption on \(\mathcal{H}\) can be dropped since the arguments in [43] rely on some growth assumptions on the functions \(a_{i}\) which translate over to their shifted versions. We choose not to remove the assumption here since the results in [43] are not stated in this setup. Our corollary implies that the sequence \[\big{(}g_{1}^{\lfloor p_{n}^{c_{1}}\rfloor}x_{1},\ldots,g_{k}^{\lfloor p_{n}^{ c_{k}}\rfloor}x_{k}\big{)}\] is equidistributed on the subnilmanifold \(\overline{(g_{1}^{\mathbb{Z}}x_{1})}\times\cdots\times\overline{(g_{k}^{ \mathbb{Z}}x_{k})}\) of \(X_{1}\times\cdots\times X_{k}\), for any distinct positive non-integers \(c_{1},\ldots,c_{k}\) and for all points \(x_{i}\in X_{i}\). This is stronger than the result of Frantzikinakis [13] that establishes convergence in the \(L^{2}\)-sense (for linearly independent fractional polynomials). This result is novel even in the simplest case \(k=1\). Furthermore, we remark that in the case \(k=1\) we can actually replace (16) with the optimal condition that \(a(t)-q(t)\) grows faster than \(\log t\), for all \(q(t)\) that are real multiples of integer polynomials, using the results from [9]. ### Strategy of the proof and organization The bulk of the paper is spent on establishing the asserted comparison between the \(W\)-tricked averages and the standard Cesaro averages (Theorem 1.1). The main trick is to recast our problem to the setting where our averages range over a short interval of the form \([N,N+L(N)]\), where \(L(t)\) is a function of sub-linear growth chosen so that Hardy sequences are approximated sufficiently well by polynomials in these intervals. Naturally, the study of the primes in short intervals requires strong number theoretic input and this is provided by the recent result in [36] on the Gowers uniformity of several arithmetic functions in short intervals (this is Theorem A in the following section). The strategy of restricting ergodic averages to short intervals was first used by Frantzikinakis in [10] to demonstrate the convergence of the averages in (2) when \(a(n)\) is a Hardy sequence and then amplified further by the second author in [44] to resolve the problem in the more general setting of the averages in (1) (for one transformation). Certainly, the uniformity estimate in Theorem A requires that the interval is not too short, but it was observed in [44] that one can take the function \(L(t)\) to grow sufficiently fast, as long as one is willing to tolerate polynomial approximations with much larger degrees. After this step has been completed, one typically employs a complexity reduction argument (commonly referred to as PET induction in the literature) that relies on repeated applications of the van der Corput inequality. 
Using this approach, one derives iterates that are comprised of several expressions with integer parts, which are then assembled together using known identities for the floor function (with an appropriate error). This approach was used for the conventional averages over \(\mathbb{N}\) in [10] and [44], because one can sloppily combine integer parts in the iterates at the cost of inserting a bounded weight in the corresponding averages. To be more precise, this weight is actually the characteristic function of a subset of \(\mathbb{N}\). However, we cannot afford to do this blindly in our setting, since there is no guarantee that this subset of \(\mathbb{N}\) does not correlate very strongly with \(\Lambda_{w,b}(n)-1\), which could imply that the resulting average is large. The fact that the weight \(\Lambda_{w,b}-1\) is unbounded complicates this step as well. Nonetheless, it was observed in [29] (using an argument from [30]),8 that if the fractional parts of the sequences in the iterates do not concentrate heavily around \(1\), then one can pass to an extension of the system \((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\), wherein the actions \(T_{i}\) are lifted to \(\mathbb{R}\)-actions (also called measure-preserving flows) and the integer parts are removed. Since there are no nuisances with combining rounding functions in the iterates, one can then run the complexity reduction argument in the new system and obtain the desired bounds. Footnote 8: This argument was first used for \(k=1\) in [5] and [35] to prove that when a sequence of real positive numbers is good for (single term) pointwise convergence, then its floor value is also good. The method was later adapted to the \(k=2\) setting by Wierdl (personal communication with the first author, 2015). Unfortunately, there is still an obstruction in this approach arising from the fact that the flows in the extension are not continuous. To be more precise, let us assume that we derived an approximation of the form \(a(n)=p_{N}(n)+\varepsilon_{N}(n)\), where \(n\in[N,N+L(N)]\), \(p_{N}(n)\) is a Taylor polynomial and \(\varepsilon_{N}(n)\) is the remainder term. The PET induction can eliminate the polynomials \(p_{N}(n)\), by virtue of the simple observation that taking sufficiently many "discrete derivatives" makes a polynomial vanish. However, this procedure cannot eliminate the error term \(\varepsilon_{N}(n)\) at all and the fact that the flow is not continuous prohibits us from replacing them with zero. Thus, we take action to discard \(\varepsilon_{N}(n)\) beforehand. This is done by studying the equidistribution properties of the polynomial \(p_{N}(n)\) in the prior approximation, using standard results from the equidistribution theory of finite polynomial orbits due to Weyl. Practically, we show that for "almost all" values of \(n\) in the interval \([N,N+L(N)]\), we can write \(\lfloor p_{N}(n)+\varepsilon_{N}(n)\rfloor=\lfloor p_{N}(n)\rfloor\), so that the error \(\varepsilon_{N}(n)\) can be removed from the expressions in the iterates. In our approach, some equidistribution assumptions on our original functions are required. This clarifies the conditions on Theorem 1.1. Indeed, (7) implies that the sequence \(\big{(}a_{ij}(n)\big{)}_{n}\) is equidistributed modulo \(1\) (due to Theorem D), while condition (8) implies that the function \(a_{ij}(t)\) is essentially equal to a polynomial with rational coefficients (thus periodic modulo \(1\)). #### 1.3.1. 
A simple example We demonstrate the methods discussed above in a basic case that avoids most complications that appear in the general setting. Even this simple case, however, is not covered by prior methods in the literature. We will use some prerequisites from the following section, such as Theorem A. We consider the averages \[\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\,p\leq N}T^{\lfloor p^{3/2}\rfloor}f_{1} \cdot T^{2}\lfloor p^{3/2}\rfloor f_{2}, \tag{17}\] where \((X,\mathcal{X},\mu,T)\) is a system and \(f_{1},f_{2}\in L^{\infty}(\mu)\). For every \(1\leq b\leq W\) with \((b,W)=1\), we study the averages \[\frac{1}{N}\sum_{n=1}^{N}\big{(}\Lambda_{w,b}(n)-1\big{)}T^{\lfloor n^{3/2} \rfloor}f_{1}\cdot T^{2}\lfloor n^{3/2}\rfloor f_{2}, \tag{18}\] which is the required comparison for the averages in (17). We will show that as \(N\to+\infty\) and then \(w\to+\infty\), the norm of this average converges to \(0\) uniformly in \(b\). We set \(L(t)=t^{0.65}\). Notice that \(L(t)\) grows faster than \(t^{5/8}\), which is a necessary condition to use Theorem A. In order to establish the required convergence for the averages in (18), it suffices to show that \[\limsup_{r\to+\infty}\Big{\|}\,\underset{r\leq n\leq r+L(r)}{\mathbb{E}}\, \big{(}\Lambda_{w,b}(n)-1\big{)}T^{\lfloor n^{3/2}\rfloor}f_{1}\cdot T^{2} \lfloor n^{3/2}\rfloor f_{2}\Big{\|}_{L^{2}(\mu)}=o_{w}(1) \tag{19}\] uniformly in \(b\). We remark that in the more general case that encompasses several functions, we will need to average over the parameter \(r\) as well and, thus, we are dealing with a double-averaging scheme. This reduction is the content of Lemma 5.1. Using the Taylor expansion around \(r\), we can write for every \(0\leq h\leq L(r)\): \[(r+h)^{3/2}=r^{3/2}+\frac{3r^{1/2}h}{2}+\frac{3h^{2}}{8r^{1/2}}-\frac{3h^{3}} {48\xi_{h}^{3/2}},\ \ \text{where}\ \xi_{h}\in[r,r+h].\] Observe that the error term is smaller than a constant multiple of \[\frac{\big{(}L(r)\big{)}^{3}}{r^{3/2}}=o_{r}(1).\] We show that we have \[\Big{\lfloor}(r+h)^{3/2}\Big{\rfloor}=\,\left\lfloor r^{3/2}+\frac{3r^{1/2}h} {2}+\frac{3h^{2}}{8r^{1/2}}\right\rfloor\] for "almost all" \(0\leq h\leq L(r)\), in the sense that the number of \(h\)'s that do not obey this relation is bounded by a constant multiple of \(L(r)\log^{-100}r\) (say). Thus, their contribution on the average is negligible, since the sequence \(\Lambda_{w,b}\) has size comparable to \(\log r\). Let us denote by \(p_{r}(h)\) the quadratic polynomial in the Taylor expansion above. In order to establish this assertion, we will investigate the discrepancy of the finite sequence \(\big{(}p_{r}(h)\big{)}_{0\leq h\leq L(r)}\), using some exponential sum estimates and the Erdos-Turan inequality (Theorem E). This is the content of Proposition 4.4. Assuming that all the previous steps were completed, we shall ultimately reduce our problem to showing that \[\limsup_{r\to+\infty}\Big{\|}\,\underset{0\leq h\leq L(r)}{\mathbb{E}}\left( \Lambda_{w,b}(r+h)-1\right)T^{[p_{r}(h)]}f_{1}\cdot T^{2\lvert p_{r}(h)\rvert}f_ {2}\Big{\|}_{L^{2}(\mu)}=o_{w}(1)\] uniformly for \(1\leq b\leq W\) coprime to \(W\). Now that the error terms have been eliminated, we are left with an average that involves polynomial iterates. Next, we use an argument from [30] that allows us to pass to an extension of the system \((X,\mathcal{X},\mu,T)\). 
To be more precise, there exists an \(\mathbb{R}\)-action (see the definition in Section 2) \((Y,\mathcal{Y},v,S)\) and functions \(\widetilde{f}_{1},\widetilde{f}_{2}\), such that we have the equality \[T^{\lvert p_{r}(h)\rvert}f_{1}\cdot T^{2\lvert p_{r}(h)\rvert}f_{2}=S_{p_{r}( h)}\widetilde{f}_{1}\cdot S_{2p_{r}(h)}\widetilde{f}_{2}.\] This procedure can be done because the polynomial \(p_{r}(h)\) has good equidistribution properties (which we analyze in the previous step) and thus the fractional parts of the finite sequence \(\big{(}p_{r}(h)\big{)}_{0\leq h\leq L(r)}\) fall inside a small interval around \(1\) with the correct frequency. This is a necessary condition in order to use Proposition 3.1, which provides a bound for the inner average. To be more specific, we have the expression \[\limsup_{r\to+\infty}\Big{\|}\,\underset{0\leq h\leq L(r)}{\mathbb{E}}\left( \Lambda_{w,b}(r+h)-1\right)S_{p_{r}(h)}\widetilde{f}_{1}\cdot S_{2p_{r}(h)} \widetilde{f}_{2}\Big{\|}_{L^{2}(\mu)}.\] The inner average involves polynomials and can be bounded uniformly by the Gowers norm of the sequence \(\Lambda_{w,b}(n)-1\) by Proposition 3.1 (modulo some constants and error terms that we ignore for the sake of this discussion). In particular, we have that the average in (19) is bounded by \[\big{\|}\Lambda_{w,b}(n)-1\big{\|}_{U^{s}(r,r+L(r)]}\] for some \(s\in\mathbb{N}\). Finally, Theorem A implies that for sufficiently large values of \(r\) we have \(\big{\|}\Lambda_{w,b}(n)-1\big{\|}_{U^{s}(r,r+L(r)]}\) is \(o_{w}(1)\) uniformly in \(b\). Finally, sending \(w\) to \(+\infty\), we reach the desired conclusion. This argument is quite simpler than the general case since it involves only one function. As we commented briefly, one extra complication is that we are dealing with a double averaging, unlike the model example. During the proof of Theorem 1.1, we will also need to split the functions \(a_{ij}\) into several distinct classes, which are handled with different methods. For example, the argument above works nicely for the function \(t^{3/2}\) but has to be modified in the case of the function \(\log^{2}t\), because the latter cannot be approximated by polynomials of degree \(1\) or higher on our short intervals. Namely, the Taylor polynomial corresponding to \(\log^{2}t\) is constant and the previous method is orendered ineffective. Thus, we present an additional, more elaborate model example in Section 4, which exemplifies the possible cases that arise in the main proof. ### Open problems and further directions We expect that condition (11) in Theorem 1.6 can be relaxed significantly and still provide a multiple recurrence result. Motivated by [10, Theorem 2.3], we make the following conjecture. **Conjecture 1**.: _Let \(a\in\mathcal{H}\) be a function of polynomial growth which satisfies_ \[\lim_{t\to+\infty}\bigl{\lvert}a(t)-cp(t)\bigr{\rvert}=+\infty\text{ for every }c\in\mathbb{R}\text{ and }p(t)\in\mathbb{Z}[t].\] _Then, for any \(k\in\mathbb{N},\) measure-preserving system \((X,\mathcal{X},\mu,T)\) and set \(A\) of positive measure, the set_ \[\{n\in\mathbb{N}:\ \mu\big{(}A\cap T^{-\lfloor a(n)\rfloor}A\cap\dots\cap T^{-k \lfloor a(n)\rfloor}A\big{)}>0\}\] _has non-empty intersection with \(\mathbb{P}\)._ Comparing the assumptions on the function \(a\) to those in Theorem 1.6, we see that we are very close to establishing Conjecture 1. However, there are examples that our work does not encompass, such as the function \(t^{4}+\log t\) or \(t^{2}+\log\log(5t)\). 
In the setting of multiple recurrence along \(\mathbb{N}\), the corresponding result was established in [10] and was generalized for more functions in [3]. In view of [3, Corollary B.3, Corollary B.4], we also make the following conjecture: **Conjecture 2**.: _Let \(k\in\mathbb{N}\) and \(a_{1},\ldots,a_{k}\in\mathcal{H}\) be functions of polynomial growth. Assume that every non-trivial linear combination \(a\) of the functions \(a_{1},\ldots,a_{k}\), \(a\), has the property_ \[\lim_{t\to+\infty}|a(t)-p(t)|=+\infty\text{ for all }p(t)\in\mathbb{Z}[t].\] _Then, for any measure-preserving system \((X,\mathcal{X},\mu,T)\) and set \(A\) of positive measure, the set_ \[\{n\in\mathbb{N}:\ \mu\big{(}A\cap T^{-\lfloor a_{1}(n)\rfloor}A\cap\cdots \cap T^{-\lfloor a_{k}(n)\rfloor}A\big{)}>0\}\] _has non-empty intersection with \(\mathbb{P}\)._ We remark that if one wants to also include functions that are essentially equal to a polynomial, then there are more results in this direction in [3], where it was shown that a multiple recurrence result for functions that are approximately equal to jointly-intersective polynomials is valid. Certainly, one would need to work with the sets \(\mathbb{P}+1\) or \(\mathbb{P}-1\) in this setting to transfer this result from \(\mathbb{N}\) to the primes. It is known that a convergence result along \(\mathbb{N}\) with typical Cesaro averages cannot be obtained, if one works with the weaker conditions of the previous two conjectures. Indeed, the result would fail even for rotations on tori, because the corresponding equidistribution statement is false. The main approach employed in [3] was to consider a weaker averaging scheme than Cesaro averages. Using a different averaging method, one can impose some equidistribution assumption on functions that are not equidistributed in the standard sense. For instance, it is well-known that the sequence \((\log n)_{n\in\mathbb{N}}\) is not equidistributed mod \(1\) using Cesaro averages, but it is equidistributed under logarithmic averaging. Thus, it is natural to expect that an analog of Theorem 1.1 for other averaging schemes would allow someone to relax the conditions (7) and (8) in order to tackle the previous conjectures. A comparison result similar to Theorem 1.1 (but for other averaging schemes) appears to be a potential first step in this problem. We expect that, under the same hypotheses, the analogous result in the setting of multiple commuting transformations will also hold. In particular, aside from the special cases established in [11], convergence results along \(\mathbb{N}\) for Hardy sequences and commuting transformations are still open. For instance, it is unknown whether the averages in Theorem 1.5 converge when the functions \(a_{i}\) are linear combinations of fractional powers. In view of Theorem 1.1 and Theorem 1.2, any new result in this direction can be transferred to the setting of primes in a rather straightforward fashion, since conditions (7) and (8) are quite general to work with. ### Acknowledgements We thank Nikos Frantzikinakis for helpful discussions. ### Notational conventions Throughout this article, we denote with \(\mathbb{N}=\{1,2,\ldots\}\), \(\mathbb{Z}\), \(\mathbb{Q}\), \(\mathbb{R}\), and \(\mathbb{C}\) the sets of natural, integer, rational, real, and complex numbers respectively. 
We denote the one-dimensional torus \(\mathbb{T}=\mathbb{R}/\mathbb{Z}\), the exponential phases \(e(t)=e^{2\pi it}\), while \(\left\|x\right\|_{\mathbb{T}}=d(x,\mathbb{Z})\), \(\left[[x]\right]\), \(\left\lfloor x\right\rfloor\), \(\left\lceil x\right\rceil\), and \(\{x\}\) are the distance of \(x\) from the nearest integer, the nearest integer to \(x\), the greatest integer which is less than or equal to \(x\), the smallest integer which is greater than or equal to \(x\), and the fractional part of \(x\) respectively. We also let \(\mathbf{1}_{A}\) denote the characteristic function of a set \(A\) and \(|A|\) is its cardinality. For any integer \(Q\) and \(0\leq a\leq Q-1\), we use the symbol \(a\) (\(Q\)) to denote the residue class \(a\) modulo \(Q\). Therefore, the notation \(\mathbf{1}_{a\,(Q)}\) refers to the characteristic function of the set of those integers whose residue when divided by \(Q\) is equal to \(a\). For two sequences \(a_{n},b_{n}\), we say that \(b_{n}\) _dominates_ \(a_{n}\) and write \(a_{n}\prec b_{n}\) or \(a_{n}=o(b_{n})\), when \(a_{n}/b_{n}\) goes to \(0\), as \(n\to+\infty\). In addition, we write \(a_{n}\ll b_{n}\) or \(a_{n}=O(b_{n})\), if there exists a positive constant \(C\) such that \(|a_{n}|\leq C|b_{n}|\) for large enough \(n\). When we want to denote the dependence of the constant \(C\) on some parameters \(h_{1},\ldots,h_{k}\), we will use the notation \(a_{n}=O_{h_{1},\ldots,h_{k}}(b_{n})\). In the case that \(b_{n}\ll a_{n}\ll b_{n}\), we shall write \(a_{n}\sim b_{n}\). We say that \(a_{n}\) and \(b_{n}\) have the same growth rate when the limit of \(\frac{a_{n}}{b_{n}}\), as \(n\to+\infty\), exists and is a non-zero real number. We use a similar notation and terminology for asymptotic relations when comparing functions of a real variable \(t\). Under the same setup as in the previous paragraph, we say that the sequence \(a_{n}\) _strongly dominates_ the sequence \(b_{n}\) if there exists \(\delta>0\) such that \[\frac{a_{n}}{b_{n}}\gg n^{\delta}.\] In this case, we write \(b_{n}\lll a_{n}\), or \(a_{n}\ggg b_{n}\).9 We use similar terminology and notation for functions of a real variable \(t\). Footnote 9: This notation is non-standard, so we may refer back to this part quite often throughout the text. Finally, for any sequence \((a(n))\), we employ the notation \[\mathop{\mathbb{E}}_{n\in S}a(n)=\frac{1}{|S|}\sum_{n\in S}a(n)\] to denote averages over a finite non-empty set \(S\). We will typically work with averages over the integers in a specified interval, whose endpoints will generally be non-integers. We shall avoid using this notation for the Cesaro averages.

## 2. Background

### Measure-preserving actions

Let \((X,\mathcal{X},\mu)\) be a Lebesgue probability space. A transformation \(T:X\to X\) is _measure-preserving_ if \(\mu(T^{-1}(A))=\mu(A)\) for all \(A\in\mathcal{X}\). It is called _ergodic_ if all the \(T\)-invariant functions are constant. If \(T\) is invertible, then \(T\) induces a \(\mathbb{Z}\)-action on \(X\) by \((n,x)\mapsto T^{n}x\), for every \(n\in\mathbb{Z}\) and \(x\in X\). More generally, let \(G\) be a group. A _measure-preserving \(G\)-action_ on a Lebesgue probability space \((X,\mathcal{X},\mu)\) is an action on \(X\) by measure-preserving maps \(T_{g}\) for every \(g\in G\) such that, for all \(g_{1},g_{2}\in G\), we have \(T_{g_{1}g_{2}}=T_{g_{1}}\circ T_{g_{2}}\). For the purposes of this article, we will only need to consider actions by the additive groups of \(\mathbb{Z}\) or \(\mathbb{R}\).
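For illustration (this is a standard example and not part of the results of this paper): for a fixed \(\alpha\in\mathbb{R}\), the maps \(S_{s}x=x+s\alpha\ (\mathrm{mod}\ 1)\), \(s\in\mathbb{R}\), preserve the Lebesgue measure on \(\mathbb{T}\) and satisfy \(S_{s_{1}+s_{2}}=S_{s_{1}}\circ S_{s_{2}}\), so they define a measure-preserving \(\mathbb{R}\)-action; restricting to integer times recovers the \(\mathbb{Z}\)-action generated by the rotation \(Tx=x+\alpha\ (\mathrm{mod}\ 1)\).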
Throughout the following sections, we will also refer to \(\mathbb{R}\)-actions as _measure-preserving flows_. In the case of \(\mathbb{Z}\)-actions, we follow the usual notation and write \(T^{n}\) to indicate the map \(T_{n}\). ### Hardy fields Let \((\mathcal{B},+,\cdot)\) denote the ring of germs at infinity of real-valued functions defined on a half-line \((t_{0},+\infty)\). A sub-field \(\mathcal{H}\) of \(\mathcal{B}\) that is closed under differentiation is called a _Hardy field_. For any two functions \(f,g\in\mathcal{H}\), with \(g\) not identically zero, the limit \[\lim_{t\to+\infty}\frac{f(t)}{g(t)}\] exists in the extended line and thus we can always compare the growth rates of two functions in \(\mathcal{H}\). In addition, every non-constant function in \(\mathcal{H}\) is eventually monotone and has a constant sign eventually. We define below some notions that will be used repeatedly throughout the remainder of the paper. **Definition 2.1**.: _Let \(a\) be a function in \(\mathcal{H}\). We say that the function \(a\) has polynomial growth if there exists a positive integer \(k\) such that \(a(t)\ll t^{k}\). The smallest positive integer \(k\) for which this holds will be called the degree of \(a\). The function \(a\) is called sub-linear if \(a(t)\prec t\). It will be called sub-fractional if \(a(t)\prec t^{\varepsilon}\), for all \(\varepsilon>0\). Finally, we will say that \(a\) is strongly non-polynomial if, for all positive integers \(k\), we have that the functions \(a(t)\) and \(t^{k}\) have distinct growth rates._ Throughout the proofs in the following sections, we will assume that we have fixed a Hardy field \(\mathcal{H}\). Some of the theorems impose certain additional assumptions on \(\mathcal{H}\), but this is a byproduct of the arguments used to establish the case of convergence of Cesaro averages in [44] and we will not need to use these hypotheses in any of our arguments. ### Gowers uniformity norms on intervals of integers Let \(N\) be a positive integer and let \(f:\mathbb{Z}_{N}\to\mathbb{C}\) be a function. For any positive integer \(s\), we define the _Gowers uniformity norm_\(\|f\|_{U^{s}(\mathbb{Z}_{N})}\) inductively by \[\big{\|}f\big{\|}_{U^{1}(\mathbb{Z}_{N})}=\big{|}\mathop{\mathbb{E}}_{n\in \mathbb{Z}_{N}}\ f(n)\big{|}\] and for \(s\geq 2\), \[\big{\|}f\big{\|}_{U^{s}(\mathbb{Z}_{N})}^{2^{s}}=\mathop{\mathbb{E}}_{h\in \mathbb{Z}_{N}}\big{\|}\overline{f(\cdot)}f(\cdot+h)\big{\|}_{U^{s-1}(\mathbb{ Z}_{N})}^{2^{s-1}}.\] A straightforward computation implies that \[\big{\|}f\big{\|}_{U^{s}(\mathbb{Z}_{N})}=\Big{(}\mathop{\mathbb{E}}_{h\in \mathbb{Z}_{N}^{s}}\mathop{\mathbb{E}}_{n\in\mathbb{Z}_{N}}\prod_{\underline{ \varepsilon}\in\{0,1\}^{s}}\mathcal{C}^{|\underline{\varepsilon}|}f(n+ \underline{h}\cdot\underline{\varepsilon})\Big{)}^{\frac{1}{2^{s}}}.\] Here, the notation \(\mathcal{C}\) denotes the conjugation map in \(\mathbb{C}\), whereas for \(\underline{\varepsilon}\in\{0,1\}^{s}\), \(|\underline{\varepsilon}|\) is the sum of the entries of \(\underline{\varepsilon}\) (the number of coordinates equal to \(1\)). It can be shown that for \(s\geq 2\), \(\|\cdot\|_{U^{s}(\mathbb{Z}_{N})}\) is a norm and that \[\|f\|_{U^{s}(\mathbb{Z}_{N})}\leq\|f\|_{U^{s+1}(\mathbb{Z}_{N})}\] for any function \(f\) on \(\mathbb{Z}_{N}\)[27, Chapter 6]. For the purposes of this article, it will be convenient to consider similar expressions that are not necessarily defined only for functions in an abelian group \(\mathbb{Z}_{N}\). 
Therefore, for any \(s\geq 1\) and a finitely supported sequence \(f(n),n\in\mathbb{Z}\), we define the _unnormalized Gowers uniformity norm_ \[\big{\|}f\big{\|}_{U^{s}(\mathbb{Z})}=\Big{(}\sum_{\underline{h}\in\mathbb{Z} ^{s}}\ \sum_{n\in\mathbb{Z}}\ \prod_{\underline{\varepsilon}\in\{0,1\}^{s}} \mathcal{C}^{|\underline{\varepsilon}|}f(n+\underline{h}\cdot\underline{ \varepsilon})\Big{)}^{\frac{1}{2^{s}}} \tag{20}\] and for a bounded interval \(I\subset\mathbb{R}\), we define \[\big{\|}f\big{\|}_{U^{s}(I)}=\frac{\big{\|}f\cdot\mathbf{1}_{I}\big{\|}_{U^{s} (\mathbb{Z})}}{\big{\|}\mathbf{1}_{I}\big{\|}_{U^{s}(\mathbb{Z})}}. \tag{21}\] First of all, observe that a simple change of variables in the summation in (21) implies that for \(X\in\mathbb{Z}\) \[\big{\|}f\big{\|}_{U^{s}(X,X+H]}=\big{\|}f(\cdot+X)\big{\|}_{U^{s}[1,H]}.\] Evidently, we want to compare uniformity norms on the interval \([1,H]\) with the corresponding norms on the abelian group \(\mathbb{Z}_{H}\). To this end, we will use the following lemma, whose proof can be found in [27, Chapter 22, Proposition 11]. **Lemma 2.2**.: _Let \(s\) be a positive integer and \(N,N^{\prime}\in\mathbb{N}\) with \(N^{\prime}\geq 2N\). Then, for any sequence \(\big{(}f(n)\big{)}_{n\in\mathbb{Z}^{\prime}}\), we have_ \[\big{\|}f\big{\|}_{U^{s}[1,N]}=\frac{\big{\|}f\cdot 1_{[1,N]}\big{\|}_{U^{s} (\mathbb{Z}_{N^{\prime}})}}{\big{\|}1_{[1,N]}\big{\|}_{U^{s}(\mathbb{Z}_{N^{ \prime}})}}.\] We will need a final lemma that implies that the Gowers uniformity norm is smaller when the sequence is evaluated along arithmetic progressions. **Lemma 2.3**.: _Let \(u(n)\) be a sequence of complex numbers. Then, for any integer \(s\geq 2\) and any positive integers \(0\leq a\leq Q-1\), we have_ \[\big{\|}u(n)\mathbf{1}_{a\;(Q)}(n)\big{\|}_{U^{s}(X,X+H]}\leq\big{\|}u(n)\big{\|} _{U^{s}(X,X+H]},\] _for all integers \(X\geq 0\) and all \(H\geq 1\)._ Proof.: We set \(u_{X}(n)=u(X+n)\), so that we can rewrite the norm on the left-hand side as \(\big{\|}u_{X}(n)\mathbf{1}_{a\;(Q)}(X+n)\big{\|}_{U^{s}[1,H]}\). Observe that the function \(\mathbf{1}_{a\;(Q)}(n)\) is periodic modulo \(Q\). Thus, treating it as a function in \(\mathbb{Z}_{Q}\), we have the Fourier expansion \[\mathbf{1}_{a\;(Q)}(n)=\sum_{\xi\in\mathbb{Z}_{q}}\mathbf{\widehat{1}}_{a\;(Q )}(\xi)e\Big{(}\frac{n\xi}{Q}\Big{)},\] for every \(0\leq n\leq Q-1\), and this can be extended to hold for all \(n\in\mathbb{Z}\) due to periodicity. Furthermore, we have the bound \[\big{|}\mathbf{\widehat{1}}_{a\;(Q)}(\xi)\big{|}=\frac{1}{Q}\Big{|}e\bigg{(} \frac{a\xi}{Q}\bigg{)}\Big{|}\leq\frac{1}{Q}.\] Applying the triangle inequality, we deduce that \[\big{\|}u_{X}(n)\mathbf{1}_{a\;(Q)}(X+n)\big{\|}_{U^{s}[1,H]}\leq\sum_{\xi\in \mathbb{Z}_{Q}}|\mathbf{\widehat{1}}_{a\;(Q)}(\xi)|\cdot\Big{\|}u_{X}(n)e\Big{(} \frac{(X+n)\xi}{Q}\Big{)}\Big{\|}_{U^{s}[1,H]}.\] However, it is immediate from (20) that the \(U^{s}\)-norm is invariant under multiplication by linear phases, for every \(s\geq 2\). Therefore, we conclude that \[\big{\|}u_{X}(n)\mathbf{1}_{a\;(Q)}(X+n)\big{\|}_{U^{s}[1,H]}\leq\big{\|}u_{X} (n)\big{\|}_{U^{s}[1,H]}=\big{\|}u(n)\big{\|}_{U^{s}(X,X+H]},\] which is the desired result. The primary utility of the Gowers uniformity norms is the fact that they arise naturally in complexity reduction arguments that involve multiple ergodic averages with polynomial iterates. 
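As a concrete, purely illustrative companion to the definitions above (the code is not part of the paper, assumes only NumPy, and the function names are ours), the following minimal Python sketch computes \(\|f\|_{U^{2}(\mathbb{Z}_{N})}\) directly from the averaged definition and checks it against the classical Fourier-analytic identity \(\|f\|_{U^{2}(\mathbb{Z}_{N})}^{4}=\sum_{\xi\in\mathbb{Z}_{N}}|\widehat{f}(\xi)|^{4}\), where \(\widehat{f}(\xi)=\mathop{\mathbb{E}}_{n\in\mathbb{Z}_{N}}f(n)e(-n\xi/N)\).

```python
import numpy as np

def gowers_u2(f):
    """||f||_{U^2(Z_N)} computed directly from the averaged definition."""
    N = len(f)
    n = np.arange(N)
    total = 0.0 + 0.0j
    for h1 in range(N):
        for h2 in range(N):
            total += np.sum(f[n] * np.conj(f[(n + h1) % N]) *
                            np.conj(f[(n + h2) % N]) * f[(n + h1 + h2) % N])
    return abs(total / N ** 3) ** 0.25

def gowers_u2_fourier(f):
    """The same norm via the identity ||f||_{U^2}^4 = sum_xi |hat f(xi)|^4."""
    N = len(f)
    fhat = np.fft.fft(f) / N      # hat f(xi) = E_n f(n) e(-n xi / N)
    return float(np.sum(np.abs(fhat) ** 4)) ** 0.25

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = rng.standard_normal(64) + 1j * rng.standard_normal(64)
    print(gowers_u2(f), gowers_u2_fourier(f))   # the two values agree
```

The same brute-force strategy extends to the higher norms \(U^{s}\), at the cost of one additional averaging loop for each application of the inductive definition.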
In particular, Proposition 2.4 below implies that polynomial ergodic averages weighted by a sequence \((a(n))_{n\in\mathbb{N}}\) can be bounded in terms of the Gowers norm of \(a\) on the abelian group \(\mathbb{Z}_{sN}\) for some positive integer \(s\) (that depends only on the degrees of the underlying polynomials). **Proposition 2.4**.: _[_17_, Lemma 3.5]_ _Let \(k,\ell\in\mathbb{N},\)\((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\) be a system of commuting \(\mathbb{Z}\) actions, \(p_{i,j}\in\mathbb{Z}[t]\) be polynomials for every \(1\leq i\leq k,\)\(1\leq j\leq\ell,\)\(f_{1},\ldots,f_{\ell}\in L^{\infty}(\mu)\) and \(a:\mathbb{N}\to\mathbb{C}\) be a sequence. Then, there exists \(s\in\mathbb{N},\) depending only on the maximum degree of the polynomials \(p_{i,j}\) and the integers \(k,\ell\), and a constant \(C_{s}\) depending on \(s,\) such that_ \[\Big{\|}\operatorname*{\mathbb{E}}_{1\leq n\leq N}a(n)\cdot\prod_{j=1}^{\ell} \prod_{i=1}^{k}T_{i}^{p_{i,j}(n)}f_{j}\Big{\|}_{L^{2}(\mu)}\leq C_{s}\left( \big{\|}a\cdot\mathbf{1}_{[1,N]}\big{\|}_{U^{s}(\mathbb{Z}_{sN})}+\frac{\max \{1,\|a\|_{\ell^{\infty}[1,sN]}^{2s}\}}{N}\right). \tag{22}\] **Remark 4**.: \((i)\) _The statement presented in [17] asserts that the second term in the prior sum is just \(o_{N}(1)\), under the assumption that \(a(n)\ll n^{c}\) for all \(c>0\). However, a simple inspection of the proof gives the error term presented above. Indeed, the error terms appearing in the proof of Proposition 2.4 are precisely of the form_ \[\frac{1}{N}\operatorname*{\mathbb{E}}_{n\in[1,N]}\operatorname*{\mathbb{E}}_{ \underline{k}\in[1,N]^{k}}\Big{|}\prod_{\underline{\varepsilon}\in\{0,1\}^{k }}\mathcal{C}^{[\underline{\varepsilon}]}\;a(n+\underline{h}\cdot\underline{ \varepsilon})\Big{|}\] _for \(k\leq s-1\), which are the error terms in the van der Corput inequality. Deducing the error term on (22) is then straightforward._ \((ii)\) _The number \(s-1\) is equal to the number of applications of the van der Corput inequality in the associated PET argument and we may always assume that \(s\geq 2\). In that case, Lemma 2.2 and the bound \(\big{\|}\mathbf{1}_{[1,N]}\big{\|}_{U^{s}(\mathbb{Z}_{sN})}\leq 1\) implies that we can replace the norm in (22) with the term \(\|a\|_{U^{s}[1,N]}\)._ For polynomials \(p_{i,j}(t)\in\mathbb{R}[t]\) of the form \[p_{i,j}(t)=a_{ij,d_{ij}}t^{d_{ij}}+\cdots+a_{ij,1}t+a_{ij,0},\] and \((T_{i,s})_{s\in\mathbb{R}}\)\(\mathbb{R}\)-actions, we have \[T_{i,p_{i,j}(n)}=\Big{(}T_{i,a_{ij,d_{ij}}}\Big{)}^{n^{d_{ij}}}\cdot\ldots \cdot\Big{(}T_{i,a_{ij,1}}\Big{)}^{n}\cdot\Big{(}T_{i,a_{ij,0}}\Big{)}.\] Thus, Proposition 2.4 implies the following. **Corollary 2.5**.: _Let \(k,\ell\in\mathbb{N},\)\((X,\mathcal{X},\mu,S_{1},\ldots,S_{k})\) be a system of commuting \(\mathbb{R}\)-actions, \(p_{i,j}\in\mathbb{Z}[t]\) be polynomials for all \(1\leq i\leq k,\)\(1\leq j\leq\ell,\)\(f_{1},\ldots,f_{\ell}\in L^{\infty}(\mu)\) and \(a:\mathbb{N}\to\mathbb{C}\) be a sequence. Then, there exists \(s\in\mathbb{N},\) depending only on the maximum degree of the polynomials \(p_{i,j}\) and the integers \(k,\ell\) and a constant \(C_{s}\) depending on \(s,\) such that_ \[\Big{\|}\operatorname*{\mathbb{E}}_{1\leq n\leq N}a(n)\cdot\prod_{j=1}^{\ell} \prod_{i=1}^{k}S_{i,p_{i},j}(n)f_{j}\Big{\|}_{L^{2}(\mu)}\leq C_{s}\left(\big{\|} a\cdot\mathbf{1}_{[1,N]}\big{\|}_{U^{s}(\mathbb{Z}_{sN})}+\frac{\max\{1,\|a\|_{ \ell^{\infty}[1,sN]}^{2s}\}}{N}\right). 
\tag{23}\]

### Number theoretic tools

The following lemma is a standard consequence of the prime number theorem and the sparseness of prime powers (actually, we use this argument in the proof of Corollary 2.8 below). For a proof, see, for instance, [27, Chapter 25]. **Lemma 2.6**.: _For any bounded sequence \((a(n))_{n\in\mathbb{N}}\) in a normed space, we have_ \[\lim_{N\to+\infty}\Big{\|}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\ p\leq N}a(p)- \frac{1}{N}\sum_{n=1}^{N}\Lambda(n)a(n)\Big{\|}=0. \tag{24}\] Therefore, in order to study ergodic averages along primes, we can replace them with the ergodic averages over \(\mathbb{N}\) weighted by the function \(\Lambda(n)\). For the modified von Mangoldt function, we have the following deep theorem, which was recently established in [36]. **Theorem A**.: _[_36_, Theorem 1.5]_ _Let \(\varepsilon>0\) and assume \(L(N)\) is a positive sequence that satisfies the bounds \(N^{\frac{5}{8}+\varepsilon}\leq L(N)\leq N^{1-\varepsilon}\). Let \(s\) be a fixed integer and let \(w\) be a positive integer. Then, if \(N\) is large enough in terms of \(w\), we have that_ \[\|\Lambda_{w,b}-1\|_{U^{s}(N,N+L(N)]}=o_{w}(1) \tag{25}\] _for every \(1\leq b\leq W\) with \((b,W)=1\)._ We will need to use the orthogonality of \(\Lambda_{w,b}\) to polynomial phases in short intervals. This is an immediate consequence of the \(U^{d}\) uniformity in Theorem A in conjunction with an application of the van der Corput inequality \(d\) times until the polynomial phase is eliminated. Alternatively, one can use Proposition 2.4 for a rotation on the torus \(\mathbb{T}\) to carry out the reduction to Theorem A.10 We omit its proof. **Lemma 2.7**.: _Let \(L(N)\) be a positive sequence satisfying \(N^{\frac{5}{8}+\varepsilon}\prec L(N)\prec N^{1-\varepsilon}\) for some \(\varepsilon>0\). Then, we have that_ \[\max_{\begin{subarray}{c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\sup_{\begin{subarray}{c}p\in\mathbb{R}[t]\\ \deg p=d\end{subarray}}\Big{|}\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\big{(} \Lambda_{w,b}(n)-1\big{)}e(p(n))\Big{|}=o_{w}(1) \tag{26}\] _for every \(N\) large enough in terms of \(w\)._ **Remark 5**.: \((i)\) _The error term \(o_{w}(1)\) depends on the degree \(d\), but since this will be fixed in applications, we suppressed that dependence above. \((ii)\) Quantitative bounds for similar expressions (involving the more general class of nilsequences, as well) were the main focus in [36], though in that setting the authors used a different weight of the form \(\Lambda-\Lambda^{\#}\), where \(\Lambda^{\#}\) is a carefully chosen approximant for the von Mangoldt function arising from considerations of the (modified) Cramer random model for the primes._ Finally, we will also use a corollary of the Brun-Titchmarsh inequality to bound the contribution of bad residue classes in our ergodic averages by a constant term. For \(q\geq 2\) and \((a,q)=1\), we denote by \(\pi(x,q,a)\) the number of primes \(\leq x\) that are congruent to \(a\) modulo \(q\). Footnote 10: Alternatively, one could also use the asymptotics for averages of \(\Lambda\) in short intervals that were established by Huxley [28], since \(L(N)\) will be chosen to grow sufficiently fast in our applications.
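As a purely numerical sanity check on the objects appearing above (not part of the cited results), the following Python sketch computes the von Mangoldt function with a small sieve and verifies that the \(W\)-tricked weight has average close to \(1\) over a long interval, a consequence of the prime number theorem in arithmetic progressions. We assume here the normalization \(\Lambda_{w,b}(n)=\frac{\phi(W)}{W}\Lambda(Wn+b)\), which is the usual convention for the \(W\)-tricked von Mangoldt function; the concrete choices of \(w\), \(b\) and \(N\) below are ours.

```python
from math import log

def mangoldt_table(limit):
    """Lambda(m) for 0 <= m <= limit, via a smallest-prime-factor sieve."""
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                          # p is prime
            for m in range(p * p, limit + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0.0] * (limit + 1)
    for m in range(2, limit + 1):
        p, q = spf[m], m
        while q % p == 0:
            q //= p
        if q == 1:                               # m is a prime power p^k
            lam[m] = log(p)
    return lam

if __name__ == "__main__":
    w = 5
    W = 2 * 3 * 5          # W = product of the primes <= w
    phi_W = 8              # Euler phi of 30
    b = 7                  # a residue coprime to W
    N = 20000
    lam = mangoldt_table(W * N + b)
    # Assumed normalization: Lambda_{w,b}(n) = (phi(W)/W) * Lambda(W*n + b).
    avg = sum((phi_W / W) * lam[W * n + b] for n in range(1, N + 1)) / N
    print(avg)             # expected to be close to 1
```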
**Theorem B** (Brun-Titchmarsh inequality).: _We have_ \[\pi(x+y,q,a)-\pi(x,q,a)\leq\frac{2y}{\phi(q)\log(\frac{y}{q})} \tag{27}\] _for every \(x\geq y>q\)._ While we referred to this as the Brun-Titchmarsh inequality, the previous theorem was established in [37] by Montgomery and Vaughan (prior results contained the term \(2+o(1)\) in the numerator). We will need a variant of this theorem adapted to the von Mangoldt function. This follows easily from the previous theorem and a standard partial summation argument. **Corollary 2.8**.: _For every \(q\leq y\leq x\), we have_ \[\sum_{\begin{subarray}{c}x\leq n\leq x+y\\ n\equiv a\ (q)\end{subarray}}\Lambda(n)\leq\frac{2y\log x}{\phi(q)\log(\frac{y}{q})}+ O\big{(}\frac{y}{\log x}\big{)}+O\big{(}x^{\frac{1}{2}}\log x\big{)}.\] Proof.: Consider the function \[\pi(x,q,a)=\sum_{\begin{subarray}{c}1\leq n\leq x\\ n\equiv a\ (Q)\end{subarray}}1_{\mathbb{P}}(n)\] as in the statement of Theorem B, defined for all \(x\geq 3/2\). Let \[\theta(x,q,a)=\sum_{\begin{subarray}{c}1\leq n\leq x\\ n\equiv a\ (Q)\end{subarray}}1_{\mathbb{P}}(n)\log n,\ \ \psi(x,q,a)=\sum_{ \begin{subarray}{c}1\leq n\leq x\\ n\equiv a\ (Q)\end{subarray}}\Lambda(n).\] It is evident that \[\Big{|}\theta(x,q,a)-\psi(x,q,a)\Big{|}\leq\sum_{p^{k}\leq x:\ p\in\mathbb{P}, k\geq 2}\log p\leq x^{1/2}\log x, \tag{28}\] since there are at most \(x^{1/2}\) prime powers \(\leq x\) and each one of them contributes at most \(\log x\) in this sum. Now, we use summation by parts to deduce that \[\theta(x+y,q,a)-\theta(x,q,a)=\sum_{\begin{subarray}{c}x<n\leq x+y \\ n\equiv a\ (Q)\end{subarray}}1_{\mathbb{P}}(n)\log n+O(1)=\pi(x+y,q,a)\log(x+y)-\\ \pi(x,q,a)\log(x+1)+\sum_{\begin{subarray}{c}x<n\leq x+y\\ n\equiv a\ (Q)\end{subarray}}\pi(n,q,a)\Big{(}\log n-\log(n+1)\Big{)}+O(1).\] Using the inequalities \(\log n-\log(n+1)\leq-(n+1)^{-1}\) and \(\log(x+y)\leq\log x+y/x\), we deduce that \[\theta(x+y,q,a)-\theta(x,q,a)\leq\log x\Big{(}\pi(x+y,q,a)-\pi(x,q,a)\Big{)}+\frac{\pi(x+y,q,a)y}{x}-\\ \sum_{\begin{subarray}{c}x<n\leq x+y\\ n\equiv a\ (Q)\end{subarray}}\frac{\pi(n,q,a)}{n+1}+O(1).\] Using the estimate \(\pi(x,q,a)\ll\frac{x}{\phi(q)\log x}\) and Theorem B, we bound the sum in the previous expression by \[\log x\frac{2y}{\phi(q)\log(\frac{y}{q})}+O\Big{(}\frac{(x+y)y}{\phi(q)x\log( x+y)}\Big{)}+O\Big{(}\sum_{\begin{subarray}{c}x<n\leq x+y\\ n\equiv a\ (Q)\end{subarray}}\frac{1}{\phi(q)\log n}\Big{)}+O(1).\] Since \[\sum_{\begin{subarray}{c}x<n\leq x+y\\ n\equiv a\ (Q)\end{subarray}}\frac{1}{\log n}\leq\int_{x}^{x+y}\frac{dt}{ \log t}+O(1)=\frac{x+y}{\log(x+y)}-\frac{x}{\log x}+\int_{x}^{x+y}\frac{dt}{ \log^{2}t}+O(1)\leq\\ \frac{y}{\log x}+O(\frac{y}{\log^{2}x})+O(1),\] we conclude that \[\theta(x+y,q,a)-\theta(x,q,a)\leq\frac{2y\log x}{\phi(q)\log(\frac{y}{q})}+O( \frac{y}{\log x})+O(1). \tag{29}\] Consequently, if we combine (28) and (29), we arrive at \[\psi(x+y,q,a)-\psi(x,q,a)\leq\frac{2y\log x}{\phi(q)\log(\frac{y}{q})}+O( \frac{y}{\log x})+O(x^{\frac{1}{2}}\log x),\] as was to be shown. **Remark 6**.: _We will apply this corollary for \(q=W\) and \(y\gg x^{5/8+\varepsilon}\). Note that for \(y\) in this range, the second error term can be absorbed into the first one._ ### Quantitative equidistribution mod 1 **Definition 2.9**.: _Let \((x_{n})_{n\in\mathbb{N}}\) be a real valued sequence. 
We say that \((x_{n})_{n\in\mathbb{N}}\) is_ * equidistributed \(mod\ 1\) _if for all_ \(0\leq a<b\leq 1,\) _we have_ (30) \[\lim_{N\rightarrow+\infty}\frac{\big{|}\big{\{}n\in\{1,\ldots,N\}:\ \{x_{n}\}\in[a,b)\big{\}}\big{|}}{N}=b-a.\] * well distributed \(mod\ 1\) _if for all_ \(0\leq a<b\leq 1,\) _we have_ (31) \[\lim_{N\rightarrow+\infty}\frac{\big{|}\big{\{}n\in\{1,\ldots,N\}:\ \{x_{k+n}\}\in[a,b)\big{\}}\big{|}}{N}=b-a,\ \text{uniformly in }k=0,1,\ldots.\] In the case of polynomial sequences, their equidistribution properties are well understood. If the polynomial has rational non-constant coefficients, it is straightforward to check that the sequence of its fractional parts is periodic. On the other hand, for polynomials with at least one non-constant irrational coefficient, we have the following theorem. **Theorem C** (Weyl).: _Let \(p\in\mathbb{R}[t]\) be a polynomial with at least one non-constant irrational coefficient. Then, the sequence \((p(n))_{n\in\mathbb{N}}\) is well-distributed \(mod\,1\)._ This theorem is classical and for a proof, we refer the reader to [32, Chapter 1, Theorem 3.2].11 In the case of Hardy field functions, we have a complete characterization of equidistribution modulo \(1\) due to Boshernitzan. We recall here [4, Theorem 1.3]. Footnote 11: While this theorem concerns the case of equidistribution, the more general result follows easily by a straightforward adaptation of van der Corput’s difference theorem to the case of well-distribution. The authors of [32] discuss this in the notes of Section 5 in Chapter 1. **Theorem D** (Boshernitzan).: _Let \(a\in\mathcal{H}\) be a function of polynomial growth. Then, the sequence \((a(n))_{n\in\mathbb{N}}\) is equidistributed \(mod\,1\) if and only if \(|a(t)-p(t)|\succ\log t\) for every \(p\in\mathbb{Q}[t]\)._ This theorem explains the assumptions in Theorem 1.1 and, in particular, condition (7). Indeed, since we need equidistribution assumptions for our method to work, this condition appears to be vital. We will invoke Boshernitzan's theorem only in the case of sub-fractional functions. Indeed, we will investigate the equidistribution properties of fast-growing functions by studying their exponential sums in short intervals. This leads to a proof of the previous theorem indirectly, at least in the case that the function involved is not sub-fractional. For our purposes, we will need a quantitative version of the equidistribution phenomenon. For a finite sequence of real numbers \((u_{n})_{1\leq n\leq N}\) and an interval \([a,b]\subseteq[0,1]\), we define the _discrepancy_ of the sequence \(u_{n}\) with respect to \([a,b]\) by \[\Delta_{[a,b]}(u_{1},\ldots,u_{N})=\Bigg{|}\frac{\big{|}\big{\{}n\in\{1,\ldots, N\}\colon\{u_{n}\}\in[a,b]\big{\}}\big{|}}{N}-(b-a)\Bigg{|}. \tag{32}\] The discrepancy of a sequence is a quantitative measure of how close a sequence of real numbers is to being equidistributed modulo \(1\). For example, it is immediate that for an equidistributed sequence \(u_{n}\), we have that \[\lim_{N\to+\infty}\Delta_{[a,b]}(u_{1},\ldots,u_{N})=0,\] for all \(0\leq a\leq b\leq 1.\) For an in-depth discussion on the concept of discrepancy and the more general theory of equidistribution on \(\mathbb{T}\), we refer the reader to [32]. Our only tool will be an upper bound of Erdos and Turan on the discrepancy of a finite sequence. 
For a proof of this result, see [32, Chapter 2, Theorem 2.5].12 Footnote 12: In this book, the theorem is proven for measures of the form \(\nu=\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{i}}\), although the more general statement follows by noting that every Borel probability measure is a weak limit of measures of the previous form. **Theorem E** (Erdos-Turan).: _There exists an absolute constant \(C\), such that for any positive integer \(M\) and any Borel probability measure \(\nu\) on \(\mathbb{T}\), we have_ \[\sup_{A\subseteq\mathbb{T}}|\nu(A)-\lambda(A)|\leq C\Big{(}\frac{1}{M}+\sum_{ m=1}^{M}\frac{|\widehat{\nu}(m)|}{m}\Big{)},\] _where \(\lambda\) is the Lebesgue measure on \(\mathbb{T}\) and the supremum is taken over all arcs \(A\) of \(\mathbb{T}\)._ _In particular, specializing to the case that \(\nu=N^{-1}\sum_{i=1}^{N}\delta_{\{u_{i}\}}\), where \(u_{1},\ldots,u_{N}\) is a finite sequence of real numbers, we have_ \[\Delta_{[a,b]}(u_{1},\ldots,u_{N})\leq C\Big{(}\frac{1}{M}+\sum_{m=1}^{M}\frac{ 1}{m}\Big{|}\frac{1}{N}\sum_{n=1}^{N}e(mu_{n})\Big{|}\Big{)} \tag{33}\] _for all positive integers \(M\) and all \(0\leq a\leq b<1\)._ It is clear that in order to get the desired bounds on the discrepancy in our setting, we will need some estimates for exponential sums of Hardy field sequences in short intervals. Due to the Taylor approximation, this is morally equivalent to establishing estimates for exponential sums of polynomial sequences. There are several well-known estimates in this direction, the most fundamental of these being a result of Weyl that shows that an exponential sum along a polynomial sequence is small unless all non-constant coefficients of the polynomial are "major-arc". In the case of strongly non-polynomial Hardy field functions, we will only need to study the leading coefficient of the polynomial in its Taylor approximation, which will not satisfy such a major-arc condition. To this end, we require the following lemma. **Lemma 2.10**.: _Let \(0<\delta<1\) and \(d\in\mathbb{N}\). There exists a positive constant \(C\) depending only on \(d\), such that if \(p(x)=a_{d}x^{d}+\cdots+a_{1}x+a_{0}\) is a real polynomial that satisfies_ \[\Big{|}\frac{1}{N}\sum_{n=1}^{N}e(p(n))\Big{|}>\delta,\] _then, for every \(1\leq k\leq d\), there exists \(q\in\mathbb{Z}\) with \(|q|\leq\delta^{-C}\), such that \(N^{k}\left\|qa_{k}\right\|_{\mathbb{T}}\leq\delta^{-C}\)._ Note that there is no dependency of the constant on the length of the averaging interval, or on the implicit polynomial \(p\) (apart from its degree). For a proof of this lemma, see [23, Proposition 4.3], where a more general theorem is established in the setting of nilmanifolds as well. ### Nilmanifolds and correlation sequences Let \(G\) be a nilpotent Lie group with nilpotency degree \(s\) and let \(\Gamma\) be a discrete and cocompact subgroup. The space \(X=G/\Gamma\) is called an \(s\)-step _nilmanifold_. The group \(G\) acts on the space \(X\) by left multiplication and the measure on \(X\) that is invariant under this action is called the _Haar measure_ of \(X\), which we shall denote by \(m_{X}\). 
Given a sequence of points \(x_{n}\in X\), we will say that the sequence \(x_{n}\) is _equidistributed_ on \(X\) if for any continuous function \(F:X\to\mathbb{C}\) we have that \[\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}F(x_{n})=\int F\ d\,m_{X}.\] A _subnilmanifold_ of \(X=G/\Gamma\) is a set of the form \(Hx\), where \(H\) is a closed subgroup of the Lie group \(G\), \(x\in X\) and such that \(Hx\) is closed in \(X\). Let \(g\) be any element of the group \(G\). Then, for any \(x\in X\), the closed orbit of the action of \(g\) on \(x\) will be denoted by \(\overline{(g^{\mathbb{Z}}x)}\). It is known that this set is a subnilmanifold of \(X=G/\Gamma\) and that the sequence \(g^{n}x\) is equidistributed in the subnilmanifold \(\overline{(g^{\mathbb{Z}}x)}\) (see, for example, [27, Chapter 11, Theorem 9]). We now present the following definition for nilsequences in several variables. **Definition 2.11**.: _Let \(k,s\) be positive integers and let \(X=G/\Gamma\) be an \(s\)-step nilmanifold. Assume that \(g_{1},\ldots,g_{k}\) are pairwise commuting elements of the group \(G\), \(F:X\to\mathbb{C}\) is a continuous function on \(X\) and \(x\in X\). Then, the sequence_ \[\psi(n_{1},\ldots,n_{k})=F(g_{1}^{n_{1}}\cdot\ldots\cdot g_{k}^{n_{k}}x),\ \text{ where }n_{1},\ldots,n_{k}\in\mathbb{Z}\] _is called an \(s\)-step nilsequence in \(k\) variables._ The main tool that we will need is an approximation of general nilsequences by multi-correlation sequences in the \(\ell^{\infty}\)-sense. The following lemma is established in [15, Proposition 4.2]. **Lemma 2.12**.: _Let \(k,s\) be positive integers and \(\psi:\mathbb{Z}^{k}\to\mathbb{C}\) be an \((s-1)\)-step nilsequence in \(k\) variables. Then, for every \(\varepsilon>0\), there exists a system \((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\) and functions \(F_{1},\ldots,F_{s}\) in \(L^{\infty}(\mu)\), such that the sequence \(b(n_{1},\ldots,n_{k})\) defined by_ \[b(n_{1},\ldots,n_{k})=\int\prod_{j=1}^{s}\big{(}T_{1}^{\ell_{j}n_{1}}\cdot \ldots\cdot T_{k}^{\ell_{j}n_{k}}\big{)}F_{j}\ d\,\mu,\ (n_{1},\ldots,n_{k})\in\mathbb{Z}^{k}\] _with \(\ell_{j}=s!/j\) satisfies_ \[\|\psi-b\|_{\ell^{\infty}(\mathbb{Z}^{k})}\leq\varepsilon.\] **Comment**. The definition of nilsequences used in [15] imposed that \(x=\Gamma\) and that \(\mathbf{n}\in\mathbb{N}^{k}\). However, their arguments generalize in a straightforward manner to the slightly more general setting that we presented above.

## 3. Lifting to an extension flow

In this section, we use a trick that allows us to replace the polynomial ergodic averages with similar ergodic averages over \(\mathbb{R}\)-actions on an extension of the original probability space, removing the rounding functions in the process. This argument is implicit in [29] for Cesaro averages, so we adapt its proof to the setting of short intervals. **Proposition 3.1**.: _Let \(k,\ell,d\) be positive integers and let \(L(N)\) be a positive sequence satisfying \(N^{\frac{5}{8}+\varepsilon}\ll L(N)\ll N^{1-\varepsilon}\) for some \(\varepsilon>0\). Let \((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\) be a system of commuting transformations.
Then, there exists a positive integer \(s\) depending only on \(k,\ell,d\), such that for any variable family \(\mathcal{P}=\{p_{i,j,N}\colon 1\leq i\leq k,1\leq j\leq\ell\}\) of polynomials with degrees at most \(d\) that, for all \(i,j\), satisfy_ \[\lim_{\delta\to 0^{+}}\lim_{N\to+\infty}\frac{|\{N\leq n\leq N+L(N):\ \{p_{i,j,N}(n)\}\in[1-\delta,1)\}|}{L(N)}=0, \tag{34}\] _we have that for any \(0<\delta<1\) and functions \(f_{1},\ldots,f_{\ell}\in L^{\infty}(\mu)\)_ \[\Big{\|}_{N\leq n\leq N+L(N)}\ \big{(}\Lambda_{w,b}(n)-1\big{)} \prod_{j=1}^{\ell}\prod_{i=1}^{k}T_{i}^{\lfloor p_{i,j,N}(n)\rfloor}f_{j} \Big{\|}_{L^{2}(\mu)}\ll_{k,\ell,d}\\ \frac{1}{\delta^{k\ell}}\Big{(}\|\Lambda_{w,b}(n)-1\big{\|}_{U^{ s}(N,N+sL(N)]}+o_{w}(1)\Big{)}+o_{\delta}(1)(1+o_{w}(1)),\] _for all \(1\leq b\leq W,\ (b,W)=1\), where \(W=\prod_{p\in\mathbb{P}\colon p\leq w}p\)._ Proof.: Let \(\lambda\) denote the Lebesgue measure on \([0,1)\) and we define (as in [29]) the measure-preserving \(\mathbb{R}^{k\ell}\)-action \(\prod_{i=1}^{k}S_{i,s_{i,1}}\cdot\ldots\cdot\prod_{i=1}^{k}S_{i,s_{i,\ell}}\) on the space \(Y:=X\times[0,1)^{k\ell}\), endowed with the measure \(\nu:=\mu\times\lambda^{k\ell}\), by \[\prod_{j=1}^{\ell}\prod_{i=1}^{k}S_{i,s_{i,j}}(x,a_{1,1},\ldots,a_{k,1},a_{1,2 },\ldots,a_{k,2},\ldots,a_{1,\ell},\ldots,a_{k,\ell})=\] \[\left(\prod_{j=1}^{\ell}\prod_{i=1}^{k}T_{i}^{[s_{i,j}+a_{i,j}]}x,\{s_{1,1}+a _{1,1}\},\ldots,\{s_{k,1}+a_{k,1}\},\ldots,\{s_{1,\ell}+a_{1,\ell}\},\ldots, \{s_{k,\ell}+a_{k,\ell}\}\right).\] If \(f_{1},\ldots,f_{\ell}\) are bounded functions on \(X\), we define the \(Y\)-extensions of \(f_{j}\), setting for every element \((a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,2},\ldots,a_{1,\ell},\ldots,a_{k, \ell})\in[0,1)^{k\ell}\): \[\hat{f}_{j}(x,a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,2},\ldots,a_{1,\ell}, \ldots,a_{k,\ell})=f_{j}(x),\ \ 1\leq j\leq\ell;\] and we also define the function \[\hat{f}_{0}(x,a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,\ell})=1_{[0,\delta]^ {k\ell}}(a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,\ell}).\] For every \(N\leq n\leq N+L(N)\), we consider the functions (on the original space \(X\)) \[b_{N}(n):=(\prod_{i=1}^{k}T_{i}^{[p_{i,1,N}(n)]})f_{1}\cdot\ldots\cdot(\prod_{ i=1}^{k}T_{i}^{[p_{i,\ell,N}(n)]})f_{\ell}\] as well as the functions \[\tilde{b}_{N}(n):=\hat{f}_{0}\cdot(\prod_{j=1}^{\ell}\prod_{i=1}^{k}S_{i, \delta_{j1}\cdot p_{i,1,N}(n)})\hat{f}_{1}\cdot\ldots\cdot(\prod_{j=1}^{\ell} \prod_{i=1}^{k}S_{i,\delta_{j\ell}\cdot p_{i,\ell,N}(n)})\hat{f}_{\ell}\] defined on the extension \(Y\). Here, \(\delta_{ij}\) denotes the Kronecker \(\delta\), meaning that the only terms that do not vanish are the diagonal ones (i.e., when \(i=j\)). For every \(x\in X\), we also let \[b_{N}^{\prime}(n)(x):=\int_{[0,1)^{k\ell}}\tilde{b}_{N}(n)(x,a_{1,1},\ldots,a _{k,1},a_{1,2},\ldots,a_{k,2},\ldots,a_{1,\ell},\ldots,a_{k,\ell})\,d\lambda^{ k\ell},\] where the integration is with respect to the variables \(a_{i,j}\). Using the triangle and Cauchy-Schwarz inequalities, we have \[\delta^{k\ell}\Big{\|}\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)} \big{(}\Lambda_{w,b}(n)-1\big{)}b_{N}(n)\Big{\|}_{L^{2}(\mu)}\leq\\ \Big{\|}\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\big{(}\Lambda_{ w,b}(n)-1\big{)}\cdot(\delta^{k\ell}b_{N}(n)-b_{N}^{\prime}(n))\Big{\|}_{L^{2}( \mu)}+\Big{\|}\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\big{(}\Lambda_{w,b}(n)- 1\big{)}\tilde{b}_{N}(n)\Big{\|}_{L^{2}(\nu)}. 
\tag{35}\] Using Proposition 2.4, we find an integer \(s\in\mathbb{N}\), depending only on the integers \(k,\ell,d\), and a constant \(C_{s}\) depending on \(s\), such that \[\Big{\|}\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\big{(}\Lambda_{w,b}(n)-1 \big{)}\tilde{b}_{N}(n)\Big{\|}_{L^{2}(\nu)}\leq C_{s}\left(\big{\|}\Lambda_{ w,b}-1\big{\|}_{U^{s}(N,N+sL(N)]}+o_{N}(1)\right), \tag{36}\] where the \(o_{N}(1)\) term depends only on the integer \(s\) and the sequence \(\Lambda_{w,b}(n)\). Now we study the first term \[\Big{\|}\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\big{(}\Lambda_{w,b}(n)-1\big{)} \cdot(\delta^{k\ell}b_{N}(n)-b_{N}^{\prime}(n))\Big{\|}_{L^{2}(\mu)}\] in (35). For every \(x\in X\) and \(N\leq n\leq N+L(N)\), we have \[\Big{|}\delta^{k\ell}b_{N}(n)(x)-b_{N}^{\prime}(n)(x)\Big{|}=\\ \left|\int_{[0,\delta]^{k\ell}}\left(\prod_{j=1}^{\ell}f_{j}(\prod _{i=1}^{k}T_{i}^{[p_{i,j,N}(n)]}x)-\prod_{j=1}^{\ell}f_{j}(\prod_{i=1}^{k}T_{ i}^{[p_{i,j,N}(n)+a_{i,j}]}x)\right)\,d\lambda^{k\ell}\right|.\] Since all the integration variables \(a_{i,j}\) are less than or equal to \(\delta\), we deduce that if all of the implicit polynomials satisfy \(\{p_{i,j,N}(n)\}<1-\delta\), we have \(T_{i}^{[p_{i,j,N}(n)+a_{i,j}]}=T_{i}^{[p_{i,j,N}(n)]}\) for all \(1\leq i\leq k\), \(1\leq j\leq\ell\). To deal with the possible case where \(\{p_{i,j,N}(n)\}\geq 1-\delta\) for at least one of our polynomials, we define, for every \(1\leq i\leq k\), \(1\leq j\leq\ell\), the set \[E_{\delta,N}^{i,j}:=\{n\in[N,N+L(N)]\colon\{p_{i,j,N}(n)\}\in[1-\delta,1)\}.\] Then, by using the fact that \[\mathbf{1}_{E_{\delta,N}^{1,1}\cup\ldots\cup E_{\delta,N}^{1,\ell}\cup E_{\delta,N}^{2,1}\cup\ldots\cup E_{\delta,N}^{k,\ell}}\leq\sum_{(i,j)\in[1,k]\times[1,\ell]}\mathbf{1}_{E_{\delta,N}^{i,j}}\] and that \(\mathbf{1}_{E_{\delta,N}^{i,j}}\left(n\right)=\mathbf{1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})\), we infer that \[\left|\delta^{k\ell}b_{N}(n)(x)-b_{N}^{\prime}(n)(x)\right|\leq 2\delta^{k \ell}\sum_{(i,j)\in[1,k]\times[1,\ell]}\mathbf{1}_{[1-\delta,1)}(\{p_{i,j,N}( n)\})\] for every \(x\in X\). In view of the above, using the inequality \(|\Lambda_{w,b}(n)-1|\leq\Lambda_{w,b}(n)+1\), we deduce that \[\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\left|\left(\Lambda_{w,b} (n)-1\right)\right|\cdot\mathbf{1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})\leq\] \[\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\left(\Lambda_{w,b}(n)-1 \right)\cdot\mathbf{1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})+2\mathop{\mathbb{E}} _{N\leq n\leq N+L(N)}\mathbf{1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})\leq\] \[\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\left(\Lambda_{w,b}(n)-1 \right)\cdot\mathbf{1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})+2\cdot\frac{|E_{ \delta,N}^{i,j}|}{L(N)}.\] Since each polynomial \(p_{i,j,N}\) satisfies (34), for large \(N\) and small enough \(\delta\) the term \(\frac{|E_{\delta,N}^{i,j}|}{L(N)}\) (and the sum of finitely many terms of this form) is as small as we want. It remains to show that the term \[\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\left(\Lambda_{w,b}(n)-1\right)\cdot \mathbf{1}_{[1-\delta,1)}(\{p_{i,j,N}(n)\})\] goes to zero as \(N\to\infty\), then \(w\to\infty\) and finally \(\delta\to 0^{+}.\) To this end, it suffices to show \[\mathop{\mathbb{E}}_{N\leq n\leq N+L(N)}\ \left(\Lambda_{w,b}(n)-1\right)e^{2 \pi imp_{i,j,N}(n)}\to 0\] as \(N\to\infty\) and then \(w\to\infty\) for all \(m\in\mathbb{Z}\setminus\{0\}\),13 which follows from Lemma 2.7.
Footnote 13: This follows by the fact that if \(f\) is Riemann integrable on \([0,1)\) with \(\int_{[0,1)}f(x)\,dx=c\), then, for every \(\varepsilon>0\), we can find trigonometric polynomials \(q_{1},\ q_{2}\), with no constant terms, with \(q_{1}(t)+c-\varepsilon\leq f(t)\leq q_{2}(t)+c+\varepsilon.\) We use this for the function \(f=\mathbf{1}_{[1-\delta,1)}\).

## 4. Equidistribution in short intervals

We gather here some useful propositions that describe the behavior of a Hardy field function when restricted to intervals of the form \([N,N+L(N)]\), where \(L(N)\) grows more slowly than the parameter \(N\). In our applications, we will typically need the function \(L(N)\) to grow faster than \(N^{5/8}\) in order to be able to use the uniformity results in short intervals, but we will not need to work under this assumption throughout most of this section, the only exception being Proposition 4.6 below. We will also present an example that illustrates the main points in the proof of Theorem 1.1 in the following section.

### Details on the proof

In the case of strongly non-polynomial functions that also grow faster than some fractional power, we show that the associated Taylor polynomial \(p_{N}(n)\) has ideal equidistribution properties. Indeed, by picking the length \(L(N)\) a little more carefully, one gains arbitrary logarithmic powers over the trivial bound in the exponential sums of \(p_{N}\). Consequently, we demonstrate that the number of integers in \([N,N+L(N)]\) for which \(\lfloor a(n)\rfloor\neq\lfloor p_{N}(n)\rfloor\) is less than \(L(N)(\log N)^{-100}\) (say) and, thus, their contribution to the average is negligible. Therefore, for all intents and purposes, one can suppose that the error terms are identically zero. The situation is different when a function that grows slower than all fractional powers is involved since these functions are practically constant in these short intervals. For instance, if one has the function \(p(t)+\log^{2}t\), where \(p\) is a polynomial, the only feasible approximation is of the form \(p(n)+\log^{2}n=p(n)+\log^{2}N+e_{N}(n)\), where \(e_{N}(n)\) converges to \(0\). While it seems that we do have a polynomial as the main term in the approximation (at least when \(p\) is non-constant), quantitative bounds on the exponential sums of the polynomial component cannot be established in this case at all. The main reason is that such bounds depend heavily on the diophantine properties of the coefficients of \(p\), for which we have no data. In the case that \(p\) is a constant polynomial, we can use the equidistribution (mod 1) of the sequence \(\log^{2}n\) to show that in most short intervals \([N,N+L(N)]\), we have \(\lfloor p(n)+\log^{2}n\rfloor=\lfloor p(N)+\log^{2}N\rfloor\) for all \(n\in[N,N+L(N)]\). The contribution of the bad short intervals is then bounded using the triangle inequality and Corollary 2.8. Suppose that the polynomial \(p\) above is non-constant. In the case that \(p\) has rational non-constant coefficients, we split our averages into suitable arithmetic progressions so that the resulting polynomials have integer coefficients (aside from the constant term), and the effect of \(e_{N}(n)\) will be eliminated when we calculate the integer parts.
In the case that \(p\) has a non-constant irrational coefficient, we can invoke the well-distribution of \(p(n)\) to conclude that the number of integers in the set \[E_{N}=\{n\in[N,N+L(N)]\colon\big{\lfloor}p(n)+\log^{2}n\big{\rfloor}\neq\big{\lfloor}p(n) +\log^{2}N\big{\rfloor}\}\] is \(O(\varepsilon L(N))\), for a fixed small parameter \(\varepsilon\) and \(N\) large. However, in order to bound the total contribution of the set \(E_{N}\), we can only use the triangle inequality in the corresponding ergodic averages, so we are forced to extract information on how large the quantity \[\frac{1}{L(N)}\sum_{N\leq n\leq N+L(N)}\Lambda_{w,b}(n)\mathbf{1}_{E_{N}}(n)\] can be. This can be bounded effectively if the corresponding exponential sums \[\frac{1}{L(N)}\sum_{N\leq n\leq N+L(N)}\Lambda_{w,b}(n)e\big{(}p(n)\big{)}\] are small. This is demonstrated by combining the fact that the exponential sums of \(p(n)\) are small (due to the presence of an irrational coefficient) with the fact that exponential sums weighted by \(\Lambda_{w,b}(n)-1\) are small due to the uniformity of the \(W\)-tricked von Mangoldt function. The conclusion follows again by an application of the Erdos-Turan inequality, this time for a probability measure weighted by \(\Lambda_{w,b}(n)\).

### A model example

We sketch the main steps in the case of the ergodic averages \[\frac{1}{N}\sum_{n=1}^{N}\big{(}\Lambda_{w,b}(n)-1\big{)}T^{\lfloor n\log n \rfloor}f_{1}\cdot T^{\big{\lfloor}an^{2}+\log n\big{\rfloor}}f_{2}\cdot T^{ \big{\lfloor}\log^{2}n\big{\rfloor}}f_{3}, \tag{37}\] where \(a\) is an irrational number. We will show that the \(L^{2}\)-norm of this expression converges to \(0\), as \(N\to+\infty\) and then \(w\to+\infty\). Note that the three sequences in the iterates satisfy our hypotheses. In addition, we remark that the arguments below are valid in the setting where we have three commuting transformations, but we consider a simpler case for convenience. Additionally, we do not evaluate the sequences at \(Wn+b\) (as we should in order to be in the setup of Theorem 1.1), since the underlying arguments remain identical apart from changes in notation. We choose \(L(t)=t^{0.66}\) (actually, any power \(t^{c}\) with \(5/8<c<2/3\) works here) and claim that it suffices to show that the expression \[\operatorname*{\mathbb{E}}_{1\leq r\leq R}\Big{\|}\operatorname*{\mathbb{E}}_{r\leq n\leq r+L(r)}\big{(} \Lambda_{w,b}(n)-1\big{)}T^{\lfloor n\log n\rfloor}f_{1}\cdot T^{\big{\lfloor} an^{2}+\log n\big{\rfloor}}f_{2}\cdot T^{\big{\lfloor}\log^{2}n\big{\rfloor}}f_{3} \Big{\|}_{L^{2}(\mu)} \tag{38}\] converges to \(0\), as \(R\to+\infty\) and then \(w\to+\infty\). This reduction is the content of Lemma 5.1. Now, we can use the Taylor expansion around \(r\) to write \[n\log n =r\log r+(\log r+1)(n-r)+\frac{(n-r)^{2}}{2r}-\frac{(n-r)^{3}}{6\xi _{1,n,r}^{2}}\] \[\log n =\log r+\frac{n-r}{\xi_{2,n,r}}\] \[\log^{2}n =\log^{2}r+\frac{2(n-r)\log\xi_{3,n,r}}{\xi_{3,n,r}},\] for some real numbers \(\xi_{i,n,r}\in[r,n]\) \((i=1,2,3)\). Our choice of \(L(t)\) implies that \[\Big{|}\frac{(n-r)^{3}}{6\xi_{1,n,r}^{2}}\Big{|}\leq\frac{r^{3\cdot 0.66}}{6r^{ 2}}\ll 1,\] and similarly for the other two cases. To be more specific, there exists a \(\delta>0\), such that all the error terms (the ones involving the quantities \(\xi_{i,n,r}\)) are \(O(r^{-\delta})\). Let us fix a small \(\varepsilon>0\). Firstly, we shall deal with the third iterate, since this is the simplest one.
Observe that if \(r\) is chosen large enough and such that \(\{\log^{2}r\}\in(\varepsilon,1-\varepsilon)\), then for all \(n\in[r,r+L(r)]\), we will have \[\big{\lfloor}\log^{2}n\big{\rfloor}=\big{\lfloor}\log^{2}r\big{\rfloor},\] since the error terms in the expansion are \(O(r^{-\delta})\), which is smaller than \(\varepsilon\) for large \(r\). In addition, the sequence \(\log^{2}n\) is equidistributed modulo \(1\), so our prior assumption can fail for at most \(3\varepsilon R\) (say) values of \(r\in[1,R]\), provided that \(R\) is sufficiently large. For the bad values of \(r\), we use the triangle inequality for the corresponding norm to deduce that their total contribution to the average over \(r\) is \(O(\varepsilon)\), which will be acceptable if \(\varepsilon\) is small. Actually, in order to establish this, we will need to use Corollary 2.8, though we will ignore that in this exposition. In conclusion, we can rewrite the expression in (38) as \[\mathop{\mathbb{E}}_{1\leq r\leq R}\Big{\|}\mathop{\mathbb{E}}_{r\leq n\leq r+ L(r)}\big{(}\Lambda_{w,b}(n)-1\big{)}T^{\lfloor n\log n\rfloor}f_{1}\cdot T^{ \lfloor an^{2}+\log n\rfloor}f_{2}\cdot T^{\lfloor\log^{2}r\rfloor}f_{3} \Big{\|}_{L^{2}(\mu)}+O(\varepsilon). \tag{39}\] Now, we deal with the first function. We claim that the discrepancy of the finite sequence \[\Big{(}\{r\log r+(\log r+1)(n-r)+\frac{(n-r)^{2}}{2r}\}\Big{)}_{r\leq n\leq r +L(r)}\] is \(O_{A}(\log^{-A}r)\) for any \(A>0\). We will establish this in Proposition 4.4 using Lemma 2.10 and Theorem E. As a baby case, we show the following estimate for some simple trigonometric averages: \[\Big{|}\mathop{\mathbb{E}}_{r\leq n\leq r+L(r)}e\Big{(}\frac{(n-r)^{2}}{2r}\Big{)}\Big{|}\leq \frac{1}{\log^{A}r}\] for \(r\) large enough. Indeed, if that inequality fails for some \(r\in\mathbb{N}\), there exists an integer \(|q_{r}|\leq\log^{O(A)}r\), such that \[\Big{\|}\frac{q_{r}}{2r}\Big{\|}_{\mathbb{T}}\leq\frac{\log^{O(A)}r}{(L(r))^{ 2}}.\] Certainly, if \(r\) is large enough, we can replace the norm with the absolute value, so that the previous inequality implies that \[\big{(}L(r)\big{)}^{2}\leq\frac{2r\log^{O(A)}r}{|q_{r}|}.\] However, the choice \(L(t)=t^{0.66}\) implies that this inequality is false for large \(r\). In our problem, we can just pick \(A=2\). Using the definition of discrepancy, we deduce that the number of integers in \([r,r+L(r)]\) for which we have \[\{r\log r+(\log r+1)(n-r)+\frac{(n-r)^{2}}{2r}\}\in[0,r^{-\delta/2}]\cup[1-r^{- \delta/2},1)\] is \(O(L(r)\log^{-2}r)\). However, if \(n\) does not belong to this set of bad values, we conclude that \[\lfloor n\log n\rfloor=\left\lfloor r\log r+(\log r+1)(n-r)+\frac{(n-r)^{2}} {2r}\right\rfloor\] since the error terms are \(O(r^{-\delta})\). Furthermore, since \(\Lambda_{w,b}(n)=O(\log r)\) for \(n\in[r,r+L(r)]\), we conclude that the contribution of the bad values is \(o_{r}(1)\) on the inner average. Therefore, we can rewrite the expression in (39) as \[\mathop{\mathbb{E}}_{1\leq r\leq R}\Big{\|}\mathop{\mathbb{E}}_{r\leq n\leq r +L(r)}\big{(}\Lambda_{w,b}(n)-1\big{)}T^{\lfloor p_{r}(n)\rfloor}f_{1}\cdot T^ {\left\lfloor an^{2}+\log n\right\rfloor}f_{2}\cdot T^{\left\lfloor\log^{2} r\right\rfloor}f_{3}\Big{\|}_{L^{2}(\mu)}+O(\varepsilon)+o_{R}(1), \tag{40}\] where \(p_{r}(n)=r\log r+(\log r+1)(n-r)+\frac{(n-r)^{2}}{2r}\). Finally, we deal with the second iterate. We consider the parameter \(\varepsilon\) as above and set \(M=1/\varepsilon\). Once again, we shall assume that \(r\) is very large compared to \(M\).
Since \(a\) is irrational, we have that the sequence \(an^{2}\) is well-distributed modulo \(1\), so we would expect the number of \(n\) for which \(\{an^{2}+\log r\}\not\in[\varepsilon,1-\varepsilon]\) to be small. Note that for the remaining values of \(n\), we have \(\left\lfloor an^{2}+\log n\right\rfloor=\left\lfloor an^{2}+\log r\right\rfloor\), since the error term in the approximation is \(O(r^{-\delta})\). Therefore, we estimate the size of the set \[\mathcal{B}_{r,\varepsilon}:=\{n\in[r,r+L(r)]\colon\{an^{2}+\log r\}\in[0, \varepsilon]\cup[1-\varepsilon,1)\}\] Using Weyl's theorem, we conclude that \[\max_{1\leq m\leq M}\Bigl{|}\mathop{\mathbb{E}}_{r\leq n\leq r+L(r)}e\big{(}m( an^{2}+\log r)\big{)}\Bigr{|}=o_{r}(1). \tag{41}\] Here, the \(o_{r}(1)\) term depends on \(M=1/\varepsilon\), but since we will send \(r\to+\infty\) and then \(\varepsilon\to 0\), this will not cause any issues. We suppress these dependencies in this exposition. An application of Theorem E implies that \[\frac{|\mathcal{B}_{r,\varepsilon}|}{L(r)}\ll 2\varepsilon+\frac{1}{M}+ \sum_{m=1}^{M}\frac{1}{m}\Bigl{|}\mathop{\mathbb{E}}_{r\leq n\leq r+L(r)}e \big{(}m(ar^{2}+\log r)\big{)}\Bigr{|}, \tag{42}\] so that \(|\mathcal{B}_{r,\varepsilon}|\ll(\varepsilon+o_{r}(1))L(r)\). Additionally, we will need to estimate \[\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\mathbf{1}_{\mathcal{B }_{r}}(n),\] which will arise when we apply the triangle inequality to bound the contribution of the set \(\mathcal{B}_{r}\). However, we have that \[\max_{1\leq m\leq M}\Bigl{|}\mathop{\mathbb{E}}_{r\leq n\leq r+L(r)}\Lambda_{ w,b}(n)e\big{(}m(an^{2}+\log r)\big{)}\Bigr{|}=o_{w}(1)+o_{r}(1), \tag{43}\] which can be seen by splitting \(\Lambda_{w,b}(n)=(\Lambda_{w,b}(n)-1)+1\), applying the triangle inequality and using Lemma 2.7 and (41), respectively, to treat the resulting exponential averages. In view of this, we can apply the Erdos-Turan inequality (Theorem E) for the probability measure \[\nu(S)=\frac{\sum\limits_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\delta_{\{an^{2} +\log r\}}(S)}{\sum\limits_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)}\] as well as Corollary 2.8 (to bound the sum in the denominator) to conclude that \[\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\mathbf{1}_{\mathcal{B}_{r }}(n)\ll\varepsilon+o_{w}(1)\log\frac{1}{\varepsilon}+o_{r}(1),\] Therefore, if we apply the triangle inequality, we conclude that the contribution of the set \(\mathcal{B}_{r,\varepsilon}\) on the average over \([r,r+L(r)]\) is at most \(O(\varepsilon+o_{w}(1)\log\frac{1}{\varepsilon}+o_{r}(1))\). This is acceptable if we send \(R\to+\infty\), then \(w\to+\infty\), and then \(\varepsilon\to 0\) at the end. Ignoring the peculiar error terms that turn out to be satisfactory, we can rewrite the expression in (40) as \[\mathop{\mathbb{E}}_{1\leq r\leq R}\Big{\|}\mathop{\mathbb{E}}_{r\leq n\leq r +L(r)}\big{(}\Lambda_{w,b}(n)-1\big{)}T^{\lfloor p_{r}(n)\rfloor}f_{1}\cdot T^ {\lfloor an^{2}+\log r\rfloor}f_{2}\cdot T^{\lfloor\log^{2}r\rfloor}f_{3} \Big{\|}_{L^{2}(\mu)}. \tag{44}\] Now, the iterates satisfy the assumptions of Proposition 3.1. This is true for the first iterate since we have a good bound on the discrepancy and it is also true for the second iterate because the polynomial \(an^{2}\) has an irrational coefficient (so we can use its well-distribution modulo \(1\)). For the third one, our claim is obvious because we simply have an integer in the iterate. 
Therefore, we can bound the inner average by a constant multiple of the norm \[\left\|\Lambda_{w,b}-1\right\|_{U^{s}(r,r+L(r)]}\] with some error terms that we will ignore here. Finally, we invoke Theorem A to show that the average \[\mathop{\mathbb{E}}_{1\leq r\leq R}\left\|\Lambda_{w,b}-1\right\|_{U^{s}(r,r+L (r)]}\] converges to \(0\), which leads us to our desired conclusion. ### Some preparatory lemmas Let us fix a Hardy field \(\mathcal{H}\). Firstly, we will need a basic lemma that relates the growth rate of a Hardy field function of polynomial growth with the growth rate of its derivative. To do this, we recall a lemma due to Frantzikinakis [9, Lemma 2.1], as well as [44, Proposition A.1]. **Lemma 4.1**.: _Let \(a\in\mathcal{H}\) satisfy \(t^{-m}\prec a(t)\prec t^{m}\) for some positive integer \(m\) and assume that \(a(t)\) does not converge to a non-zero constant as \(t\to+\infty\). Then,_ \[\frac{a(t)}{t\log^{2}t}\prec a^{\prime}(t)\ll\frac{a(t)}{t}.\] Observe that if a function \(a(t)\) satisfies the growth inequalities in the hypothesis of this lemma, then the function \(a^{\prime}(t)\) satisfies \(\frac{t^{-1-m}}{\log^{2}t}\prec a^{\prime}(t)\prec t^{m-1}\). Therefore, we deduce the relations \(t^{-m-2}\prec a^{\prime}(t)\prec t^{m+2}\), which implies that the function \(a^{\prime}(t)\) satisfies a similar growth condition. Provided that the function \(a^{\prime}(t)\) does not converge to a non-zero constant as \(t\to+\infty\), the above lemma can then be applied to the function \(a^{\prime}(t)\). When a function \(a(t)\) is strongly non-polynomial and dominates the logarithmic function \(\log t\), one can get a nice ordering relation for the growth rates of consecutive derivatives. This is the content of the following proposition. **Proposition 4.2**.: _[_44_, Proposition A.2]_ _Let \(a\in\mathcal{H}\) be a function of polynomial growth that is strongly non-polynomial and also satisfies \(a(t)\succ\log t\). Then, for all sufficiently large \(k\in\mathbb{N}\), we have_ \[1\prec\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\prec\big{|}a^{(k+1)}(t)\big{|}^ {-\frac{1}{k+1}}\prec t.\] **Remark 7**.: _The proof of Proposition 4.2 in [44] establishes the fact that if \(a\) satisfies the previous hypotheses, then the derivatives of \(a\) always satisfy the conditions of Lemma 4.1._ This proposition is the main tool used to show that a strongly non-polynomial function \(a(t)\) can be approximated by polynomials in short intervals. Indeed, assume that a positive sub-linear function \(L(t)\) satisfies \[\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\prec L(t)\prec\big{|}a^{(k+1)}(t)\big{|} ^{-\frac{1}{k+1}} \tag{45}\] for some sufficiently large \(k\in\mathbb{N}\) (large enough so that the inequalities in Proposition 4.2 hold). In particular, this implies that \(\lim\limits_{t\to+\infty}a^{(k+1)}(t)\to 0\). Then, we can use the Taylor expansion around the point \(N\) to write \[a(N+h)=a(N)+ha^{\prime}(N)+\dots+\frac{h^{k}a^{(k)}(N)}{k!}+\frac{h^{k+1}a^{(k +1)}(\xi_{N,h})}{(k+1)!}\,\text{for some $\xi_{N,h}\in[N,N+h]$}\] for every \(0\leq h\leq L(N)\). However, we observe that \[\Big{|}\frac{h^{k+1}a^{(k+1)}(\xi_{N,h})}{(k+1)!}\Big{|}\leq\frac{L(N)^{k+1}| a^{(k+1)}(N)|}{(k+1)!}=o_{N}(1),\] where we used the fact that \(|a^{(k+1)}(t)|\to 0\) monotonically (since \(a^{(k+1)}(t)\in\mathcal{H}\)). Therefore, we have \[a(N+h)=a(N)+ha^{\prime}(N)+\dots+\frac{h^{k}a^{(k)}(N)}{k!}+o_{N}(1),\] which implies that the function \(a(N+h)\) is essentially a polynomial in \(h\). 
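To make the previous approximation concrete, here is a minimal numerical sketch in Python (it is not part of the argument, and the specific choices \(a(t)=t^{3/2}\), \(L(t)=t^{0.66}\), \(N=10^{6}\) and \(k=4\) are ours). For this pair one has \(|a^{(4)}(t)|^{-1/4}\sim t^{5/8}\prec t^{0.66}\prec t^{7/10}\sim|a^{(5)}(t)|^{-1/5}\), so relation (45) holds with \(k=4\), and the degree-\(4\) Taylor polynomial \(p_{N}(h)\) should match \(a(N+h)\) up to an error that tends to \(0\):

```python
def taylor_coeffs_t32(N, k=4):
    """Coefficients c_j of the degree-k Taylor polynomial of a(t) = t**1.5 at t = N,
    so that p_N(h) = sum_j c_j * h**j."""
    coeffs = []
    deriv_const = 1.0      # constant in a^{(j)}(t) = deriv_const * t**exponent
    exponent = 1.5
    factorial = 1.0
    for j in range(k + 1):
        coeffs.append(deriv_const * N ** exponent / factorial)
        deriv_const *= exponent
        exponent -= 1.0
        factorial *= (j + 1)
    return coeffs

if __name__ == "__main__":
    N = 10 ** 6
    L = int(N ** 0.66)
    c = taylor_coeffs_t32(N, k=4)
    max_err = 0.0
    for h in range(0, L + 1, max(1, L // 2000)):   # sample the short interval
        exact = (N + h) ** 1.5
        approx = sum(cj * h ** j for j, cj in enumerate(c))
        max_err = max(max_err, abs(exact - approx))
    print(L, max_err)      # the error stays far below 1 on all of [0, L(N)]
```

For these parameters the maximal error is on the order of \(10^{-3}\) on the whole interval, so the integer parts \(\lfloor a(N+h)\rfloor\) and \(\lfloor p_{N}(h)\rfloor\) can only disagree when the fractional part of \(p_{N}(h)\) lies within that small distance of an integer, in line with the discussion above.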
The final lemma implies that if the function \(L(t)\) satisfies certain growth assumptions, then a strongly non-polynomial function \(a(t)\) will be approximated by a polynomial of some degree \(k\). **Proposition 4.3**.: _Let \(a\in\mathcal{H}\) be a strongly non-polynomial function of polynomial growth, such that \(a(t)\succ\log t\). Assume that \(L(t)\) is a positive sub-linear function, such that \(1\prec L(t)\ll t^{1-\varepsilon}\) for some \(\varepsilon>0\). Then, there exists a non-negative integer \(k\) depending on the function \(a(t)\) and \(L(t)\), such that_ \[\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\prec L(t)\prec\big{|}a^{(k+1)}(t) \big{|}^{-\frac{1}{k+1}},\] _where we adopt the convention that \(\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\) denotes the constant function 1, when \(k=0\)._ Proof.: We split the proof into two cases depending on whether \(a\) is sub-fractional or not. Assume first that \(a(t)\ll t^{\delta}\) for all \(\delta>0\). We will establish the claim for \(k=0\). This means that functions that are sub-fractional become essentially constant when restricted to intervals of the form \([N,N+L(N)]\). The left inequality is obvious. Furthermore, since \(a(t)\prec t^{\varepsilon}\), Lemma 4.1 implies that \[a^{\prime}(t)\prec\frac{1}{t^{1-\varepsilon}}\ll\frac{1}{L(t)},\] which yields the desired result. Assume now that \(a(t)\succ t^{\delta}\) for some \(\delta>0\). Observe that, in this case, we have that \[\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\prec\big{|}a^{(k+1)}(t)\big{|}^{- \frac{1}{k+1}}\] for \(k\) large enough, due to Proposition 4.2. We also consider the integer \(d\), such that \(t^{d}\prec a(t)\prec t^{d+1}\). This number exists because the function \(a\) is strongly non-polynomial. If \(L(t)\prec\big{|}a^{(d+1)}(t)\big{|}^{-\frac{1}{d+1}}\), then the claim holds for \(k=d\), since \(\big{|}a^{(d)}(t)\big{|}^{-\frac{1}{d}}\prec 1\prec L(t)\). It suffices to show that there exists \(k\in\mathbb{N}\), such that \(L(t)\prec\big{|}a^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}}\), which, in turn, follows if we show that \[t^{1-\varepsilon}\prec\big{|}a^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}} \tag{46}\] for some \(k\in\mathbb{N}\). We can rewrite the above inequality as \(a^{(k+1)}(t)\prec t^{(k+1)(\varepsilon-1)}\). However, since the function \(a(t)\) is strongly non-polynomial and \(a(t)\succ\log t\), the functions \(a^{(k)}(t)\) satisfy the hypotheses of Lemma 4.1 (see also Remark 7). Therefore, iterating the aforementioned lemma, we deduce that \[a^{(k+1)}(t)\ll\frac{a(t)}{t^{k+1}}.\] Hence, it suffices to find \(k\) such that \(a(t)\ll t^{(k+1)\varepsilon}\) and such a number exists, because the function \(a(t)\) has polynomial growth. **Remark 8**.: _The condition \(L(t)\prec t^{1-\varepsilon}\) is necessary. For example, if \(a(t)=t\log t\) and \(L(t)=\frac{t}{\log t}\), then for any \(k\in\mathbb{N}\), we can write_ \[(N+h)\log(N+h)=N\log N+\cdots+\frac{C_{1}h^{k}}{N^{k-1}}+\frac{C_{2}h^{k+1}}{ \xi_{N,h}^{k}}\] _for every \(0\leq h\leq\frac{N}{\log N}\) and some numbers \(C_{1},C_{2}\in\mathbb{R}\). However, there is no positive integer \(k\) for which the last term in this expansion can be made to be negligible since \(\frac{N}{\log N}\succ N^{\frac{k}{k+1}}\) for all \(k\in\mathbb{N}\). 
Essentially, in order to approximate the function \(t\log t\) in these specific short intervals, one would be forced to use the entire Taylor series instead of some appropriate cutoff._ ### Eliminating the error terms in the approximations In the previous subsection, we saw that any Hardy field function can be approximated by polynomials in short intervals using the Taylor expansion. Namely, if \(a(t)\) diverges and \(L(t)\to+\infty\) is a positive function, such that \[\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\prec L(t)\prec\big{|}a^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}} \tag{47}\] then, for any \(0\leq h\leq L(N)\), we have \[a(N+h)=a(N)+\cdots+\frac{h^{k}a^{(k)}(N)}{k!}+\frac{h^{k+1}a^{(k+1)}(\xi_{N,h})}{(k+1)!}=p_{N}(h)+\theta_{N}(h)\] for some \(\xi_{N,h}\in[N,N+h]\), where we denote \[p_{N}(h)=a(N)+\cdots+\frac{h^{k}a^{(k)}(N)}{k!}.\] Observe that our growth assumption on \(L(t)\) implies that the term \(\theta_{N}(h)\) is bounded by a quantity that converges to \(0\), as \(N\to+\infty\). Therefore, for large values of \(N\), we easily deduce that \[\lfloor a(N+h)\rfloor=\lfloor p_{N}(h)\rfloor+\varepsilon_{N,h},\] where \(\varepsilon_{N,h}\in\{-1,0,1\}\). In order to be able to apply Proposition 3.1, we will need to eliminate the error terms \(\varepsilon_{N,h}\). We will consider three distinct cases, which are tackled using somewhat different arguments. #### 4.4.1. The case of fast-growing functions Firstly, we establish the main proposition that will allow us to remove the error terms in the case of functions that contain a "non-polynomial part" which does not grow too slowly. We will need a slight strengthening of the growth conditions in (47), which, as we saw previously, are sufficient to have a Taylor approximation in the interval \([N,N+L(N)]\). **Proposition 4.4**.: _Let \(A>0\) and let \(a(t)\) be a \(C^{\infty}\) function defined for all sufficiently large \(t\in\mathbb{R}\). Assume \(L(t)\) is a positive sub-linear function going to infinity and let \(k\) be a positive integer, such that_ \[1\lll\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\lll L(t)\lll\big{|}a^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}} \tag{48}\] _and such that the function \(a^{(k+1)}(t)\) converges to 0 monotonically. Then, for \(N\) large enough, we have that, for all \(0\leq c\leq d<1\),_ \[\frac{\big{|}\{n\in[N,N+L(N)]\colon\{a(n)\}\in[c,d]\}\big{|}}{L(N)}=|d-c|+O_{A}(\log^{-A}N). \tag{49}\] _(One can actually get a small power saving here, with an exponent that depends on \(k\) and the implicit fractional powers in the growth relations of (48), though this will not be any more useful for our purposes.) Consequently, for all \(N\) sufficiently large, we have that_ \[\lfloor a(N+h)\rfloor=\Big{\lfloor}a(N)+ha^{\prime}(N)+\cdots+\frac{h^{k}a^{(k)}(N)}{k!}\Big{\rfloor}\] _for all, except at most \(O_{A}(L(N)\log^{-A}(N))\) values of integers \(h\in[0,L(N)]\)._ _Proof._ Our hypothesis on \(L(t)\) implies that there exist \(\varepsilon_{1},\varepsilon_{2}>0\) such that \[L(t)\big{|}a^{(k)}(t)\big{|}^{\frac{1}{k}}\gg t^{\varepsilon_{1}}\text{ and }\ L(t)\big{|}a^{(k+1)}(t)\big{|}^{\frac{1}{k+1}}\ll t^{-\varepsilon_{2}}. \tag{50}\] In addition, the leftmost inequality in (48) implies that there exists \(\varepsilon_{3}>0\), such that \(a^{(k)}(t)\ll t^{-\varepsilon_{3}}\). Using the Taylor expansion around the point \(N\), we can write \[a(N+h)=a(N)+ha^{\prime}(N)+\cdots+\frac{h^{k}a^{(k)}(N)}{k!}+\frac{h^{k+1}a^{(k+1)}(\xi_{h})}{(k+1)!},\ \text{ for some }\xi_{h}\in[N,N+h], \tag{51}\] for every \(h\in[0,L(N)]\).
We denote \[p_{N}(h)=a(N)+\cdots+\frac{h^{k}a^{(k)}(N)}{k!}\] and \[\theta_{N}(h)=\frac{h^{k+1}a^{(k+1)}(\xi_{h})}{(k+1)!}.\] The function \(a^{(k+1)}(t)\) converges to 0 monotonically due to our hypothesis. Therefore, for sufficiently large \(N\), \[\max_{0\leq h\leq L(N)}\big{|}\theta_{N}(h)\big{|}\leq\Big{|}\frac{a^{(k+1)}(N)}{(k+1)!}\Big{|}(L(N))^{k+1}=\theta_{N}, \tag{52}\] and the quantity \(\theta_{N}\) is strongly dominated by the constant 1 due to (50). More precisely, we have that \(\theta_{N}\ll N^{-(k+1)\varepsilon_{2}}\). Let \(A>0\) be any constant. We study the discrepancy of the finite polynomial sequence \[p_{N}(h),\text{ where }0\leq h\leq L(N).\] We shall establish that we have \[\Delta_{[c,d]}\big{(}p_{N}(h)\big{)}\ll_{A}\log^{-A}N\] for any choice of the interval \([c,d]\subseteq[0,1]\). To this end, we apply Theorem E for the finite sequence \((p_{N}(h))_{0\leq h\leq L(N)}\) to deduce that \[\Delta_{[c,d]}\Big{(}\big{(}p_{N}(h)\big{)}_{0\leq h\leq L(N)}\Big{)}\leq\frac{C}{\big{\lfloor}\log^{A}N\big{\rfloor}}+C\sum_{m=1}^{\big{\lfloor}\log^{A}N\big{\rfloor}}\frac{1}{m}\Big{|}\underset{0\leq h\leq L(N)}{\mathbb{E}}e(mp_{N}(h))\Big{|}, \tag{53}\] where \(C\) is an absolute constant. We claim that for every \(1\leq m\leq\big{\lfloor}\log^{A}N\big{\rfloor}\), we have that \[\Big{|}\underset{0\leq h\leq L(N)}{\mathbb{E}}e(mp_{N}(h))\Big{|}\leq\frac{1}{\log^{A}N}, \tag{54}\] provided that \(N\) is sufficiently large. Indeed, assume for the sake of contradiction that there exists \(1\leq m_{0}\leq\left\lfloor\log^{A}N\right\rfloor\), such that \[\Big{|}\underset{0\leq h\leq L(N)}{\mathbb{E}}e(m_{0}p_{N}(h))\Big{|}>\frac{1}{\log^{A}N}. \tag{55}\] The leading coefficient of \(m_{0}p_{N}(h)\) is equal to \[\frac{m_{0}a^{(k)}(N)}{k!}.\] Then, Lemma 2.10 implies that there exists a constant \(C_{k}\) (depending only on \(k\)) and an integer \(q\) satisfying \(|q|\leq\log^{C_{k}A}N\) and such that \[\Big{\lVert}q\cdot\frac{m_{0}a^{(k)}(N)}{k!}\Big{\rVert}_{\mathbb{T}}\leq\frac{\log^{C_{k}A}N}{\lfloor L(N)\rfloor^{k}}.\] The number \(qm_{0}\) is bounded in magnitude by \(\log^{(C_{k}+1)A}(N)\), so that \[q\cdot\frac{m_{0}a^{(k)}(N)}{k!}\ll\log^{(C_{k}+1)A}N\cdot N^{-\varepsilon_{3}}=o_{N}(1).\] Therefore, for large values of \(N\), we can substitute the circle norm of the fraction above with the absolute value, which readily implies that \[\Big{\lvert}q\cdot\frac{m_{0}a^{(k)}(N)}{k!}\Big{\rvert}\leq\frac{\log^{C_{k}A}N}{\lfloor L(N)\rfloor^{k}}\implies\lfloor L(N)\rfloor^{k}\big{\lvert}a^{(k)}(N)\big{\rvert}\leq k!\log^{C_{k}A}N.\] However, this implies that \(L(t)\) cannot strongly dominate the function \(\big{(}a^{(k)}(t)\big{)}^{-\frac{1}{k}}\), which is a contradiction due to our hypothesis. We have established that for every \(1\leq m\leq\left\lfloor\log^{A}N\right\rfloor\) and large \(N\), inequality (54) holds. Substituting this in (53), we deduce that \[\Delta_{[c,d]}\Big{(}\big{(}p_{N}(h)\big{)}_{0\leq h\leq L(N)}\Big{)}\leq\frac{C}{\big{\lfloor}\log^{A}N\big{\rfloor}}+C\sum_{m=1}^{\left\lfloor\log^{A}N\right\rfloor}\frac{1}{m\log^{A}N},\] which implies that \[\Delta_{[c,d]}\Big{(}\big{(}p_{N}(h)\big{)}_{0\leq h\leq L(N)}\Big{)}\ll\frac{A\log\log N}{\log^{A}N}.\] In particular, since \(A\) was arbitrary, we get \[\Delta_{[c,d]}\Big{(}\big{(}p_{N}(h)\big{)}_{0\leq h\leq L(N)}\Big{)}\ll_{A}\frac{1}{\log^{A}N}. \tag{56}\] This establishes the first part of the proposition.
The second part of our statement follows from an application of the bound on the discrepancy of the finite polynomial sequence \((p_{N}(h))\). Indeed, we consider the set \[S_{N}=[0,\theta_{N}]\cup[1-\theta_{N},1),\] where we recall that \(\theta_{N}\) was defined in (52) and decays faster than a small fractional power. Then, if \(\{p_{N}(h)\}\notin S_{N}\), we have \(\lfloor p_{N}(h)+\theta_{N}(h)\rfloor=\lfloor p_{N}(h)\rfloor\), as can be seen by noticing that the error term in (51) is bounded in magnitude by \(\theta_{N}\). Now, we estimate the number of integers \(h\in[0,L(N)]\) for which \(\{p_{N}(h)\}\in S_{N}\). Using the definition of discrepancy and the recently established bounds, we deduce that \[\frac{\big{\lvert}\{h\in[0,L(N)]\colon\{p_{N}(h)\}\in[0,\theta_{N}]\}\big{\rvert} }{L(N)}-\theta_{N}\ll_{A}\frac{1}{\log^{A}N}\] for every \(A>0\). Since the number \(\theta_{N}\) is dominated by \(N^{-(k+1)\varepsilon_{2}}\), this implies that \[\big{|}\{h\in[0,L(N)]\colon\{p_{N}(h)\}\in[0,\theta_{N}]\}\big{|}\ll_{A}\frac{L( N)}{\log^{A}N}.\] An entirely similar argument yields the analogous relation for the interval \([1-\theta_{N},1)\). Therefore, the number of integers in \([0,L(N)]\) for which \(\{p_{N}(h)\}\in S_{N}\) is at most \(O_{A}(L(N)\log^{-A}N)\). In conclusion, since \(\lfloor a(N+h)\rfloor=\lfloor p_{N}(h)\rfloor\) for all integers not in \(S_{N}\), we have that the number of integers which does not satisfy this last relation is \(O_{A}(L(N)\log^{-A}N)\), which yields the desired result. The above proposition asserts that, for almost all values of \(h\in[0,L(N)]\), we can write \(\lfloor a(N+h)\rfloor=\lfloor p_{N}(h)\rfloor\). The logarithmic power saving in the statement will be helpful since we are dealing with averages weighted by the sequence \(\Lambda_{w,b}(n)-1\), which has size comparable to \(\log N\) on the interval \([N,N+L(N)]\). Furthermore, notice that we did not assume that \(a\) is a Hardy field function in the proof. Thus, the conditions in this proposition can be used to prove a comparison result for more general iterates. #### 4.4.2. The case of slow functions Unfortunately, the previous proposition cannot deal with functions whose only possible Taylor approximations involve only a constant term. This case will emerge when we have sub-fractional functions (see Definition 2.1) since, as we have already remarked, these functions have a polynomial approximation of degree \(0\) in short intervals (assuming that \(L(t)\ll t^{1-\varepsilon}\)). To cover this case, we will need the following proposition which is practically of a qualitative nature. **Proposition 4.5**.: _Let \(a(t)\in\mathcal{H}\) be a sub-fractional function such that \(a(t)\succ\log t\). Assume \(L(t)\) is a positive sub-linear function going to infinity and such that \(L(t)\ll t^{1-\delta}\), for some \(\delta>0\). Then, for every \(0<\varepsilon<1\), we have the following: for all \(R\in\mathbb{N}\) sufficiently large we have \(\lfloor a(N+h)\rfloor=\lfloor a(N)\rfloor\) for every \(h\in[0,L(N)]\), for all, except at most \(\varepsilon R\) values of \(N\in[1,R]\)._ Proof.: Observe that for any \(h\in[0,L(N)]\), we have \[a(N+h)=a(N)+ha^{\prime}(\xi_{h}) \tag{57}\] for some \(\xi_{h}\in[N,N+h]\). In addition, since \(a^{\prime}(t)\) converges to \(0\) monotonically, we have \[|ha^{\prime}(\xi_{h})|\leq L(N)a^{\prime}(N)\ll N^{1-\delta}a^{\prime}(N)\ll 1,\] where the last inequality follows from Lemma 4.1 and the assumption that \(a(t)\) is sub-fractional. 
In particular, there exists a positive real number \(q\), such that \(|ha^{\prime}(\xi_{h})|\ll N^{-q}\), for all \(h\in[0,L(N)]\).15 Footnote 15: We do not actually need this quantity to converge to zero faster than some power of \(N\). The same argument applies if this quantity simply converges to zero. The sequence \(a(n)\) is equidistributed mod \(1\) by Theorem D, since it dominates the function \(\log t\). Now, suppose that \(\varepsilon>0\), and choose a number \(R_{0}\) such that \(R_{0}^{-2q}<\varepsilon/2\). Then, for \(R\geq R_{0}\), the number of integers \(N\in[R_{0},R]\) such that \(\{a(N)\}\in[\frac{\varepsilon}{2},1-\frac{\varepsilon}{2}]\) is \[(R-R_{0})(1-\varepsilon+o_{R}(1))\] due to the fact that \(a(n)\) is equidistributed. For these values of \(N\), we have that \[\{a(N)\}\notin[0,N^{-2q}]\cup[1-N^{-2q},1],\] which implies that for all \(h\in[0,L(N)]\), we have that \(\lfloor a(N+h)\rfloor=\lfloor a(N)\rfloor\), as can be derived easily by (57) and the fact that the error term is \(O(N^{-q})\). If we consider the integers \(N\) in the interval \([1,R_{0}]\) as well, then the number of "bad values" (that is, the numbers \(N\) for which we do not have \(\lfloor a(N+h)\rfloor=\lfloor a(N)\rfloor\) for every \(h\in[0,L(N)]\)) is at most \[R_{0}+(R-R_{0})(\varepsilon+o_{R}(1)).\] Finally, choosing \(R\) sufficiently large, we get that this number is smaller than \(2\varepsilon R\) and the claim follows. In simplistic terms, what we have established is that if we restrict our attention to short intervals \([N,N+L(N)]\) for the natural numbers \(N\), such that \(\{a(N)\}\in[\varepsilon,1-\varepsilon]\), then we can just write \(\lfloor a(N+h)\rfloor=\lfloor a(N)\rfloor\) for all \(h\in[0,L(N)]\). Due to the equidistribution of \(a(n)\) mod \(1\) (which follows from Theorem D), this is practically true for almost all \(N\), if we take \(\varepsilon\) sufficiently small. #### 4.4.3. The case of polynomial functions The final case is the case of functions of the form \(p(t)+x(t)\), where \(p\) is a polynomial with real coefficients and \(x(t)\) is a sub-fractional function. The equidistribution of the corresponding sequence will be affected only by the polynomial \(p\) when restricted to short intervals. Nonetheless, the techniques of Proposition 4.4 cannot be employed, because we cannot establish quantitative bounds on the exponential sums uniformly over all real polynomials. Therefore, we will use the following proposition, which allows us to calculate the integer parts in this case. Unlike the previous two propositions which can be bootstrapped to give a similar statement for several functions, we establish this one for several functions from the outset. We do not need to concern ourselves with rational polynomials, since these can be trivially reduced to the case of integer polynomials by passing to arithmetic progressions. **Proposition 4.6**.: _Let \(k,d\) be positive integers, let \(0<\varepsilon<1/2\) be a real number and let \(w\in\mathbb{N}\). We define \(W=\prod_{p\in\mathbb{P}:\,p\leq w}p\) and let \(1\leq b\leq W\) be any integer with \((b,W)=1\). Suppose that \(a_{1},\ldots,a_{k}\in\mathcal{H}\) are functions of the form \(p_{i}(t)+x_{i}(t)\), where \(p_{i}\) are polynomials of degree at most \(d\) and with at least one irrational non-constant coefficient, while \(x_{i}(t)\) are sub-fractional functions. 
Finally, assume that \(L(t)\) is a positive sub-linear function going to infinity and such that_ \[t^{\frac{5}{8}}\lll L(t)\lll t.\] _(See the notational conventions in Section 1 for the definition of \(\lll\).) Then, for every \(r\) sufficiently large in terms of \(w\), \(\frac{1}{\varepsilon}\), we have that there exists a subset \(\mathcal{B}_{r,\varepsilon}\) of integers in the interval \([r,r+L(r)]\) with at most \(O_{k}(\varepsilon L(r))\) elements, such that for all integers \(n\in[r,r+L(r)]\setminus\mathcal{B}_{r,\varepsilon}\), we have_ \[\lfloor p_{i}(n)+x_{i}(n)\rfloor=\lfloor p_{i}(n)+x_{i}(r)\rfloor.\] _Furthermore, the set \(\mathcal{B}_{r,\varepsilon}\) satisfies_ \[\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\mathbf{1}_{\mathcal{B}_{r,\varepsilon}}(n)\ll_{k,d}\varepsilon+o_{w}(1)\log\frac{1}{\varepsilon}+o_{r}(1). \tag{58}\] **Remark 9**.: _The \(o_{r}(1)\) term depends on the fixed parameters \(w,\varepsilon\). However, in our applications, we will send \(r\to+\infty\), then we will send \(w\to+\infty\), and then \(\varepsilon\to 0\). We shall reiterate this observation in the proof of Theorem 1.1. On the other hand, the \(o_{w}(1)\) term is the same as the one in Lemma 2.7 and depends on the degree \(d\) of the polynomials, which will be fixed in applications._ Proof of Proposition 4.6.: Fix an index \(1\leq i\leq k\) and consider a sufficiently large integer \(r\). Using the mean value theorem and the fact that \(|x^{\prime}_{i}(t)|\) decreases to \(0\) faster than all fractional powers by Lemma 4.1, we deduce that \[\max_{0\leq h\leq L(r)}|x_{i}(r+h)-x_{i}(r)|\leq L(r)|x^{\prime}_{i}(r)|\lll 1.\] In particular, there exists \(\delta_{0}>0\) depending only on the functions \(a_{1},\ldots,a_{k}\) and \(L(t)\), such that \[\max_{0\leq h\leq L(r)}|x_{i}(r+h)-x_{i}(r)|\ll r^{-\delta_{0}} \tag{59}\] for all \(1\leq i\leq k\). Thus, we observe that if \(\{p_{i}(n)+x_{i}(r)\}\in(\varepsilon,1-\varepsilon)\) and \(r\) is large enough in terms of \(1/\varepsilon\), then we have that \[\lfloor p_{i}(n)+x_{i}(n)\rfloor=\lfloor p_{i}(n)+x_{i}(r)\rfloor.\] Naturally, we consider the set \[\mathcal{B}_{i,r,\varepsilon}=\{n\in[r,r+L(r)]\colon\{p_{i}(n)+x_{i}(r)\}\in[0,\varepsilon]\cup[1-\varepsilon,1)\} \tag{60}\] and take \(\mathcal{B}_{r,\varepsilon}=\mathcal{B}_{1,r,\varepsilon}\cup\cdots\cup\mathcal{B}_{k,r,\varepsilon}\). Now, we observe that the polynomial sequence \(p_{i}\) is well-distributed modulo \(1\), since it has at least one non-constant irrational coefficient. Therefore, if \(r\) is large enough, we have that the set \(\mathcal{B}_{i,r,\varepsilon}\) has less than \(3\varepsilon L(r)\) elements (say). Using the union bound, we conclude that the set \(\mathcal{B}_{r,\varepsilon}\) has \(O(\varepsilon kL(r))\) elements. This shows the first requirement of the proposition. We have to establish (58). We shall set \(M=\big{\lfloor}\varepsilon^{-1}\big{\rfloor}\) for brevity so that \(r\) is assumed to be very large in terms of \(M\).
Since the polynomials \(p_{i}\) have at least one non-constant irrational coefficient, we can use Weyl's criterion for well-distribution (see, for instance, [32, Theorem 5.2, Chapter 1]) to conclude that \[\max_{1\leq m\leq M}\Big{|}\underset{r\leq n\leq r+L(r)}{\mathbb{E}}e\big{(}m(p_{i}(n)+x_{i}(r))\big{)}\Big{|}=o_{r}(1),\] for all \(r\) sufficiently large in terms of \(M\), as we have assumed to be the case.17 Footnote 17: A bound that is uniform over all \(m\in\mathbb{N}\) is in general false, so we have to restrict \(m\) to a finite range. On the other hand, Lemma 2.7 implies \[\max_{1\leq m\leq M}\Big{|}\underset{r\leq n\leq r+L(r)}{\mathbb{E}}\big{(}\Lambda_{w,b}(n)-1\big{)}e\big{(}m(p_{i}(n)+x_{i}(r))\big{)}\Big{|}=o_{w}(1)\] for \(r\) sufficiently large in terms of \(w\). Combining the last two bounds, we deduce that \[\max_{1\leq m\leq M}\Big{|}\underset{r\leq n\leq r+L(r)}{\mathbb{E}}\Lambda_{w,b}(n)e\big{(}m(p_{i}(n)+x_{i}(r))\big{)}\Big{|}=o_{w}(1)+o_{r}(1). \tag{61}\] Since we have estimates on the exponential sums weighted by \(\Lambda_{w,b}(n)\), we can now make the passage to (58). To this end, we apply Theorem E for the probability measure \[\nu(S)=\frac{\sum\limits_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\delta_{\{p_{i}(n)+x_{i}(r)\}}(S)}{\sum\limits_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)}.\] (The denominator is non-zero if \(r\) is large enough.) Setting \[S_{r}=\sum\limits_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\] for brevity, we conclude that \[\frac{\sum\limits_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\delta_{\{p_{i}(n)+x_{i}(r)\}}\big{(}[0,\varepsilon]\cup[1-\varepsilon,1)\big{)}}{S_{r}}\ll 2\varepsilon+\frac{1}{M}+\sum\limits_{m=1}^{M}\frac{1}{m}\Big{|}\frac{1}{S_{r}}\sum\limits_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)e\big{(}m(p_{i}(n)+x_{i}(r))\big{)}\Big{|}, \tag{62}\] where the implied constant is absolute. Applying the bounds in (61) and recalling the definition of \(\mathcal{B}_{i,r,\varepsilon}\), we conclude that \[\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\mathbf{1}_{\mathcal{B}_{i,r,\varepsilon}}(n)\ll\Big{(}\varepsilon+\frac{1}{M}\Big{)}S_{r}+\sum_{m=1}^{M}\frac{L(r)}{m}(o_{w}(1)+o_{r}(1))\ll\varepsilon S_{r}+L(r)\big{(}o_{w}(1)+o_{r}(1)\big{)}\log\frac{1}{\varepsilon}, \tag{63}\] since \(M=\big{\lfloor}\varepsilon^{-1}\big{\rfloor}\). Finally, we bound \(S_{r}\) by applying Corollary 2.8 to conclude that \[S_{r}=\frac{\phi(W)}{W}\sum_{\begin{subarray}{c}Wr+b\leq n\leq Wr+b+WL(r)\\ n=b\;(W)\end{subarray}}\Lambda(n)\leq\frac{\phi(W)}{W}\Big{(}\frac{2WL(r)\log r}{\phi(W)\log\big{(}\frac{L(r)}{W}\big{)}}+O\Big{(}\frac{L(r)}{\log r}\Big{)}+O(r^{1/2}\log r)\Big{)}\ll L(r)(1+o_{r}(1)), \tag{64}\] where we used the fact that \(L(r)\gg r^{5/8}\) to bound the first fraction by an absolute constant. Applying this in (63), we conclude that \[\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\mathbf{1}_{\mathcal{B}_{i,r,\varepsilon}}(n)\ll\varepsilon(1+o_{r}(1))+\big{(}o_{w}(1)+o_{r}(1)\big{)}\log\frac{1}{\varepsilon}.\] Finally, we recall that \(\mathcal{B}_{r,\varepsilon}=\mathcal{B}_{1,r,\varepsilon}\cup\cdots\cup\mathcal{B}_{k,r,\varepsilon}\) and use the union bound to get \[\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\mathbf{1}_{\mathcal{B}_{r,\varepsilon}}(n)\ll_{k}\varepsilon+o_{w}(1)\log\frac{1}{\varepsilon}+o_{r}(1),\] provided that \(r\) is very large in terms of \(1/\varepsilon,w\). This is the desired conclusion.
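Before moving on, let us illustrate the statement of Proposition 4.6 with a simple, purely expository example. Take \(k=1\) and \(a_{1}(t)=\sqrt{2}\,t^{2}+\log^{2}t\), so that \(p_{1}(t)=\sqrt{2}\,t^{2}\) and \(x_{1}(t)=\log^{2}t\), and let \(L(t)=t^{2/3}\), which indeed satisfies \(t^{\frac{5}{8}}\lll L(t)\lll t\). For \(0\leq h\leq L(r)\), the mean value theorem gives \[|x_{1}(r+h)-x_{1}(r)|\leq L(r)|x_{1}^{\prime}(r)|=r^{2/3}\cdot\frac{2\log r}{r}\ll r^{-1/4},\] so \(\lfloor\sqrt{2}\,n^{2}+\log^{2}n\rfloor\) can differ from \(\lfloor\sqrt{2}\,n^{2}+\log^{2}r\rfloor\) only when \(\{\sqrt{2}\,n^{2}+\log^{2}r\}\) lies within \(r^{-1/4}\) of an integer, and the well-distribution of the sequence \(\sqrt{2}\,n^{2}\) makes such \(n\in[r,r+L(r)]\) rare; this is precisely the role of the exceptional set \(\mathcal{B}_{r,\varepsilon}\) in the proposition.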
### Simultaneous approximation of Hardy field functions In view of Proposition 4.4, we would like to show that we can find a function \(L(t)\) such that the growth rate condition of the statement is satisfied for several functions in \(\mathcal{H}\) simultaneously. This is the content of the following lemma. We will only need to consider the case where the functions dominate some fractional power, since for sub-fractional functions, we have Propositions 4.5 and 4.6 that can cover them adequately. We refer again to our notational conventions in Section 1 for the notation \(\lll\). **Proposition 4.7**.: _Let \(\ell\in\mathbb{N}\) and suppose \(a_{1},\ldots,a_{\ell}\in\mathcal{H}\) are strongly non-polynomial functions of polynomial growth that are not sub-fractional. Then, for all \(0<c<1\), there exists a positive sub-linear function \(L(t)\), such that \(t^{c}\ll L(t)\ll t^{1-\varepsilon}\) for some \(\varepsilon>0\) and such that, for all \(1\leq i\leq\ell\), there exist positive integers \(k_{i}\), which satisfy_ \[1\lll\big{|}a_{i}^{(k_{i})}(t)\big{|}^{-\frac{1}{k_{i}}}\lll L(t)\lll\big{|}a_ {i}^{(k_{i}+1)}(t)\big{|}^{-\frac{1}{k_{i}+1}}.\] _Furthermore, the integers \(k_{i}\) can be chosen to be arbitrarily large, provided that \(c\) is sufficiently close to 1._ Proof.: We will use induction on \(\ell\). For \(\ell=1\), it suffices to show that there exists a positive integer \(k\), such that the function \(\big{|}a^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}}\) strongly dominates the function \(\big{|}a^{(k)}\big{(}t\big{)}\big{|}^{-\frac{1}{k}}\). Then, we can pick the function \(L(t)\) to be the geometric mean of these two functions to get our claim.19 Footnote 19: It is straightforward to check that if \(f\lll g\), then \(f\lll\sqrt{fg}\lll g\), assuming, of course, that the square root is well-defined (e.g. when the functions \(f,g\) are eventually positive). Firstly, note that if we pick \(k\) sufficiently large, then we can ensure that \((a^{(k)}(t))^{-\frac{1}{k}}\gg t^{c}\), which would also imply the lower bound on the other condition imposed on the function \(L(t)\). To see why this last claim is valid, observe that the derivatives of \(a\) satisfy the assumptions of Lemma 4.1, so that we have \(a^{(k)}(t)\ll t^{-k}a(t)\). Thus, if \(d\) is a positive integer, such that \(t^{d}\) grows faster than \(a(t)\) and we choose \(k>\frac{d}{c}-1\), we verify that our claim holds. Secondly, we will show that for all \(k\in\mathbb{N}\), we have \[\left|a^{(k)}(t)\right|^{-\frac{1}{k}}\ll t^{1-\varepsilon}\] for some \(0<\varepsilon<1\), as this relation (with \(k+1\) in place of \(k\)) will yield the upper bound on the growth of the function \(L(t)\) that we chose above. For the sake of contradiction, we assume that this fails and use the lower bound from Lemma 4.1, to deduce that \[t^{k(\varepsilon-1)}\gg a^{(k)}(t)\succ\frac{a(t)}{t^{k}\log^{2k}t}\] for every \(0<\varepsilon<1\). This, implies that \(a(t)\ll t^{k\varepsilon}\log^{2k}t\) for all small \(\varepsilon\), which contradicts the hypothesis that \(a(t)\) is not sub-fractional. We remark in passing that this argument also indicates that the integer \(k\) can be made arbitrarily large by choosing \(c\) to be sufficiently close to \(1\), as the last claim in our statement suggests. 
In order to complete the base case of the induction, we show that for all sufficiently large \(k\), we have \[\left|a^{(k)}(t)\right|^{-\frac{1}{k}}\lll\left|a^{(k+1)}(t)\right|^{-\frac{1}{k+1}}.\] Equivalently, we prove that \[\frac{\left|a^{(k+1)}(t)\right|^{-\frac{1}{k+1}}}{\left|a^{(k)}(t)\right|^{-\frac{1}{k}}}\gg t^{\delta} \tag{65}\] for some \(\delta>0\) that will depend on \(k\). Choose a real number \(0<q<1\) (the value of \(q\) depends on \(k\)), such that \(\left|a^{(k)}(t)\right|^{-\frac{1}{k}}\ll t^{1-q}\), which can be done as we demonstrated above. In order to establish (65), we combine the inequality \(a^{(k)}(t)\gg ta^{(k+1)}(t)\) with the inequality \(\left|a^{(k)}(t)\right|^{-\frac{1}{k}}\ll t^{1-q}\), which after some computations gives the desired result for \(\delta=q/(k+1)\). This completes the base case. Assume that the claim has been established for the integer \(\ell\). Now, let \(a_{1},\ldots,a_{\ell+1}\) be functions that satisfy the hypotheses of the proposition. Our induction hypothesis implies that there exists a function \(L(t)\) with \(t^{c}\ll L(t)\ll t^{1-\varepsilon}\) and integers \(k_{1},\ldots,k_{\ell}\), such that \[\left|a_{i}^{(k_{i})}(t)\right|^{-\frac{1}{k_{i}}}\lll L(t)\lll|a_{i}^{(k_{i}+1)}(t)|^{-\frac{1}{k_{i}+1}},\ \ 1\leq i\leq\ell.\] Due to Proposition 4.3, there exists a positive integer \(s\), such that \[\left|a_{\ell+1}^{(s)}(t)\right|^{-\frac{1}{s}}\prec L(t)\prec\left|a_{\ell+1}^{(s+1)}(t)\right|^{-\frac{1}{s+1}}. \tag{66}\] Without loss of generality, we may assume that \(c\) is sufficiently close to \(1\). This implies that the integer \(s\) can be chosen to be sufficiently large as well, so that the relation \(\left|a_{\ell+1}^{(s)}(t)\right|^{-\frac{1}{s}}\lll\left|a_{\ell+1}^{(s+1)}(t)\right|^{-\frac{1}{s+1}}\) holds, as we established in the base case of the induction. If each function strongly dominates the preceding one in (66), then we are finished. Therefore, assume that \(L(t)\) is not strongly dominated by the function \(\left|a_{\ell+1}^{(s+1)}(t)\right|^{-\frac{1}{s+1}}\) (the other case is similar). Note that for every \(1\leq i\leq\ell\), we have that \[\left|a_{i}^{(k_{i})}(t)\right|^{-\frac{1}{k_{i}}}\lll\left|a_{\ell+1}^{(s+1)}(t)\right|^{-\frac{1}{s+1}}.\] Indeed, since the function \(L(t)\) strongly dominates the function \(\left|a_{i}^{(k_{i})}(t)\right|^{-\frac{1}{k_{i}}}\) (by the induction hypothesis) and \(L(t)\) grows slower than the function \(\left|a_{\ell+1}^{(s+1)}(t)\right|^{-\frac{1}{s+1}}\), this claim follows immediately. Among the functions \(a_{1},\ldots,a_{\ell+1}\), we choose a function for which the growth rate of \(\big{|}a_{i}^{(k_{i})}(t)\big{|}^{-\frac{1}{k_{i}}}\) is maximized.20 Assume that this happens for the index \(i_{0}\in\{1,\ldots,\ell+1\}\) and observe that the function \(\big{|}a_{\ell+1}^{(s+1)}(t)\big{|}^{-\frac{1}{s+1}}\) strongly dominates \(\big{|}a_{i_{0}}^{(k_{i_{0}})}(t)\big{|}^{-\frac{1}{k_{i_{0}}}}\), because the first function grows faster than \(L(t)\) and \(L(t)\) strongly dominates the latter (in the case \(i_{0}=\ell+1\), this follows from the fact that \(\big{|}a_{\ell+1}^{(s)}(t)\big{|}^{-\frac{1}{s}}\lll\big{|}a_{\ell+1}^{(s+1)}(t)\big{|}^{-\frac{1}{s+1}}\)). Footnote 20: In the case \(i=\ell+1\), we are referring to the function \(\big{|}a_{i}^{(s)}(t)\big{|}^{-\frac{1}{s}}\).
Define the function \(\widetilde{L}(t)\) to be the geometric mean of the functions \(\big{|}a_{i_{0}}^{(k_{i_{0}})}(t)\big{|}^{-\frac{1}{k_{i_{0}}}}\) and \(\big{|}a_{\ell+1}^{(s+1)}(t)\big{|}^{-\frac{1}{s+1}}\). Observe that this function grows slower than the function \(L(t)\), since it is strongly dominated by the function \(\big{|}a_{\ell+1}^{(s+1)}(t)\big{|}^{-\frac{1}{s+1}}\), while the original function \(L(t)\) is not. Due to its construction, we deduce that the function \(\widetilde{L}(t)\) satisfies \[\big{|}a_{\ell+1}^{(s)}(t)\big{|}^{-\frac{1}{s}}\lll\widetilde{L}(t)\lll \big{|}a_{\ell+1}^{(s+1)}(t)\big{|}^{-\frac{1}{s+1}}\] and \[\big{|}a_{i}^{(k_{i})}(t)\big{|}^{-\frac{1}{k_{i}}}\lll\widetilde{L}(t)\] for all \(1\leq i\leq\ell\). This is a simple consequence of the fact that \(\widetilde{L}(t)\) strongly dominates the function \(\big{|}a_{i_{0}}^{(k_{i_{0}})}(t)\big{|}^{-\frac{1}{k_{i_{0}}}}\) and the index \(i_{0}\) was chosen so that the growth rate of the associated function is maximized. In addition, the function \(L(t)\) grows faster than the function \(\widetilde{L}(t)\), which implies that \[\widetilde{L}(t)\prec L(t)\lll\big{|}a_{i}^{(k_{i}+1)}(t)\big{|}^{-\frac{1}{ k_{i}+1}}\] for all \(1\leq i\leq\ell\). The analogous relation in the case \(i=\ell+1\) is also correct, as we pointed out previously. Therefore, the function \(\widetilde{L}(t)\) satisfies all of our required properties and the induction is complete. Finally, the assertion that the integers \(k_{i}\) can be made arbitrarily large follows by enlarging \(c\) appropriately and the fact that given a fixed \(k_{i}\in\mathbb{N}\), the function \(\big{|}a_{i}^{(k_{i}+1)}(t)\big{|}^{-\frac{1}{k_{i}+1}}\) cannot dominate all powers \(t^{c}\) with \(c<1\), as we displayed in the base case of the induction. We can actually weaken the hypothesis that the functions are strongly non-polynomial. The following proposition is more convenient to use and its proof is an immediate consequence of Proposition 4.7. **Proposition 4.8**.: _Let \(\ell\in\mathbb{N}\) and suppose \(a_{1},\ldots,a_{\ell}\in\mathcal{H}\) are functions of polynomial growth, such that \(|a_{i}(t)-p(t)|\ggg 1\), for all real polynomials \(p(t)\) and every \(i\in\{1,\ldots,\ell\}\). Then, for all \(0<c<1\), there exists a positive sub-linear function \(L(t)\), such that \(t^{c}\prec L(t)\ll t^{1-\varepsilon}\) for some \(\varepsilon>0\) and such that there exist positive integers \(k_{i}\), which satisfy_ \[1\lll\big{|}a_{i}^{(k_{i})}(t)\big{|}^{-\frac{1}{k_{i}}}\lll L(t)\lll\big{|}a_ {i}^{(k_{i}+1)}(t)\big{|}^{-\frac{1}{k_{i}+1}}.\] Proof.: Each of the functions \(a_{i}\) can be written in the form \(p_{i}(t)+x_{i}(t)\), where \(p_{i}\) is a polynomial with real coefficients and \(x_{i}\in\mathcal{H}\) is strongly non-polynomial. The hypothesis implies that the functions \(x_{i}\) are not sub-fractional. If \(k\) is large enough, then we have \(a_{i}^{(k)}(t)=x_{i}^{(k)}(t)\) for all \(t\in\mathbb{R}\). The conclusion follows from Proposition 4.7 applied to the functions \(x_{i}(t)\), where the corresponding integers \(k_{i}\) are chosen large enough so that the equality \(a_{i}^{(k_{i})}(t)=x_{i}^{(k_{i})}(t)\) holds. ## 5. The main comparison In this section, we will establish the main proposition that asserts that averages weighted by the W-tricked von-Mangoldt function are morally equal to the standard Cesaro averages over \(\mathbb{N}\). 
In order to do this, we will use the polynomial approximations for our Hardy field functions and we will try to remove the error terms arising from these approximations using Propositions 4.4, 4.5 and 4.6. Firstly, we will use a lemma that allows us to pass from long averages over the interval \([1,N]\) to shorter averages over intervals of the form \([N,N+L(N)]\). This lemma is similar to [44, Lemma 3.3], the only difference being the presence of the unbounded weights. **Lemma 5.1**.: _Let \((A_{n})_{n\in\mathbb{N}}\) be a sequence in a normed space, such that \(\|A_{n}\|\leq 1\) and let \(L(t)\in\mathcal{H}\) be an (eventually) increasing sub-linear function, such that \(L(t)\gg t^{\varepsilon}\) for some \(\varepsilon>0\). Suppose that \(w\) is a fixed natural number. Then, we have_ \[\Big{\|}\underset{1\leq r\leq R}{\mathbb{E}}\big{(}\Lambda_{w,b}(r)-1\big{)} A_{r}\Big{\|}\leq\underset{1\leq r\leq R}{\mathbb{E}}\ \Big{\|}\underset{r\leq n\leq r+L(r)}{\mathbb{E}}\big{(}\Lambda_{w,b}(n)-1 \big{)}A_{n}\Big{\|}+o_{R}(1),\] _uniformly for all \(1\leq b\leq W\) with \((b,W)=1\)._ Proof.: Using the triangle inequality, we deduce that \[\underset{1\leq r\leq R}{\mathbb{E}}\ \Big{\|}\underset{r\leq n\leq r+L(r)}{ \mathbb{E}}\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}\Big{\|}\geq\Big{\|}\underset {1\leq r\leq R}{\mathbb{E}}\big{(}\underset{r\leq n\leq r+L(r)}{\mathbb{E}} \big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}\big{)}\Big{\|}.\] Therefore, our result will follow if we show that \[\Big{\|}\underset{1\leq r\leq R}{\mathbb{E}}\Big{(}\underset{r\leq n\leq r+L (r)}{\mathbb{E}}\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}\Big{)}-\underset{1\leq r \leq R}{\mathbb{E}}\big{(}\Lambda_{w,b}(r)-1\big{)}A_{r}\Big{\|}=o_{R}(1).\] Let \(u\) denote the inverse of the function \(t+L(t)\), which is well-defined for sufficiently large \(t\) due to monotonicity. Furthermore, it is straightforward to derive that \(\underset{t\rightarrow+\infty}{\lim}u(t)/t=1\) from the fact that \(t+L(t)\) also grows linearly. Now, we have \[\underset{1\leq r\leq R}{\mathbb{E}}\Big{(}\underset{r\leq n\leq r +L(r)}{\mathbb{E}}\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}\Big{)}=\frac{1}{R} \Big{(}\sum_{n=1}^{R}p_{R}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}+\\ \sum_{n=R+1}^{R+L(R)}p_{R}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n} \Big{)}\] for some real numbers \(p_{R}(n)\), which denote the number of appearances of \(A_{n}\) in the previous expression (weighted by the term \(1/L(r)\) that appears on each inner average). Assuming that \(n\) (and thus \(R\)) is sufficiently large, so that \(u(n)\) is positive, we can calculate \(p_{R}(n)\) to be equal to \[p_{R}(n)=\frac{1}{L(\lfloor u(n)\rfloor)+1}+\cdots+\frac{1}{L(n)+1}+o_{n}(1),\] since the number \(A_{n}\) appears on the average \(\underset{r\leq n\leq r+L(r)}{\mathbb{E}}\) if and only if \(u(n)\leq r\leq n\). Note that \(p_{R}(n)\) is actually independent of \(R\) (for \(n\) large enough) and therefore, we will denote it simply as \(p(n)\) from now on. We have that \[\lim_{n\rightarrow+\infty}p(n)=1. \tag{67}\] This follows exactly as in the proof of Lemma 3.3 in [44], so we omit its proof here. Now, we show that \[\frac{1}{R}\sum_{n=R+1}^{R+L(R)}p(n)\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}=o_{R} (1). 
\tag{68}\] Bounding \(p(n)\) trivially by 2 (since its limit is equal to 1) and \(\|A_{n}\|\) by 1, we infer that it is sufficient to show that \[\frac{1}{R}\sum_{n=R+1}^{R+L(R)}\big{|}\Lambda_{w,b}(n)-1\big{|}=o_{R}(1).\] Using the triangle inequality and the fact that \(L(r)\prec r\), this reduces to \[\frac{1}{R}\sum_{n=R+1}^{R+L(R)}\Lambda_{w,b}(n)=o_{R}(1).\] To establish this, we apply Corollary 2.8 to conclude that \[\frac{1}{R}\sum_{n=R+1}^{R+L(R)}\frac{\phi(W)}{W}\Lambda(Wn+b)=\frac{1}{R}\sum _{WR+R+b\leq n\leq WR+R+b+WL(r)}\Lambda(n)\leq\] \[\frac{\phi(W)}{WR}\Big{(}\frac{2WL(R)\log R}{\phi(W)\log\big{(}\frac{L(R)}{W} \big{)}}+O\big{(}\frac{L(R)}{\log(WR+R+b)}\big{)}+O(R^{1/2}\log R)\Big{)}=o_{R} (1).\] This follows from the fact that \(L(R)\prec R\) and that the quantity \(\log R/\log(L(R))\) is bounded by the hypothesis \(L(R)\gg R^{\varepsilon}\). In view of this, it suffices to show that \[\Big{\|}\frac{1}{R}\sum_{n=1}^{R}p(n)\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}- \frac{1}{R}\sum_{n=1}^{R}\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}\Big{\|}=o_{R}(1).\] We have \[\Big{\|}\frac{1}{R}\sum_{n=1}^{R}p(n)\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}- \frac{1}{R}\sum_{n=1}^{R}\big{(}\Lambda_{w,b}(n)-1\big{)}A_{n}\Big{\|}\leq\frac {1}{R}\sum_{n=1}^{R}|p(n)-1||\Lambda_{w,b}(n)-1|,\] by the triangle inequality. Now, given \(\varepsilon>0\), we can bound this by \[\frac{1}{R}\sum_{n=1}^{R}\varepsilon\big{(}\Lambda_{w,b}(n)+1\big{)}+o_{R}(1),\] where the \(o_{R}(1)\) term reflects the fact that the bound for \(|p(n)-1|\leq\varepsilon\) is valid for large values of \(n\) only. It suffices to bound the term \[\frac{\varepsilon}{R}\sum_{n=1}^{R}\Lambda_{w,b}(n),\] since the remainder is simply \(O(\varepsilon)\). However, using Corollary 2.8 (or the prime number theorem in arithmetic progressions), we see that this term is also \(O(\varepsilon)\), exactly as we did above. Sending \(\varepsilon\to 0\), we reach the desired conclusion. We restate here our main theorem for convenience. **Theorem 1.1**.: _Let \(\ell,k\) be positive integers and, for all \(1\leq i\leq\ell,\ 1\leq j\leq k\), let \(a_{ij}\in\mathcal{H}\) be functions of polynomial growth such that_ \[|a_{ij}(t)-q(t)|\succ\log t\ \text{ for every polynomial }q(t)\in\mathbb{Q}[t], \tag{69}\] _or_ \[\lim_{t\to+\infty}|a_{ij}(t)-q(t)|=0\ \text{ for some polynomial }q(t)\in\mathbb{Q}[t]+\mathbb{R}. \tag{70}\] _Then, for any measure-preserving system \((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\) of commuting transformations and functions \(f_{1},\ldots,f_{\ell}\in L^{\infty}(\mu)\), we have_ \[\lim_{w\to+\infty}\ \limsup_{N\to+\infty}\max_{\begin{subarray}{c}1\leq b\leq W \\ (b,W)=1\end{subarray}}\Big{\|}\frac{1}{N}\sum_{n=1}^{N}\ \big{(}\Lambda_{w,b}(n)-1\big{)}\prod_{j=1}^{\ell}\big{(} \prod_{i=1}^{k}T_{i}^{\lfloor a_{ij}(Wn+b)\rfloor}\big{)}f_{j}\Big{\|}_{L^{2}( \mu)}=0. \tag{71}\] Proof.: We split this reduction into several steps. For a function \(a\in\mathcal{H}\), we will use the notation \(a_{w,b}(t)\) to denote the function \(a(Wt+b)\) and we will need to keep in mind that the asymptotic constants must not depend on \(W\) and \(b\). As is typical in these arguments, we shall rescale the functions \(f_{1},\ldots,f_{\ell}\) so that they are all bounded by \(1\). 
### Step 1: A preparatory decomposition of the functions Each function \(a_{ij}\) can be written in the form \[a_{ij}(t)=g_{ij}(t)+p_{ij}(t)+q_{ij}(t)\] where \(g_{ij}(t)\) is a strongly non-polynomial function (or identically zero), \(p_{ij}(t)\) is either a polynomial with at least one non-constant irrational coefficient or a constant polynomial, and, lastly, \(q_{ij}(t)\) is a polynomial with rational coefficients. Observe that there exists a fixed positive integer \(Q_{0}\) for which all the polynomials \(q_{ij}(Q_{0}n+s_{0})\) have integer coefficients except possibly the constant term, for all \(0\leq s_{0}\leq Q_{0}\). These non-integer constant terms can be absorbed into the polynomial \(p_{ij}(t)\). Therefore, splitting our average into the arithmetic progressions \((Q_{0}n+s_{0})\), it suffices to show that \[\lim_{w\to+\infty}\ \limsup_{N\to+\infty}\ \max_{\begin{subarray}{c}1\leq b \leq W\\ (b,W)=1\end{subarray}}\Big{\|}\frac{1}{N}\sum_{n=1}^{N}\big{(}\Lambda_{w,b}(Q _{0}n+s_{0})-1\big{)}\prod_{j=1}^{\ell}\big{(}\prod_{i=1}^{k}T_{i}^{\lfloor a _{ij,w,b}(Q_{0}n+s_{0})\rfloor}\big{)}f_{j}\Big{\|}_{L^{2}(\mu)}=0\] for all \(s_{0}\in\{0,\ldots,Q_{0}-1\}\). Observe that each one of the functions \(a_{ij,w,b}(Q_{0}t+s_{0})\) satisfies either (7) or (8). Since the polynomials \(q_{ij,w,b}(Q_{0}n+s_{0})\) have integer coefficients, we can rewrite the previous expression as \[\lim_{w\to+\infty}\ \lim_{N\to+\infty}\ \max_{\begin{subarray}{c}1\leq b \leq W\\ (b,W)=1\end{subarray}}\Big{\|}\frac{1}{N}\sum_{n=1}^{N}\mathbf{1}_{s_{0}\ (Q_{0})}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}\\ \prod_{j=1}^{\ell}\big{(}\prod_{i=1}^{k}T_{i}^{\lfloor g_{ij,w,b}( n)+p_{ij,w,b}(n)\rfloor+q_{ij,w,b}(n)}\big{)}f_{j}\Big{\|}_{L^{2}(\mu)}=0. \tag{72}\] ### Step 2: Separating the iterates Define the sets \[S_{1}=\{(i,j)\in[1,k]\times[1,\ell]\colon g_{ij}(t)\ll t^{\delta}\text{ for all }\delta>0\text{ and }p_{ij}\text{ is non-constant}\}, \tag{73}\] and \[S_{2}=\{(i,j)\in[1,k]\times[1,\ell]\colon g_{ij}(t)\ll t^{\delta}\text{ for all }\delta>0\text{ and }p_{ij}\text{ is constant}\}, \tag{74}\] whose union contains precisely the pairs \((i,j)\), for which \(g_{ij}(t)\) is sub-fractional. Our first observation is that if a pair \((i,j)\) belongs to \(S_{2}\), then the function \(a_{ij}(t)\) has the form \(g_{ij}(t)+q_{ij}(t)\), where \(g_{ij}\) is sub-fractional and \(q_{ij}\) is a rational polynomial. Thus, (7) and (8) imply that we either have that \(g_{ij}(t)\succ\log(t)\) or \(g_{ij}(t)\) converges to a constant, as \(t\to+\infty\). The constant can be absorbed into the constant polynomial \(p_{ij}\). In view of this, we will subdivide \(S_{2}\) further into the following two sets: \[S_{2}^{\prime}=\{(i,j)\in S_{2}\colon g_{ij}(t)\succ\log t\},\] \[S_{2}^{\prime\prime}=\{(i,j)\in S_{2}\colon g_{ij}(t)\prec 1\}. \tag{75}\] Observe that iterates corresponding to pairs \((i,j)\) that do not belong to the union \(S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime\prime}\) have an expression inside the integer part that has the form \(g(t)+p(t)\), where \(g\) is a strongly non-polynomial function that is not sub-fractional. In particular, these functions satisfy the hypotheses of Proposition 4.8. Furthermore, functions that correspond to the set \(S_{1}\) have the form \(p(t)+x(t)\), where \(p\) is an irrational polynomial and \(x\) is sub-fractional, while functions in \(S_{2}^{\prime}\) are sub-fractional functions that dominate \(\log t\). 
We will use Proposition 4.6 and Proposition 4.5 for these two collections respectively. Finally, observe that if \((i,j)\in S_{2}^{\prime\prime}\), then for \(n\) sufficiently large, we can write \[\lfloor a_{ij}(Q_{0}n+s_{0})\rfloor=q_{ij}(Q_{0}n+s_{0})+\lfloor c_{ij}\rfloor+ e_{ij,Q_{0}n+s_{0}},\] where \(e_{ij,Q_{0}n+s_{0}}\in\{0,-1\}\) and \(c_{ij}\) is a constant term arising from the constant (in this case) polynomial \(p_{ij}\). The error term \(e_{ij,Q_{0}n+s_{0}}\) actually exists only if \(c_{ij}\) is an integer. In particular, we have \(e_{ij,Q_{0}n+s_{0}}=0\) for all large enough \(n\) when \(g_{ij}(t)\) decreases to \(0\) and \(e_{ij,Q_{0}n+s_{0}}=-1\) if \(g_{ij}(t)\) increases to \(0\). Therefore, if we redefine the polynomials \(q_{ij}(t)\) accordingly so that both \(\lfloor c_{ij}\rfloor\) and the error term \(e_{ij,Q_{0}n+s_{0}}\) (which is independent of \(s_{0}\)) is absorbed into the constant term, we may assume without loss of generality that for all \(n\) sufficiently large, we have \[\lfloor g_{ij}(Q_{0}n+s_{0})+p_{ij}(Q_{0}n+s_{0})\rfloor+q_{ij}(Q_{0}n+s_{0})= q_{ij}(Q_{0}n+s_{0}).\] We will employ this relation to simplify the iterates in (72), where \(n\) will be replaced by \(Wn+b\). We rewrite the limit in (72) as \[\lim_{w\to+\infty}\ \limsup_{N\to+\infty}\ \max_{\begin{subarray}{c}1 \leq b\leq W\\ (b,W)=1\end{subarray}}\Big{\|}\frac{1}{N}\sum_{n=1}^{N}\mathbf{1}_{s_{0}\;(Q_ {0})}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}\\ \prod_{j=1}^{\ell}\Big{(}\prod_{i:\;(i,j)\in S_{1}}T_{i}^{\big{[} g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{]}+q_{ij,w,b}(n)}\cdot\prod_{i:\;(i,j)\in S _{2}^{\prime}}T_{i}^{\big{[}g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{]}+q_{ij,w,b}(n)} \cdot\\ \prod_{i:\;(i,j)\in S_{2}^{\prime\prime}}T_{i}^{q_{ij,w,b}(n)} \cdot\prod_{i:\;(i,j)\not\in S_{2}^{\prime}\cup S_{2}^{\prime\prime}}T_{i}^{ \big{[}g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{]}+q_{ij,w,b}(n)}\Big{)}f_{j}\Big{\|}_{ L^{2}(\mu)}. \tag{76}\] **Step 3: Passing to short intervals.** The functions \(g_{ij}(t)+p_{ij}(t)\) with \((i,j)\in S_{1}\) satisfy the assumptions of Proposition 4.6, while the functions \(g_{ij}(t)+p_{ij}(t)\) with \((i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime\prime}\) satisfy the assumptions of Proposition 4.8 (thus, each one of them satisfies Proposition 4.4 for some appropriately chosen values of the integer \(k\) in that statement). Lastly, the functions of the set \(S_{2}^{\prime}\) satisfy the assumptions of Propositions 4.5. It is straightforward to infer that, in each case, the corresponding property continues to hold when the functions \(g_{ij}(t)+p_{ij}(t)\) are replaced by the functions \(g_{ij,w,b}(t)+p_{ij,w,b}(t)\). This is a simple consequence of the fact that if \(f\in\mathcal{H}\) has polynomial growth, then the functions \(f\) and \(f_{w,b}\) have the same growth rate. Let \(d_{0}\) be the maximal degree appearing among the polynomials \(p_{ij}(t)\). Then, we can find a sub-linear function \(L(t)\) such that \[t^{\frac{5}{8}}\lll L(t)\lll t \tag{77}\] and, such that there exists positive integers \(k_{ij}\) for \((i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime\prime}\), for which we have the growth inequalities \[\Big{|}g_{ij}^{(k_{ij})}(t)\Big{|}^{-\frac{1}{k_{ij}}}\lll L(t)\lll\Big{|}g_{ ij}^{(k_{ij}+1)}(t)\Big{|}^{-\frac{1}{k_{ij}+1}}. \tag{78}\] Furthermore, we can assume that \(k_{ij}\) are very large compared to the maximal degree \(d_{0}\) of the polynomials \(p_{ij}(t)\), by taking \(L(t)\) to grow sufficiently fast. 
We remark that (78) also implies the inequalities \[\Big{|}g_{ij,w,b}^{(k_{ij})}(t)\Big{|}^{-\frac{1}{k_{ij}}}\lll L(t)\lll\Big{|} g_{ij,w,b}^{(k_{ij}+1)}(t)\Big{|}^{-\frac{1}{k_{ij}+1}}. \tag{79}\] for any fixed \(w,b\). For the choice of \(L(t)\) that we made above, we apply Lemma 5.1 to infer that it suffices to show that \[\lim_{w\to+\infty}\ \limsup_{R\to+\infty}\ \max_{\begin{subarray}{c}1 \leq b\leq W\\ (b,W)=1\end{subarray}}\ \operatorname*{\mathbb{E}}_{1\leq r\leq R}\Big{\|} \operatorname*{\mathbb{E}}_{r\leq n\leq r+L(r)}\mathbf{1}_{s_{0}\ (Q_{0})}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}\\ \prod_{j=1}^{\ell}\Big{(}\prod_{i:\ (i,j)\in S_{1}}T_{i}^{\lfloor g _{ij,w,b}(n)+p_{ij,w,b}(n)\rfloor+q_{ij,w,b}(n)}\cdot\prod_{i:\ (i,j)\in S_{2}^{\prime}}T_{i}^{\lfloor g _{ij,w,b}(n)+p_{ij,w,b}(n)\rfloor+q_{ij,w,b}(n)}.\\ \prod_{i:\ (i,j)\in S_{2}^{\prime\prime}}T_{i}^{q_{ij,w,b}(n)} \cdot\prod_{i:\ (i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime\prime}}T_{i}^{ \lfloor g_{ij,w,b}(n)+p_{ij,w,b}(n)\rfloor+q_{ij,w,b}(n)}\Big{)}f_{j}\Big{\|}_ {L^{2}(\mu)}=0. \tag{80}\] **Step 4: Reducing to polynomial iterates and using uniformity bounds.** We now fix \(w\) (thus \(W\)) and the integer \(b\). Suppose that \(R\) is sufficiently large and consider the expression \[\mathcal{J}_{w,b,s_{0}}(R):=\operatorname*{\mathbb{E}}_{1\leq r \leq R}\Big{\|}\operatorname*{\mathbb{E}}_{r\leq n\leq r+L(r)}\mathbf{1}_{s_{0 }\ (Q_{0})}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}\\ \prod_{j=1}^{\ell}\Big{(}\prod_{i:\ (i,j)\in S_{1}}T_{i}^{ \lfloor g_{ij,w,b}(n)+p_{ij,w,b}(n)\rfloor+q_{ij,w,b}(n)}\cdot\prod_{i:\ (i,j)\in S_{2}^{\prime}}T_{i}^{ \lfloor g_{ij,w,b}(n)+p_{ij,w,b}(n)\rfloor+q_{ij,w,b}(n)}.\\ \prod_{i:\ (i,j)\in S_{2}^{\prime\prime}}T_{i}^{q_{ij,w,b}(n)} \cdot\prod_{i:\ (i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime\prime}}T_{i}^{ \lfloor g_{ij,w,b}(n)+p_{ij,w,b}(n)\rfloor+q_{ij,w,b}(n)}\Big{)}f_{j}\Big{\|}_ {L^{2}(\mu)}. \tag{81}\] We will apply Propositions 4.4, 4.5 and 4.6 to replace the iterates with polynomials (with coefficients depending on \(r\)). Due to the nature of Proposition 4.5 (namely, that it excludes a small set of \(r\in[1,R]\)), we let \(\mathcal{E}_{R,w,b}\) denote a subset of \(\{1,\ldots,R\}\), which will be constructed throughout the proof and will have small size. We remark that the iterates corresponding to \(S_{2}^{\prime\prime}\) have been dealt with (morally), so we will focus our attention on the other three sets. Let \(d\) be the maximum number among the degrees among the polynomials \(p_{ij},q_{ij}\) and the integers \(k_{ij}\). Let \(\varepsilon>0\) be a small (but fixed) quantity and we assume that \(r\) is large enough in terms of \(1/\varepsilon\), i.e., larger than some \(R_{0}=R_{0}(\varepsilon)\). Observe that if \(R\) is sufficiently large, then we have \(R_{0}\leq\varepsilon R\). We include the "small" \(r\) in the exceptional set \(\mathcal{E}_{R,w,b}\), so that \(\mathcal{E}_{R,w,b}\) now has at most \(\varepsilon R\) elements. We will need to bound the expression \(\mathcal{J}_{w,b,s_{0}}(R)\) for large \(R\) uniformly in \(b\). _Throughout the rest of this step, we implicitly assume that all terms of the form \(o_{r}(1)\) or \(o_{R}(1)\) are allowed to depend on the parameters \(w\) and \(\varepsilon\) which will be fixed up until the end of Step 4. One can keep in mind the following hierarchy \(\frac{1}{\varepsilon}\ll w\ll r\)._ _Case \(\underline{1}:\) We first deal with the functions in \(S_{2}^{\prime}\). 
Fix an \((i,j)\in S_{2}^{\prime}\) and consider the function \(g_{ij,w,b}(n)+p_{ij,w,b}(n)\) appearing in the corresponding iterate. Observe that due to the definition of \(S_{2}^{\prime}\) in (75), the polynomial \(p_{ij}(t)\) is constant, so that \(p_{ij,w,b}(t)\) is also constant. In addition, the function \(g_{ij}(t)\) is a sub-fractional function and dominates \(\log t\). Therefore, the same is true for the function \(g_{ij,w,b}(t)\)._ We apply Proposition 4.5: for all except at most \(\varepsilon R\) values of \(r\in[1,R]\), we have that \[\lfloor g_{ij,w,b}(n)+p_{ij,w,b}(n)\rfloor=\lfloor g_{ij,w,b}(r)+p_{ij,w,b}(r)\rfloor \text{ for all }n\in[r,r+L(r)]. \tag{82}\] For each \((i,j)\in S_{2}^{\prime}\), we include the "bad" values of \(r\) to the set \(\mathcal{E}_{R,w,b}\), so that the set \(\mathcal{E}_{R,w,b}\) now has at most \((k\ell+1)\varepsilon R\) elements. _Case \(\underline{2}:\) Now, we turn our attention to functions on the complement of the set \(S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime\prime}\). The functions \(g_{ij}\) satisfy (79) and recall that we have chosen \(k_{ij}\) to be much larger than the degrees of the \(p_{ij}\), so that the derivative of order \(k_{ij}\) of our polynomial vanishes. In conclusion, we may conclude that \(g_{ij}(t)+p_{ij}(t)\) satisfies the assumptions of Proposition 4.4 for the integer \(k_{ij}\) (and the sub-linear function \(L(t)\) that we have already chosen). Given \(A>0\), we infer that for all but \(O_{A}(L(r)\log^{-A}r)\) values of \(n\in[r,r+L(r)]\), we have \[\lfloor g_{ij,w,b}(n)+p_{ij,w,b}(n)\rfloor=\lfloor\widetilde{p}_{ij,w,b,r}(n)\rfloor, \tag{83}\] where \(\widetilde{p}_{ij,w,b,r}(n)\) is the polynomial \[\sum_{l=0}^{k_{ij}}\frac{(n-r)^{l}g_{ij,w,b}^{(l)}(r)}{l!}+p_{ij,w,b}(n).\] Additionally, the polynomials \(\widetilde{p}_{ij,w,b,r}\) satisfy \[\frac{\big{|}\{n\in[r,r+L(r)]\colon\{\widetilde{p}_{ij,w,b,r}(n)\}\in[1-\delta,1)\}\big{|}}{L(r)}=\delta+O_{A}(\log^{-A}r) \tag{84}\] for any \(\delta<1\). Practically, this last condition signifies that the polynomials \(\widetilde{p}_{ij,w,b,r}\) satisfy the equidistribution condition in Proposition 3.1, which we shall invoke later. Case 3 : Finally, we deal with the case of the set \(S_{1}\). Proposition 4.6 suggests that there is a subset \(\mathcal{B}_{w,b,r,\varepsilon}\) of \([r,r+L(r)]\) of size \(O_{k,\ell}(\varepsilon L(r))\), such that for every \(n\in[r,r+L(r)]\setminus\mathcal{B}_{w,b,r,\varepsilon}\), we have \[\lfloor p_{ij,w,b}(n)+g_{ij,w,b}(n)\rfloor=\lfloor p_{ij,w,b}(n)+g_{ij,w,b}(r )\rfloor. \tag{85}\] Additionally, the set \(\mathcal{B}_{w,b,r,\varepsilon}\) satisfies \[\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\mathbf{1}_{\mathcal{B }_{w,b,r,\varepsilon}}(n)\ll_{k,\ell,d}\ \ \varepsilon+o_{w}(1)\log\frac{1}{\varepsilon}+o_{r}(1). \tag{86}\] We emphasize that the asymptotic constant in (86) depends only on \(k,l,d\), so that the constant is the same regardless of the choice of the parameters \(w,b\). First of all, we apply (82) to simplify the expression for \(\mathcal{J}_{w,b,s_{0}}(R)\). 
Namely, for any \(r\notin\mathcal{E}_{R,w,b}\), we have that the inner average in the definition of \(\mathcal{J}_{w,b,s_{0}}(R)\) is equal to \[\Big{\|}\,\underset{r\leq n\leq r+L(r)}{\mathbb{E}}\,\mathbf{1}_ {s_{0}\ (Q_{0})}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}\prod_{j=1}^{\ell}\Big{(}\prod_{i \colon(i,j)\in S_{1}}T_{i}^{\big{|}g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{|}+q_{ij,w, b}(n)}.\\ \prod_{i\colon(i,j)\in S_{2}^{\prime}}T_{i}^{\big{|}g_{ij,w,b}(r) +p_{ij,w,b}(r)\big{|}+q_{ij,w,b}(n)}\cdot\prod_{i\colon(i,j)\in S_{2}^{\prime \prime}}T_{i}^{q_{ij,w,b}(n)}.\\ \prod_{i\colon(i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{ \prime\prime}}T_{i}^{\big{|}g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{|}+q_{ij,w,b}(n) \Big{)}f_{j}\Big{\|}_{L^{2}(\mu)}.\] Thus, we have replaced the iterates of the set \(S_{2}^{\prime}\) with polynomials in the averaging parameter \(n\). Secondly, we use (83) to deduce that for all, except at most \(O_{A}(k\ell L(r)\log^{-A}r)\) values of \(n\in[r,r+L(r)]\), the product of transformations appearing in the previous relation can be written as \[\prod_{j=1}^{\ell}\Big{(}\prod_{i\colon(i,j)\in S_{1}}T_{i}^{ \big{|}g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{|}+q_{ij,w,b}(n)}\prod_{i\colon(i,j) \in S_{2}^{\prime}}T_{i}^{\big{|}g_{ij,w,b}(r)+p_{ij,w,b}(r)\big{|}+q_{ij,w,b }(n)}.\\ \prod_{i\colon(i,j)\in S_{2}^{\prime\prime}}T_{i}^{q_{ij,w,b}(n)} \cdot\prod_{i\colon(i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime \prime}}T_{i}^{\big{|}\widetilde{p}_{ij,w,b}(n)\big{|}+q_{ij,w,b}(n)}\Big{)}f_ {j}. \tag{87}\] The contribution of the exceptional set can be at most \[k\ell\log(Wr+WL(r)+b)\cdot O_{A}(\log^{-A}r),\] since each \(\Lambda_{w,b}(n)\) is bounded by \(\log(Wn+b)\). Therefore, if we choose \(A\geq 2\), this contribution is \(o_{r}(1)\) and we can rewrite the average over the corresponding short interval as \[\Big{\|}\operatorname*{\mathbb{E}}_{r\leq n\leq r+L(r)}\mathbf{1}_ {s_{0}\;(Q_{0})}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}\prod_{j=1}^{\ell}\Big{(} \prod_{i:\;(i,j)\in S_{1}}T_{i}^{\big{[}g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{]}+q_{ ij,w,b}(n)}\\ \prod_{i:\;(i,j)\in S_{2}^{\prime}}T_{i}^{\big{[}g_{ij,w,b}(r)+p_ {ij,w,b}(r)\big{]}+q_{ij,w,b}(n)}\cdot\prod_{i:\;(i,j)\in S_{2}^{\prime}}T_{i}^ {q_{ij,w,b}(n)}.\\ \prod_{i:\;(i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{ \prime\prime}}T_{i}^{\big{[}\tilde{p}_{ij,w,b,r}(n)\big{]}+q_{ij,w,b}(n)} \Big{)}f_{j}\Big{\|}_{L^{2}(\mu)}+o_{r}(1). \tag{88}\] Thus, we have reduced our iterates to polynomial form in this case as well. Finally, we follow the same procedure for the set \(S_{1}\). Namely, for all integers \(n\) in the interval \([r,r+L(r)]\) such that \(n\notin\mathcal{B}_{w,b,r,\varepsilon}\), we use (85) to rewrite (87) as \[\prod_{j=1}^{\ell}\Big{(}\prod_{i:\;(i,j)\in S_{1}}T_{i}^{\big{[} g_{ij,w,b}(r)+p_{ij,w,b}(n)\big{]}+q_{ij,w,b}(n)}\\ \prod_{i:\;(i,j)\in S_{2}^{\prime}}T_{i}^{q_{ij,w,b}(n)}\cdot\prod _{i:\;(i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime\prime}}T_{i}^{ \big{[}\tilde{p}_{ij,w,b}(n)\big{]}+q_{ij,w,b}(n)}\Big{)}f_{j}.\] The contribution of the set \(\mathcal{B}_{w,b,r,\varepsilon}\) on the average over the interval \([r,r+L(r)]\) can be estimated using the triangle inequality. 
More specifically, this contribution is smaller than \[\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\mathbf{1}_{s_{0}\;(Q_{0})}(n)\Big{|} \Lambda_{w,b}(n)-1\Big{|}\mathbf{1}_{\mathcal{B}_{w,b,r,\varepsilon}}(n).\] We bound the characteristic function \(\mathbf{1}_{s_{0}\;(Q_{0})}\) trivially by \(1\), so that the above quantity is smaller than \[\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\Lambda_{w,b}(n)\mathbf{1}_{\mathcal{B }_{w,b,r,\varepsilon}}(n)+\frac{1}{L(r)}\sum_{r\leq n\leq r+L(r)}\mathbf{1}_{ \mathcal{B}_{w,b,r,\varepsilon}}(n). \tag{89}\] The second term contributes \(O_{k,\ell}(\varepsilon)\), since \(\mathcal{B}_{w,b,r,\varepsilon}\) has at most \(O_{k,\ell}(\varepsilon L(r))\) elements. On the other hand, we have a bound for the first term already in (86). Thus, the total contribution is \(O_{k,\ell,d}(1)\) times the expression \[\varepsilon+o_{w}(1)\log\frac{1}{\varepsilon}+o_{r}(1).\] In view of the above, we deduce that the average in (88) is bounded by \(O_{k,\ell,d}(1)\) times \[\Big{\|}\operatorname*{\mathbb{E}}_{r\leq n\leq r+L(r)}\mathbf{1 }_{s_{0}(Q_{0})}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}\prod_{j=1}^{\ell}\Big{(} \prod_{i:\;(i,j)\in S_{1}}T_{i}^{\big{[}g_{ij,w,b}(r)+p_{ij,w,b}(n)+q_{ij,w,b} (n)\big{]}}\\ \prod_{i:\;(i,j)\in S_{2}^{\prime}}T_{i}^{\big{[}g_{ij,w,b}(r)+p _{ij,w,b}(r)+q_{ij,w,b}(n)\big{]}}\cdot\prod_{i:\;(i,j)\in S_{2}^{\prime \prime}}T_{i}^{\big{[}q_{ij,w,b}(n)\big{]}}.\\ \prod_{i:\;(i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{ \prime\prime}}T_{i}^{\big{[}\tilde{p}_{ij,w,b}(n)+q_{ij,w,b}(n)\big{]}}\Big{)}f _{j}\Big{\|}_{L^{2}(\mu)}+\varepsilon+o_{w}(1)\log\frac{1}{\varepsilon}+o_{r}( 1). \tag{90}\] Here, we moved the polynomials \(q_{ij,w,b}\) back inside the integer parts, which we are allowed to do since they have integer coefficients. The polynomials in the iterates corresponding to \(S_{1},S_{2}^{\prime},S_{2}^{\prime\prime}\), and the complement of \(S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime\prime}\) fulfill the hypothesis of Proposition 3.1. To keep the number of parameters lower, we will apply this proposition for \(\delta=\varepsilon\), where we have assumed that \(\varepsilon\) is a very small parameter. Accordingly, we assume (as we may) that \(w\) and \(r\) are much larger than \(\frac{1}{\varepsilon}\). To see why the hypotheses are satisfied, observe that for the first set, this follows from the fact that \(p_{ij,w,b}\) has at least one non-constant irrational coefficient (since \(p_{ij}\) is non-constant by the definition of \(S_{1}\)). Therefore, the number of integers \(n\in[r,r+L(r)]\) for which we have \[\{g_{ij,w,b}(r)+p_{ij,w,b}(n)+q_{ij,w,b}(n)\}\in(1-\varepsilon,1)\] is smaller than \(2\varepsilon L(r)\) for \(r\) sufficiently large. At the same time, the result is immediate for the second and third sets, since the iterates involve polynomials with integer coefficients (except, possibly, their constant terms). For the final set, this claim follows from (84). In view of the prior discussion, we conclude that there exists a positive integer \(s\), that depends only on \(d,k,\ell\), such that the expression in (90) is bounded by \[\varepsilon^{-k\ell}\big{\|}\mathbf{1}_{s_{0}\;(Q_{0})}\big{(} \Lambda_{w,b}(n)-1\big{)}\big{\|}_{U^{s}(r,r+sL(r)]}+\varepsilon^{-k\ell}o_{w }(1)+o_{\varepsilon}(1)(1+o_{w}(1))+\\ \varepsilon+o_{w}(1)\log\frac{1}{\varepsilon}+o_{r}(1). 
\tag{91}\] Applying Lemma 2.3, we can bound the previous Gowers norm along the residue class \(s_{0}\;(Q_{0})\) as follows: \[\big{\|}\mathbf{1}_{s_{0}\;(Q_{0})}\big{(}\Lambda_{w,b}(n)-1\big{)}\big{\|}_{ U^{s}(r,r+sL(r)]}\leq\big{\|}\Lambda_{w,b}(n)-1\big{\|}_{U^{s}(r,r+sL(r)]}. \tag{92}\] In view of the arguments above, we conclude that, for every \(r\notin\mathcal{E}_{R,w,b}\), the following inequality holds \[\Big{\|}\;\underset{r\leq n\leq r+L(r)}{\mathbb{E}}\,\mathbf{1}_ {s_{0}(Q_{0})}(n)\big{(}\Lambda_{w,b}(n)-1\big{)}\\ \prod_{j=1}^{\ell}\Big{(}\prod_{i:\;(i,j)\in S_{1}}T_{i}^{\big{[} g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{]}+q_{ij,w,b}(n)}\cdot\prod_{i:\;(i,j)\in S _{2}^{\prime}}T_{i}^{\big{[}g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{]}+q_{ij,w,b}(n)} \cdot\\ \prod_{i:\;(i,j)\in S_{2}^{\prime\prime}}T_{i}^{q_{ij,w,b}(n)} \cdot\prod_{i:\;(i,j)\notin S_{1}\cup S_{2}^{\prime}\cup S_{2}^{\prime\prime}} T_{i}^{\big{[}g_{ij,w,b}(n)+p_{ij,w,b}(n)\big{]}+q_{ij,w,b}(n)\big{)}}f_{j} \Big{\|}_{L^{2}(\mu)}\ll_{k,\ell,d}\\ \varepsilon^{-k\ell}\big{\|}\big{(}\Lambda_{w,b}(n)-1\big{)}\big{\|} _{U^{s}(r,r+sL(r)]}+\varepsilon+\big{(}\varepsilon^{-k\ell}+\log\frac{1}{ \varepsilon}+o_{\varepsilon}(1)\big{)}o_{w}(1)+o_{\varepsilon}(1)+o_{r}(1).\] We apply this estimate to the double average defining \(\mathcal{J}_{w,b,s_{0}}(R)\) in (81). This estimate holds for every \(r\notin\mathcal{E}_{R,w,b}\) and, thus, we need an estimate for the values of \(r\) in this exceptional set. In order to achieve this, we recall that the set \(\mathcal{E}_{R,w,b}\) has at most \((2k\ell+1)\varepsilon R\) elements. For each \(r\in\mathcal{E}_{R,w,b}\), we use the triangle inequality to bound the average over the corresponding short interval by \[\frac{1}{L(r)}\sum_{\begin{subarray}{c}r\leq n\leq r+L(r)\\ n=s_{0}\;(Q_{0})\end{subarray}}(\Lambda(Wn+b)+1).\] We bound the characteristic function of the residue class \(n\equiv s_{0}\;(Q_{0})\) trivially by \(1\) and apply Corollary 2.8 to conclude that this expression is \(O(1)+o_{r}(1)\), using similar estimates as the ones used in the proof of Proposition 4.6 (see (64)). Therefore, the contribution of the set \(\mathcal{E}_{R,w,b}\) is at most \(O_{k,\ell}(\varepsilon)+o_{r}(1)\). Combining all of the above, we arrive at the estimate \[\mathcal{J}_{w,b,s_{0}}(R)\ll_{d,k,\ell}\varepsilon^{-k\ell} \Big{(}\underset{1\leq r\leq R}{\mathbb{E}}\,\big{\|}\big{(}\Lambda_{w,b}(n)-1 \big{)}\big{\|}_{U^{s}(r,r+sL(r)]}\Big{)}+\varepsilon^{-k\ell}o_{w}(1)+\\ o_{\varepsilon}(1)(1+o_{w}(1))+o_{R}(1). \tag{93}\] We restate (80) here. Namely, we want to show that \[\limsup_{R\to+\infty}\,\max_{\begin{subarray}{c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\,\mathcal{J}_{w,b,s_{0}}(R)=o_{w}(1).\] Applying (93), we conclude that for a fixed \(w\), we have \[\limsup_{R\to+\infty}\max_{\begin{subarray}{c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\mathcal{J}_{w,b,s_{0}}(R)\ll_{d,k,\ell}\varepsilon^{-k \ell}\Big{(}\lim_{R\to+\infty}\operatorname*{\mathbb{E}}_{1\leq r\leq R}\, \max_{\begin{subarray}{c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\big{\|}\big{(}\Lambda_{w,b}(n)-1\big{)}\big{\|}_{U^{s}( r,r+L(r)]}\Big{)}+\\ \varepsilon^{-k\ell}o_{w}(1)+o_{\varepsilon}(1)(1+o_{w}(1)).\] Due to Theorem A, we have that \[\max_{\begin{subarray}{c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\big{\|}\big{(}\Lambda_{w,b}(n)-1\big{)}\big{\|}_{U^{s}( r,r+L(r)]}=o_{w}(1)\] for every sufficiently large \(r\). 
Thus, we conclude that \[\limsup_{R\to+\infty}\,\max_{\begin{subarray}{c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\,\mathcal{J}_{w,b,s_{0}}(R)\ll_{d,k,\ell}\varepsilon^{- k\ell}o_{w}(1)+o_{\varepsilon}(1)(1+o_{w}(1)).\] ### Step 5: Putting all the bounds together We restate here our conclusion. We have shown that for all fixed integers \(w\) and real number \(0<\varepsilon<1\), we have \[\limsup_{R\to+\infty}\,\lim_{N\to+\infty}\,\max_{\begin{subarray}{ c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\Big{\|}\frac{1}{N}\sum_{n=1}^{N}\mathbf{1}_{s_{0}(Q_{0}) }(n)\big{(}\Lambda_{w,b}(n)-1\big{)}\prod_{j=1}^{\ell}\big{(}\prod_{i=1}^{k}T _{i}^{\lfloor a_{ij,w,b}(n)\rfloor}\big{)}f_{j}\Big{\|}_{L^{2}(\mu)}\\ \ll_{d,k,\ell}\varepsilon^{-k\ell}o_{w}(1)+o_{\varepsilon}(1)(1+o_ {w}(1)), \tag{94}\] where we recall that \(d\) was the maximum among the integers \(k_{ij}\) and the degrees of the polynomials \(p_{ij},q_{ij}\) (all of these depend only on the initial functions \(a_{ij}\)). Sending \(w\to+\infty\), we deduce that the limit in (72) (in view of (94)) is smaller than a constant (depending on \(k,\ell,d\)) multiple of \(o_{\varepsilon}(1)\). Sending \(\varepsilon\to 0\), we conclude that the original limit is \(0\), which is the desired result. ## 6. Proofs of the remaining theorems We finish the proofs of our theorems in this section. ### Proof of the convergence results Proof of Theorem 1.2.: Let \((X,\mathcal{X},\mu,T_{1},\ldots,T_{k})\) be the system and \(a_{ij}\in\mathcal{H}\) the functions in the statement. In view of Lemma 2.6, it suffices to show that the averages \[A(N):=\frac{1}{N}\sum_{n=1}^{N}\Lambda(n)\big{(}\prod_{i=1}^{k}T_{i}^{\lfloor a _{ii}(n)\rfloor}\big{)}f_{1}\cdot\ldots\cdot\big{(}\prod_{i=1}^{k}T_{i}^{ \lfloor a_{i\ell}(n)\rfloor}\big{)}f_{\ell}\] converge in \(L^{2}(\mu)\). For a fixed \(w\in\mathbb{N}\), we define \(W=\prod_{p\leq w,p\in\mathbb{P}}p\) as usual and let \(b\in\mathbb{N}\). We define \[B_{w,b}(N):=\frac{1}{N}\sum_{n=1}^{N}\Lambda(n)\big{(}\prod_{i=1}^{k}T_{i}^{ \lfloor a_{ii}(Wn+b)\rfloor}\big{)}f_{1}\cdot\ldots\cdot\big{(}\prod_{i=1}^{k} T_{i}^{\lfloor a_{i\ell}(Wn+b)\rfloor}\big{)}f_{\ell}.\] Let \(\varepsilon>0\). Using Theorem 1.1, we can find \(w_{0}\in\mathbb{N}\) (which yields a corresponding \(W_{0}\)) such that \[\Big{\|}A(W_{0}N)-\frac{1}{\phi(W_{0})}\sum_{\begin{subarray}{c}1\leq b\leq W _{0}\\ (b,W_{0})=1\end{subarray}}B_{w_{0},b}(N)\Big{\|}_{L^{2}(\mu)}=O(\varepsilon) \tag{95}\] for all \(N\) sufficiently large. Our hypothesis implies that the sequence of bounded functions \(B_{w_{0},b}(N)\) is a Cauchy sequence in \(L^{2}(\mu)\), which, in conjunction with (95), implies that the sequence \(A(W_{0}N)\) is a Cauchy sequence. In particular, we have \[\left\|A(W_{0}M)-A(W_{0}N)\right\|_{L^{2}(\mu)}=O(\varepsilon),\] for all \(N,M\) sufficiently large. Finally, since \[\left\|A(W_{0}N+b)-A(W_{0}N)\right\|_{L^{2}(\mu)}=o_{N}(1),\] for all \(1\leq b\leq W_{0}\), we conclude that \(A(N)\) is a Cauchy sequence, which implies the required convergence. Furthermore, if the sequence \(B_{w,b}(N)\) converges to the function \(F\) in \(L^{2}(\mu)\) for all \(w,r\in\mathbb{N}\), then (95) implies that \(\left\|A(W_{0}N)-F\right\|_{L^{2}(\mu)}=O(\varepsilon)\), for all large enough \(N\). Repeating the same argument as above, we infer that \(A(N)\) converges to the function \(F\) in norm, as we desired. 
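The estimate \(\left\|A(W_{0}N+b)-A(W_{0}N)\right\|_{L^{2}(\mu)}=o_{N}(1)\) used in the proof above can be justified, for instance, by the following short computation (a sketch that only uses the boundedness of the functions \(f_{j}\) and the prime number theorem). Writing \(u_{n}\) for the product of functions appearing in the definition of \(A(N)\), so that \(\|u_{n}\|_{L^{2}(\mu)}\leq\prod_{j=1}^{\ell}\|f_{j}\|_{L^{\infty}(\mu)}=:C\), we have, for every \(1\leq b\leq W_{0}\), \[A(W_{0}N+b)-A(W_{0}N)=\Big{(}\frac{1}{W_{0}N+b}-\frac{1}{W_{0}N}\Big{)}\sum_{n=1}^{W_{0}N}\Lambda(n)u_{n}+\frac{1}{W_{0}N+b}\sum_{n=W_{0}N+1}^{W_{0}N+b}\Lambda(n)u_{n}.\] The first term has norm at most \(\frac{Cb}{W_{0}N(W_{0}N+b)}\sum_{n\leq W_{0}N}\Lambda(n)=O_{C}(1/N)\) by the prime number theorem, while the second term has norm at most \(\frac{Cb\log(W_{0}N+b)}{W_{0}N+b}=O_{C}\big{(}\frac{\log N}{N}\big{)}\), since \(\Lambda(n)\leq\log n\) and \(b\leq W_{0}\). Both bounds are uniform in \(1\leq b\leq W_{0}\) and tend to \(0\) as \(N\to+\infty\).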
Proof of Theorem 1.3.: Let \(a\in\mathcal{H}\) satisfy either (11) or (12), \(k\in\mathbb{N}\), \((X,\mathcal{X},\mu,T)\) be any measure-preserving system, and functions \(f_{1},\ldots,f_{k}\in L^{\infty}(\mu).\) Observe that in either case, the function \(a\) satisfies (7) or (8). In addition, when \(a(t)\) satisfies either of the two latter conditions, then the function \(a(Wt+b)\) satisfies the same condition, for all \(W,b\in\mathbb{N}\). Using [10, Theorem 2.1],21 we have that, for all \(W,b\in\mathbb{N}\), the averages Footnote 21: There is a slight issue here, in that we would need the assumption that the function \(a(Wn+b)\) belongs to \(\mathcal{H}\) in order to apply Theorem 2.2 from [10]. However, the proof in [10] only requires some specific growth conditions on the derivatives of the function \(a(Wn+b)\) (specifically those outlined in equation 26 of that paper), which follow naturally from the assumption that \(a\in\mathcal{H}\). \[\frac{1}{N}\sum_{n=1}^{N}T^{\lfloor a(Wn+b)\rfloor}f_{1}\cdot\ldots\cdot T^{k\lfloor a(Wn+b)\rfloor}f_{k}\] converge in \(L^{2}(\mu)\). We conclude that the two conditions of Theorem 1.2 are satisfied, which shows that the desired averages converge. In particular, if \(a\) satisfies condition (11), we can invoke [10, Theorem 2.2] to conclude that the limit of the averages \[\frac{1}{N}\sum_{n=1}^{N}T^{\lfloor a(Wn+b)\rfloor}f_{1}\cdot\ldots\cdot T^{k\lfloor a(Wn+b)\rfloor}f_{k}\] is equal to the limit (in \(L^{2}(\mu)\)) of the averages \[\frac{1}{N}\sum_{n=1}^{N}T^{n}f_{1}\cdot\ldots\cdot T^{kn}f_{k}.\] Again, Theorem 1.2 yields the desired conclusion. Proof of Theorem 1.4.: We work analogously as in the proof of Theorem 1.3. The only difference is that in this case, we use [44, Theorem 1.2] to deduce that, for all positive integers \(W\) and \(b\), the averages \[\frac{1}{N}\sum_{n=1}^{N}T^{\lfloor a_{1}(Wn+b)\rfloor}f_{1}\cdot\ldots\cdot T^{\lfloor a_{k}(Wn+b)\rfloor}f_{k}\] converge in \(L^{2}(\mu)\) to the product \(\widetilde{f}_{1}\cdot\ldots\cdot\widetilde{f}_{k}\). The result follows from Theorem 1.2. Proof of Theorem 1.5.: The proof follows identically to that of Theorem 1.4 by using [11, Theorem 2.3] instead of [44, Theorem 1.2]. ### Proof of the recurrence results We recall Furstenberg's Correspondence Principle for \(\mathbb{Z}^{d}\)-actions [19], for the reader's convenience. **Theorem F** (Furstenberg's Correspondence Principle).: _Let \(d\in\mathbb{N}\) and \(E\subseteq\mathbb{Z}^{d}.\) There exists a system \((X,\mathcal{X},\mu,T_{1},\ldots,T_{d})\) and a set \(A\in\mathcal{X}\) with \(\bar{d}(E)=\mu(A),\) such that_ \[\bar{d}\big{(}E\cap(E-\mathbf{n}_{1})\cap\cdots\cap(E-\mathbf{n}_{k})\big{)}\geq\mu\left(A\cap\prod_{i=1}^{d}T_{i}^{-n_{i,1}}A\cap\cdots\cap\prod_{i=1}^{d}T_{i}^{-n_{i,k}}A\right),\] _for all \(k\in\mathbb{N}\) and \(\mathbf{n}_{j}=(n_{1,j},\ldots,n_{d,j})\in\mathbb{Z}^{d},\) \(1\leq j\leq k.\)_ In view of the correspondence principle, the corollaries in Section 1 follow easily.
Proof of Theorem 1.6.: (a) We apply Theorem 1.3 for the functions \(f_{1}=\cdots=f_{k}=\mathbf{1}_{A}.\) Since convergence in \(L^{2}(\mu)\) implies weak convergence, integrating along \(A\) the relation \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\ p\leq N}T^{\lfloor a (p)\rfloor}\mathbf{1}_{A}\cdot\ldots\cdot T^{k\lfloor a(p)\rfloor}\mathbf{1}_ {A}=\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}T^{n}\mathbf{1}_{A}\cdot\ldots \cdot T^{kn}\mathbf{1}_{A},\] and applying Furstenberg's multiple recurrence theorem we infer that \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\ p\leq N}\mu\big{(}A \cap T^{-\lfloor a(p)\rfloor}A\cap\cdots\cap T^{-k\lfloor a(p)\rfloor}A\big{)} >0,\] which is the desired result. (b) We write \(a(t)=cq(t)+\varepsilon(t),\) where \(q(t)\in\mathbb{Z}[t],\ q(0)=0,\ c\in\mathbb{R}\) and \(\varepsilon(t)\) is a function that converges to \(0,\) as \(t\to+\infty\). Using [29, Proposition 3.8], we have that there exists \(c_{0}\) depending only on \(\mu(A),\) the degree of \(q\) and \(k,\) such that \[\liminf_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}\mu(A\cap T^{-[[cq(n)]]}A\cap \cdots\cap T^{-k[[cq(n)]]}A)\geq c_{0}.\] Now, we consider two separate cases. If \(c\) is rational with denominator \(Q\) in lowest terms, then for \(t\) sufficiently large, we have \(|\varepsilon(t)|\leq(2Q)^{-1}\). Therefore, we immediately deduce that \[[[cq(t)+\varepsilon(t)]]=[[cq(t)]].\] Thus, we conclude that \[\liminf_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}\mu(A\cap T^{-[[cq(n)+\varepsilon (n)]]}A\cap\cdots\cap T^{-k[[cq(n)+\varepsilon(n)]]}A)\geq c_{0}. \tag{96}\] If \(c\) is irrational, then the polynomial \(cq(t)\) is uniformly distributed mod \(1\). Given \(\delta>0,\) we consider the set \(S:=\{n\in\mathbb{N}:\ \{cq(n)\}\in[\delta,1-\delta]\},\) which has density \(1-2\delta.\) Therefore, we have \[\Big{|}\frac{1}{N}\sum_{n=1}^{N}\mu(A\cap T^{-[[cq(n)+\varepsilon (n)]]}A\cap\cdots\cap T^{-k[[cq(n)+\varepsilon(n)]]}A)-\\ \frac{1}{N}\sum_{n=1}^{N}\mu(A\cap T^{-[[cq(n)]]}A\cap\cdots\cap T ^{-k[[cq(n)]]}A)\Big{|}\leq 2\delta+o_{N}(1).\] Sending \(\delta\to 0^{+},\) we derive (96) in this case as well. Notice that since \(c_{0}\) depends only on the degree of \(q,\) we have that \[\liminf_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}\mu(A\cap T^{-[[cq(Rn)+ \varepsilon(Rn)]]}A\cap\cdots\cap T^{-k[[cq(Rn)+\varepsilon(Rn)]]}A)\geq c_{0},\] for all positive integers \(R\). Now, we apply Theorem 1.1 with \(b=1\) and the functions \(a(\cdot-1)\), where we recall that \(a(t)=cq(t)+\varepsilon(t)\) to obtain that for some sufficiently large \(w\), we have \[\liminf_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}\Lambda_{w,1}(n)\mu\big{(}A\cap T ^{-\lfloor a(Wn)\rfloor}A\cap\cdots\cap T^{-k\lfloor a(Wn)\rfloor}A\big{)} \geq c_{0}/2,\] where \(W\) is defined as usual in terms of \(w\). Finally, we observe that we can replace the function \(\Lambda(n)\) in the previous relation with the function \(\Lambda(n)\mathbf{1}_{\mathbb{P}}(n)\) since the contribution of the prime powers (i.e. with exponent \(\geq 2\)) is negligible on the average. Therefore, we conclude that \[\liminf_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}\Lambda_{w,1}(n)\mathbf{1}_{ \mathbb{P}}(Wn+1)\mu\big{(}A\cap T^{-\lfloor a(Wn)\rfloor}A\cap\cdots\cap T^{- k\lfloor a(Wn)\rfloor}A\big{)}\geq c_{0}/2,\] which implies the desired result. Analogously, we reach the expected conclusion for the set \(\mathbb{P}+1\) instead of \(\mathbb{P}-1\). 
Proof of Theorem 1.8.: Similarly to the proof of Theorem 1.6, we apply Theorem 1.4 for the functions \(f_{1}=\cdots=f_{k}=\mathbf{1}_{A}\). We deduce that \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\mu\big{(}A\cap T^{-\lfloor a_{1}(p)\rfloor}A\cap\cdots\cap T^{-\lfloor a_{k}(p)\rfloor}A\big{)}=\int\mathbf{1}_{A}\cdot\big{(}\operatorname{\mathbb{E}}(\mathbf{1}_{A}|\mathcal{I}(T))\big{)}^{k}\,d\mu. \tag{97}\] However, using that the function \(\mathbf{1}_{A}\) is non-negative and Holder's inequality, we get \[\int\mathbf{1}_{A}\cdot\big{(}\operatorname{\mathbb{E}}(\mathbf{1}_{A}|\mathcal{I}(T))\big{)}^{k}\,d\mu\geq\Big{(}\int\operatorname{\mathbb{E}}(\mathbf{1}_{A}|\mathcal{I}(T))\,d\mu\Big{)}^{k+1}=\big{(}\mu(A)\big{)}^{k+1},\] and the conclusion follows. Proof of Theorem 1.10.: The proof is similar to the proof of Theorem 1.8. The only distinction is made in (97), namely we have \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\mu\big{(}A_{0}\cap T_{1}^{-\lfloor a_{1}(p)\rfloor}A_{1}\cap\cdots\cap T_{k}^{-\lfloor a_{k}(p)\rfloor}A_{k}\big{)}=\\ \int\mathbf{1}_{A_{0}}\cdot\operatorname{\mathbb{E}}(\mathbf{1}_{A_{1}}|\mathcal{I}(T_{1}))\cdot\ldots\cdot\operatorname{\mathbb{E}}(\mathbf{1}_{A_{k}}|\mathcal{I}(T_{k}))\,d\mu,\] where the sets \(A_{0},A_{1},\ldots,A_{k}\) satisfy the hypothesis. Since each function \(\operatorname{\mathbb{E}}(\mathbf{1}_{A_{i}}|\mathcal{I}(T_{i}))\) is \(T_{i}\)-invariant, we deduce that the integral on the right-hand side is larger than \[\int f\cdot\operatorname{\mathbb{E}}(f|\mathcal{I}(T_{1}))\cdot\ldots\cdot\operatorname{\mathbb{E}}(f|\mathcal{I}(T_{k}))\,d\mu,\] where \(f=\mathbf{1}_{A_{0}\cap T^{\ell_{1}}A_{1}\cap\cdots\cap T^{\ell_{k}}A_{k}}\). However, since the function \(f\) is non-negative, [6, Lemma 1.6] implies that \[\int f\cdot\operatorname{\mathbb{E}}(f|\mathcal{I}(T_{1}))\cdot\ldots\cdot\operatorname{\mathbb{E}}(f|\mathcal{I}(T_{k}))\,d\mu\geq\left(\int f\;d\mu\right)^{k+1}=\mu(A)^{k+1},\] and the conclusion follows. ### Proof of the equidistribution results in nilmanifolds In this final part of this section, we offer a proof for Theorem 1.12. The main tool is the approximation of Lemma 2.12. Proof of Theorem 1.12.: Let \(X\) and \(g_{1},\ldots,g_{k},x_{1},\ldots,x_{k}\) be as in the statement, and let \(s\) denote the nilpotency degree of \(X\). It suffices to show that, for any continuous functions \(f_{1},\ldots,f_{k}\) on \(X\), we have the following: \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}f_{1}(g_{1}^{\lfloor a_{1}(p)\rfloor}x_{1})\cdot\ldots\cdot f_{k}(g_{k}^{\lfloor a_{k}(p)\rfloor}x_{k})=\int_{Y_{1}}f_{1}\,dm_{Y_{1}}\cdot\ldots\cdot\int_{Y_{k}}f_{k}\,dm_{Y_{k}},\] where \(Y_{i}=\overline{(g_{i}^{\mathbb{Z}}x_{i})}\) for all admissible values of \(i\). We rewrite this in terms of the von Mangoldt function as \[\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}\Lambda(n)f_{1}(g_{1}^{\lfloor a_{1}(n)\rfloor}x_{1})\cdot\ldots\cdot f_{k}(g_{k}^{\lfloor a_{k}(n)\rfloor}x_{k})=\int_{Y_{1}}f_{1}\,dm_{Y_{1}}\cdot\ldots\cdot\int_{Y_{k}}f_{k}\,dm_{Y_{k}}, \tag{98}\] where the equivalence of the last two relations is a consequence of Lemma 2.6.
Our equidistribution assumption implies that for all \(W,b\in\mathbb{N}\), we have \[\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}f_{1}(g_{1}^{\lfloor a_{1}(Wn+b)\rfloor}x_{1})\cdot\ldots\cdot f_{k}(g_{k}^{\lfloor a_{k}(Wn+b)\rfloor}x_{k})=\int_{Y_{1}}f_{1}\,dm_{Y_{1}}\cdot\ldots\cdot\int_{Y_{k}}f_{k}\,dm_{Y_{k}}. \tag{99}\] We write \(Y_{i}=G_{i}/\Gamma_{i}\) for some nilpotent Lie groups \(G_{i}\) with discrete and co-compact subgroups \(\Gamma_{i}\) and denote \(Y=Y_{1}\times\cdots\times Y_{k}\). Define the function \(F:Y\to\mathbb{C}\) by \(F(y_{1},\ldots,y_{k})=f_{1}(y_{1})\cdot\ldots\cdot f_{k}(y_{k})\) and rewrite (98) as \[\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}\Lambda(n)F(\widetilde{g}_{1}^{\lfloor a_{1}(n)\rfloor}\cdot\ldots\cdot\widetilde{g}_{k}^{\lfloor a_{k}(n)\rfloor}\widetilde{x})=\int_{Y}F\,dm_{Y}, \tag{100}\] where \(\widetilde{g}_{i}\) is the element on the nilpotent Lie group \(G_{1}\times\cdots\times G_{k}\) whose \(i\)-th coordinate is equal to \(g_{i}\) and the rest of its entries are the corresponding identity elements. Lastly, \(\widetilde{x}\) is the point \((x_{1},\ldots,x_{k})\in Y\). Similarly, we rewrite (99) as \[\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}F(\widetilde{g}_{1}^{\lfloor a_{1}(Wn+b)\rfloor}\cdot\ldots\cdot\widetilde{g}_{k}^{\lfloor a_{k}(Wn+b)\rfloor}\widetilde{x})=\int_{Y}F\,dm_{Y}. \tag{101}\] Therefore, we want to prove (100) under the assumption that (101) holds for all \(W,b\in\mathbb{N}\). We use the notation \[A(N):=\frac{1}{N}\sum_{n=1}^{N}\Lambda(n)F(\widetilde{g}_{1}^{\lfloor a_{1}(n)\rfloor}\cdot\ldots\cdot\widetilde{g}_{k}^{\lfloor a_{k}(n)\rfloor}\widetilde{x}),\] and \[B_{W,b}(N):=\frac{1}{N}\sum_{n=1}^{N}F(\widetilde{g}_{1}^{\lfloor a_{1}(Wn+b)\rfloor}\cdot\ldots\cdot\widetilde{g}_{k}^{\lfloor a_{k}(Wn+b)\rfloor}\widetilde{x})\] for convenience. Let \(\varepsilon>0\). Observe that the sequence \(\psi(\mathbf{n})=F(\widetilde{g}_{1}^{n_{1}}\cdot\ldots\cdot\widetilde{g}_{k}^{n_{k}}\widetilde{x})\) is an \(s\)-step nilsequence in \(k\)-variables. We apply Lemma 2.12 to deduce that there exists a system \((X^{\prime},\mathcal{X}^{\prime},\mu,S_{1},\ldots,S_{k})\) and functions \(G_{1},\ldots,G_{s}\in L^{\infty}(\mu)\) such that \[\Big{|}F(\widetilde{g}_{1}^{n_{1}}\cdot\ldots\cdot\widetilde{g}_{k}^{n_{k}}\widetilde{x})-\int\prod_{j=1}^{s}\Big{(}\prod_{i=1}^{k}S_{i}^{\ell_{j}n_{i}}\Big{)}G_{j}\,d\mu\Big{|}\leq\varepsilon\] for all \(n_{1},\ldots,n_{k}\in\mathbb{Z}\), where \(\ell_{j}=(s+1)!/j\). Thus, if we define \[A^{\prime}(N):=\frac{1}{N}\sum_{n=1}^{N}\Lambda(n)\int\prod_{j=1}^{s+1}\Big{(}\prod_{i=1}^{k}S_{i}^{\ell_{j}\lfloor a_{i}(n)\rfloor}\Big{)}G_{j}\,d\mu,\] \[B^{\prime}_{W,b}(N)=\frac{1}{N}\sum_{n=1}^{N}\int\prod_{j=1}^{s+1}\Big{(}\prod_{i=1}^{k}S_{i}^{\ell_{j}\lfloor a_{i}(Wn+b)\rfloor}\Big{)}G_{j}\,d\mu,\] we deduce that \(|B_{W,b}(N)-B^{\prime}_{W,b}(N)|\leq\varepsilon\), for all \(N\in\mathbb{N}\), whereas \(|A(N)-A^{\prime}(N)|\leq\varepsilon(1+o_{N}(1))\), by the prime number theorem. The functions \(a_{1},\dots,a_{k}\) satisfy the assumptions of Theorem 1.1.
Thus, we deduce that if we pick \(w_{0}\) (which provides a corresponding \(W_{0}\)) sufficiently large and apply the Cauchy-Schwarz inequality, we will get \[\max_{\begin{subarray}{c}1\leq b\leq W_{0}\\ (b,W_{0})=1\end{subarray}}\Bigl{|}\frac{1}{N}\sum_{n=1}^{N}\big{(}\Lambda_{w_{0},b}(n)-1\big{)}\int\prod_{j=1}^{s+1}\Big{(}\prod_{i=1}^{k}S_{i}^{\ell_{j}\lfloor a_{i}(W_{0}n+b)\rfloor}\Big{)}G_{j}\,d\mu\Bigr{|}\leq\varepsilon \tag{102}\] for every sufficiently large \(N\in\mathbb{N}\). In addition, we use (101), the inequality \(|B_{W_{0},b}(N)-B^{\prime}_{W_{0},b}(N)|\leq\varepsilon\) and the triangle inequality to infer that for \(N\) large enough, we have \[\Bigl{|}B^{\prime}_{W_{0},b}(N)-\int_{Y}F\,dm_{Y}\Bigr{|}\leq 2\varepsilon, \tag{103}\] for all \(1\leq b\leq W_{0}\) coprime to \(W_{0}\). Observe that (102) implies that for all \(N\) sufficiently large, we have \[\Bigl{|}A^{\prime}(W_{0}N)-\frac{1}{\phi(W_{0})}\sum_{\begin{subarray}{c}1\leq b\leq W_{0}\\ (b,W_{0})=1\end{subarray}}B^{\prime}_{W_{0},b}(N)\Bigr{|}\leq 2\varepsilon,\] and we can combine this with (103) to conclude that \[\Bigl{|}A^{\prime}(W_{0}N)-\int_{Y}F\,dm_{Y}\Bigr{|}\leq 4\varepsilon\] for all \(N\) sufficiently large. Since \(|A^{\prime}(N)-A(N)|\leq\varepsilon(1+o_{N}(1))\), we finally arrive at the inequality \[\Bigl{|}A(W_{0}N)-\int_{Y}F\,dm_{Y}\Bigr{|}\leq 6\varepsilon,\] for all large enough \(N\in\mathbb{N}\). Since \(|A(W_{0}N)-A(W_{0}N+b)|=o_{N}(1)\) for all \(1\leq b\leq W_{0}\), we conclude that \[\Bigl{|}A(N)-\int_{Y}F\,dm_{Y}\Bigr{|}\leq 7\varepsilon,\] for all sufficiently large \(N\in\mathbb{N}\). Sending \(\varepsilon\to 0\), we deduce (100), which is what we wanted to show. Proof of Corollary 1.13.: The result follows readily from Theorem 1.12. The first hypothesis of the criterion is satisfied, since each of the functions \(a_{i}(t)\) satisfies (16), while condition (b) follows from [43, Theorem 1.1] and our assumption that \(a_{i}(Wt+b)\) belongs to \(\mathcal{H}\). ## 7. More general iterates In this last section of the article, we discuss how the hypotheses that the functions \(a_{i}(t)\) in the iterates belong to a Hardy field \(\mathcal{H}\) can be weakened. The starting point is Proposition 4.4, which was established for general smooth functions, subject to some growth inequalities on the derivative of some particular order (the integer \(k\) in the statement). Unfortunately, one cannot generalize theorems such as Theorem 1.4, which involve several functions, to a more general class. The main obstruction is that in order to obtain the simultaneous Taylor expansions, one needs to find a function \(L(t)\) (the length of the short interval) that satisfies a growth relation for all functions at the same time, which is non-trivial to perform, because we do not know how the derivatives of one function might grow relative to the derivatives of another function. Nonetheless, this obstruction does not arise in the case of one function, as in Theorem 1.3, which leads to Szemeredi-type results. We have the following proposition. **Proposition 7.1**.: _Let \(a(t)\) be a function, defined for all sufficiently large \(t\) and satisfying \(|a(t)|\to+\infty\), as \(t\to+\infty\). Suppose there exists a positive integer \(k\) for which \(a\) is \(C^{k+1},\) \(a^{(k+1)}(t)\) converges to 0 monotonically, and such that22_ Footnote 22: See the subsection with the notational conventions in Section 1 for the notation \(\lll\).
\[t^{5/8}\ll\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\lll|a^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}}\ll t.\] _Then, for any \(\ell\in\mathbb{N},\) measure-preserving system \((X,\mathcal{X},\mu,T_{1},\ldots,T_{\ell}),\) and functions \(f_{1},\ldots,f_{\ell}\in L^{\infty}(\mu)\), we have_ \[\lim_{w\to+\infty}\;\limsup_{N\to+\infty}\;\max_{\begin{subarray}{c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\;\Big{\|}\frac{1}{N}\sum_{n=1}^{N}\big{(}\Lambda_{w,b}(n)-1\big{)}T_{1}^{\lfloor a(Wn+b)\rfloor}f_{1}\cdot\ldots\cdot T_{\ell}^{\lfloor a(Wn+b)\rfloor}f_{\ell}\Big{\|}_{L^{2}(\mu)}=0.\] We remark that any improvement in the parameter \(5/8\) in Theorem A will also lower the term \(t^{5/8}\) on the leftmost part of the growth inequalities accordingly. Sketch of the proof of Proposition 7.1.: We define \(L(t)\) to be the geometric mean of the functions \(\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\) and \(\big{|}a^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}}\), which is well-defined for all \(t\) sufficiently large. A standard computation implies the relation \[t^{5/8}\ll\big{|}a^{(k)}(t)\big{|}^{-\frac{1}{k}}\lll L(t)\lll|a^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}}\ll t.\] Regarding the parameter \(w\) as fixed, it suffices to show that \[\limsup_{N\to+\infty}\max_{\begin{subarray}{c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\;\Big{\|}\frac{1}{N}\sum_{n=1}^{N}\big{(}\Lambda_{w,b}(n)-1\big{)}T_{1}^{\lfloor g(Wn+b)\rfloor}f_{1}\cdot\ldots\cdot T_{\ell}^{\lfloor g(Wn+b)\rfloor}f_{\ell}\Big{\|}_{L^{2}(\mu)}=o_{w}(1).\] This follows if we show that \[\limsup_{N\to+\infty}\max_{\begin{subarray}{c}1\leq b\leq W\\ (b,W)=1\end{subarray}}\;\Big{\|}\sum_{N\leq n\leq N+L(N)}\big{(}\Lambda_{w,b}(n)-1\big{)}T_{1}^{\lfloor a(Wn+b)\rfloor}f_{1}\cdot\ldots\cdot T_{\ell}^{\lfloor a(Wn+b)\rfloor}f_{\ell}\Big{\|}_{L^{2}(\mu)}=o_{w}(1).\] This derivation is very similar to the proof of [10, Lemma 4.3], which was stated only for bounded sequences. This is proven by covering the interval \([1,N]\) with non-overlapping sub-intervals that have the form \([m,m+L(m)]\) (for \(m\) large enough), where the term of the average on the last set of the covering is bounded as in (68). 23 Footnote 23: In particular, this case is much simpler than the method used to establish Theorem 1.1, in that we do not have to consider the more complicated double averaging scheme. In addition, we do not need any assumptions on \(L(t)\) other than it is positive and \(L(t)\prec t\). Using Proposition 4.4 and the abbreviated notation \(g_{W,b}(t)\) for the function \(g(Wt+b)\), we deduce that we can write \[\lfloor g_{W,b}(n)\rfloor=\left\lfloor g_{W,b}(N)+\cdots+\frac{(n-N)^{k}g_{W,b}^{(k)}(N)}{k!}\right\rfloor\] for all except at most \(O(L(N)\log^{-100}N)\) values of \(n\in[N,N+L(N)]\). Furthermore, we also have the equidistribution assumption of Proposition 4.4, which implies that Proposition 3.1 is applicable for the polynomial \[g_{W,b}(N)+\cdots+\frac{(n-N)^{k}g_{W,b}^{(k)}(N)}{k!}\] appearing in the iterates. The conclusion then follows similarly as in the proof of Theorem 1.1, so we omit the rest of the details. An application of the previous comparison is for the class of _tempered_ functions, which we define promptly. **Definition 7.2**.: _Let \(i\) be a non-negative integer.
A real-valued function \(g\) which is \((i+1)\)-times continuously differentiable on \((t_{0},\infty)\) for some \(t_{0}\geq 0,\) is called a tempered function of degree \(i\) (we write \(d_{g}=i\)), if the following hold:_ 1. \(g^{(i+1)}(t)\) _tends monotonically to_ \(0\) _as_ \(t\to\infty;\)__ 2. \(\lim_{t\to+\infty}t|g^{(i+1)}(t)|=+\infty.\)__ _Tempered functions of degree \(0\) are called Fejer functions._ For example, consider the functions \[g_{1}(t)=t^{1/25}(100+\sin\log t)^{3},\;g_{2}(t)=t^{1/25},\;g_{3}(t)=t^{17/2}(2 +\cos\sqrt{\log t}). \tag{104}\] We have that \(g_{1}\) and \(g_{2}\) are Fejer, \(g_{3}\) is tempered of degree \(8\) (which is not Hardy, see [1]). Every tempered function of degree \(i\) is eventually monotone and it grows at least as fast as \(t^{i}\log t\) but slower than \(t^{i+1}\) (see [1]), so that, under the obvious modification of Definition 2.1, tempered functions \(\mathcal{T}\) are strongly non-polynomial. Also, for every tempered function \(g,\) we have that \((g(n))_{n\in\mathbb{N}}\) is equidistributed mod \(1\).24 Footnote 24: For Fejer functions this is a classical result due to Fejer (for a proof see [32]). The general case follows inductively by van der Corput’s difference theorem. In general, it is more restrictive to work with tempered functions than working with Hardy field ones. To see this, notice that ratios of tempered functions need not have limits, in contrast to the Hardy field case. For example, the functions \(g_{1}\) and \(g_{2}\) in (104) are such that \(g_{1}(t)/g_{2}(t)\) has no limit as \(t\to+\infty\). This issue persists even when we are dealing with a single function, as ratios that involve derivatives of the same function may not have a limit either. Indeed, we can easily see that \(g_{1}\) from (104) (which was first studied in [8]) has the property that \(\frac{tg^{\prime}_{1}(t)}{g_{1}(t)}\) does not have a limit as \(t\to+\infty.\) The existence of the limit of the latter is important as it allows us to compare (via L' Hopital's rule) growth rates of derivatives of functions with comparable growth rates. In order to sidestep the aforementioned problematic cases, we restrict our study to the following subclass of tempered functions (see also [1], [31]). Let \(\mathcal{R}:=\Big{\{}g\in C^{\infty}(\mathbb{R}^{+}):\;\lim_{t\to+\infty} \frac{tg^{(i+1)}(t)}{g^{(i)}(t)}\in\mathbb{R}\;\;\text{for all}\;\;i\in \mathbb{N}\cup\{0\}\Big{\}};\) \(\mathcal{T}_{i}:=\Big{\{}g\in\mathcal{R}:\;\exists\;i<\alpha<i+1,\;\lim_{t\to+ \infty}\frac{tg^{\prime}(t)}{g(t)}=\alpha,\;\lim_{t\to+\infty}g^{(i+1)}(t)=0 \Big{\}};\) and \(\mathcal{T}:=\bigcup_{i=0}^{\infty}\mathcal{T}_{i}.\) For example, \(g_{2}\in\mathcal{T}_{0}\) and \(g_{3}\in\mathcal{T}_{8}\) (\(g_{2},g_{3}\) are those from (104)). Notice that while the class of Fejer functions contain sub-fractional functions, \(\mathcal{T}_{0}\) does not as, according to [7, Lemma 6.4], if \(g\in\mathcal{T}\) with \(\lim_{t\to+\infty}\frac{tg^{\prime}(t)}{g(t)}=\alpha,\) then for every \(0<\beta<\alpha\) we have \(t^{\beta}\prec g(t).\) We will prove a convergence result for the class \(\mathcal{T}\) through an application of Proposition 7.1. **Lemma 7.3**.: _Let \(g\) be a function in \(\mathcal{T}\) and \(0<c<1\). 
Then, for all large enough positive integers \(k\), we have_ \[t^{c}\prec\big{|}g^{(k)}(t)\big{|}^{-\frac{1}{k}}\lll\big{|}g^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}}\prec t.\] Proof.: Since \(g(t)\prec t^{d_{g}+1}\) and \(0<c<1\), we have \(g(t)\prec t^{k(1-c)}\) for all large enough \(k\in\mathbb{N},\) which implies \[\frac{g^{(k)}(t)}{t^{-ck}}=\frac{g(t)}{t^{k(1-c)}}\cdot\prod_{i=1}^{k}\frac{tg^{(i)}(t)}{g^{(i-1)}(t)}\to 0.\] Hence, \(g^{(k)}(t)\prec t^{-ck}\) or, equivalently, \(t^{c}\prec\big{|}g^{(k)}(t)\big{|}^{-\frac{1}{k}}.\) For the aforementioned \(k\)'s, let \(0<q<1\) so that \(t^{kq}\prec g(t).\) Since \(\lim_{t\to+\infty}\frac{tg^{\prime}(t)}{g(t)}\notin\mathbb{N},\) \[\frac{t^{k(q-1)}}{g^{(k)}(t)}=\frac{t^{kq}}{g(t)}\cdot\prod_{i=1}^{k}\frac{g^{(i-1)}(t)}{tg^{(i)}(t)}\to 0,\] so \(t^{k(q-1)}\prec g^{(k)}(t).\) As \(\lim_{t\to+\infty}\frac{tg^{(k+1)}(t)}{g^{(k)}(t)}\in\mathbb{R}\setminus\{0\},\) we get \(g^{(k+1)}(t)\ll t^{-1}g^{(k)}(t),\) so, if we let \(\delta=\frac{q}{k+1},\) we have \[\frac{\big{|}g^{(k+1)}(t)\big{|}^{-\frac{1}{k+1}}}{\big{|}g^{(k)}(t)\big{|}^{-\frac{1}{k}}}\gg\frac{t^{\frac{1}{k+1}}\big{|}g^{(k)}(t)\big{|}^{-\frac{1}{k+1}}}{\big{|}g^{(k)}(t)\big{|}^{-\frac{1}{k}}}=t^{\frac{1}{k+1}}\big{|}g^{(k)}(t)\big{|}^{\frac{1}{k(k+1)}}\succ t^{\frac{1}{k+1}}\cdot t^{\frac{q-1}{k+1}}=t^{\delta},\] completing the proof of the lemma (the rightmost inequality follows by [7]). Using Proposition 7.1 and [10, Theorem 2.2] we get the following result. More precisely, we use the fact here that [10, Theorem 2.2] holds for a single function \(a\) which has the property that, for some \(k\in\mathbb{N},\) \(a\) is \(C^{k+1},\) \(a^{(k+1)}(t)\) converges to \(0\) monotonically, \(1/t^{k}\prec a^{(k)}(t),\) and \(|a^{(k)}(t)|^{-1/k}\prec|a^{(k+1)}(t)|^{-1/(k+1)}\) (see comments in [10, Subsection 2.1.5]). We omit its proof as it is identical to the one of Theorem 1.3. **Theorem 7.4**.: _Let \(g\in\mathcal{T}.\) For any \(k\in\mathbb{N},\) measure-preserving system \((X,\mathcal{X},\mu,T),\) and functions \(f_{1},\ldots,f_{k}\in L^{\infty}(\mu)\), we have_ \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}T^{\lfloor g(p)\rfloor}f_{1}\cdot\ldots\cdot T^{k\lfloor g(p)\rfloor}f_{k}=\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}T^{n}f_{1}\cdot\ldots\cdot T^{kn}f_{k}, \tag{105}\] _where the convergence takes place in \(L^{2}(\mu).\)_ As in the Hardy field case, we have the corresponding recurrence result. **Theorem 7.5**.: _Let \(g\in\mathcal{T}.\) For any \(k\in\mathbb{N},\) measure-preserving system \((X,\mathcal{X},\mu,T),\) and set \(A\) with positive measure, we have_ \[\lim_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\mu(A\cap T^{-\lfloor g(p)\rfloor}A\cap\cdots\cap T^{-k\lfloor g(p)\rfloor}A)>0.\] The latter implies the following corollary, which guarantees arbitrarily long arithmetic progressions, with steps coming from the class of tempered functions evaluated at primes. **Corollary 7.6**.: _Let \(g\in\mathcal{T}.\) For any set \(E\subseteq\mathbb{N}\) of positive upper density, and \(k\in\mathbb{N},\) we have_ \[\liminf_{N\to+\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}:\;p\leq N}\bar{d}\big{(}E\cap(E-\lfloor g(p)\rfloor)\cap\cdots\cap(E-k\lfloor g(p)\rfloor)\big{)}>0.\] **Comment**.
In Theorem 7.4, and, thus, in Theorem 7.5 and Corollary 7.6, the floor function can be replaced with either the function \(\lceil\cdot\rceil\) or the function \([[\cdot]].\) Furthermore, in each of these results, one can alternatively evaluate the sequences along the affine shifts \(ap+b,\) for \(a,b\in\mathbb{R}\) with \(a\neq 0.\) As we saw, the comparison method provides results along primes through the corresponding results for averages along \(\mathbb{N},\) though in the case of tempered functions, we do not have a comparison result of the same strength as Theorem 1.1. Nonetheless, it is expected that convergence results along \(\mathbb{N}\) for iterates which are comprised of multiple tempered functions (or even combinations of tempered and Hardy field functions) can be transferred to the prime setting. Even in the case of averages along \(\mathbb{N},\) the convergence results are still not established under the most general expected assumptions. For a single function and commuting transformations, a result in this direction was proven in [7]. We note that [7, Theorem 6.1] reflects the complexity of the assumptions we have to impose on the growth rates of functions to deduce such results. This analysis is beyond the scope of this paper.
2309.10780
Towards affective computing that works for everyone
Missing diversity, equity, and inclusion elements in affective computing datasets directly affect the accuracy and fairness of emotion recognition algorithms across different groups. A literature review reveals how affective computing systems may work differently for different groups due to, for instance, mental health conditions impacting facial expressions and speech or age-related changes in facial appearance and health. Our work analyzes existing affective computing datasets and highlights a disconcerting lack of diversity in current affective computing datasets regarding race, sex/gender, age, and (mental) health representation. By emphasizing the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets, this paper provides recommendations and calls for greater attention to inclusivity and consideration of societal consequences in affective computing research to promote ethical and accurate outcomes in this emerging field.
Tessa Verhoef, Eduard Fosch-Villaronga
2023-09-19T17:31:29Z
http://arxiv.org/abs/2309.10780v1
# Towards affective computing that works for everyone ###### Abstract Missing diversity, equity, and inclusion elements in affective computing datasets directly affect the accuracy and fairness of emotion recognition algorithms across different groups. A literature review reveals how affective computing systems may work differently for different groups due to, for instance, mental health conditions impacting facial expressions and speech or age-related changes in facial appearance and health. Our work analyzes existing affective computing datasets and highlights a disconcerting lack of diversity in current affective computing datasets regarding race, sex/gender, age, and (mental) health representation. By emphasizing the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets, this paper provides recommendations and calls for greater attention to inclusivity and consideration of societal consequences in affective computing research to promote ethical and accurate outcomes in this emerging field. affective computing, emotion recognition, diversity, discrimination, bias, fairness, inclusion ## I Introduction Diversity and inclusion are critical aspects of the responsible development of artificial intelligence (AI) technologies, including affective computing. Affective computing, which focuses on recognizing, interpreting, and responding to human emotions, has the potential to revolutionize various domains, such as healthcare, education, and human-machine interaction [1]. Capturing subjective states through technical means is challenging, though, and errors can occur, as seen with lie detectors not working adequately [2] or gender classifier systems misgendering users [3]. If used for ulterior decision-making processes, such inferences could have disastrous consequences for people, the impacts of which may vary depending on the context of an application, i.e., flagging innocent people as potential criminals in border control [4] or detrimentally affecting vulnerable groups in mental health care [5]. Given that single, unimodal data streams do not seem robust enough to recognize human emotions, a growing trend in affective computing is using multimodal data to achieve machine interpretation, prediction, and understanding of human affective processes [6, 7]. These modalities include recognizing facial expressions, vocal tones, body posture and gestures, and other physiological signals such as heart rate and skin conductance [8]. Multimodal affective computing promises to measure user reactions to particular content better, create more interactive and engaging user experiences, and gain deeper insights into user behavior much more accurately than with unimodal systems [9]. However, combining different information strains is not straightforward [5, 10] and to what extent multimodal approaches will be able to solve existing problems in the field of affective computing remains an open question. Issues such as the existing disagreement on the nature and scientific understanding of emotions [11], or problems related to bias, discrimination, and injustice in affective computing [1, 12] likely remain. For instance, although emotion recognition algorithms have gained significant attention, they may have different outcomes for different groups due to health conditions, age, and gender [13, 14]. Mental health conditions such as depression or schizophrenia can affect facial expressions and speech, making it difficult to identify emotions accurately [15].
Non-neurotypical individuals may also have difficulty expressing emotions, while individuals with PTSD or phobias may display exaggerated or blunted emotional responses [16]. Concerning age, children's facial expressions and speech may be less distinct than adults', and older adults' facial appearance changes due to aging can make it harder for algorithms to detect emotions accurately [17]. Given the growing interest in using these techniques in sensitive contexts such as healthcare and education, we explore how multimodal affective computing impacts diversity, equity, and inclusion in this article. In particular, we review the various ways human traits influence emotional expression and discuss their consequences for the current state of diversity and inclusion in affective computing. We then analyze an extensive list of datasets commonly used in affective computing and highlight how diverse and inclusive they are by juxtaposing them with the different grounds for discrimination that the law provides (i.e., religion or belief, origin, sexual orientation, sex, skin color, race, civil status, disability or chronic illness, or age), focusing on those characteristics that may affect emotion recognition the most. We anticipate that systems trained on the datasets currently available and used most widely may not work equally well for everyone and will likely have racial biases, biases against users with (mental) disabilities, and age biases because they derive from limited samples that do not fully represent societal diversity. We conclude by highlighting that there is still a long way to go for the field of affective computing to combat biases and inequalities that are typically exacerbated by the lack of diversity in datasets, technical teams, and the community [18]. Finally, we propose recommendations for improving diversity and inclusion in affective computing research. ## II Human traits influencing emotional expression Although physical and physiological markers to recognize emotions have gained significant attention in recent years, algorithms based on these markers may have different outcomes for different groups due to several factors that influence the recognition of emotions, such as age, health conditions and gender. Emotion recognition algorithms may have limitations in detecting emotions in different age groups [19, 20]. For instance, children's facial expressions and speech may be less distinct than adults' [21] and may not have a full range of emotions, making it difficult to identify their emotional state accurately [22]. Furthermore, children may understand emotions differently than adults, and their expressions may not match their feelings, further complicating emotion recognition. One of the main challenges in emotion recognition algorithms for older adults is changes in facial appearance due to aging. As people age, their faces undergo various changes, such as the loss of muscle tone, wrinkle formation, and reduced eyebrow movements, making it harder for algorithms to detect emotions such as surprise or anger [23]. Also, if people undergo plastic surgery, which tends to happen later in life, this may affect emotion recognition systems [24]. In addition, health conditions that predominantly affect older individuals are known to affect emotional expressions.
Alzheimer's Disease, for instance, is associated with impairments in the production of facial expressions and mood disorders [25], and Parkinson's Disease affects speech prosody and other communicative functions accompanied by an impact on mood [26]. Some mental health conditions affect facial expressions and speech, making it difficult to identify emotions accurately. For example, individuals with depression or schizophrenia often have a flattened affect, meaning they display fewer emotional expressions that, although not always affecting their subjective experience, make it challenging to detect emotional states [15, 27]. Similarly, non-neurotypical individuals may have difficulty expressing emotions, making it difficult to identify emotions accurately using physical markers [28]. Individuals with post-traumatic stress disorder (PTSD) or phobias may display exaggerated or blunted emotional responses, impacting emotion recognition [16]. Cultural differences may play a role as well [29], and emotion recognition algorithms may for example need different approaches for accurately detecting emotions in the deaf community [30]. Differences in body posture and facial expressions may carry linguistic meaning in sign languages [31] and may therefore be less reliable for detecting emotions in sign language users as compared to the hearing community. Recent studies have also revealed that some systems based on facial features perform better in one gender than another, with generally lower accuracy for female faces [32]. This is not unlike well-known biases in face recognition, where for instance, algorithms developed by major tech companies were significantly less accurate in recognizing darker-skinned individuals, particularly women, than lighter-skinned individuals [33]. Several rationales may explain these disparities in accuracy and apparent errors in recognition performance. First, algorithms developed using non-diverse datasets may have gender biases in emotion recognition accuracy in underrepresented groups. Better emotion recognition was reported in female individuals, for instance, when females were overrepresented in the data [20]. Gender-balanced data does not guarantee balanced performance though, since other factors play a role as well. E.g. female faces have been found to be on average more similar to each other than male faces [32] and controlling for specific features known to be more prevalent in one gender over others, such as beards for males or make-up for females, balances performance more [32]. Affective computing researchers increasingly focus on the multimodal integration of multiple data sources as the solution for improving emotion recognition systems. However, the field of Machine Learning has identified problems that may arise when measurements from multiple datasets are combined, which may introduce increased "structured missingness" [34], referring to non-random patterns of missing data or underrepresentation. This problem is especially prevalent in data describing highly heterogeneous population characteristics, such as the expression of emotions. Besides problems in learning performance and prediction accuracy, structured missingness may perpetuate or exacerbate existing inequalities, especially when data from underrepresented groups is missing entirely. 
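To make this mechanism concrete, the toy sketch below shows how an outer merge of two modality-specific tables can concentrate missing values in exactly one group; the subject pools, column names, and exclusion rule are invented for this illustration and do not correspond to any of the datasets discussed in this paper.

```python
import pandas as pd

# Hypothetical facial-expression table: includes subjects with and without
# a reported (mental) health condition.
face = pd.DataFrame({
    "subject_id": [1, 2, 3, 4, 5, 6],
    "health_condition": ["none", "depression", "none", "PTSD", "none", "none"],
    "face_feature": [0.71, 0.55, 0.62, 0.48, 0.69, 0.75],
})

# Hypothetical physiological table: collected only from 'healthy' subjects,
# mimicking the exclusion criteria reported for several physiological corpora.
physio = pd.DataFrame({
    "subject_id": [1, 3, 5, 6],
    "eda_mean": [2.1, 1.8, 2.4, 2.0],
})

# Outer merge to build a 'multimodal' table.
multimodal = face.merge(physio, on="subject_id", how="outer")

# The missing physiological values are not randomly distributed: they fall
# exclusively on the subjects with a health condition (structured missingness).
missing_by_group = (
    multimodal.assign(eda_missing=multimodal["eda_mean"].isna())
    .groupby("health_condition")["eda_missing"]
    .mean()
)
print(missing_by_group)
```

Any model trained on the merged table sees physiological signals only for the 'healthy' subgroup, which is precisely the kind of amplification discussed next for multimodal affective computing.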
For the field of affective computing, the extensive use of multimodal integration potentially involves combining datasets which are individually already relatively sparse and do not cover the range of genders, variety in mental health conditions, age groups, or other conditions that may have a direct effect on emotions. This can be very problematic and amplify biases that are already a major problem in single datasets. As an example, the most widely used facial expression dataset, Extended Cohn-Kanade (CK+) [35], has over 200 subjects in it, making it one of the most extensive datasets in terms of the number of subjects included for data collected in the lab, but it contains only two genders, not equally distributed (69% female), and a very skewed racial distribution with 81% Euro-American, 13% Afro-American, and 6% 'other.' The authors who released the original Cohn-Kanade dataset 23 years ago [36] already pointed out that many critical individual differences exist in facial expression features that vary with sex, age, and race. Moreover, they noted how various health conditions can affect facial expressions as well. They suggested including large samples of subjects with diverse backgrounds and health statuses to train emotion recognition systems that are robust to individual differences and work effectively for everyone. As we will show, the field has, unfortunately, yet to progress in this ideal direction. If this probable inequality in access to functioning affective computing technology is not addressed, its use in products and services will be problematic and potentially even illegal [18]. Given that religion or belief, origin, sexual orientation, sex, skin color, race, civil status, disability or chronic illness, or age are grounds for discrimination, errors in this area could lead to bias and discrimination, which is prohibited by law. Although it is unsurprising that discriminatory outcomes may result from poor datasets that do not account for inter-sectional differences such as age, mental health, and gender, at this point, we do not know the magnitude of the problem, which is what we aim to unearth in this contribution. ## III Analysis of datasets commonly used in affective computing To understand the magnitude of this issue, we analyze a diverse list of datasets commonly used in the field of affective computing. We base our selection on the most recent comprehensive review paper we could find [8], in which many datasets based on various signal modalities were listed. An increasing number of datasets used in affective computing tasks contain (large amounts of) data collected from the web, such as written reviews on Amazon [37, 38] or IMDB [39] for textual sentiment analysis, or images and movies for bodily gesture and facial expression recognition through Google image search or YouTube [40, 41]. Since such datasets did not involve the recruitment of test subjects in a lab, no demographic information is available about the people performing the recorded emotions, making it impossible to assess our diversity dimensions for these sources. We, therefore, base our analysis on datasets that were created in the lab.
In addition, we are aware of the fact that affective computing has been used to develop systems and applications targeted at specific populations, such as emotion detection and regulation systems for autism spectrum disorder (ASD) [42], Virtual Reality (VR) Therapy to help individuals with mental health problems, such as anxiety disorders or post-traumatic stress disorder (PTSD) [43], Affective Tutoring Systems for special needs education [44], or emotion-sensing Chatbots for mental health support [45]. Those systems are clearly meant to work for a specific target user group. However, here we focus on work published in general affect recognition aimed to be used ubiquitously by everyone, in general human-computer interaction, healthcare, social robotics, entertainment, advertising, automotive and education settings. In these application areas, it is essential that the technology is available and performs adequately for all potential users equally. Considering all these criteria, we eventually included 26 datasets in our analyses with a total of 1121 subjects. These datasets were released between 1998 and 2018 and span five different modalities: Speech, Face, Body, Physiological signals (e.g., EEG, GSR, temperature), and Multimodal (combinations of the first four). Table I lists all datasets with a short explanation of the type of collected data and a reference to the source paper, as well as analyzed demographic features, which are discussed in the results section. ## IV Results Table I shows an overview of all datasets and their analyzed characteristics. We list which modality the data was based on, the year the dataset was released, the number of subjects included, the mean age of the subjects if it was mentioned in the paper, the percentage of female subjects included if specified, and the racial diversity of the subjects if mentioned. We analyzed the papers for the complete list of different grounds for discrimination that the law provides, but sexual orientation, religion, and civil status were never mentioned; therefore, we will not discuss them further in this paper. ### _Race or cultural background: an incomplete task_ When we look at the inclusion of subjects with different ethnic or cultural backgrounds, most papers actually do not mention it at all. Some explicitly state that all participants have the same background (indicated in the table as 'single'), while others include more diverse groups (indicated as 'multi'). For the multi-background datasets, we counted how many ethnic groups were represented out of four categories, Asian, Black, Latino and White. Fig. 1 plots these numbers for each modality and shows that this has been taken seriously mainly in the datasets for facial expression recognition, while datasets in the other modalities lack diversity.
Fig. 1: Racial diversity (number of ethnic groups included), grouped by modality
Most papers that describe datasets with diverse subject backgrounds also mention the exact composition [35, 46, 47, 52, 53, 57, 58, 59, 61, 63]. For the pie chart in Fig. 2 all racial composition data from these studies were gathered, and we can see that some groups, especially Black and Latino, are highly underrepresented. Similar findings were highlighted by [20] for datasets based on facial features and by [71] for audiovisual datasets. ### _Sex and gender are different, but not always accounted for_ When looking at the inclusion of different sexes, we see that most papers report the composition of their subject pool based on the percentage of male and female participants. Only one exception is observed (CASME II) where no information on this is mentioned. As also reported by [71], most datasets include equal or almost equal numbers of male and female subjects, besides a few exceptions where either female or male participants are outnumbered [35, 54, 61, 65] or not present at all [49, 52]. That said, this binary distinction may not work for contemporary societies in which other communities are not represented (intersex, transgender). Moreover, gender is different from sex and plays a crucial role in shaping one's self expression [3]. ### _Prominent young age may disregard older groups_ For some, especially older dataset papers (N=7), no participants' age information was mentioned. Some papers reported the mean age of the subjects included in their datasets (N=14), sometimes including standard deviations (N=8), while others reported a range (N=14). In Table I we can see that the mean age of subjects is almost exclusively (with one exception) between 20 and 30 years of age across datasets. This is probably because many research groups use undergraduate or graduate students from their programs to participate in their studies. For the studies that reported an age range, Fig. 3 illustrates these for each dataset. We can see here that especially older age groups (50 and up) are highly underrepresented in the data. The one paper that includes a very large range (5-75, [58]), also reported percentages for different age groups. The very young and slightly older categories were still much less represented, with only 2.8% and 5.5%, respectively. ### _(Mental) health or disability: an unfinished agenda_ The vast majority of papers introducing general-purpose affective computing datasets do not mention any inclusion of populations of subjects with varying (mental) health conditions. A few papers [62, 64, 68] mention explicitly that their participants were 'healthy', and these happen to all be datasets that include physiological data. One paper [65] was more specific and explicitly excluded participants with "pregnancy, heavy smoking, mental disorders, chronic and cardiovascular diseases." Another study [63] reported that they selected the subjects using the Eysenck Personality Questionnaire (EPQ), which characterizes personality in terms of Extraversion/Introversion, Neuroticism/Stability and Psychoticism/Socialisation. The researchers noticed that their technology was less able to pick up on the physiological signals of introverted people or those with unstable mood; therefore, such individuals were deliberately excluded from participation. ## V Discussion Our findings shed light on the current state of inclusivity in affective computing datasets. The results reveal gaps in the representation of diverse populations in these datasets, particularly regarding race, age, and (mental) health/disability. One noteworthy observation is that race is often not mentioned in the papers. When it is, most datasets lack diversity in terms of cultural backgrounds or origins. This is particularly evident in datasets for modalities other than facial expression recognition. This distinction is important because people do not only differ based on how they look but culture can also influence how people express emotions, for example, by emphasizing eye or mouth usage to identify people [29].
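As a note on reproducibility, the per-modality and demographic tallies summarized above (Figs. 1-3 and Table I) can be computed from a flat metadata table with a few lines of analysis code. The sketch below is only an illustration of that bookkeeping: the file name and column names (`modality`, `pct_female`, `ethnic_groups`, `age_min`, `age_max`) are assumptions made for this example and do not refer to a released artifact.

```python
import pandas as pd

# Hypothetical export of Table I, one row per dataset; the column names are
# illustrative assumptions rather than names used in any released material.
meta = pd.read_csv("affective_datasets_metadata.csv")

# Number of ethnic groups represented per modality (cf. Fig. 1), assuming
# 'ethnic_groups' holds a semicolon-separated list such as "Asian;White".
meta["n_ethnic_groups"] = (
    meta["ethnic_groups"]
    .fillna("")
    .apply(lambda s: len([g for g in s.split(";") if g]))
)
print(meta.groupby("modality")["n_ethnic_groups"].describe())

# Gender balance across the datasets that report the share of female subjects.
reported = meta.dropna(subset=["pct_female"])
print(reported["pct_female"].describe())

# Age coverage: fraction of datasets whose reported range reaches age 50 or above.
has_range = meta.dropna(subset=["age_min", "age_max"])
print((has_range["age_max"] >= 50).mean())
```

Simple audits of this kind make the representation gaps described in this section straightforward to quantify and to track across dataset releases.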
The underrepresentation of certain racial/ethnic groups, such as Black and Latino populations, is concerning and highlights the need for more inclusive sampling strategies to ensure these technologies work equally well across racial and cultural groups. The age of participants is also an essential factor to consider in affective computing datasets. Mirroring findings for facial expressions datasets [20] and audiovisual datasets [71], the majority of datasets in our sample report a mean age between 20 and 30 years, indicating a bias towards younger age groups. This leads to an under-representation of older age groups, with limited data available for populations aged 50 and above. This is especially problematic since, as we reviewed, physical changes and age-related health conditions may significantly affect the expression of emotions in older populations. Furthermore, the inclusion of populations with varying (mental) health conditions is lacking in affective computing datasets. Many papers do not mention any specific inclusion or exclusion criteria related to (mental) health or disability, and some even explicitly mention that their participants were "healthy." This lack of diversity in terms of mental health or disability status may limit the generalizability of affective computing technologies to populations with different mental health conditions and may perpetuate stigmas and biases related to mental health.
Fig. 3: Age ranges across datasets that mention age range
Fig. 2: Some groups are highly underrepresented
We observed that especially for datasets based on physiological data, participants with health problems are sometimes explicitly excluded, which means that any multimodal dataset that includes physiological signals will have non-random missing values which can result in biased models that perform well for healthy individuals but poorly for those belonging to the underrepresented group. These findings highlight the need for researchers in the field of affective computing to be more mindful of inclusivity and actively consider the representation of diverse populations in their datasets to ensure that the technologies developed are more representative, equitable, and beneficial for all individuals. Careful data collection with subjects in the lab is time-consuming and costly. It is, therefore, no surprise that recent datasets are often created by scraping data from the web. This has advantages, such as the large volume of data that can be collected in this way, which will also increase the inclusion of more diverse data sources. However, a disadvantage is that demographic information on the subjects in the dataset is not available, making it hard to measure and correct potential biases in the data. In this respect, emotion recognition algorithms also rely heavily on human annotations, which can be influenced by the annotators' demographic characteristics, and this can significantly impact the algorithms' accuracy [71]. To ensure consistent model performance for all target groups, sensitive applications such as emotion recognition must address representational bias in the data of both emotion expressors and annotators. Another way to potentially mitigate the adverse consequences of bias would be introducing a standardized (mandatory) way to document the inclusion of various relevant demographic factors in datasets.
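One lightweight way to operationalize such standardized documentation is to ship a small, machine-readable demographic record with every dataset release. The sketch below is a hypothetical minimal schema; the field names and example values are our own illustration and do not correspond to an existing standard or to any of the datasets analyzed here.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DemographicRecord:
    """Hypothetical minimal, machine-readable demographic summary of a dataset."""
    dataset_name: str
    n_subjects: int
    genders: dict            # e.g. {"female": 35, "male": 33, "non-binary": 2}
    age_range: tuple         # (youngest, oldest) recruited subject
    ethnic_groups: dict      # counts per self-reported group
    health_conditions: dict  # counts per reported (mental) health condition
    recruitment_notes: str = ""
    exclusion_criteria: list = field(default_factory=list)

# Invented example values, for illustration only.
record = DemographicRecord(
    dataset_name="example-lab-corpus",
    n_subjects=70,
    genders={"female": 35, "male": 33, "non-binary": 2},
    age_range=(18, 67),
    ethnic_groups={"Asian": 20, "Black": 15, "Latino": 15, "White": 20},
    health_conditions={"none reported": 60, "depression": 6, "PTSD": 4},
    exclusion_criteria=[],
)

print(json.dumps(asdict(record), indent=2))
```

Whether such a record is stored as JSON next to the data, embedded in a model card, or reported in the accompanying paper matters less than that the same fields are filled in consistently for every dataset.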
The recommendation proposed by [34] for machine learning in general, i.e., "appropriate sensitivity to social processes that underlie data generation and contextual awareness of potential social, cultural and historical determinants of discriminatory patterns are crucial for effective bias mitigation. Thus, involving experts with domain knowledge and social scientific training is vital;" applies to affective computing to a great extent. The mandatory Ethical Impact Statement in the papers presented at the Affective Computing and Intelligent Interaction (ACII) conference is a significant first step in this direction. So far, we have focused mainly on the inclusion of diverse populations in datasets that are used for _training_ affective computing systems, but it is equally crucial that systems are _tested_ on diverse participants to make sure the recognition accuracy generalizes and works equally well for diverse groups in society. Especially with the kinds of datasets that extensively use data downloaded from the internet, it is essential to assess potential biases by testing the technology on diverse users directly. Given the sometimes very sensitive application areas of affective computing, including the (mental) healthcare industry, it might not be excessive to apply diversity and inclusion guidelines similar to those used for clinical trials in the medical sciences1 to the testing of affective computing technologies. It is good to remember that inferences based on subjective data could lead to disastrous consequences depending on the application context, where stakes could be extremely high. In other words, a recommender system that suggests a new song that you may like or a new movie to watch is not the same as a system that is meant to diagnose you with a particular disorder or disease [18] or a system that may be used in border control [4]. Footnote 1: [https://www.nimhd.nih.gov/resources/understanding-health-disparities/diversity-and-inclusion-in-clinical-trials.html](https://www.nimhd.nih.gov/resources/understanding-health-disparities/diversity-and-inclusion-in-clinical-trials.html) In conclusion, this study revealed that affective computing datasets generally lack diversity, with a limited representation of certain racial/ethnic groups and cultural backgrounds, sex and gender imbalances, skewed age demographics, and a total neglect of (mental) health/disability factors. This highlights the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets. Additionally, testing affective computing systems on diverse populations is crucial to ensure generalizability and accuracy. The sensitive nature of affective computing applications calls for guidelines similar to clinical trials in medical sciences. It is imperative to be mindful of the potential consequences of bias in subjective inferences, especially in such high-stakes contexts as those usually involved in affective computing. ## Ethical Impact Statement This research addresses the ethical implications of inclusivity in affective computing datasets and its consequences for emotion recognition algorithms. This study underscores the need for greater diversity and representation in research samples by highlighting current datasets' limitations and potential biases in affective computing systems. 
The ethical implications of biased and inaccurate emotion recognition systems are significant, as they can impact vulnerable populations, including individuals with mental health conditions, children, older adults, and different genders. The potential consequences of bias in high-stakes contexts, such as decision-making and human-computer interactions, are also discussed, emphasizing the need for fair and accurate emotion recognition systems. The paper advocates for introducing inclusive sampling strategies, standardized documentation of demographic factors, and diversity and inclusion guidelines akin to those for clinical trials. The authors highlight that having more balanced datasets does not automatically lead to fair algorithms, and caution should be applied when considering their recommendations. ## Acknowledgments We thank Joost Batenburg for providing support through the SAILS Program, a Leiden University-wide AI initiative. This paper has also been partly funded by the Safe and Sound project, which received funding from the European Union's Horizon-ERC program (Grant Agreement No. 101076929).
2307.16885
LEONARDO: A Pan-European Pre-Exascale Supercomputer for HPC and AI Applications
A new pre-exascale computer cluster, called LEONARDO, has been designed to foster scientific progress and competitive innovation across European research systems. This paper describes the general architecture of the system and focuses on the technologies adopted for its GPU-accelerated partition. High-density processing elements, fast data movement capabilities and mature software stack collections allow the machine to run intensive workloads in a flexible and scalable way. Scientific applications from traditional High Performance Computing (HPC) as well as emerging Artificial Intelligence (AI) domains can benefit from this large apparatus in terms of time and energy to solution.
Matteo Turisini, Giorgio Amati, Mirko Cestari
2023-07-31T17:50:16Z
http://arxiv.org/abs/2307.16885v1
# Leonardo: A Pan-European Pre-Exascale Supercomputer for HPC and AI Applications ###### Abstract A new pre-exascale computer cluster, called LEONARDO, has been designed to foster scientific progress and competitive innovation across European research systems. This paper describes the general architecture of the system and focuses on the technologies adopted for its GPU-accelerated partition. High-density processing elements, fast data movement capabilities and mature software stack collections allow the machine to run intensive workloads in a flexible and scalable way. Scientific applications from traditional High Performance Computing (HPC) as well as emerging Artificial Intelligence (AI) domains can benefit from this large apparatus in terms of time and energy to solution. _Keywords--_ Parallel Computing, Pre-Exascale, Scalability ## 1 Introduction LEONARDO is a new European computer cluster with pre-exascale computing capability, at the level of \(0.2\times 10^{18}\) floating point operations per second (FLOPS). The project has been conceived by the LEONARDO _Consortium_, a group of six signatory countries1 of the European declaration on High Performance Computing (Declaration, 2018) whose purpose is to foster scientific and technological federative innovation across the European Union. LEONARDO is owned by the European High Performance Computing Joint Undertaking initiative (EuroHPC JU, 2018) and is hosted by CINECA interuniversity consortium (CINECA, 2023) at the Tecnopolo Manifattura Data Valley Hub in Bologna, Italy (Tecnopolo, 2023). Footnote 1: The countries are: Italy (project leader), Austria, Greece, Hungary, Slovakia and Slovenia. The foreseen operational lifetime of the machine is 5 years. In this period it is going to serve as a research facility for a broad class of scientific investigations, due to a complete set of state-of-the-art hardware and software technologies that are presented in this paper. The most relevant are a massive amount of computational power available at the single node (i.e. a peak performance of 78 teraFLOPS), fast-access storage (over 1 TB/s bandwidth) and a flexible scalability for multi-node computations. With LEONARDO, researchers from academia and industry can tackle many challenges in different crucial fields, like Digital Twins applications, e.g. DTGEO (2023), Data-driven projects, e.g. GEOIN (2023) and Urgent-Computing, e.g. CHEESE2 (2023) to name a few. The machine is composed of two compute partitions that are coupled with interconnects, storage and service subsystems. A general purpose Data-Centric (DC) partition is intended to fulfill a vast range of traditional HPC applications by leveraging the latest central processing unit (CPU) technologies. It comprises 1536 compute nodes, based on Intel's 4th generation of Xeon Scalable processors, codenamed _SapphireRapids_. The CPU model is the 56-core 8480+ that features several hardware accelerators to support the Single Instruction Multiple Data extension (SIMD) on top of the x86 instruction set. Accelerated functionalities include cryptography and vector algebra (Intel, 2023). The other compute partition is a heterogeneous module called Booster which is dedicated to applications that can benefit from the parallelism offered by general-purpose graphics processing units (GPU). The Booster consists of 3456 nodes configured with a single-socket host Intel _Ice Lake_ CPU (Intel, 2019) and four NVIDIA A100 _Tensor Core_ GPU chips (NVIDIA, 2020). 
The internal network, used for inter-node communication, relies on Mellanox's 200 Gbps InfiniBand High Data Rate (HDR) technologies (NVIDIA, 2020c) and is organized in a _dragonfly+_ topology. The storage is composed of a mix of high-speed and high-capacity appliances to accommodate the requirements of modern Big Data and AI applications, including Cloud services and Interactive computing. The infrastructure is completed by an operational 100 Gbps Ethernet network, 11 management nodes and 32 frontend servers where users can land, develop codes, submit jobs, and analyze results. Figure 1 presents a schematic overview of LEONARDO. All subsystems are shared between the two compute partitions. A set of four Ethernet/InfiniBand gateways allows the cluster to be connected to external networks. This paper describes the overall architecture of LEONARDO and focuses on the Booster module. Section 2.1 presents Booster's node including computing elements and organization. Details on network and storage partitions can be found in 2.2 and 2.3. This is followed by paragraph 2.4 on frontend and service resources. Software tools and libraries are listed in 2.5. Finally, the power supply and the cooling systems are presented in 2.6. Additionally, some benchmark results are reported in Appendix A and the list of hardware components can be found in Appendix B. The DC module will be detailed in a separate article. Figure 1: Architectural overview ## 2 System details LEONARDO is a rather large apparatus consisting of 155 racks, each weighing 2 tons. The compute partitions are made up of 138 racks based on the ATOS BullSequana XH2000 cabinet, a platform that offers a high level of integration density and Direct Liquid Cooling capabilities (ATOS, 2020). Table 1 shows how compute racks are organized in cells and how blade servers and node units compose each rack. One cell encompasses both Booster and DC type nodes and is called the _Hybrid_ cell. An additional cell (the twenty-third) houses storage and service equipment. This includes 12 racks equipped with DDN's appliances and 5 further ATOS racks dedicated to management and frontend servers. ### Booster partition The Booster is the first of LEONARDO's compute partitions to go into full production, in 3Q 2023. It consists of 3456 heterogeneous nodes designed to create a significant speedup in traditional HPC and new AI applications. In fact, this supercomputer is one of the top-level facilities in the world supporting scientific investigation in many fields: with 238.7 petaFLOPS of sustained Linpack performance, it reached the 4th spot in the TOP500 ranking in June 2023 (Top500, 2023), being at the same time the largest supercomputer based on NVIDIA Ampere architecture, with about 14k GPUs. However, the improvement brought by LEONARDO is not only a matter of pure performance; instead, the design of the machine has been intended to accompany the evolution of computing architectures towards hardware specializations and to extend the support for workloads related to the training and usage of large AI models. #### 2.1.1 GPU accelerator device The A100 _Tensor Core_ GPU is an accelerator device introduced by NVIDIA in 2020 based on the _Ampere_ micro-architecture (NVIDIA, 2020). In the fast-changing accelerator market, it represented a breakthrough in terms of flexibility, computational power (+24% floating point, FP) and communication speed (+73% memory bandwidth) in comparison to its predecessor, the V100 _Tensor Core_ GPU based on the _Volta_ architecture (NVIDIA, 2017). 
The _Ampere_ offers an upgraded compute unit structure (third-generation Tensor Core, TC) that extends hardware support for tensor math to a wider set of datatypes including both floating point and integer numerical formats. Concerning floating point computation in double precision (FP64), the device offers an impressive peak performance of about 20 teraFLOPS for tensor operations and around 10 teraFLOPS for non-tensor math. Together with the single precision performance treated below, this is particularly engaging for HPC communities that rely on very high precision representations in their models. In fact, moving from V100 to A100, a speedup between x1.5 and x2.1 has been measured in HPC benchmarks spanning from molecular dynamics to geo-sciences (Krishinsky et al., 2020). On the AI side, a new numerical format called _Tensor Float 32_ (TF32) definitively enables the use of TC to accelerate the training of a vast number of neural network models. The TF32 is a custom floating point format with 8-bit range (as in FP32) and 10-bit precision (as in FP16). The halved precision does not affect the accuracy of the computations in the AI context and brings a significant speedup instead. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Type & Cell & \(\frac{Rack}{Cell}\) & \(\frac{Blade}{Rack}\) & \(\frac{Nodes}{Blade}\) & Rack & CPU nodes & GPU nodes \\ \hline Booster & 19 & 6 & 30 & 1 & 114 & - & 3420 \\ \hline DC & 2 & 8 & 26 & 3 & 16 & 1248 & - \\ \hline Hybrid & 1 & 2 & 18 & 1 & 2 & - & 36 \\ \cline{2-8} & & 6 & 16 & 3 & 6 & 288 & - \\ \hline \hline Total & 22 & - & - & - & 138 & 1536 & 3456 \\ \hline \end{tabular} \end{table} Table 1: Compute partitions racks. The FP32 data path has been kept for I/O operations and the TF32 is the default choice for computation, so the speedup benefit is transparent to the user (no code change; see the short illustrative sketch below). For maximum speed in training, the supported tensor math includes the standard FP16 datatype (inherited from the previous generation TC) and the new AI-dedicated BF16 datatype (8-bit range, 7-bit precision), which allows a factor x2 in throughput with respect to TF32 and a factor x20 compared to non-tensor operations. Integer arithmetic is supported as well: for example, 8-bit operations have a peak performance of 624 teraOPS, and INT4 and binary formats reach even higher rates. Table 2 displays the main specifications of the two generations (_Ampere_ and _Volta_) and presents the characteristics of the _Da Vinci_ A100 variant installed in LEONARDO. The latter is a _custom_ model consisting of a 97% implementation of the full A100 GPU design (124 vs 128 Streaming Multiprocessors, SM), while the _standard_ A100 uses 84% of it (108 SM). In addition, the A100 offers an instruction set called _Sparse Tensor Core_ that doubles the TC performance reported in Table 2 when working with AI applications. With this approach, which is referred to as _Structural Sparsity_ by the vendor (NVIDIA, 2020b), the pruning of the weights matrix is structurally constrained by zeroing two elements out of four in a row. At inference time, an efficient use of hardware resources allows gaining a clean factor of two in throughput. #### 2.1.2 GPU blade The Booster's blade is a single-node blade, based on the latest high-end GPU server board by the ATOS company (BullSequana X2135). The blade is called _Da Vinci_ and a picture of it is shown in Figure 2. The entire blade is liquid-cooled, so there are no fans onboard. 
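To make the numerical formats of Section 2.1.1 concrete, the following minimal PyTorch sketch shows how the TF32 default and the BF16 tensor-core path are typically exercised on an A100. It is an illustration written for this description, not an excerpt from LEONARDO's installed software stack, and the tensor sizes are arbitrary.

```python
import torch

# Illustrative only: exercising the Ampere tensor-core datatypes described in 2.1.1.
# TF32 is the default path for FP32 matrix math on A100; these flags make the choice explicit.
torch.backends.cuda.matmul.allow_tf32 = True   # FP32 matmuls run on TF32 tensor cores
torch.backends.cudnn.allow_tf32 = True         # cuDNN convolutions use TF32 as well

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4096, 4096, device=device)
w = torch.randn(4096, 4096, device=device)

y_tf32 = x @ w  # no change to the model code: the TF32 speedup is transparent to the user

# BF16 keeps the FP32 exponent range with a shorter mantissa and doubles throughput vs. TF32.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y_bf16 = x @ w
```

Under this scheme, existing FP32 codes benefit from the tensor cores without modification, while explicitly opting into BF16 or FP16 trades precision for the higher throughputs listed in Table 2.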
The host processor is a single socket Intel Xeon Platinum 8358 CPU (Intel, 2019) with 32 cores and 48 MB cache (codenamed _Ice Lake_). The Ice Lake CPU is AVX-512 capable. Each core contains two AVX-512 fused multiply-add units, which results in 32 double-precision operations per clock cycle per core (1024 across the 32 cores) and a peak performance of about 2.6 teraFLOPS per CPU at the nominal frequency of 2.6 GHz. The memory subsystem is DDR4-3200 (3200 MT/s). There are eight memory channels, each capable of 25 GB/s, for a total maximum bandwidth of 200 GB/s for CPU-RAM communication. The corresponding eight DIMM slots are equipped with 64 GB capacity modules, so the total RAM available on the node is 512 GB. Four _Da Vinci_ A100 GPUs (see 2.1.1) in SXM4 form factor are integrated in the blade. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & Ampere A100 (custom) & Ampere A100 & Volta V100 \\ \hline FP64 [teraFLOPS] & 11.2 & 9.7 & 7.8 \\ FP32 [teraFLOPS] & 22.4 & 19.5 & 15.7 \\ \hline FP64 TC [teraFLOPS] & 22.4 & 19.5 & n.a. \\ TF32 TC [teraFLOPS] & 179 & 156 & n.a. \\ FP16 TC [teraFLOPS] & 358 & 312 & n.a. \\ INT8 TC [teraOPS] & 716 & 624 & n.a. \\ INT4 TC [teraOPS] & 1432 & 1248 & n.a. \\ \hline SM [\#] & 124 & 108 & 80 \\ CUDA FP64 core [\#] & 3968 & 3456 & 2560 \\ CUDA FP32 core [\#] & 7936 & 6912 & 5120 \\ CUDA Tensor core [\#] & 496 & 432 & 640 \\ \hline Max Clock [MHz] & 1395 & 1410 & 1530 \\ L2 Cache [MB] & 32 & 40 & 6 \\ Memory [GB] & 64 & 40 & 16 \\ Memory BW [GB/s] & 1640 & 1555 & 900 \\ TDP [W] & 440 & 400 & 300 \\ \hline \end{tabular} \end{table} Table 2: Comparison of GPU chip specifications and peak performance. Figure 2: GPU blade top view. The local memory subsystem of the GPU is placed in the same physical package as the processing unit and uses the second generation High Bandwidth Memory express interface technology (HBM2e). Each GPU has 64 GB of addressable memory that is organized into four 16 GB HBM2e stacks. Each stack is controlled by two 512-bit memory controllers capable of 3200 MT/s. Overall, more than a terabyte per second can be delivered by each GPU, namely 1638 GB/s. In total, the local storage for GPU computation is 256 GB in capacity and can be accessed with an impressive 6.5 TB/s aggregated bandwidth. The intra-node communication pattern is depicted in Figure 3. The CPU utilizes four bundles of PCIe lanes to communicate independently with each individual GPU. A bundle consists of 16 PCIe Gen 4.0 lanes for a total of 32 GB/s bandwidth per CPU-GPU communication. The total bandwidth available along the 64 lanes of the CPU is 128 GB/s. Multi-GPU systems are supported by a proprietary high-speed interconnect (NVIDIA NVLink 3.0) that provides 200 GB/s bidirectional bandwidth per GPU pair, 600 GB/s in total. The _Da Vinci_ A100 blade is equipped with two dual-port Mellanox HDR100 ConnectX-6 InfiniBand network interface cards (NIC) for inter-node communication. They provide an aggregated 400 Gbps bandwidth as well as CPU offloading features that are described in the next sections. ### Network system The internal network of a cluster connects the compute nodes together. LEONARDO's network follows a scalable hierarchical cell-based architecture, with a cell being a collection of server nodes. At the top level, there are 23 cells fully connected in a _dragonfly_ topology (Kim et al., 2008) as shown in Figure 4. Locally, intra-cell routers are organized in a bipartite graph in which a first tier is directly connected to servers (_leaf routers_) and a second tier (_spine routers_) is equally provisioned with down-links. 
Figure 3: Booster blade intra-node communication pattern (logic view). Such a scheme, called _dragonfly+_, allows twice the group size and a factor of four in scalability compared to the standard _dragonfly_ topology (Shpiner et al., 2017); it is denser and requires fewer switches. Nodes in the cluster are tightly coupled using 200 Gbps Mellanox InfiniBand High Data Rate (HDR) technology components (NVIDIA, 2020c). The switch model is the QM8700, offering a latency of 90 nanoseconds port-to-port and up to 390 million message deliveries per second per port (NVIDIA, 2020d). It can be used in two configurations, 40 ports at 200 Gbps or 80 ports at 100 Gbps bandwidth, with the latter widely adopted at leaf level and referred to as HDR100. The total number of HDR switches is 823. Spine and leaf switches have different arrangements, depending on the technology of the underlying node: * the number of spine switches is 18 per cell regardless of the cell type. They are configured in 200G 40-port mode with 22 up-links and 18 down-links. This corresponds to a pruning factor of 0.82 that is used to compensate for a 1.11 near non-blocking factor of the leaf layer in the Booster cells. * The leaf switch organization depends on the cell type and is always HDR100, except for the _Fast Tier_, where each link uses the full 200G HDR bandwidth per port (see Section 2.3). Each node in the Booster partition is connected to two _leaf_ switches. In contrast, in the Data-Centric partition the nodes are directly connected to a single _leaf_ switch using a HDR100 link, i.e. 16 HDR ports on a switch serve 32 CPU nodes. For the hybrid cell, the two arrangements just described are combined, namely 6 out of 8 racks of the cell are DC style and the remaining 2 are Booster style. The number of leaf switches per cell is 18 for the Booster and Hybrid types and 16 for the DC type. The I/O cell uses 13 leaf switches. Figure 4: Internal network _dragonfly_ topology. Colors indicate the technology of the underlying nodes. Green is used for Booster cells, blue for DC cells, pink for the I/O. See text for details. At node level, the network adapter is the ConnectX-6 card (CX6), which can sustain up to 200 million messages per second with a latency of 600 ns (NVIDIA, 2020a). The CX6 supports PCIe Gen4 communication on 32 lanes, including pass-through functionality. Applications that do not require the entire bandwidth can benefit from the integrated PCIe switch that allows serving up to 8 virtual machines on the host2. In addition, the CX6 comes with acceleration engines that provide CPU offloading for important HPC and AI tasks like: Remote Direct Memory Access (RDMA) for direct data movement from the storage infrastructure to local GPU memory, transport operations like adaptive routing and congestion management, MPI collectives and tag matching, and encryption based on a personal user key. Considering the latencies of the switch and of the NIC mentioned above and the following lengths of optical fiber - 1 meter from NIC to leaf, 5 meters from leaf to spine and 20 meters between the spines - the maximum latency between two nodes located at opposite sides of the cluster is 3 microseconds. In general, inter-node communication latency is dominated by the sending and receiving NICs, which introduce a 1.2 microsecond delay, independently of the destination. Finally, four gateway routers are used to interface the cluster with external networks. 
Each of these units provides eight 200 Gbps Ethernet-InfiniBand protocol translators for a total bandwidth per unit of 1.6 Tbps and 6.4 Tbps aggregated (NVIDIA, 2021). In addition, an Ethernet administrative network is used for management, with dedicated switches at the rack level and single-port adapters on each node. ### Storage system A 12-rack system provides storage functionality to the whole computer cluster. The system is based on DDN's appliances and consists of two tiers to accommodate all the requirements of modern, diverse HPC and AI workloads. * _Fast Tier_ provides 5.7 PB of raw capacity for IOPS-eager applications and offers burst-buffer capability for _hot_ data. It is composed of 31 ES400NVX2 appliances configured with \(\simeq\) 150 TB of solid-state drives (SSD) using Non-Volatile Memory Express (NVMe) technologies. * _Capacity Tier_ is a 137.6 PB raw capacity storage partition using SAS Hard Disk Drive (HDD) components. It consists of 31 modules composed of a controller head (ES7990X) and two SAS expansion enclosures (SS9012) housing a total of 246 18 TB HDDs, providing 4.4 PB of storage capacity per module. Metadata is handled by four additional flash-based ES400NVX units. Overall, the storage system consists of 66 DDN appliances together with the related software stack that is essential for high-speed data movement in the different computing scenarios that LEONARDO can serve, in addition to standard fault tolerance and security functions. Table 3 shows the mapping of the three global namespaces to the hardware resources just described, together with the related size and bandwidth characteristics. The filesystem is based on Lustre (Lustre, 2023) and supports encryption and multi-tenancy. The first is a security and isolation feature that allows only authenticated users to access selected portions of the storage namespace. This is based on CryptoFS (CryptoFS, 2023). The second feature allows multiple clients to access files. Of prominent importance for AI workloads, _GPUDirect_ technology is also supported by the storage system: it can directly use the GPU memory for I/O, avoiding the use of system memory (RAM) as a bounce buffer. With objects striped across multiple disks, Lustre also provides parallel access to large files at near-wire speed. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Work area & ES7990X & ES400NVX2 & ES400NV & NetSize & Bandwidth \\ & \# & \# & \# & PiB & GB/s \\ \hline /home & - & 4 & - & 0.5 & 240 \\ /archive & 18 & - & 2 & 53.9 & 360 \\ /scratch & 13 & 27 & 2 & 42.4 & 1300 \\ \hline \end{tabular} \end{table} Table 3: Filesystem organization and specifications ### Frontend and service partitions The Frontend partition provides the user with access to the system. Typical operations on frontend nodes encompass software development, code compilation, data management, interfacing to other systems on site, job submission, data pre-processing, data post-processing and results visualization. In LEONARDO, the frontend nodes see both compute partitions as well as the global filesystems. The number of frontend servers is 32, each equipped with dual-socket Xeon Scalable processors (32 cores, Intel 3rd Gen, the same model as in the Booster's compute node) and 16 DDR4-3200 channels per socket. Sixteen nodes are dedicated to login and are configured with 6 TB of local disk space in a RAID-1 configuration. 
The other 16 nodes are specialized for post-processing visualization and are equipped with NVMe disks (6.4 TB total local capacity) and two NVIDIA Quadro RTX 8000 PCIe GPUs with 48 GB of memory each. In order to deploy, manage and monitor the LEONARDO cluster, which accounts for about 5000 compute nodes, 11 tailored servers are used, called Operational Management Nodes (OMN). OMNs feature a single AMD EPYC _Rome_ CPU with 64 cores and three dual-port NICs supporting 10 GbE, 50 GbE and HDR100. The complete list of hardware components is reported in Appendix B. ### Software ecosystem LEONARDO runs the Red Hat Enterprise Linux 8 operating system on all nodes and uses SLURM as its workload manager (SLURM, 2023). Two architecture-specific suites are installed, namely Intel oneAPI and the NVIDIA HPC SDK. The latter includes a complete software stack to build AI applications with highly optimized libraries such as cuDNN for deep neural networks and NCCL for multi-GPU communication. The GNU compiler collection is also installed. Software management is done using Spack (Gamblin et al., 2015) and Environment Modules (Environment Modules, 2023). A large set of HPC programming tools is available for developers, based both on closed and open source products. The software for scientific production is organized on a category basis, serving each research community with dedicated pre-installed tools, e.g. chemistry-physics, deep learning, life sciences and meteorology. Further details and updates can be found in the user guide available on the website of CINECA (LEONARDO UserGuide, 2023). Baseline tools are listed below. * Parallel profilers and debuggers * GNU debugger (GDB) * Intel debugger (IDB) and VTune profiler * NVIDIA Nsight profiler (System and Compute) and CUDA-GDB * Communication libraries * Intel MPI * Numerical application libraries * Intel Math Kernel Library * GNU scientific library * Math and Python libraries * Containerization is supported through several different tools: * Sylabs Singularity Enterprise edition * NVIDIA Container Framework and Pyxis Slurm plugin * ParTec Parastation also supports the execution of containerized applications, improving the flexibility of a pure Singularity approach; * Monitoring is operated via the Atos SMC xScale suite, based on Prometheus, and using Grafana as frontend. Detection and tracking of issues is performed by Parastation HealthChecker. ### Power consumption, cooling and management LEONARDO is hosted by CINECA in its new data center at the Big Data Technopole (Tecnopolo, 2023) in Bologna, Italy. The room floor has been designed with a two-step plan to support the current pre-exascale and a future exascale machine. Presently, the data center features 10 MW of IT load with 1240 \(m^{2}\) of computing floor space and 900 \(m^{2}\) of ancillary space. The second step considers an increase of the power supply up to a 20 MW IT load and 2600 \(m^{2}\) of additional computing floor space. All major components of LEONARDO are cooled down using warm-water cooling technology, including the power supplies. The inlet water temperature is 37 degrees Celsius and the total Direct Liquid Cooling capacity is 8 MW. The system is quite efficient, with a Power Usage Effectiveness (PUE) of 1.1. This means that the overhead needed to cool down LEONARDO is 10% of the power used to feed it. Energy consumption of the cluster is controlled by means of various tools including two ATOS proprietary software products (Bull Energy Optimizer and Bull Dynamic Power Optimizer). 
One allows logging time profiles of energy and temperature via the IPMI and SNMP protocols and capping the clock frequency of the CPUs depending on the total power consumption. The other is used to find the best workpoint in terms of energy consumption and performance of a running application, i.e., reducing the power absorption by adjusting the clock frequency with limited performance degradation. Concerning the GPUs, a vendor-specific management tool (NVIDIA Data Center GPU Manager) is used to limit the device clock when a configurable energy threshold is surpassed. ## 3 Access to LEONARDO LEONARDO is a EuroHPC JU system that is hosted and operated by the CINECA supercomputing center. Researchers from academia, research institutes, public authorities, and industry can apply for access to computing time. The access is mainly based on _Calls for Proposal_ from EuroHPC (50%) and CINECA (50%) via its ISCRA program (Italian SuperComputing Resource Allocation). Submitted proposals are peer-reviewed for scientific merit and undergo a technical assessment for suitability to perform on LEONARDO architectures, in order to ensure the highest scientific reach of the selected project. Detailed information is available on the (EuroHPC JU, 2023) and (ISCRA, 2023) webpages. ### Acknowledgement The acquisition and operation of the LEONARDO supercomputer are funded jointly by the Italian Ministry of University and Research and by the EuroHPC Joint Undertaking (EuroHPC JU) under grant agreement _N. encet.ddg1.c.2(2019)8804531 - LEONARDO supercomputer_ through the European Union's Connecting Europe Facility and the Horizon 2020 research and innovation programme. The EuroHPC JU is a legal and funding entity created in 2018 to enable the European Union and EuroHPC participating countries to coordinate their efforts and pool their resources with the objective of making Europe a world leader in supercomputing. The authors thank ATOS for the solution provided and all the key technology partners, NVIDIA, Intel and DDN, for their support during design, construction, delivery and testing.
2306.01008
Credit Card Fraud Detection Using Asexual Reproduction Optimization
As the number of credit card users has increased, detecting fraud in this domain has become a vital issue. Previous literature has applied various supervised and unsupervised machine learning methods to find an effective fraud detection system. However, some of these methods require an enormous amount of time to achieve reasonable accuracy. In this paper, an Asexual Reproduction Optimization (ARO) approach was employed, which is a supervised method to detect credit card fraud. ARO refers to a kind of reproduction in which one parent produces some offspring. By applying this method and sampling just from the majority class, the effectiveness of the classification is increased. A comparison to Artificial Immune Systems (AIS), which is one of the best methods implemented on current datasets, has shown that the proposed method is able to remarkably reduce the required training time and at the same time increase the recall, which is important in fraud detection problems. The obtained results show that ARO achieves the best cost in a short time, and consequently, it can be considered a real-time fraud detection system.
Anahita Farhang Ghahfarokhi, Taha Mansouri, Mohammad Reza Sadeghi Moghadam, Nila Bahrambeik, Ramin Yavari, Mohammadreza Fani Sani
2023-05-31T19:32:38Z
http://arxiv.org/abs/2306.01008v1
# Credit Card Fraud Detection Using Asexual Reproduction Optimization ###### Abstract As the number of credit card users has increased, detecting fraud in this domain has become a vital issue. Previous literature has applied various supervised and unsupervised machine learning methods to find an effective fraud detection system. However, some of these methods require an enormous amount of time to achieve reasonable accuracy. In this paper, an Asexual Reproduction Optimization (ARO) approach was employed, which is a supervised method to detect credit card fraud. ARO refers to a kind of reproduction in which one parent produces some offspring. By applying this method and sampling just from the majority class, the classification's effectiveness is increased. A comparison to Artificial Immune Systems (AIS), which is one of the best methods implemented on current datasets, has shown that the proposed method is able to remarkably reduce the required training time and at the same time increase the recall, which is important in fraud detection problems. The obtained results show that ARO achieves the best cost in a short time, and consequently, it can be considered a real-time fraud detection system. Keywords: Machine Learning, Asexual Reproduction Optimization, Credit Card Fraud Detection, Fraud Detection, Artificial Immune Systems. ## 1 Introduction Credit card fraud inflicts plenty of costs on banks and card issuers and threatens their reputation [1]. A huge amount of money disappears annually from legitimate accounts through fraudulent transactions [2]. In fact, E-business has become one of the most important global markets, which demands strong fraud detection systems [3, 4]. In 2017, the Online Fraud Report of CyberSource distinguished the average annual fraud loss among different order channels1. 0.9% of annual e-commerce revenues is lost due to payment fraud through the Web store channel in North America. This value is 0.8% for mobile channels and 0.3% for the phone/mail order channel. Different definitions of fraud have been presented by different organizations. Based on The World Bank Group's definition of fraud, the fraudulent practice covers solicitation, offering or taking bribes, or the manipulation of loans in the form of misrepresentation [5]. According to the division of the Association of Certified Fraud Examiners, there are two types of fraud, i.e., internal fraud and external fraud. Internal fraud occurs when an employee deliberately misuses an organization's properties [6]. External fraud covers a broader variety of schemes than internal fraud. Dishonest vendors who take bribes are a typical example. Untruthful customers might alter account information to mislead payments. Besides, third parties may misuse intellectual property [7]. Credit card fraud techniques have changed over time, from physically stealing the cards to online fraud [4]. Credit card fraud falls into two categories, i.e., application fraud and behavioral fraud. An application defrauder is a person who gets a new credit card from issuing companies by providing false information. A behavioral defrauder is a person who has fraudulently obtained the information of a legitimate card and makes purchases when the cardholder is not present [8]. As the number of frauds increases, techniques for fighting fraud become more significant [9]. Protection techniques against fraud include prevention and detection systems. 
The first layer to protect the system against fraud is prevention. Fraud prevention stops the fraud from occurring at the initial level. Fraud detection is the next protection step. It identifies fraudulent activities once they penetrate the system [10]. People use credit card-based online payments more and more these days, forcing the banks to deploy fraud detection systems [11]. Expert-driven, data-driven, and the combination of both are the three kinds of fraud detection systems. Expert-driven systems are based on fraud scenarios. If, from the viewpoint of the fraud detection system (FDS), the data stream matches a scenario, a fraud has happened. Data-driven methods learn the fraud patterns and find them in data streams [12]. Credit card fraud happens when a transaction on someone's credit card is done by another person [13]. If fraud becomes a prevalent issue in a competitive environment without any preventive systems, it will seriously threaten businesses and organizations [6]. On the other hand, the number of credit card transactions is increasing rapidly, which results in the growth of fraudulent activities [14]. It is quite expensive to analyze whether a transaction was made by the legitimate client or not [15]. A fraud detection system aims to stop fraud as soon as possible. Whether the fraud detection system is manual or automatic, it has to be effective. The system should identify a high percentage of fraudulent transactions while keeping the false alarm rate low. Otherwise, the users will become apathetic to alarms [16]. To reduce the cost of detection, many machine learning techniques have been implemented. Supervised methods are more common than unsupervised techniques [8]. Nowadays, different data mining techniques have been developed [17; 18; 19; 20; 21; 22], and with the development of data mining methods, efficient ways have been found to detect fraud [23]. However, many of these methods need a time-consuming training phase. This limitation decreases the applicability of these methods. To address this problem, we propose to use Asexual Reproduction Optimization (ARO). In this paper, we implemented and applied this method on a publicly available dataset. The experimental results show that using the proposed method enables us to achieve reasonable accuracy faster, compared to one of the state-of-the-art fraud detection methods, i.e., Artificial Immune Systems (AIS). The remainder of the paper is organized as follows. Section 2 provides a literature review on credit card fraud detection methods. Then, Section 3 describes the ARO and AIS models. Afterwards, experimental results are presented in Section 4 and analyzed in Section 5. Finally, Section 6 concludes the paper and provides some new directions to continue this research. ## 2 Credit Card Fraud Detection Methods Fraud detection merges anomaly-based detection and misuse-based detection by applying data mining techniques. Anomaly-based detection consists of supervised, unsupervised, and semi-supervised algorithms [123]. Supervised algorithms require all existing transactions, which are labeled as fraudulent and non-fraudulent transactions. These algorithms assign a score to a new transaction, which determines the transaction's label [6, 8]. Unsupervised methods work with unlabeled datasets and try to find unusual transactions. These algorithms model a baseline distribution of normal behavior; transactions far from it are considered unusual [8, 124]. 
Semi-supervised methods contain both labeled and unlabeled instances. Semi-supervised learning aims to design algorithms that can use these combined instances [124]. In general, the concept of anomaly/outlier is problem-dependent and it is challenging to capture all aspects of behavior in one single metric [125]. In Table 1, we present some of the data-mining-based approaches to credit card fraud detection carried out in the literature [123, 126, 127, 128]. In [83], the authors presented a neural network-based system with a user-friendly interface for fraud detection, implemented on synthetic datasets [83]. In credit card fraud detection, datasets have skewed distributions. Chen et al. employed a Binary Support Vector System (BSVS), which could handle this problem better than oversampling techniques. A genetic algorithm is used for support vector selection. Based on these vectors, they proposed BSVS [60]. Gadi et al. employed BN, NB, AIS, and DT techniques on the Brazilian bank dataset that we used in this paper. They showed that generally applying cost-sensitive and robust optimization leads to better results [34]. Table 1: Data mining approaches used for credit card fraud detection, with the corresponding references. 
If a transaction is recognized as a fraudulent one, it will belong to one of these groups [102]. Using negative selection in addition to clonal selection, Halvaiee and Akbari improved AIS. They suggested a new method AIS-based Fraud Detection Model (AFDM) for calculating the samples' fitness. Furthermore, in their proposed model, they used cloud computing for training, which reduced the processing time [1]. Zareapoor and Shamsolmoali examined bagged ensemble decision tree on a real dataset and compared it with SVM, KNN, and NB. It achieved the highest detection rate. The time was reduced significantly, and the ensemble technique could solve the imbalanced dataset problem [31]. Carneiro et al. aim the development and implementation of a fraud detection system at an e-tail merchant. They showed that choosing the right variables in the dataset is a key factor. Random forests, logistic regression, and support vector machines were tested. A random forest can be an appropriate practical model [23]. Fiore et al. used Generative Adversarial Networks (GAN) to detect credit card fraud. GAN is a multiple-layer neural network consisted of a generator and a discriminator. They employed GAN for solving imbalanced dataset problem. GAN generates an augmented dataset that has more fraudulent transactions than the initial dataset [11]. Behera and Panigrahi proposed a two-stage system. The first stage tries to match the patterns. It consists of a fuzzy module which computes a score. Given this score, one can envisage three categories: legitimate, fraudulent, and suspicious. The next stage concludes a neural network, which determines whether the suspicious one belongs to a fraudulent or legitimate group [84]. De Sa et al. implemented a customized Bayesian Network Classifier (BNC) on the dataset of a Brazilian payment service. They used a Hyper-Heuristic Evolutionary Algorithm for generating BNC. The proposed method increased economic efficiency remarkably [32]. Gomez et al. used an end-to-end neural network for credit card fraud detection. They focused on solving imbalanced dataset and cost evaluation problems and obtained valuable results [90]. Lucas et al. modelled a sequence of credit card transactions from three different perspectives. Each of these sequences with HMM and the likelihood associated with HMM is used as additional features in the Random Forest classifier for fraud detection [75]. Gianini et al. used a game theory-based approach for detecting credit card fraud by managing a pool of rules [129]. Monirzadeh et al. increased the efficiency of the neural network by using the genetic algorithm. Their research showed that the most effective criterion is the information related to the transaction. Age, gender, and such factors do not affect the detection [78]. In any fraud detection system, the chief problem is always to increase the accuracy of approving a legal transaction, whether in the shortest possible time or at the lowest cost for financial institutions [15]. Therefore, the principal purpose of all the models presented for this issue is to reduce the detection time, increase the accuracy, reduce the costs, and present a model that can improve these factors with better performance. According to the description, the algorithms' performance has been compared through the three aspects of fraud detection speed, accuracy, and cost presented in Table 2[14]. As shown in Table 2, most of the algorithms have some disadvantages in the mentioned indicators, and among them, AIS performs the best. 
This confirms the results presented in [34], where different techniques are compared with each other and AIS is the best technique based on their costs [15]. For this reason, it is chosen for comparison with the ARO algorithm. We employed ARO, which is a supervised method for credit card fraud detection. ARO is an asexual reproduction optimization algorithm. Like Particle Swarm Optimization (PSO), Genetic Algorithms (GA), and Ant Colony Optimization (ACO), ARO is also an Evolutionary Single-Objective Optimization technique [130]. ARO has some advantages that make it completely different from other algorithms. First, it is an individual-based technique which reaches the global optimum astonishingly faster than other algorithms. Thus, unlike population-based algorithms that require a large amount of computational resources to converge, ARO consumes much fewer resources and converges faster. The second advantage concerns mathematical convergence: it has good exploration and exploitation rates. Third, ARO does not require parameter settings, so it is unlikely to have trouble in setting parameters, which is a common problem of Genetic Algorithms (GA), Simulated Annealing (SA), Tabu Search (TS), and Particle Swarm Optimization (PSO). Besides, ARO does not use any selection mechanism such as a roulette wheel. Inappropriate adoption of selection mechanisms may lead to problems such as premature convergence due to excessive selection pressure. Fourth, ARO is a model-free algorithm that can be applied to various types of optimization [131, 132]. For all of the above reasons, ARO can be selected for comparison with AIS in fraud detection problems. ARO has not been used in fraud detection up to this point. In this paper, a comparison is made between ARO and AIS. We run ARO on the same dataset on which AIS has been implemented [1, 34]. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline Algorithm & NN & BN & SVM & KNN & DT & fuzzy & AIS & GA & HMM \\ \hline Fraud detection speed & Fast & Very fast & Low & Good & Fast & Very low & Very fast & Good & Fast \\ \hline Accuracy & Medium & High & Medium & Medium & Medium & Very high & Good & Medium & Low \\ \hline Cost & Expensive & Expensive & Expensive & Expensive & High expensive & Inexpensive & Inexpensive & High expensive \\ \hline \end{tabular} \end{table} Table 2: Comparison of algorithms. ## 3 Using ARO for Fraud Detection In this section, we explain ARO in more detail and how we implement it to detect fraud. Moreover, the AIS algorithm that has the highest performance is briefly explained [1]. As the ARO method is a supervised method, we need to separate the data into train and validation parts. Therefore, like any other supervised method, we use the train part of the data for the training phase and the validation part for the testing phase. #### 3.1 ARO algorithm ARO is taken from asexual reproduction. Asexual reproduction refers to a kind of reproduction in which one parent produces offspring identical to itself [133]. In populations like bacteria, asexual reproduction is prevalent [134]. There are several kinds of asexual reproduction like budding [135], asexual spore production [136], and binary fission [137]. ARO is inspired by the budding method. In the budding method, the parent produces a smaller copy of itself called a bud. The bud separates itself from the parent to become an independent one [130]. Here, we explain how we use ARO for detecting fraud. 
According to the label of transactions in the training data, two separate matrices were created for fraud and legal transactions. For each feature in the legal matrix, the maximum and the minimum values are determined and placed in the maximum and the minimum legal matrices, and a parent is created randomly between the values of the maximum and the minimum matrices. Note that the value of each bit in this parent is a random value between its corresponding bit in the maximum and the minimum legal matrices. The fitness of this parent is calculated using the fitness function given in Equation 4 and is named the "parent fitness". The distance of a record to the normal transactions is defined as \[distance_{record-normal-transactions}=\frac{\sum_{j=1}^{N}\sum_{i=1}^{k}\frac{|r_{i}-nt_{ji}|}{max_{i}-min_{i}}}{kN} \tag{1}\] where \(k\) is the number of features in our dataset and \(N\) is the number of normal transactions. The cut point in a dataset is the best fitness achieved in that dataset. Afterwards, we repeat the following process as long as the parent fitness is smaller than the cut point of the dataset. * Select the starting bit (S) as a random number within the range of the number of features. Select the end (E) between the starting bit and the last feature. Calculate the probability of mutation through Equation 3: \[P=\frac{1}{1+\ln(E-S+1)}\] (3) * Set the bud equal to the parent. * For the bits between the starting and ending bit selected randomly as above, if the probability P calculated in Equation 3 is greater than or equal to an arbitrary random number between zero and one in MATLAB, the value of the bit will be mutated. In this way, the bud is mutated. * The fitness of the mutated bud is calculated using the fitness function in Equation 4 and named the "bud fitness". \[distance_{record-final}=distance_{record-fraud-transactions}-distance_{record-normal-transactions}\] (4) * If the fitness of the bud is higher than that of the parent, the bud replaces the parent and the bud fitness replaces the parent fitness. The bud is added to the detector matrix, and one unit is added to the count of the rows of the detector matrix. * In each fitness calculation, separate distances are computed for the fraudulent matrix and the legal matrix: the distance of the bud to the fraudulent matrix is given by Equation 5 and the distance to the legal matrix by Equation 1, and the corresponding counters are incremented. The loop terminates when the parent fitness reaches a value greater than or equal to the cut point. \[distance_{record-fraud-transactions}=\frac{\sum_{j=1}^{F}\sum_{i=1}^{k}\frac{|r_{i}-ft_{ji}|}{max_{i}-min_{i}}}{kF}\] (5) The schematic view of the ARO algorithm is presented in Figure 1. In the ARO algorithm, an individual is represented by a vector of variables \(X\)=\((x_{1},x_{2},\ldots,x_{n})\), \(X\in\mathcal{R}^{n}\). Each variable is considered as a chromosome. A binary string represents a chromosome consisting of genes. The length of the string is \(L\)=\(l_{1}+l_{2}+1\). It is supposed that every generated answer exists in the environment, and because of limited resources, only the best solution can remain alive. The algorithm starts with a random individual in the solution space. This parent reproduces an offspring named the bud. Just the parent or the offspring can survive. 
In this competition, the one with the better fitness remains alive. If the offspring has suitable performance, it will be the next parent, and the current parent becomes obsolete. Otherwise, the offspring perishes, and the present parent survives. The algorithm recurs until the stop condition occurs. In the reproduction stage, a substring with \(\lambda\) bits is picked out of all chromosomes, which is named the larva. \(\lambda\) is a random number between 1 and L. In the exploration phase, the substring is mutated: in each gene of the substring, 1 is swapped with 0 and 0 with 1. In the exploitation phase, the parent and larva merge as shown in Figure 3 (process of bud reproduction). If \(P\), which is calculated from \(P=\frac{1}{1+\ln(\lambda)}\), is higher than \(0.5\), the bud gene is picked out from the larva, otherwise the bud gene will be picked out from the parent chromosome. Equation 6 relates exploitation and exploration. If \(\lambda\) is a big number, less exploitation is needed and vice versa. In fact, exploration and exploitation are inversely related. \[P=\frac{1}{1+\ln(\lambda)} \tag{6}\] The fitness of both bud and parent is calculated to choose the best one for the algorithm's next run after reproduction [130; 131]. Note that we do this procedure for all records and all features. Each record has a fraud or normal label. There are the following hints to mention: 1. According to Figure 2, a chromosome has three parts. Here, just the integer part is considered because we do not have the sign or decimal part. 2. Genes are not binary and they contain integer numbers. 3. Only the normal (or legal) records are sampled because the dataset is skewed toward normal transactions. The number of normal transactions is significantly larger than the number of fraudulent transactions. Thus, the population of normal records is more suitable for sampling than the fraudulent one. 4. For generating the first parent, one should determine the range for each bit (gene). Then, for each gene in the first parent chromosome, a random number between the maximum and the minimum of that gene is chosen. 5. The fitness function will be used, which is described in the next section. 6. First, a larva should be generated when reproducing a bud. For generating a larva, a random length should be chosen. Each gene in this length assumes a random number between the maximum and the minimum of that gene. This length would be a larva. The next step for reproducing a bud is choosing the gene between larva and parent, like in Figure 3 (the process of bud reproduction). In this step, for choosing each gene between larva and parent, a random number is generated. If the random number is less than \(P\), which is obtained from Equation 6, the gene is selected from the larva; otherwise the gene is selected from the parent. Figure 1: Flowchart of ARO. Figure 2: A model for a chromosome in ARO. Parameter setting causes plenty of problems in methods such as PSO and GA. ARO does not need parameter setting. ARO is an individual-based technique which saves time, unlike population-based techniques that are far more time-consuming. ARO can be used in different kinds of optimization problems, unlike many algorithms which can only be used for one sort of optimization problem. Adjusting to a diverse genetic environment is one of the problems faced by ARO. However, it can be solved by special reproduction operators [130]. 
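As a compact illustration of the budding procedure and of the fitness function defined in Equations 1, 3, 4 and 5, the following Python sketch re-implements the loop described above. It reflects our own reading of the method and is not the authors' MATLAB code: array and function names are illustrative, the mutation of a gene is realized by redrawing it uniformly between the legal minimum and maximum of that feature (as in the larva generation of hint 6), and the loop stops once the parent's fitness reaches the dataset's cut point.

```python
import math
import random

import numpy as np

# Illustrative sketch (not the authors' MATLAB implementation) of the ARO budding loop.
# `legal` and `fraud` are 2-D arrays: one row per transaction, one column per feature.
# We assume max_i > min_i for every feature so the normalization is well defined.

def mean_distance(record, transactions, feat_min, feat_max):
    """Mean normalized L1 distance between a record and a set of transactions (cf. Eqs. 1 and 5)."""
    n, k = transactions.shape
    diffs = np.abs(record - transactions) / (feat_max - feat_min)
    return diffs.sum() / (k * n)

def fitness(record, legal, fraud, feat_min, feat_max):
    """Fitness = distance to fraud minus distance to normal transactions (cf. Eq. 4);
    larger values mean the record looks more like a legitimate transaction."""
    return (mean_distance(record, fraud, feat_min, feat_max)
            - mean_distance(record, legal, feat_min, feat_max))

def aro_detectors(legal, fraud, cut_point, seed=0):
    """Grow a set of normal-like detector records until the parent reaches the cut point."""
    rng = random.Random(seed)
    feat_min, feat_max = legal.min(axis=0), legal.max(axis=0)
    k = legal.shape[1]
    # First parent: one random value per feature, drawn between the legal minimum and maximum.
    parent = feat_min + np.array([rng.random() for _ in range(k)]) * (feat_max - feat_min)
    parent_fit = fitness(parent, legal, fraud, feat_min, feat_max)
    detectors = [parent.copy()]
    while parent_fit < cut_point:
        s = rng.randrange(k)                      # starting bit S
        e = rng.randrange(s, k)                   # ending bit E, with S <= E
        p = 1.0 / (1.0 + math.log(e - s + 1))     # mutation probability (cf. Eq. 3)
        bud = parent.copy()
        for i in range(s, e + 1):                 # mutate genes in [S, E] with probability p
            if rng.random() <= p:
                bud[i] = feat_min[i] + rng.random() * (feat_max[i] - feat_min[i])
        bud_fit = fitness(bud, legal, fraud, feat_min, feat_max)
        if bud_fit > parent_fit:                  # the fitter of parent and bud survives
            parent, parent_fit = bud, bud_fit
            detectors.append(parent.copy())
    return detectors
```

Under this reading, every surviving bud is itself a normal-like detector, which matches the paper's choice of sampling only from the majority (legitimate) class.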
**Fitness function**

To evaluate the fitness of a specific record, the distance between the record and all fraudulent transactions is first calculated by Equation 5, and then the distance between the record and all normal transactions is calculated by Equation 1. The difference between these two values, as given in Equation 4, is the fitness. Since only the normal records are sampled, the higher the fitness value, the better, because it shows that the record is closer to the normal transactions than to the fraudulent ones and can therefore serve as a suitable normal sample. In Equations 5 and 1, each record is considered to have \(k\) fields; here, \(k\) is 17. The value of the \(i\)'th field of the record is \(r_{i}\), the value of the \(i\)'th field of the \(j\)'th normal transaction is \(nt_{ji}\), and the value of the \(i\)'th field of the \(j\)'th fraudulent transaction is \(ft_{ji}\). The maximum and the minimum of the \(i\)'th field over all records of the dataset are denoted by \(max_{i}\) and \(min_{i}\). The number of normal transactions in the considered dataset is \(N\), and the number of fraudulent transactions is \(F\).

Figure 3: Process of bud reproduction.

#### 4.2.2 AIS algorithm

AIS is inspired by the immune system of the human body. It creates detectors called lymphocytes for identifying non-self cells such as viruses. Negative selection and clonal selection are the two stages of AIS. Through negative selection, lymphocytes are created by a random combination of protein patterns. Lymphocytes should not detect self cells; thus, the immune system eliminates the lymphocytes that react to self cells. In fact, all randomly generated lymphocytes that react to self cells are eliminated immediately after creation, and the other lymphocytes survive. This procedure is named negative selection. After negative selection, a short life starts for the remaining lymphocytes, during which they encounter non-self cells. If a lymphocyte reacts to a non-self cell, it survives to protect the body against such cells. This procedure is named clonal selection. The lymphocyte that detects a non-self cell is cloned with mutation. The colony cells that are closer to the non-self cell are chosen to survive. These colony cells are considered memory cells and will react to non-self cells such as viruses [1]. Both non-self and self cells are represented as vectors. First, the training set is normalized and the parameters are initialized. Then, \(N_{pop}\) normal records are selected randomly as primary detectors (as in ARO, only the normal records are sampled). The affinity of these records is calculated using the distance function. The \(N_{c}\) records with the highest affinity are selected and a colony is expanded from them, meaning that records with higher affinity are replicated more. The colony is mutated, and the \(N_{m}\) best mutated detectors are chosen to replace the \(N_{m}\) worst memory cells. This algorithm continues until the stop condition occurs [1]. Here, the loop is repeated 150 times, and \(N_{pop}\), \(N_{c}\), and \(N_{m}\) are 25, 7, and 5, respectively, following Gadi et al. and Halvaiee [1, 15].
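Algorithm 1 below gives the pseudocode for this AIS loop. As a complement, the following Python/NumPy sketch implements the clonal-selection stage under assumptions of our own that are not specified in the text: the affinity is taken as the negative normalized mean distance to the normal transactions, the colony size grows linearly with rank, and the mutation is a small Gaussian perturbation.

```python
import numpy as np

def affinity(detector, normal, col_min, col_max):
    """Affinity of a detector, taken here as the negative normalized mean
    distance to the normal transactions (an assumption; larger is better)."""
    rng = np.where(col_max > col_min, col_max - col_min, 1.0)
    return -(np.abs(detector - normal) / rng).mean()

def ais_train(normal, n_pop=25, n_c=7, n_m=5, n_iter=150, seed=0):
    """Minimal clonal-selection loop mirroring Algorithm 1 (parameter values from the text)."""
    rng = np.random.default_rng(seed)
    col_min, col_max = normal.min(axis=0), normal.max(axis=0)
    memory = normal[rng.choice(len(normal), n_pop, replace=False)].copy()
    for _ in range(n_iter):
        first_pop = normal[rng.choice(len(normal), n_pop, replace=False)]
        fit = np.array([affinity(d, normal, col_min, col_max) for d in first_pop])
        best = first_pop[np.argsort(fit)[-n_c:]]                 # N_c best detectors
        # expand a colony: higher-affinity detectors are replicated more, then mutated
        clones = np.repeat(best, np.arange(1, n_c + 1), axis=0)
        mutated = clones + rng.normal(scale=0.05, size=clones.shape) * (col_max - col_min)
        mfit = np.array([affinity(d, normal, col_min, col_max) for d in mutated])
        best_mut = mutated[np.argsort(mfit)[-n_m:]]              # N_m best mutated detectors
        # replace the N_m worst memory cells
        memfit = np.array([affinity(d, normal, col_min, col_max) for d in memory])
        memory[np.argsort(memfit)[:n_m]] = best_mut
    return memory
```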
```
1: Determine \(N_{pop}\)  % the number of all detectors
2: Determine \(N_{c}\)  % the number of detectors that best match a non-self cell
3: Determine \(N_{m}\)  % the number of best mutated detectors
4: while stop conditions do not occur do
5:   Choose \(N_{pop}\) of population randomly, call it \(first-pop\)
6:   Choose \(N_{c}\) of best \(first-pop\) based on their fitness, call it \(best-first-pop\)
7:   Expand a colony from \(best-first-pop\), call it \(colony-pop\)
8:   Mutate \(colony-pop\), call it \(mutated-pop\)
9:   Choose \(N_{m}\) of best \(mutated-pop\) based on their fitness, call it \(best-mutated-pop\)
10:  Replace \(N_{m}\) of worst detectors in memory cell by \(best-mutated-pop\)
11: endwhile
```
**Algorithm 1** AIS

## 4 Experiments

In this section, we first describe the experimental dataset. We then explain the details of the experimental setting and present the results based on the metrics discussed above.

### Dataset

In our experiments, we used a Brazilian bank's dataset, in which 3.74% of all transactions are fraudulent. Nine splits are generated from all transactions in the dataset. Each split has two parts: the first part, which contains 70% of the transactions, is used for the training phase, and the second part, which contains 30% of the transactions, is used for the testing phase. The number of fraudulent and legitimate transactions in each split is shown in Table 4 [34]. We used MATLAB 2016 for the AIS and ARO implementations and therefore converted the datasets to CSV format. We trained the fraud detection system with the two methods, i.e., ARO and AIS, as explained in the previous sections.

### Experimental Settings

After training the model, we ran it on the validation data, in which transactions are labeled as fraud or normal. We then compared the predicted labels with the real labels by calculating four parameters:

* \(False\)\(positive\)\((FP)\): The number of normal transactions that are mistakenly predicted as fraudulent by our method.
* \(False\)\(negative\)\((FN)\): The number of fraudulent transactions that are mistakenly predicted as normal by our method.
* \(True\)\(positive\)\((TP)\): The number of fraudulent transactions that are correctly detected by our method.
* \(True\)\(negative\)\((TN)\): The number of normal transactions that are correctly detected by our method.

Using the above parameters, we can compute some common metrics to evaluate the performance. We used four metrics in our testing phase:

* \(Sensitivity\)\((\frac{TP}{TP+FN})\): The ability to recognize a fraudulent transaction as fraudulent.
* \(Precision\)\((\frac{TP}{TP+FP})\): The accuracy on cases predicted as fraud.
* \(Specificity\)\((\frac{TN}{FP+TN})\): The ability to recognize a legitimate transaction as legitimate.
* \(Accuracy\)\((\frac{TP+TN}{TP+FP+TN+FN})\): The proportion of correct predictions.

In addition, we measured the training and testing times, which are critical issues in fraud detection. We used Equation 7 for calculating the cost on this dataset. Gadi et al. used this formula because the dataset includes 100% of the fraudulent records and only 10% of the legitimate records [15].

\[Cost=100\times FN+10\times FP+TP \tag{7}\]

In the next step, we compared the performance of the two algorithms. The whole process of the fraud detection system is described in Figure 4. As mentioned before, each dataset has a specific cut point. By trial and error, we found the cut points presented in Table 3.
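As a quick reference, the four metrics and the cost of Equation 7 can be computed directly from the confusion-matrix counts defined above. The sketch below is a minimal Python helper; the counts in the example call are made up for illustration and are not taken from the paper's tables.

```python
def evaluate(tp, fp, tn, fn):
    """Confusion-matrix metrics and the cost of Equation 7 (100*FN + 10*FP + TP)."""
    sensitivity = tp / (tp + fn)            # recall on fraudulent transactions
    precision   = tp / (tp + fp)            # accuracy on cases predicted as fraud
    specificity = tn / (tn + fp)            # recall on legitimate transactions
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    cost        = 100 * fn + 10 * fp + tp
    return dict(sensitivity=sensitivity, precision=precision,
                specificity=specificity, accuracy=accuracy, cost=cost)

# Illustrative call with hypothetical counts:
print(evaluate(tp=400, fp=550, tn=11600, fn=75))
```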
In each training dataset, there are about 28,000 records with 17 features. Finally, the test (or validation) datasets are used in the testing phase. For testing the samples obtained by the ARO or AIS method, the following steps are followed:

1. The distance of the record from all normal samples is measured.
2. The distance is divided by the number of normal samples; we call this the final distance.
3. If the final distance is below the best cut-off value, the record is categorized as normal; otherwise, it is categorized as fraudulent.

The performance is measured by the metrics discussed above. For the AIS method, we provide both the results presented in [1] and the results of our own implementation of this algorithm, to allow a fairer comparison. All the code is available at [https://gitlab.com/Anahita-Farhang/ARO-AIS](https://gitlab.com/Anahita-Farhang/ARO-AIS).

### Experimental results

This subsection presents the computational results of running the AIS and ARO algorithms1. In sensitivity, precision, specificity, and accuracy, ARO achieved a higher average than AIS, as shown in Table 5. For training time, test time, and cost, ARO also shows better performance; these results are shown in Table 6. As shown in Figure 5, which plots the ROC curves of the testing results with the ARO and AIS algorithms for all datasets, implementing ARO increases the AUC, a suitable criterion for imbalanced datasets, by 13% compared with the AIS method. This shows that ARO outperforms AIS for every cut-off value.

Footnote 1: All the experiments were performed on a PC with an Intel® Core™ i5-3210M CPU @ 2.5GHz and 4GB RAM running Windows 8 (x64).

Finally, two non-parametric statistical tests (i.e., Wilcoxon and Kruskal-Wallis) were conducted to ensure the statistical significance of the accuracy of the ARO model. The Wilcoxon test results, shown in Table 8, indicate that the ARO model almost reaches the significance level compared to AIS. The Kruskal-Wallis test was used to show the equality of results in all nine sections of the dataset; its results are presented in Table 9.

Figure 4: Fraud detection system.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline dataset & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline Cut point & 0.1754 & 0.1841 & 0.1739 & 0.1762 & 0.175 & 0.1777 & 0.176 & 0.1916 & 0.1749 \\ \hline \end{tabular} \end{table} Table 3: Cut points in the datasets.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Split type & Transaction type & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline \multirow{2}{*}{train} & Legitimate & 27,904 & 28,012 & 28,061 & 28,145 & 28,045 & 27,973 & 28,113 & 27,884 & 28,188 \\ \cline{2-11} & Fraudulent & 1,084 & 1,092 & 1,088 & 1,075 & 1,081 & 1,116 & 1,099 & 1,106 & 1,100 \\ \hline \multirow{2}{*}{test} & Legitimate & 12,184 & 12,076 & 12,027 & 11,943 & 12,043 & 12,115 & 11,975 & 12,204 & 11,960 \\ \cline{2-11} & Fraudulent & 475 & 467 & 471 & 484 & 478 & 443 & 460 & 453 & 459 \\ \hline \end{tabular} \end{table} Table 4: Number of fraudulent and legitimate transactions in datasets.

Figure 5: ROC curve of testing results with ARO and AIS algorithm for all datasets.

## 5 Discussion

We trained the system with the ARO and AIS methods. As mentioned in Section 2, given that ARO is a single-solution evolutionary algorithm, it responds faster than the AIS that
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline Metric & \multicolumn{3}{l|}{Sensitivity} & \multicolumn{3}{l|}{Precision} & \multicolumn{3}{l|}{Specificity} & \multicolumn{3}{l|}{Accuracy} \\ \hline Method & ARO & AIS & ARO & AIS & ARO & AIS & ARO & AIS \\ \hline DS 1 & 0.86 & 0.68 & 0.42 & 0.33 & 0.95 & 0.95 & 0.95 & 0.94 \\ \hline DS 2 & 0.88 & 0.8 & 0.46 & 0.22 & 0.96 & 0.89 & 0.96 & 0.89 \\ \hline DS 3 & 0.79 & 0.61 & 0.34 & 0.22 & 0.94 & 0.92 & 0.93 & 0.9 \\ \hline DS 4 & 0.65 & 0.63 & 0.23 & 0.16 & 0.91 & 0.87 & 0.9 & 0.86 \\ \hline DS 5 & 0.88 & 0.78 & 0.58 & 0.23 & 0.97 & 0.9 & 0.97 & 0.89 \\ \hline DS 6 & 0.86 & 0.58 & 0.38 & 0.24 & 0.95 & 0.94 & 0.95 & 0.92 \\ \hline DS 7 & 0.74 & 0.6 & 0.32 & 0.34 & 0.94 & 0.96 & 0.93 & 0.94 \\ \hline DS 8 & 0.72 & 0.51 & 0.23 & 0.2 & 0.91 & 0.93 & 0.91 & 0.91 \\ \hline DS 9 & 0.95 & 0.63 & 0.54 & 0.33 & 0.97 & 0.95 & 0.97 & 0.94 \\ \hline **Average** & **0.81** & **0.65** & **0.39** & **0.25** & **0.95** & **0.92** & **0.94** & **0.91** \\ \hline \end{tabular} \end{table} Table 5: The results of implementing ARO and AIS on datasets. generates a community of data [130, 131, 132]. Therefore, the good speed with no parameter setting and good convergence rate have made ARO a good candidate versus AIS, and in our experiment, this claim was confirmed in four indicators. In classification problems, there are some common metrics to evaluate the performance: sensitivity (recall), precision, specificity, and accuracy. These four metrics have been measured in our testing phase. AUC was measured, which is the area under the ROC curve. ROC curve plots sensitivity versus false-positive rate. In fact, the cut-off value in the test phase is located at the top left corner in ROC curve. It is the point where sensitivity and specificity are equal. Gadi et al. found that if they use a cost function shown in Equation 7 in which they adopted an average cost of 1 dollar for every verification and an average loss of 100 dollars for every undetected fraud, they will obtain more applicable results. They used this formula because the dataset has 100% of fraudulent records and only 10% of legitimate records [1, 138, 15, 139]. This was considered to be more similar to the practice used for a fraud score compared to a ROC curve that compares multiple references simultaneously [15]. One of the main problems of AIS is the extreme need for a hyper-parameter setting, which is not present in ARO. ARO is a bio-inspired algorithm, and we aimed to test this algorithm against one of the algorithms that works best in detecting fraud on a Brazilian bank's dataset. Compared to other studies on this dataset, the detecting speed and the computational cost were important. We trained each dataset by each algorithm thirty times and registered the results of the best cost. As shown in Tables 5 and 6, ARO has better performance than AIS in all the metrics. The ARO method's best performance appears on the ninth dataset with the sensitivity of 0.95 and the cost of 6,407, which is better than the AIS method (sensitivity=0.63 & cost=23071). The ARO method's worst performance appears on the fourth dataset with the sensitivity of 0.65 and the cost of 27,923, which is still better than the AIS method (sensitivity=0.63 & cost=33864). The average sensitivity for ARO is 0.81 with the average precision 0.39. For the AIS technique, the average sensitivity and precision are 0.65 and 0.25. Training time, which is a vital issue in fraud detection, has been remarkably reduced. 
The average training time for ARO is 6.25 s, so the ARO fraud detection system can be considered a real-time one.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline **Metric** & \multicolumn{2}{l|}{**Train time (s)**} & \multicolumn{2}{l|}{**Test time (s)**} & \multicolumn{2}{l|}{**Cost**} & \multicolumn{2}{l|}{**AUC**} \\ \hline Method & ARO & AIS & ARO & AIS & ARO & AIS & ARO & AIS \\ \hline DS 1 & 8.08 & 24.25 & 1.87 & 1.85 & 12,570 & 22,072 & 0.91 & 0.81 \\ \hline DS 2 & 8.46 & 24.90 & 1.32 & 1.19 & 10,781 & 23,213 & 0.92 & 0.84 \\ \hline DS 3 & 7.33 & 24.65 & 2.22 & 1.75 & 17,473 & 28,966 & 0.87 & 0.76 \\ \hline DS 4 & 4.68 & 24.44 & 1.23 & 1.52 & 27,923 & 33,864 & 0.78 & 0.75 \\ \hline DS 5 & 4.63 & 24.66 & 1.13 & 1.62 & 9,132 & 23,115 & 0.93 & 0.84 \\ \hline DS 6 & 7.78 & 24.57 & 1.34 & 1.75 & 12,889 & 26,925 & 0.9 & 0.76 \\ \hline DS 7 & 3.8 & 24.25 & 1.27 & 1.68 & 19,522 & 24,224 & 0.84 & 0.78 \\ \hline DS 8 & 7.06 & 24.32 & 1.79 & 1.72 & 23,944 & 31,599 & 0.81 & 0.72 \\ \hline DS 9 & 4.43 & 25.30 & 1.54 & 1.80 & 6,407 & 23,071 & 0.96 & 0.79 \\ \hline **Average** & **6.25** & **24.59** & **1.52** & **1.65** & **15,627** & **26,339** & **0.88** & **0.78** \\ \hline \end{tabular} \end{table} Table 6: The results of implementing ARO and AIS on datasets.

ARO improved sensitivity by up to 25% and precision by up to 56%, and decreased cost by up to 41% and training time by up to 75%. The first fraud detection study on our dataset was carried out by Gadi et al., who showed that, with optimized parameters, AIS is the best method in comparison with BN, NB, and DT [15]. One of the best fraud detection systems on this dataset was presented in [1], which employed AFDM, an improved variant of AIS, and achieved a cost of 17,389 and a training time of 79 seconds. By implementing ARO, we achieved a cost of 15,627 and a training time of 6.25 seconds, which are considerably better than the previous results. The obtained results and the results of previous research are shown in Table 7.

In the last part, we have used two non-parametric statistical tests. We applied the Wilcoxon test to show the significant difference between the AIS and ARO algorithms. This test ranks all differences and applies a negative sign to the ranks where the difference between the two observations is negative. The hypothesis \(H0\) in this test is the equality of the two algorithms; as shown in Table 8, for accuracy, sensitivity, precision, train time, and cost, the p-values are less than alpha (\(\alpha=0.05\)), so this hypothesis is rejected, which means the two algorithms are not equivalent. Moreover, in the Wilcoxon test, the negative ranks for train time, test time, and cost, and the positive ranks for the other indices, show that ARO performs better than AIS in all indices. We applied the Kruskal-Wallis test because our dataset was divided into nine sections, and it is important to check whether all the samples originate from the same distribution. We performed this test to check whether there is a significant difference between the nine samples in each index. The results are shown in Table 9. The hypothesis \(H0\) in this test is the equality of all nine samples. Since the p-values are greater than alpha (\(\alpha=0.05\)) and the chi-square values are 8, which is less than 15.5073 (\(\chi^{2}_{0.05}=15.5073\) with \(df=8\)), the \(H0\) hypothesis cannot be rejected, which means that, for all indices, our nine sections are equivalent. As discussed, we have achieved promising results using the ARO algorithm.
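For reference, the two statistical tests discussed above can be run with scipy.stats on the per-split accuracies reported in Table 5. The exact grouping used for the Kruskal-Wallis test (one accuracy value per dataset section) is our reading of the text, so this sketch is illustrative rather than a reproduction of the authors' procedure.

```python
from scipy.stats import wilcoxon, kruskal

# Per-split accuracies from Table 5 (ARO vs. AIS on datasets 1-9).
aro = [0.95, 0.96, 0.93, 0.90, 0.97, 0.95, 0.93, 0.91, 0.97]
ais = [0.94, 0.89, 0.90, 0.86, 0.89, 0.92, 0.94, 0.91, 0.94]

# Paired Wilcoxon signed-rank test: H0 = the two algorithms perform equally.
stat, p = wilcoxon(aro, ais)
print(f"Wilcoxon: statistic={stat:.1f}, p={p:.3f}")   # reject H0 if p < 0.05

# Kruskal-Wallis H test across the nine dataset sections for one index
# (here accuracy), checking whether the sections behave alike.
h, p_kw = kruskal(*[[v] for v in aro])
print(f"Kruskal-Wallis: H={h:.2f}, p={p_kw:.3f}")
```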
However, there is room for improvement. For example, an algorithm could be employed to choose optimized cut-point values (Section 4.2). Moreover, to increase the performance of the algorithm, we suggest using cloud computing, i.e., implementing the ARO algorithm on a cloud-based file system (e.g., Hadoop), which makes data parallelization possible. Furthermore, new methods in the deep learning area show progress in terms of time in comparison with metaheuristic algorithms. Therefore, employing deep learning methods may reduce the training time and have positive impacts on the final results.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Method & AIS & AFDM & AIS & ARO \\ \hline Cost & 23,303 & 17,389 & 26,339 & 15,627 \\ \hline Reference & [15] & [1] & Proposed AIS & Proposed ARO \\ \hline \end{tabular} \end{table} Table 7: History of the previous and obtained results.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline Compared model & Sensitivity & Precision & Specificity & Accuracy & Train time (s) & Test time (s) & Cost & AUC \\ \hline Asymp. Sig. & 0.008 & 0.011 & 0.118 & 0.020 & 0.008 & 0.314 & 0.008 & 0.008 \\ \hline \end{tabular} \end{table} Table 8: Results of Wilcoxon signed-rank test with \(\alpha{=}0.05\).

## 6 Conclusion

Fraud is a critical concern for financial services (e.g., commercial banks, investment banks, insurance companies, etc.) and individuals. Different types of fraud cost millions of dollars every year. Among the different types of fraud, credit card fraud is the most common one, and several solutions have been proposed to detect fraudulent transactions. In this paper, we have applied ARO (Asexual Reproduction Optimization) to credit card fraud detection. This effective approach has achieved better results than the best techniques implemented on our dataset so far. We have compared the results with those of AIS, which was one of the best methods ever implemented on the benchmark dataset. The chief focus of fraud detection studies is finding algorithms that can distinguish legitimate transactions from fraudulent ones with high detection accuracy, in the shortest time, and at a low cost. ARO meets all these demands. ARO is an evolutionary single-objective optimization algorithm with several advantages that make it suitable for fraud detection problems. First of all, being an individual-based technique, it converges faster to the global optimal point. Secondly, it has good exploration and exploitation rates. Thirdly, it requires no parameter setting, which is a common issue in metaheuristic methods such as Genetic Algorithms (GA), Simulated Annealing (SA), Tabu Search (TS), and Particle Swarm Optimization (PSO). Results show that ARO has increased the AUC, sensitivity, precision, specificity, and accuracy by 13%, 25%, 56%, 3%, and 3%, respectively, in comparison with AIS. We have achieved a high precision value, indicating that if ARO detects a record as a fraud, it is, with high probability, indeed fraudulent. Supporting a real-time fraud detection system is another vital issue. ARO not only outperforms AIS in the mentioned criteria but also decreases the training time by 75% in comparison with AIS, which is significant. Furthermore, two non-parametric statistical tests (i.e., Wilcoxon and Kruskal-Wallis) were conducted to ensure the statistical significance in terms of accuracy for the ARO model. The Wilcoxon test results show that the ARO model almost reaches the significance level compared to AIS.
The Kruskal-Wallis test was used to show the equality of results in all nine sections of the dataset. The results of applying these two statistical tests confirm the statistical significance of our study.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline Compared model & Sensitivity & Precision & Specificity & Accuracy & Train time (s) & Test time (s) & Cost & AUC \\ \hline Chi-square & 8.000 & 8.000 & 8.000 & 8.000 & 8.000 & 8.000 & 8.000 & 8.000 \\ \hline df & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 \\ \hline Asymp. Sig. & 0.433 & 0.433 & 0.433 & 0.433 & 0.433 & 0.433 & 0.433 & 0.433 \\ \hline \end{tabular} \end{table} Table 9: Results of Kruskal-Wallis test.

## 7 Future Work

Our framework has addressed problems such as high cost and long training time in credit card fraud detection. However, there is still room for further improvement. To increase the performance of the proposed method, it is possible to test the proposed model in a cloud environment, e.g., Hadoop. Moreover, ARO can be compared to PSO and QPSO, which have fewer parameter settings than AIS. In addition, the authors believe ARO has the potential to obtain much better results. One improvement would be to weight the fields that compose a transaction. In fact, there are plenty of fields in a transaction, and some fields are more important than others. Therefore, we can increase or decrease the effect of a field on the final result by weighting the fields in the distance function. Furthermore, the distance function can be different for each property in the dataset. As we discussed, each transaction has several fields with different meanings, so the concept of distance is not the same for all the fields. For example, suppose a person goes shopping once per month; then a distance of 30 in the time field is usual and effectively equals zero, whereas a distance of 30 in the amount column is important and is not equal to zero. Therefore, considering application-based distance functions for each field is an interesting point to address in future work (a small sketch of such a field-specific, weighted distance is given below).
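As an illustration of the field-specific, weighted distance suggested above, the sketch below extends the normalized distance of Equation 1 with per-field weights and an optional wrap-around treatment for cyclic fields such as a day-of-month. All names and the cyclic treatment are illustrative assumptions rather than part of the proposed method.

```python
import numpy as np

def weighted_distance(record, transactions, col_min, col_max, weights, period=None):
    """Per-field weighted, normalized distance from one record to a set of
    transactions, with an optional wrap-around ('period') for cyclic fields."""
    rng = np.where(col_max > col_min, col_max - col_min, 1.0)
    diff = np.abs(record - transactions)            # shape (n_transactions, n_fields)
    if period is not None:                          # e.g. period[i] = 30 for a day-of-month field
        cyc = period > 0
        diff[:, cyc] = np.minimum(diff[:, cyc], period[cyc] - diff[:, cyc])
    # weighted mean over fields, averaged over all transactions
    return (weights * diff / rng).sum(axis=1).mean() / weights.sum()
```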
2309.04093
Diamond quantum magnetometer with dc sensitivity of < 10 pT Hz$^{-1/2}$ toward measurement of biomagnetic field
We present a sensitive diamond quantum sensor with a magnetic field sensitivity of $9.4 \pm 0.1~\mathrm{pT/\sqrt{Hz}}$ in a near-dc frequency range of 5 to 100~Hz. This sensor is based on the continuous-wave optically detected magnetic resonance of an ensemble of nitrogen-vacancy centers along the [111] direction in a diamond (111) single crystal. The long $T_{2}^{\ast} \sim 2~\mathrm{\mu s}$ in our diamond and the reduced intensity noise in laser-induced fluorescence result in remarkable sensitivity among diamond quantum sensors. Based on an Allan deviation analysis, we demonstrate that a sub-picotesla field of 0.3~pT is detectable by interrogating the magnetic field for a few thousand seconds. The sensor head is compatible with various practical applications and allows a minimum measurement distance of about 1~mm from the sensing region. The proposed sensor facilitates the practical application of diamond quantum sensors.
N. Sekiguchi, M. Fushimi, A. Yoshimura, C. Shinei, M. Miyakawa, T. Taniguchi, T. Teraji, H. Abe, S. Onoda, T. Ohshima, M. Hatano, M. Sekino, T. Iwasaki
2023-09-08T03:12:32Z
http://arxiv.org/abs/2309.04093v1
Diamond quantum magnetometer with dc sensitivity of! 10 pT Hz\({}^{-1/2}\) toward measurement of biomagnetic field ###### Abstract We present a sensitive diamond quantum sensor with a magnetic field sensitivity of \(9.4\pm 0.1\) pT/\(\sqrt{\mathrm{Hz}}\) in a near-dc frequency range of 5 to 100 Hz. This sensor is based on the continuous-wave optically detected magnetic resonance of an ensemble of nitrogen-vacancy centers along the [111] direction in a diamond (111) single crystal. The long \(T_{2}^{*}\sim 2\)\(\mu\)s in our diamond and the reduced intensity noise in laser-induced fluorescence result in remarkable sensitivity among diamond quantum sensors. Based on an Allan deviation analysis, we demonstrate that a sub-picotesla field of 0.3 pT is detectable by interrogating the magnetic field for a few thousand seconds. The sensor head is compatible with various practical applications and allows a minimum measurement distance of about 1 mm from the sensing region. The proposed sensor facilitates the practical application of diamond quantum sensors. ## I Introduction The biomedical applications of quantum sensors have been studied for over a decade [1]. The realization of magnetoencephalography (MEG) under ambient conditions is a major goal (conventional MEG requires a magnetically shielded room). In addition to clinical diagnosis [2; 3], ambient-condition MEG can be used for daily diagnosis, brain-machine interfaces [4; 5], and fundamental research on brain function [6; 7; 8; 9]. A quantum magnetometer that uses nitrogen-vacancy (NV) centers in diamond is a candidate for realizing ambient-condition MEG given that it can be operated with high sensitivity at room temperature in an ambient magnetic field [10; 11; 12; 13; 14; 15; 16; 17; 18]. A spatial resolution on the millimeter scale or below, far better than the centimeter-scale resolution of conventional MEG [3], is expected for a diamond quantum magnetometer [19]. Magnetometry based on continuous-wave optically detected magnetic resonance (CW-ODMR) is the most widely used method for measuring magnetic fields using NV centers [10; 11; 12; 13; 14; 15; 16]. In this method, a microwave (MW) field continuously drives the magnetic resonance of the NV center spin and the spin state is continuously read out as the intensity of the laser-induced fluorescence from the NV center. Compared with other methods based on pulsed MWs and/or light [13; 17; 18; 20], the CW-ODMR method has a simpler experimental setup and is easier to apply to actual measurements. Millimeter-scale magnetocardiography [19] has been realized using the CW-ODMR method. However, to realize MEG, measurement of an encephalomagnetic field requires exceptional sensitivity (on the order of pT/\(\sqrt{\mathrm{Hz}}\)). The frequency of a clinically relevant encephalomagnetic field ranges from nearly dc to \(\sim 100\) Hz [2; 3; 9]. Reported sensitivities are worse than required in this frequency range. For example, sensitivities of around 20 to 30 pT/\(\sqrt{\mathrm{Hz}}\)[13; 15] have been demonstrated. In addition, a sensitivity of 15 pT\(\sqrt{\mathrm{Hz}}\) in a higher frequency range (80 Hz to 3.6 kHz) has been reported [12]. A short standoff distance from field generating sources in the brain is also required given that the decay of an encephalomagnetic field is inversely proportional to the square of the distance [2]. 
Therefore, for biomedical applications, the sensitivity in the near-dc frequency range of a diamond quantum magnetometer that can closely approach the target object must be improved. Here, we develop a CW-ODMR-based diamond magnetometer for practical applications (e.g., MEG of a living animal). The sensor head of the magnetometer was designed to approach the target object to a distance of about 1 mm with a sensing volume of 0.03 mm\({}^{3}\). By carefully tuning the experimental conditions and using a high-quality diamond, we achieved a record-breaking sensitivity of \(9.4\pm 0.1\) pT/\(\sqrt{\mathrm{Hz}}\) in a near-dc frequency range of 5 to 100 Hz. Based on the Allan deviation, the minimum detectable field was found to be 8.5 and 0.3 pT for measurement periods of 1 second and several thousand seconds, respectively. Experimental setup ### Sensor head In this work, we synthesized a single-crystalline diamond using a high-pressure-high-temperature (HPHT) method with a \({}^{12}\)C isotopically enriched carbon source. The reduced concentration of \({}^{13}\)C was about 500 ppm. The amount of titanium in the metal solvent in the HPHT synthesis was adjusted to control the initial concentration of neutral substituted nitrogen (\(\rm N_{s}^{0}\)) in the diamond crystal [21]. The initial [\(\rm N_{s}^{0}\)] was estimated to be 5.6 ppm using electron spin resonance. The origin of nitrogen in diamond crystals seems to be impurities introduced from the source material, solvent, or pressure transmitting medium during the growth process. Since this nitrogen is of natural origin, the isotope abundance of \(\rm N_{s}^{0}\) is the same as the natural abundance (\({}^{14}\)N, 99.6%; \({}^{15}\)N, 0.4%). After this HPHT synthesis, a piece of the crystal was cut out parallel to the (111) crystal plane. The dimensions of this diamond sample were approximately \(1~{}\rm mm\times 0.7~{}mm\) in area and 0.4 mm in thickness. Negatively charged NV (\(\rm NV^{-}\)) centers were then produced using electron beam irradiation followed by annealing at 1000 C\({}^{\circ}\) for 2 hours in vacuum. The energy and total fluence of the irradiation were 2.0 MeV and \(5\times 10^{17}~{}\rm cm^{-2}\), respectively. The concentrations of the produced \(\rm NV^{-}\) and residual \(\rm N_{s}^{0}\) were estimated to be 1.2 and 2.3 ppm, respectively, using electron spin resonance [22]. A full width at half maximum of 0.19 MHz for the CW-ODMR peak was experimentally measured independent of this work. This linewidth indicates a long dephasing time of \(T_{2}^{*}\sim 2~{}\rm\mu s\). The conceptual design of our sensor head is shown in Fig. 1(a). This sensor head was designed to closely approach the head of a living animal and measure the encephalomagnetic field along the \(z\) axis by an ensemble of NV centers oriented to the surface-normal [111] direction parallel to the \(z\) axis. The sensor head components described in this section (see below) were integrated using plastic and aluminum holders. Hence, the sensor head can be freely moved as a unit and easily positioned close to the target object. The diamond containing NV centers was attached by a high-thermal-conductivity glue to a polycrystalline diamond plate (\(10\times 10\times 0.5~{}\rm mm^{3}\)) in order to dissipate the heat due to laser illumination. The other side of the polycrystalline diamond plate had a current flow guide for MWs. The MW guide was made of thin copper film; the distance between the lower side of the MW guide and the excited NV centers was 0.8 mm. 
A bias magnetic field of 0.9 mT along the \(z\) axis was applied by a ring samarium-cobalt magnet. We used a hemispherical lens with a high refractive index of 2.0 to enhance the collection efficiency of the laser-induced fluorescence from the NV center ensemble [14]. The fluorescence collection efficiency from the diamond surface facing the lens [top surface in Fig. 1(a)] was assumed to be as high as about 56% based on a previously reported numerical calculation [14] for a similar setup. The fluorescence that was not emitted from this surface was considered to be emitted mainly from the side faces due to the high refractive index (2.4) of the diamond [23]. Some of the fluorescence from the side faces of the diamond was collected by the lens since the lens diameter (4 mm) was larger than the size of the diamond. Fluorescence was also collected by an elliptically shaped reflective inner surface of an aluminum block. Stray green light and part of the fluorescence from neutrally charged NV (\(\rm NV^{0}\)) centers were filtered out by a long-pass filter with a cut-on wavelength of 633 nm. The transmitted fluorescence was detected by a reverse-biased photodiode (\(\rm PD_{B}\)). Figure 1: Experimental setup (not to scale). (a) Sensor head design and MW circuit diagram. \(\rm PD_{B}\): fluorescence photodiode; LPF: long-pass filter. (b) Optical setup. \(\lambda/2\): halfwave plate; PBS: polarizing beam splitter; NPBS: non-polarizing beam splitter; L: lens; M: mirror; \(\rm PD_{ref}\): reference photodiode; BB: beam block. ### CW-ODMR measurement setup The NV ensemble was excited by a green laser at 532 nm from a side face of the diamond, as shown in Fig. 1(a). Figure 1(b) shows the optical setup. A laser beam with a diameter of about 3 mm was focused by a lens with a focal length of 300 mm. The beam diameter at the diamond was estimated to be 200 \(\mu\)m. The excitation volume was estimated to be 0.03 mm\({}^{3}\). The laser light was linearly polarized along the \(y\) axis, which is perpendicular to the chosen NV orientation. The fluorescence photocurrent \(I_{\rm fl}=6.6\) mA was observed at an incident light power of 0.39 W, which corresponds to a detected fluorescence power of about 13 mW. The noise in the fluorescence due to the intensity fluctuation of the incident laser was reduced using a balanced detection technique. The reference light, which was picked up by a non-polarizing beam splitter, was detected by a reverse-biased photodiode (PD\({}_{\rm ref}\)). In this work, we connected the anode of PD\({}_{\rm fl}\) to the cathode of PD\({}_{\rm ref}\) to obtain the difference between their photocurrents, \(I_{\rm fl}\) and \(I_{\rm ref}\), respectively. The difference photocurrent \(I_{\rm diff}\) was amplified by a lab-built transimpedance amplifier with a gain of 10 kV/A. The power of the reference light was finely adjusted using a halfwave plate and a polarizing beam splitter to achieve a high reduction rate for the intensity noise. The polarization fluctuation of the laser was converted into an intensity fluctuation by a polarization beam splitter just after the laser. The beam diameter at PD\({}_{\rm ref}\) was expanded by a lens to balance the nonlinear response of the photodiode with that of PD\({}_{\rm fl}\), since the nonlinear response depends on spot size [24]. The magnetic resonance between the ground states \(|0\rangle\) and \(|-1\rangle\) was driven by applying an MW current to the MW guide. 
To enhance the amplitude of a CW-ODMR peak, we simultaneously drove the three transitions associated with the hyperfine spin state by three-tone MWs [12], which was generated by mixing radio-frequency (RF) waves at 2.16 MHz with MWs and summing the mixed waves with bypassed MWs. In this work, the enhancement factor for the peak amplitude was about 2.5. We adopted lock-in detection, achieved by sinusoidally modulating the MW frequency, to avoid large residual noise at low frequencies. The amplified difference photocurrent was fed into a lock-in amplifier and demodulated with the modulation frequency as the reference. The 3-dB cutoff frequency of the low-pass filter in the lock-in amplifier was 149.4 Hz, which corresponds to a noise-equivalent-power bandwidth of 168.8 Hz. The demodulated output was recorded on a computer via an analog-to-digital converter. The sensor head and optical setup were inside a room that was shielded from magnetic fields by three permalloy layers to reduce environmental field fluctuations. The total shielding factor of this room was about \(2\times 10^{-4}\) at 1 Hz and about \(1\times 10^{-5}\) at 10 Hz. An additional permalloy box was placed around the sensor head. The front face of the shield box remained open to introduce the incident laser and the target object. ## III Results ### CW-ODMR measurement Figure 2(a) shows a CW-ODMR spectrum of the ensemble of [111]-oriented NV centers. The vertical axis in the figure is the demodulated signal \(\tilde{I}\) in the photocurrent, which was calculated using the gains at the transimpedance and lock-in amplifiers. The horizontal axis is the detuning \(\delta\) from the resonance frequency of the central peak at which the three hyperfine spin states were simultaneously driven. The fluorescence photocurrent \(I_{\rm fl}\) at a far-detuned MW frequency was \(I_{\rm fl}=6.6\) mA. In this measurement, the frequency and depth of the modulation were 6.2 kHz and 160 kHz, respectively. We found that a modulation frequency of 3 to 7 kHz yielded a low-noise output. The modulation frequency was finely tuned within this range on each day of the experiment because the frequencies of some noise peaks in the intensity noise slightly shifted over time. The modulation frequency of higher than several kilohertz caused a decrease in the amplitude of the CW-ODMR peaks, probably because the dynamics of the population in the ground states was slower than the modulation at several kilohertz for the relatively low power of \(<1\) W and large spot size of 200 \(\mu\)m in diameter. The MW and RF wave power was tuned to yield a maximum zero-crossing slope at the central peak. The black solid curve is fitted to the measured data using the summation of five derivative Lorentzian functions. The corresponding full width at half max Figure 2: Lock-in CW-ODMR spectrum (a) over hyperfine manifold and (b) in near-resonant region of central peak. Measured demodulated photocurrent is shown by filled circles. The solid line in (a) represents the fitted curve obtained from the summation of five derivative Lorentzian functions. The linear function shown by the dashed line in (b) was fitted to the near-resonant data to obtain a zero-crossing slope. imum of the derivative Lorentzian function was about 0.48 MHz. This linewidth is greater than the inhomogeneous broadening of the CW-ODMR peak due to the inhomogeneity in the bias magnetic field, which was estimated to be approximately 0.2 MHz. 
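As an illustration of the fitting procedure described above, the following sketch fits a derivative-Lorentzian line shape to a synthetic lock-in CW-ODMR trace with scipy and extracts a zero-crossing slope. The actual analysis uses the sum of five such functions over the hyperfine manifold; the single-peak model, all numerical values, and the noise level below are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def dlorentz(delta, amp, center, width):
    """Derivative of a Lorentzian line, the shape produced by frequency-modulated
    lock-in detection of a single ODMR peak (width = FWHM of the underlying Lorentzian)."""
    x = (delta - center) / (width / 2)
    return -amp * 2 * x / (1 + x**2) ** 2

def model(delta, *params):
    """Sum of derivative Lorentzians; params = [amp1, c1, w1, amp2, c2, w2, ...]."""
    out = np.zeros_like(delta)
    for i in range(0, len(params), 3):
        out += dlorentz(delta, *params[i:i + 3])
    return out

# Synthetic example: one peak plus noise (detuning in MHz, signal in arbitrary units).
delta = np.linspace(-1.5, 1.5, 301)
data = dlorentz(delta, 0.12, 0.0, 0.48) + np.random.default_rng(1).normal(0, 0.003, delta.size)

popt, _ = curve_fit(model, delta, data, p0=[0.1, 0.0, 0.5])
slope_at_zero = np.gradient(model(delta, *popt), delta)[np.argmin(np.abs(delta))]
print(popt, slope_at_zero)   # fitted (amplitude, center, FWHM) and the zero-crossing slope
```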
To determine the zero-crossing slope, we measured a CW-ODMR spectrum at the near-resonance region of the central peak, as shown in Fig. 2(b). The demodulated photocurrent linearly depends on the detuning in this region. The zero-crossing slope \(d\tilde{I}/d\delta\) was measured to be 324 pA/Hz by fitting the data with a linear function, as shown by the black dashed line. This slope corresponds to the photocurrent response to magnetic field variation as \((d\tilde{I}/d\delta)/\gamma_{\rm e}=9.06\) A/T, where \(\gamma_{\rm e}=28.0\) GHz/T is the gyromagnetic ratio for an NV center. ### Reduction in intensity noise The reduction rate for the intensity noise was estimated at \(I_{\rm fl}=25\) mA. We measured the standard deviations of \(\tilde{I}\) with and without the reference light to be 3.0 nA and 130 nA, respectively. Here, the MW source was switched off to isolate the sensor from the noise associated with the environmental magnetic field. The relative intensity noise in the incident light was roughly estimated to be \(10\mathrm{log}_{10}\left(\frac{130~{}\mathrm{nA}^{2}}{168.8~{}\mathrm{Hz} \times 25~{}\mathrm{mA}^{2}}\right)=-130\) dBc/Hz at a modulation frequency of 6.2 kHz. The photon shot noise with and without the reference light was calculated to be 1.6 nA and 1.2 nA, respectively. The details of the photon shot noise calculation are described in Sec. III.3. We obtained the following reduction rate for the fluorescence intensity noise: \[\sqrt{\frac{3.0~{}\mathrm{nA}^{2}-1.6~{}\mathrm{nA}^{2}}{130~{}\mathrm{nA}^{2} -1.2~{}\mathrm{nA}^{2}}}=1.9\times 10^{-2}.\] We found that this "red-green" balance detection exhibited a similar reduction rate to that for the "green-green" balanced detection with the reference and incident lights. ### Photocurrent dependence of noise We analyzed the noise components (photon shot noise, fluorescence intensity noise, and electrical noise) of the detectors and circuits by measuring their dependence on the fluorescence photocurrent \(I_{\rm fl}\). The demodulated photocurrent \(\tilde{I}\) was recorded for 5 s and Fourier-transformed to provide a single-sided noise amplitude spectral density \(n_{\tilde{I}}\). To evaluate the noise \(n_{\tilde{I},\mathrm{far}}\) without influence from environmental magnetic field noise, the analysis was performed with an MW carrier frequency of 2.4 GHz, which was far-detuned from the resonance. We observed no excess noise due to the application of the far-detuned MWs. The noise density \(n_{\tilde{I},\mathrm{far}}\) was almost flat up to the cutoff frequency of the lock-in amplifier. The average \(\langle n_{\tilde{I},\mathrm{far}}\rangle\) of the noise density within the 100-Hz bandwidth was taken as a measure of the intrinsic noise of our diamond sensor at a given \(I_{\rm fl}\). The dependence on \(I_{\rm fl}\) of \(\langle n_{\tilde{I},\mathrm{far}}\rangle\) is shown in Fig. 3(a). Here, we varied the incident laser power using a halfwave plate and a polarizing beam splitter just before the non-polarizing beam splitter. In the figure, the measured \(\langle n_{\tilde{I},\mathrm{far}}\rangle\) is represented by open circles. The relative uncertainty in the data, shown as error bars, was independently evaluated to be 5%. \(\langle n_{\tilde{I},\mathrm{far}}\rangle\) at \(I_{\rm fl}=0\) represents the electrical noise density \(\langle n_{\tilde{I},\mathrm{elec}}\rangle\) and was measured to be 20 pA/\(\sqrt{\mathrm{Hz}}\) by blocking the laser beam before the non-polarizing beam splitter. 
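For reference, the intensity-noise figures quoted above follow from a few lines of arithmetic on the numbers given in the text; the sketch below simply reproduces that calculation.

```python
import numpy as np

# Values quoted in the text: standard deviations of the demodulated photocurrent with
# and without the reference beam, the corresponding photon shot noise, the
# noise-equivalent-power bandwidth of the lock-in filter, and the fluorescence photocurrent.
sigma_bal, sigma_unbal = 3.0e-9, 130e-9      # A
shot_bal, shot_unbal = 1.6e-9, 1.2e-9        # A
bw, i_fl = 168.8, 25e-3                      # Hz, A

# Relative intensity noise of the incident light, inferred from the unbalanced trace.
rin_dbc = 10 * np.log10(sigma_unbal**2 / (bw * i_fl**2))
# Reduction rate of the fluorescence intensity noise achieved by balanced detection.
reduction = np.sqrt((sigma_bal**2 - shot_bal**2) / (sigma_unbal**2 - shot_unbal**2))

# Prints about -128 dBc/Hz (the text rounds this to roughly -130 dBc/Hz) and
# a reduction of about 1.9-2.0e-2, consistent with the quoted 1.9e-2.
print(f"RIN ~ {rin_dbc:.0f} dBc/Hz, intensity-noise reduction ~ {reduction:.1e}")
```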
We fitted the noise model in Eq. (1) to the data. \[\langle n_{\tilde{I},\mathrm{far}}\rangle=\sqrt{\langle n_{\tilde{I}, \mathrm{elec}}\rangle^{2}+p_{1}I_{\rm fl}+p_{2}I_{\rm fl}^{2}}. \tag{1}\] The second and third terms represent the photon shot noise \(\langle n_{\tilde{I},\mathrm{psn}}\rangle\) and fluorescence intensity noise \(\langle n_{\tilde{I},\mathrm{int}}\rangle\), respectively. This noise model well describes the data, as shown by the black solid curve in Fig. 3(a). The fitted parameters were \(p_{1}=(5.0\pm 0.6)\times 10^{-19}\) A/Hz and \(p_{2}=(5.0\pm 0.5)\times 10^{-17}\) /Hz. The black dashed curve is Figure 3: Fluorescence photocurrent dependence of (a) floor of noise spectral density of \(\tilde{I}\), (b) zero-crossing slope \(d\tilde{I}/d\delta\), and (c) estimated floor of equivalent magnetic field noise spectral density. the sum of \(\langle n_{\tilde{I},\text{elec}}\rangle\) and the calculated shot noise given by \[\sqrt{\langle n_{I,\text{elec}}\rangle^{2}+2\times 2q_{e}I_{\text{fl}}}, \tag{2}\] where \(q_{e}=1.6\times 10^{-19}\) C is the elementary charge. The factor of 2 for the shot noise term was introduced because the shot noise at the two photodiodes was assumed to be independent. The measured shot noise coefficient \(p_{1}=(5.0\pm 0.6)\times 10^{-19}\) A/Hz is close to the calculated value of \(2\times 2q_{e}=6.4\times 10^{-19}\) A/Hz. The intensity noise \(\langle n_{\tilde{I},\text{int}}\rangle=\sqrt{p_{2}I_{\text{fl}}^{2}}\) is equivalent to \(\langle n_{I,\text{psn}}\rangle\) at the fluorescence photocurrent \(I_{\text{fl,equiv}}=p_{1}/p_{2}=10\pm 1.6\) mA. The photon shot noise surpassed the laser intensity noise at a low fluorescence photocurrent (\(<I_{\text{fl,equiv}}\)). The sensor noise \(n_{B}\) in the magnetic field measurement depends on demodulated photocurrent noise \(n_{I}\) and zero-crossing slope \(d\tilde{I}/d\delta\) as \(n_{B}=n_{\tilde{I}}/(\gamma_{e}d\tilde{I}/d\delta)\). The fluorescence dependence of the slope was measured, as shown in Fig. 3(b). The error bars are the estimated standard deviations of the slope; they are much smaller than the marker size. Here, the modulation parameters and powers of the MWs and RF waves were fixed over all measurements; they were tuned at \(I_{\text{fl}}=7.2\) mA to maximize the slope. Note that the optimal parameters and power depend on the incident laser power [25; 26]. Nevertheless, we confirmed that tuning these values resulted in an improvement in the slope of about 3% at \(I_{\text{fl}}=29.3\) mA. We thus assumed that the relative uncertainty of the data was several percent in this measurement. We found that the slope saturated as the incident laser power increased. This saturation could be explained by a charge state conversion of NV centers, which led to a decrease in the contrast of a CW-ODMR peak, because [N\({}_{\text{s}}^{0}\)] was only about twice as large as [NV\({}^{-}\)] in our diamond [27; 28; 29]. A detailed investigation of this saturation is beyond the scope of this work. The magnetic field noise density \(\langle n_{B,\text{far}}\rangle\) expected from the measured \(\langle n_{I,\text{far}}\rangle\) and \(d\tilde{I}/d\delta\) did not monotonically decrease as \(I_{\text{fl}}\) increased, as shown in Fig. 3(c), because of the saturation of the slope. The error bars indicate the uncertainties computed from a relative uncertainty of 5% in the slope and the covariance matrix used in the curve fitting to \(\langle n_{\tilde{I},\text{far}}\rangle\) with Eq. (1). 
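A sketch of the fit to the noise model of Eq. (1) is given below. The data are synthetic, generated from the fitted parameters quoted above (electrical noise 20 pA/√Hz, \(p_{1}=5.0\times 10^{-19}\) A/Hz, \(p_{2}=5.0\times 10^{-17}\) /Hz); the 5% scatter and the sampling of photocurrents are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def noise_model(i_fl, n_elec, p1, p2):
    """Eq. (1): electrical noise, photon shot noise (proportional to I) and laser
    intensity noise (proportional to I^2), added in quadrature."""
    return np.sqrt(n_elec**2 + p1 * i_fl + p2 * i_fl**2)

i_fl = np.linspace(1e-3, 30e-3, 12)                    # fluorescence photocurrent, A
rng = np.random.default_rng(2)
data = noise_model(i_fl, 20e-12, 5.0e-19, 5.0e-17) * (1 + 0.05 * rng.normal(size=i_fl.size))

popt, pcov = curve_fit(noise_model, i_fl, data, p0=[2e-11, 5e-19, 5e-17])
print(popt)                 # recovered (n_elec, p1, p2)
print(popt[1] / popt[2])    # I_fl,equiv where intensity noise equals shot noise (~10 mA)
```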
The photocurrent dependence of \(\langle n_{B,\text{far}}\rangle\) suggests that good sensitivity to a magnetic field can be achieved at \(I_{\text{fl}}\) from 5 to 20 mA. ### Magnetic field noise spectral density and sensitivity We measured single-sided noise amplitude spectral density \(n_{B,\text{res}}\) in a magnetic field measurement where the MWs were resonant with the central CW-ODMR peak (\(\delta=0\)). The noise spectrum \(n_{B,\text{res}}\) was computed using the discrete Fourier transform from a measured time trace of \(\tilde{I}\) for 5 s with sampling frequency \(F_{s}=400\) Hz. Figure 4 shows the measured \(n_{B,\text{res}}\), which was averaged over 10 time measurements, at \(I_{\text{fl}}=6.4\) mA (blue solid curve). The optimal power of the reference light was estimated from the CW-ODMR peak contrast of 3% and a reference light power that had been optimized with the far-detuned MWs. The zero-crossing slope \(d\tilde{I}/d\delta\) was \(332\pm 0.7\) pA/Hz. Note that the displayed \(n_{B,\text{res}}\) was digitally filtered by narrow-band notch filters for the harmonics of a 50-Hz power line and a band-pass filter with 3-dB cutoff frequencies of 5 and 100 Hz, which corresponds to the bandwidth of the target object (e.g., brain of a living animal). The noise-equivalent power bandwidth \(f_{\text{NEP}}\) of the digital filtering was numerically calculated to be 91.9 Hz. In this numerical calculation, white noise with a standard deviation of \(\sigma\) was numerically computed with sampling frequency \(F_{s}\) and digitally filtered. The standard deviation \(\sigma^{\prime}\) of the filtered noise, given by \(\sigma^{\prime}=\sigma\sqrt{f_{\text{NEP}}/(F_{s}/2)}\)[30], was numerically calculated to yield \(f_{\text{NEP}}\). The achieved noise density indicated a very low floor in the single-sided spectrum in the near-dc range. The lowest noise density floor, about 9 pT/\(\sqrt{\text{Hz}}\), was measured near 40 Hz and from 70 to 90 Hz. The sudden drop in \(n_{B,\text{res}}\) at 90 Hz was due to the digital band-pass filter. A low noise density of 15 pT/\(\sqrt{\text{Hz}}\) was obtained even near 5 Hz, even though magnetic field noise generally deteriorates at lower frequency [12; 13; 15; 16; 19; 20]. We attributed the noise peaks around 25 Hz to the mechanical vibration of the sensor head since these peaks shifted to lower frequencies when the sensor head was additionally supported. The noise spectral density \(n_{B,\text{far}}\) measured with the far-detuned MWs at \(I_{\text{fl}}=6.4\) mA is shown by the orange trace in Fig. 4. We found that \(n_{B,\text{far}}\) reached the photon-shot-noise-limited sensitivity of 6.9 pT/\(\sqrt{\text{Hz}}\) (black dashed line). Figure 4: Single-sided noise amplitude spectral density in magnetic field measurement. The blue and orange curves are the sensor noise measured with resonant and far-detuned MWs, respectively. Calculated photon-shot-noise-limited sensitivity of 6.9 pT/\(\sqrt{\text{Hz}}\) is indicated by the dashed line. PSN: photon shot noise. The sensitivity of our sensor was evaluated from the average power of the measured noise, which was digitally filtered. The sensitivity \(\eta\) is defined as \(\eta=\delta B\sqrt{T}\), where \(\delta B\) is the minimum detectable magnetic field for measurement time \(T\). In this definition, the noise spectrum is assumed to be frequency-independent (white noise). 
The standard deviation of \(128\pm 2\) pT in the measured time trace is considered to represent \(\delta B\) for measurement time \(T=F_{s}^{-1}=2.5\) ms. Since the bandwidth of the digital band-pass filter was narrower than the measurement bandwidth \(F_{s}/2\) and the lock-in amplifier's bandwidth, \(f_{\rm NEP}\) for the digital filtering was substituted for the measurement bandwidth; that is, the sensitivity was equivalent to \(\eta=\delta B/\sqrt{2f_{\rm NEP}}\)[12; 30]. We achieved a sensitivity of \(\eta=9.4\pm 0.1\) pT/\(\sqrt{\rm Hz}\). ### Allan deviation The Allan deviation of the noise for a measurement of about 200 minutes was computed to evaluate the stability of our sensor. We continuously tuned the MW carrier frequency to the resonance using the demodulated photocurrent \(\tilde{I}\) output, which was low-pass-filtered with a cutoff frequency of 10 Hz. The bandwidth of the feedback response was approximately 2 Hz. The measured noise was recorded every minute on a computer. The notch and band-pass digital filters used in the sensitivity analysis (see Sec. III.4) were not used in this analysis. We then computed the overlapping Allan deviation, as shown in Fig. 5. The open circles indicate Allan deviations for a given averaging time. The error bars represent the standard deviations of the Allan deviations; they are much smaller than the marker size. We found that the 1-second interrogation yielded an Allan deviation of 8.5 pT, which is consistent with the evaluated sensitivity (\(\eta=9.4\) pT/\(\sqrt{\rm Hz}\)). The Allan deviation showed a bump around the 10-second averaging time. This bump may arise from a periodic fluctuation of several tens of seconds. We found that the bump could be suppressed to some extent by removing a 0.025-Hz component using a digital notch filter. However, the cause of this fluctuation was not identified. The dashed line shows the minimum detectable magnetic field \(\delta B\) predicted by \(\delta B=\eta/\sqrt{T}\) with the achieved sensitivity of \(\eta=9.4\) pT/\(\sqrt{\rm Hz}\). It indicates that the Allan deviation scaled to \(T^{-1/2}\) and was close to \(\delta B\) at averaging times of 100 to 1000 seconds. The Allan deviation reached 0.3 pT for an averaging time of a few thousand seconds and then seemed to saturate. The zero-crossing slopes before and after this Allan deviation measurement were found to be almost the same. We thus conclude that our sensor remained stable and could measure a magnetic field with a sensitivity of \(\eta=9.4\pm 0.1\) pT/\(\sqrt{\rm Hz}\) for at least 200 minutes. ## IV Discussion The demonstrated sensitivity of \(9.4\pm 0.1\) pT/\(\sqrt{\rm Hz}\) is the best reported value for diamond quantum sensors based on the CW-ODMR of an ensemble of NV centers [12; 13; 15; 30] in the frequency range of 5 to 100 Hz. The previous best sensitivities were around 20 to 30 pT/\(\sqrt{\rm Hz}\) in the low frequency range [13; 15] and 15 pT/\(\sqrt{\rm Hz}\) in the relatively high frequency range of 80 Hz-3.6 kHz [12]. Moreover, in our study, the noise floor of \(n_{B}\) stayed below 20 pT/\(\sqrt{\rm Hz}\) even at 5 Hz, as shown in Fig. 4. Given that the noise environment is generally cleaner at higher frequency, the very low noise floor of about 9 pT/\(\sqrt{\rm Hz}\) will continue into the kilohertz range if we use a higher cutoff frequency for the lock-in amplifier's low-pass filter. The Allan deviation analysis showed that our diamond magnetometer can interrogate a magnetic field for a long time with remarkable sensitivity. 
Therefore, our sensor is capable of detecting a repetitive biomagnetic field, for example, a stimulus-evoked field, with a strength on the order of 1 pT by accumulating the signals. CW-ODMR-based magnetometry has advantages over pulsed-MW-based magnetometry for practical applications; it has a simpler experimental setup and looser requirements for the inhomogeneities of the bias magnetic field and MWs. Additionally, the use of a single orientation of NV center axes in our magnetometer leads to a lower requirement for the bias field alignment compared with that for multiple orientations [12; 15; 20]. The sensor head design, which can approach the target object to a distance of 1 mm, relies on a simple setup and reduced requirements for the bias field. The simplified geometry between the single orientation of NV centers and the magnetic field to be measured also facilitates various practical applications. We note that a better sensitivity of around 2 pT/\(\sqrt{\rm Hz}\) in the low-frequency range (from 10 Hz), achieved using the Ramsey method, has been Figure 5: Allan deviation as function of averaging time. Open circles show calculated overlapping Allan deviation from a continuous measurement for 200 minutes. Estimated uncertainties of the Allan deviations are indicated by the error bars, which are much smaller than the marker circle size. recently reported [20]; however, our sensor is more suitable for practical applications such as biomagnetic field measurement because of its simplified setup and short measurement distance. We attributed a major part of the sensitivity improvement in this work to the long dephasing time of \(T_{2}^{*}\sim 2~{}\mu\)s, achieved by decreasing the concentration of \({}^{13}\)C to about 500 ppm and using a relatively low initial nitrogen concentration of 5.6 ppm. The narrow linewidth of a CW-ODMR peak due to the long \(T_{2}^{*}\) resulted in a high response signal to a magnetic field of \(\gamma_{e}(d\tilde{I}/d\delta)=9.3\) A/T, even with the use of only a single crystallographic orientation of NV centers. The photon-shot-noise-limited sensitivity is comparable to previously reported values [12; 15]. In addition, the approximately five-fold improvement in the intensity noise reduction in our balanced detection over the balanced detection reported in a previous study [15] contributed to the good sensitivity. A reduction in the relative intensity noise of a laser can enhance sensitivity. The relative intensity noise of our laser (Coherent Verdi G5) was measured to be about \(-130\) dBc/Hz at a modulation frequency of 6.2 kHz; the typical estimated relative intensity noise for state-of-the-art solid-state lasers at the same frequency is \(-140\) dBc/Hz [31; 32]. Therefore, a 10-dB improvement in \(n_{I,\text{int}}^{2}\) is feasible. This would result in a photon-shot-noise-limited sensitivity at up to \(I_{\text{fl}}\sim 100\) mA. It is expected that the zero-crossing slope can be increased by extending the dephasing time \(T_{2}^{*}\) for the diamond. For example, a very long dephasing time of 8.5 \(\mu\)s with \([\text{NV}^{-}]=0.4\) ppm has been reported [13]. This long dephasing time will offer a four-fold improvement if the same fluorescence intensity is available since the shot-noise-limited sensitivity is proportional to the linewidth of a CW-ODMR peak [12; 25]. Although the lower \([\text{NV}^{-}]\) emits weaker fluorescence, a photocurrent of up to 10 mA can be obtained by increasing the incident laser power. 
In addition, the fluorescence collection efficiency can be boosted to approximately unity by using a total internal reflection lens and a light pipe [33; 20]. ## V Conclusions We demonstrated a sensitive diamond magnetometer with a magnetic field sensitivity of \(9.4\pm 0.1\) pT/\(\sqrt{\text{Hz}}\) in a near-dc frequency range of 5 to 100 Hz. The magnetometer can closely approach the target object and the measurement distance from the sensing volume was about 1 mm. The Allan deviation indicated that our magnetometer can measure magnetic fields of 8.5 and 0.3 pT with a unity signal-to-noise ratio by interrogating for 1 second and several thousands of seconds, respectively. Our high-sensitivity diamond magnetometer was designed to be compatible with practical applications, including the measurement of the encephalomagnetic field of a living animal. The sensitivity improvement achieved in this work is an important step toward realizing magnetoencephalography under ambient conditions with millimeter-scale spatial resolution. ###### Acknowledgements. This work was supported by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant No. JPMXS0118067395 and JPMXS0118068379.
2309.03924
Automatic Algorithm Selection for Pseudo-Boolean Optimization with Given Computational Time Limits
Machine learning (ML) techniques have been proposed to automatically select the best solver from a portfolio of solvers, based on predicted performance. These techniques have been applied to various problems, such as Boolean Satisfiability, Traveling Salesperson, Graph Coloring, and others. These methods, known as meta-solvers, take an instance of a problem and a portfolio of solvers as input. They then predict the best-performing solver and execute it to deliver a solution. Typically, the quality of the solution improves with a longer computational time. This has led to the development of anytime selectors, which consider both the instance and a user-prescribed computational time limit. Anytime meta-solvers predict the best-performing solver within the specified time limit. Constructing an anytime meta-solver is considerably more challenging than building a meta-solver without the "anytime" feature. In this study, we focus on the task of designing anytime meta-solvers for the NP-hard optimization problem of Pseudo-Boolean Optimization (PBO), which generalizes Satisfiability and Maximum Satisfiability problems. The effectiveness of our approach is demonstrated via extensive empirical study in which our anytime meta-solver improves dramatically on the performance of Mixed Integer Programming solver Gurobi, which is the best-performing single solver in the portfolio. For example, out of all instances and time limits for which Gurobi failed to find feasible solutions, our meta-solver identified feasible solutions for 47% of these.
Catalina Pezo, Dorit Hochbaum, Julio Godoy, Roberto Asin-Acha
2023-09-07T03:04:50Z
http://arxiv.org/abs/2309.03924v1
# Automatic Algorithm Selection for Pseudo-Boolean Optimization with Given Computational Time Limits ###### Abstract Machine learning (ML) techniques have been proposed to automatically select the best solver from a portfolio of solvers, based on predicted performance. These techniques have been applied to various problems, such as Boolean Satisfiability, Traveling Salesperson, Graph Coloring, and others. These methods, known as meta-solvers, take an instance of a problem and a portfolio of solvers as input. They then predict the best-performing solver and execute it to deliver a solution. Typically, the quality of the solution improves with a longer computational time. This has led to the development of _anytime_ selectors, which consider both the instance and a user-prescribed computational time limit. _Anytime meta-solvers_ predict the best-performing solver within the specified time limit. Constructing an anytime meta-solver is considerably more challenging than building a meta-solver without the "anytime" feature. In this study, we focus on the task of designing anytime meta-solvers for the NP-hard optimization problem of _Pseudo-Boolean Optimization_ (PBO), which generalizes Satisfiability and Maximum Satisfiability problems. The effectiveness of our approach is demonstrated via extensive empirical study in which our anytime meta-solver improves dramatically on the performance of Mixed Integer Programming solver Gurobi, which is the best-performing single solver in the portfolio. For example, out of all instances and time limits for which Gurobi failed to find feasible solutions, our meta-solver identified feasible solutions for \(47\%\) of these. Algorithm Selection \(\cdot\) PBO Combinatorial optimization \(\cdot\) ML ## 1 Introduction _Per-instance Automatic Algorithm Selection_ (AAS), first proposed in (Rice, 1976), consists of, for a given instance of a known problem and a portfolio of algorithms for the problem, a prediction of an algorithm in the portfolio that best solves the given instance. The prediction is done by Machine Learning models that are trained on a set of problem instances. This is of particular interest for NP-hard optimization problems since, for such problems, there is no single algorithm that dominates the others on every instance in every possible scenario. The _anytime behavior_ of an algorithm, when a feasible solution is available, is its profile of improvement in the objective function value at each successive time step. _Anytime Automatic Algorithm Selection_ aims to choose the algorithm which is expected to find the best possible solution, within the given time limit, for a specific instance. Previously, Anytime Automatic Algorithm Selection meta-solvers were proposed for the Knapsack (Huerta et al., 2020) and Traveling Salesperson (Huerta et al., 2022) problems. We devise here Anytime Automatic Algorithm Selection for the NP-hard Pseudo-Boolean Optimization problem (PBO) (Boros and Hammer, 2002). PBO is an optimization problem with an objective function that is a Pseudo-Boolean function and subject to constraints that are (in)equalities over Boolean variables. Many problems are typically modeled as PBO, including hardware and software verification (Manquinho and Marques-Silva, 2005; Wille et al., 2011), software dependency (Trezentos et al., 2010), planning (Acha et al., 2022), scheduling problems (Asin Acha and Nieuwenhuis, 2014), and the satisfiability problem (SAT), the maximum satisfiability problem (MaxSAT) (Biere et al., 2009a). 
As such, improving the ability to deliver high-quality solutions for PBO can impact the solvability of a broad range of problems. Indeed, a number of commercial and publicly available algorithms (solvers) have been proposed for PBO, and the SAT community maintains the Pseudo-Boolean Competition (Manquinho et al., 2011) in which the performance of state-of-the-art solvers is assessed. This paper describes a meta-solver that, for a given instance and time limit, i) predicts, using a Machine Learning model, which solver, among a portfolio of solvers, will deliver a best-quality (smallest objective value) feasible solution, and ii) executes such a solver. Our experiments demonstrate that our meta-solver outperforms all the individual solvers in the portfolio by a wide margin. In particular, our meta-solver outperforms Gurobi - which is the dominant solver in the portfolio - in achieving better quality solutions for a portion of the instances and time limits where Gurobi finds feasible solutions. Moreover, in 47% of the cases where Gurobi does not identify a feasible solution, our meta-solver does. A major contribution of our meta-solver is that it identifies with great precision when feasibility is _not_ expected to be attained for the given instance within the specified time limit. Beyond achieving improved results, our study provides insights into the most important features that determine the choice of the best solver. We identify the fraction of terms that appear in the objective function, out of the total number of terms in the objective and the constraints, as a major feature. This feature has not appeared previously in algorithm selection studies on SAT and MaxSAT. Another major feature is the prescribed time limit, which appears to be more important than other characteristics of the instances in determining the solver selection. The paper is organized as follows: In Section 2 we discuss related work. Section 3 presents essential concepts and terminology. Section 4 describes the meta-solver construction and Section 5 presents and analyzes experimental results. Finally, Section 6 discusses future work and conclusions. ## 2 Related Work This section provides an overview of related work on Pseudo-Boolean Optimization solvers, and on Automatic Algorithm Selection for the SAT and MaxSAT problems, which are special cases of PBO. To facilitate the reading of the paper we provide, in Table 1, a list of acronyms used throughout. ### PBO solvers Most PBO solvers work by repeatedly calling a subroutine, based on the Conflict-Driven Clause-Learning (CDCL) algorithm (Biere et al., 2009b), that decides whether the input formula is feasible.
The optimization problem is translated into a feasibility problem by adding to the constraints the _objective function constraint_, which is an inequality specifying that the objective function is less than or equal (for minimization) to a specified upper bound. This translates the PBO problem into a Boolean Constraint Satisfaction (BCS) problem. Many solvers (e.g. (Sorensson, 2010; Sakai and Nabeshima, 2015; Martins et al., 2014)) further encode the BCS problem as a CNF Satisfiable (SAT) formula. Another family of solvers, e.g. (Wolsey and Nemhauser, 1999; Gurobi Optimization, LLC, 2023), implements a Branch & Bound search strategy on a search tree that, at each node, solves the linear relaxation of the problem. In addition to these, a third family of solvers uses local search procedures (Lei et al., 2021).

\begin{table} \begin{tabular}{c c|c c} **Acronym** & **Definition** & **Acronym** & **Definition** \\ \hline AAS & Automatic Algorithm Selection & LSU & Linear SAT-UNSAT algorithm \\ AAAS & Anytime Automatic Algorithm Selection & MaxSAT & Maximum Boolean Satisfiability problem \\ ASLib & Algorithm Selection Library & ML & Machine Learning \\ ASP & Answer Set Programming & MIP & Mixed Integer Programming \\ BCS & Boolean Constraint Satisfaction & NaPS & Nagoya Pseudo-Boolean Solver \\ BDD & Binary Decision Diagram & PB & Pseudo-Boolean \\ CDCL & Conflict Driven Clause Learning & PBO & Pseudo-Boolean Optimization \\ GB & Gradient Boosting & RF & Random Forest \\ CNF & Conjunctive Normal Form & SAT & Boolean Satisfiability Problem \\ CNN & Convolutional Neural Network & SBS & Single Best Solver \\ KNN & K-Nearest Neighbors & VBS & Virtual Best Solver \\ LP & Linear Programming & WBO & Weighted Boolean Maximization \\ LS & Local Search & WPM & Weighted Partial MaxSAT \\ \hline \end{tabular} \end{table} Table 1: Acronyms used in this paper.

Next, we list the PBO solvers considered for inclusion in the portfolio of the meta-solver. These solvers were chosen due to their good performance in the PBO competitions (Manquinho et al., 2011). **NaPS:**: The Nagoya Pseudo-Boolean Solver (Sakai and Nabeshima, 2015) won the 2016 Pseudo-Boolean Competition in \(4\) categories. This solver is a MaxSAT solver, based on Minisat+ (Sorensson, 2010). The main difference between NaPS and other PBO solvers that translate the formula to MaxSAT is that NaPS uses Binary Decision Diagrams (BDD) to translate the PB constraint to a SAT formula. **OpenWBO:**: Open-WBO (Martins et al., 2014) is a weighted partial MaxSAT solver that won second place in two categories in the 2016 PBO Competition. PBO instances are easily translated into weighted partial MaxSAT instances where the PBO's constraints are translated into hard clauses (that must be satisfied), and the objective function is translated into a set of weighted soft clauses. Open-WBO implements five different search algorithms, of which we only consider two since the other three were dominated by other algorithms in our portfolio. The two search algorithms are: **Linear-su:**: This algorithm translates the PBO instance to Weighted Partial MaxSAT and uses the LSU search strategy as explained in (Koshimura et al., 2012). We will refer to this option as _OpenWBO-lsu_. **oll:**: This algorithm translates the PBO instance into Weighted Partial MaxSAT and uses a search strategy similar to WPM1, as explained in (Ansotegui et al., 2012). This option will be referred to as _OpenWBO-oll_.
**Clasp:**: Clasp (Gebser et al., 2007) is part of the PosTdam Answer Set Solving COllection, POTASSCO. It is a CDCL solver for Answer Set Programming. Answer Set Programming (ASP) is a form of declarative programming oriented towards difficult (primarily NP-hard) search problems that is more expressive and subsumes PBO. It uses different semantics than other CDCL solvers and, as such, it has superior performance for certain subsets of instances. **LS-PBO:**: The local search LS-PBO solver achieved good performance in instances from the PB competition. It features a transformation of the objective function into objective constraints, a constraint weighting scheme for the Pseudo-Boolean constraints, and a scoring function to guide the local search (Lei et al., 2021). **Gurobi:**: Gurobi (Gurobi Optimization, LLC, 2023) is a Mixed Integer Programming (MIP) commercial solver that is able to handle mixed linear, quadratic and second-order cone constraints. When solving a PBO instance, Gurobi uses a Branch & Bound search procedure powered by advanced preprocessing techniques, intelligent generation of cutting planes, specialized heuristics, and parallel processing. Here we used version 9.5.0. **RoundingSat:**: The RoundingSat solver, originally introduced in (Elffers and Nordstrom, 2018), is a CDCL solver that includes faster propagation routines for PB constraints. Unlike other solvers, it does not translate the PB constraints into a SAT formula but executes conflict analysis directly on the PB constraints. It also allows for incorporating a Linear Programming (LP) solver into its pipeline. ### Algorithm Selection for SAT and MaxSAT A thorough review of Automatic Algorithm Selection (AAS) is provided in (Kerschke et al., 2019). The performance of AAS meta-solvers has been improved over time due to the influence of algorithm selection competitions (Lindauer et al., 2019) and the maintenance and updating of the Algorithm Selection Library (ASLib) (Bischl et al., 2016). In particular, for SAT and MaxSAT (which are closely related to PBO), many successful meta-solving approaches were proposed in (Xu et al., 2008), (Malitsky et al., 2012), (Ansotegui et al., 2016),(Hoos et al., 2015), (Pulina and Tacchella, 2007), (Gebser et al., 2011), (Maratea et al., 2014). For example, the SATzilla solver (Xu et al., 2008) has been quite influential in the SAT community and won several categories in different versions of the SAT competition and SAT evaluation. SATzilla is a Portfolio-Based Algorithm Selection system that chooses the appropriate solver in the portfolio, based on the computation of a number of features from the input instance and other features it collects from probing procedures. For MaxSAT, an improved instance-specific algorithm configuration, also based on different formula and probing features, was proposed in (Ansotegui et al., 2016). This solver won the majority of the categories of the MaxSAT competition in 2016. SATzilla's first version (Nudelman et al., 2004) proposed \(84\) features for characterizing SAT instances, classified into \(9\) categories: problem size, variable-clause graph, variable graph, clause graph, balance, proximity to Horn formulae, LP-based, CDCL probing and local search probing features. (Xu et al., 2008) used \(48\) of those proposed features, excluding the computationally expensive ones, to build SATzilla. 
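Many of these are cheap, purely syntactic statistics. As a rough illustration (and not the SATzilla implementation), a handful of such features can be computed directly from a DIMACS-style CNF as sketched below; the parsing and the exact feature names are assumptions for illustration.

```python
from collections import Counter

def basic_cnf_features(dimacs_text: str) -> dict:
    """A few cheap, SATzilla-style syntactic features of a DIMACS CNF."""
    clauses = []
    for line in dimacs_text.splitlines():
        line = line.strip()
        if not line or line.startswith(("c", "p")):
            continue
        lits = [int(tok) for tok in line.split() if tok != "0"]
        if lits:
            clauses.append(lits)
    occurrences = Counter(abs(l) for c in clauses for l in c)
    n_vars = len(occurrences)
    n_lits = sum(len(c) for c in clauses)
    n_pos = sum(1 for c in clauses for l in c if l > 0)
    return {
        "num_variables": n_vars,
        "num_clauses": len(clauses),
        "pos_literal_ratio": n_pos / n_lits if n_lits else 0.0,
        "mean_occurrences_per_variable": n_lits / n_vars if n_vars else 0.0,
    }

example = """p cnf 3 3
1 -2 0
-1 2 3 0
-3 0
"""
print(basic_cnf_features(example))
```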
In (Ansotegui et al., 2016), \(32\) of the standard SAT features were selected, such as the number of variables, number of clauses, proportion of positive to negative literals, and average number of clauses in which a variable appears, among others. For the specific MaxSAT problem, they also computed the percentage of clauses that are soft and the statistics of the distribution of weights. In (Loreggia et al., 2016), the authors propose a new approach to AAS, following the philosophy of deep learning models that replace domain-specific features with generic raw data, from which they learn the important features automatically. For this, the authors propose to use as raw data the input text file of any combinatorial problem and convert it to a fixed-size image, that will be used as input for a Convolutional Neural Network (CNN). Specifically, they first create a vector from the input file, replacing each character with its ASCII code, they then reshape the vector as a matrix of \(\sqrt{N}\times\sqrt{N}\), where \(N\) is the number of total characters in the input text file. Finally, this new "image" of ASCII values is re-scaled to a predefined size, to work with a set of images of the same size. With this input, the selector is a trained CNN multi-label classification model that encodes the input instance and outputs the most promising solver for the instance. This approach is tested with SAT and Constraint Satisfaction (CSP) instances, obtaining a meta-solver that is able to outperform the Single Best Solver (see Subsection 3.3), but underperforms in comparison with methods based on domain-specific features. As a baseline for our work, we will use a straightforward adaptation to this approach to anytime scenarios, since no specific work on Anytime Automatic Algorithm Selection for PBO has been proposed until now. ## 3 Preliminaries In this Section, we give an overview of Machine Learning methods that we use and provide formal definitions of the Pseudo-Boolean Optimization problem, as well as the Automatic Algorithm Selection problem. We also present a performance metric, called the \(\tilde{m}\), that is used in addition to accuracy and confusion matrix to assess the performance of the proposed meta-solver. ### Machine Learning Over the past decade, the field of Machine Learning (ML) within Artificial Intelligence has undergone significant development, according to (Alpaydin, 2021). ML has become a powerful tool for processing and analyzing large volumes of data, as algorithms developed for various ML models aim to uncover hidden patterns within the data. These models learn from a given set of data, called a training set, to create a function \(f\) that maps an input instance to a corresponding scalar or vector output, referred to as labels. The process by which \(f\) is learned determines the classification of the ML model. If the learning process relies on ground truth labels, consisting of input instances and their corresponding output labels, then the model is considered supervised. Examples of supervised models can be found in Burkart's survey (Burkart & Huber, 2021). If the model finds patterns independently without access to ground truth labels, it is considered unsupervised, as seen in Alloghani's work (Alloghani et al., 2020). Semi-supervised models combine ground truth labels with pattern analysis of input data to learn, as described in (Zhu & Goldberg, 2009). 
Supervised and unsupervised machine learning can both perform automatic algorithm selection/configuration, as demonstrated by the work of (Kadioglu et al., 2010). However, the focus of this text is on supervised Machine Learning. ML models can also be classified based on the nature of the output produced by \(f\). If the output consists of discrete values used to categorize inputs into different classes, the model is considered a classification model. On the other hand, if the output corresponds to real values, the model is a regression model. The supervised ML algorithms for classification used here are: **Random Forest:**: The Random Forest (RF) method, as described by Breiman (Breiman, 2001), is an ensemble technique (Breiman, 1996) that constructs a specified number of decision trees (controlled by a parameter \(n_{estimators}\)). Each tree is trained on a different subset of instances within the training set and proposes a result to compute the output label. The final output label is determined through a consensus scheme that differs depending on whether the model is a regression or classification model. In the case of regression, the consensus is reached by averaging the outputs of all the decision trees. In contrast, for classification models, the output label corresponds to the most frequently repeated label (voted) among the decision trees' outputs. The decision trees themselves are constructed using the decision tree algorithm described in (Quinlan, 1986). \(k\)**-Nearest Neighbors:**: The \(k\)-Nearest Neighbors (KNN) algorithm, introduced by Fix and Hodges (Fix & Hodges, 1991), is a Machine Learning (ML) method that determines the output label based on the labels of the \(k\) closest training examples to the input point being labeled. The distance between the feature vectors of the input point and the training examples can be calculated using various metrics, but the most commonly used is the Euclidean distance. For classification tasks, KNN assigns the output label as the most frequently occurring label among the \(k\) neighbors. In the case of regression tasks, the output label corresponds to the average of the labels of the \(k\) neighbors. **Gradient Boosting:**: Gradient Boosting (GB) is an ML method, proposed in (Friedman, 2001), that builds upon the ideas behind Ada Boost. GB allows for different parameterized loss functions to be defined. The learning process involves consecutively training a parameterized number (\(n_{estimators}\)) of new "weak" models, with each new model being given as input to the next iteration. In a manner similar to gradient descent, a negative gradient is computed based on the past model, which is weighted according to a parameterized scheme (\(learning\_rate\)). A move in the opposite direction is then taken to reduce the loss. This process is repeated to improve the performance of the model. ### Pseudo-Boolean Optimization (PBO) A Pseudo-Boolean function is a mapping \(f:\{0,1\}^{n}\rightarrow\mathbb{R}\), where \(\mathbb{R}\) is the set of real numbers, (Boros & Hammer, 2002). A Pseudo-Boolean Optimization Problem (PBO) is formulated for an array of Boolean variables \(\mathbf{x}\) as follows: \[\min f(\mathbf{x})\] s.t. 
\[g_{1}(\mathbf{x})\geq a_{1}\] \[\vdots\] \[g_{n}(\mathbf{x})\geq a_{n}\] \[\mathbf{x}\in\{0,1\}^{n}.\] Without loss of generality, the constraints are of the form \(g_{i}(\mathbf{x})=b_{1}t_{1}+b_{2}t_{2}+\ldots+b_{m}t_{m}\), where the \(b_{j}\) are integers and each \(t_{j}\), called a _term_, is a product of the variables in a subset \(S_{j}\subseteq\{1,2,\ldots,n\}\), \(t_{j}=\prod_{k\in S_{j}}x_{k}\). ### Performance metric for Anytime Automatic Algorithm Selection A specific type of Algorithm Selection is _per-instance Automatic Algorithm Selection (AAS)_: given a problem \(P\), with \(I\) a set of instances of \(P\), \(A=\{A_{1},A_{2},...,A_{n}\}\) a set of algorithms for \(P\) and a general given metric \(pm\) that measures the performance of any algorithm \(A_{j}\in A\) for \(I\), AAS consists of a selector \(S\) that maps any instance \(i\in I\) to an algorithm \(S(i)\in A\) such that the overall performance of \(S\) on \(I\) is optimal according to metric \(pm\). In order to measure the performance of a solver \(s\in A\) over time, we discretize the time-space into _timesteps_. Let \(I\) be a set of instances, and \(T\) a set of timesteps. For the instance-timestep pair \((i,t)\in I\times T\), let \(o_{s}(i,t)\) be the objective value of \(s\) on instance \(i\) at timestep \(t\). Since the value for \(o_{s}(i,t)\) can greatly vary across instances and timesteps, in order for each data point to weigh equally in a cumulative metric, a normalization function \(n(o_{s}(i,t),i,t)\) is used to map \(o_{s}(i,t)\) values to a uniform range. For PBO, we use the normalization given in (5). The cumulative metric \(m_{s}\) we use, also considered in (Amadini & Stuckey, 2014), is defined as: \[m_{s}=\sum_{(i,t)\in I\times T}n(o_{s}(i,t),i,t) \tag{1}\] and corresponds to the normalized cumulative performance of solver \(s\) across all pairs \((i,t)\in I\times T\). For a meta-solver \(ms\) that, for each instance-timestep pair \((i,t)\), selects solver \(s^{\prime}_{i,t}\), its cumulative performance metric is defined as: \[m_{ms}=\sum_{(i,t)\in I\times T}n(o_{s^{\prime}_{i,t}}(i,t),i,t) \tag{2}\] The evaluation of meta-solvers is usually done in comparison to the performance of the single best solver and the virtual best solver, defined as: **Single Best Solver (SBS):**: The single algorithm that performs best (on average) on _all_ instances. **Virtual Best Solver (VBS):**: A solver that makes perfect decisions and matches the best-performing algorithm for each problem instance, without overhead. For an algorithm selector meta-solver \(ms\), the \(\hat{m}_{ms}\) metric was proposed for the Algorithm Selection Competitions (Lindauer et al., 2019), using, for each solver \(s\), the performance metric \(m_{s}=\sum_{i\in I}n(o_{s}(i),i)\). Here we generalize it for anytime algorithm selector meta-solvers as follows. \[\hat{m}_{ms}=\frac{m_{ms}-m_{VBS}}{m_{SBS}-m_{VBS}} \tag{3}\] where \(m_{ms}\) is the normalized cumulative performance of meta-solver \(ms\), \(m_{VBS}\) is the normalized cumulative performance of the VBS, and \(m_{SBS}\) is the normalized cumulative performance of the SBS. We observe that: * The closer \(\hat{m}_{ms}\) is to \(0\), the more similar the meta-solver is to the VBS. * If \(\hat{m}_{ms}>1\), then the meta-solver is worse than the SBS and, hence, is not useful. ## 4 Designing the Machine Learning Oracle for AAAS for PBO In this section we describe the workflow we carried out for designing and implementing AAAS Machine Learning oracles for PBO.
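Before detailing the workflow, the metric of Section 3.3 can be made concrete with a short sketch. The toy values, the tabular layout, and the treatment of missing solutions (anticipating the normalization of Equation (5)) are for illustration only.

```python
def normalize(o, o_min, o_max):
    """Normalization used for PBO (anticipating Equation (5)): values in [0, 1],
    0 when min == max, and 2 when the solver has no feasible solution for the pair."""
    if o is None:
        return 2.0
    if o_min == o_max:
        return 0.0
    return (o - o_min) / (o_max - o_min)

def m_hat(norm_scores_ms, norm_scores_sbs, norm_scores_vbs):
    """Equation (3): compare a meta-solver against the single best solver (SBS)
    and the virtual best solver (VBS) over the same instance-timestep pairs.
    Each argument is the list of normalized scores n(o, i, t) over I x T."""
    m_ms, m_sbs, m_vbs = sum(norm_scores_ms), sum(norm_scores_sbs), sum(norm_scores_vbs)
    return (m_ms - m_vbs) / (m_sbs - m_vbs)

# Toy example over three instance-timestep pairs; the (o_min, o_max) bounds and the
# objective values below are made up for illustration.
bounds = [(10, 20), (0, 5), (7, 7)]
ms  = [normalize(o, lo, hi) for o, (lo, hi) in zip([10, 1, 7],    bounds)]  # meta-solver
sbs = [normalize(o, lo, hi) for o, (lo, hi) in zip([14, None, 7], bounds)]  # single best solver
vbs = [normalize(o, lo, hi) for o, (lo, hi) in zip([10, 0, 7],    bounds)]  # virtual best solver
print(m_hat(ms, sbs, vbs))   # ~0.083: close to 0 means close to the VBS; > 1 would be worse than the SBS
```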
In Subsection 4.1 we give details on how we recorded the anytime behavior of the solvers in the chosen portfolio and elaborate on the characteristics of the instance benchmarks used for our work. Subsection 4.2 presents the dataset we used for the training and testing of our meta-solver, and Subsection 4.3 describes the possibilities we considered for characterizing the instances to use as input features to our models. Further details on the ML models and algorithms tested can be found in Subsection 4.4. Finally, Subsection 4.5 presents the evaluation of the implemented ML models. ### PBO instances and solvers The dataset of PBO instances was obtained from the 2006, 2007, 2009, 2010, 2011, 2012, 2015, and 2016 Pseudo-Boolean Competitions (Manquinho et al., 2011). These instances were collected from different domain applications such as Bio-informatics, Timetabling, and Hardware verification, among others. The instances with similar origins are organized in benchmarks or "families", and differ from one another in the type of constraints (linear or nonlinear) and the magnitudes of the constraints' coefficients (normal integers or arbitrary precision integers). Our experimental study includes \(118\) benchmarks, for a total of \(3128\)_feasible_ instances. The solvers used for the construction of the meta-solver, described in detail in Section 2.1, are: NaPS, two variants of OpenWBO, LS-PBO, RoundingSat, Gurobi and Clasp. We only considered solvers that either have their code available, so that we could modify them to record their anytime behavior, or that already provide this capability by default. In order to evaluate the anytime behavior of the solvers, we discretize a time interval of one hour into \(500\) timesteps following a logarithmic scale, analogous to (Huerta et al., 2022). Whenever the value of the objective function improves, we record the corresponding timestep and the updated incumbent solution. The anytime behavior of each solver is thus recorded as the updated best objective value (incumbent) for each of the \(500\) timesteps.

Figure 1: Anytime behavior of the solvers for the instance "normalized-C499_b" from Benchmark 106.

Figure 1 shows the anytime behaviors of the solvers for the instance "normalized-C499_b", which corresponds to a logic-synthesis application. Note that the solver that outputs the solution with the smallest value at a given timestep \(ts\) is considered the best option for any specified time limit between the time corresponding to \(ts\) and the next timestep \(ts+1\). In Figure 1 we observe the change in the best solver across the timeline. Initially, for small time limits, _Clasp_ is the best solver. Then, after a few milliseconds, _LS-PBO_ becomes the best solver, but it is finally outperformed by _Gurobi_. A solver is said to _win_ an instance-timestep pair if it computes the best-found solution (i.e. a feasible solution with the best objective value) for that instance in that timestep. Ties are broken in favor of the solver that achieved such best incumbent first. ### Training and testing dataset generation

Figure 2: Number of wins for each solver across the time horizon (see explanation in text).

Figure 3: Best solver per instance for each timestep. The horizontal axis represents the \(3128\) instances arranged in the \(118\) benchmarks. Each vertical bar displays, for one instance, the change in the best solver over the timesteps.
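The time discretization and incumbent recording described above can be sketched as follows. The lower end of the logarithmic grid (0.01 s) and the data layout are assumptions for illustration, since only the one-hour horizon and the 500 logarithmically spaced timesteps are specified.

```python
import numpy as np

# 500 timesteps spanning one hour on a logarithmic scale (lower bound assumed).
timesteps = np.logspace(np.log10(0.01), np.log10(3600.0), 500)   # seconds

def incumbent_at_timesteps(improvements, timesteps):
    """improvements: list of (time_s, objective) pairs, ordered by time, recorded
    whenever a solver improves its incumbent. Returns the best objective known at
    each timestep (None if no feasible solution has been found yet)."""
    trace, best, k = [], None, 0
    for t in timesteps:
        while k < len(improvements) and improvements[k][0] <= t:
            best = improvements[k][1]
            k += 1
        trace.append(best)
    return trace

# Example log of one solver on one instance (times and values are made up).
log = [(0.05, 120), (1.2, 97), (30.0, 95), (900.0, 88)]
trace = incumbent_at_timesteps(log, timesteps)
print(trace[0], trace[-1])   # None (nothing yet at 0.01 s) and 88 (best after one hour)
```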
The dataset we used for building our ML oracles was generated from running all the solvers on the portfolio over all the instances we collected. Figure 2 summarizes the number of wins for each solver across the time horizon, for each of the \(3128\) instances. For each solver, on the horizontal axis, there is a bar consisting of \(500\) vertical lines, colored from light blue (for the small timesteps) all the way to purple (for the large timesteps). We include a "no solution" entry for instance-timestep pairs where no feasible solution was identified by any of the solvers. Throughout various instances and time intervals, four dominant solvers emerge: Gurobi, RoundingSAT, OpenWBO-oll and LS-PBO. RoundingSAT and LS-PBO exhibit a greater share of wins for smaller timesteps, in comparison with larger ones. Conversely, Gurobi's success rate grows as the timesteps become larger. Although OpenWBO-lsu, NaPS, and Clasp do not command a significant portion of victories, they complement the behavior of the more dominant solvers within the portfolio. Figure 3 summarizes the information on the best solver for each instance and for each timestep. In this figure, it is apparent that the best solver performance depends on the benchmark as well as the timestep. The clear implication is that there is no single best solver for all the instances and timesteps. Since instances belonging to the same family are plotted together, we can also observe that the behavior of the solvers in the portfolio seem to depend on the family of the instances. Most of the instances for which no feasible solution is found by any of the solvers across 500 timesteps belong to benchmarks "mps-v2-20-10" and "market-split" from the 2006 version of the PB Competition, benchmarks "opb-trendy" and "opb-paranoid" from the PB Competition 2010 and PB Competition 2012. These benchmarks correspond to the competition's category called BIGINT, which means that the coefficients can be arbitrary precision (i.e. not bounded) integer numbers. To train and test the ML models, the instances were partitioned into training and testing sets. This was done by partitioning the instances of the \(118\) benchmarks into \(70\%\) for training and \(30\%\) for testing, resulting in \(2054\) instances for training and \(1074\) instances for testing. The partition was done by randomly picking the instances from each benchmark, maintaining the same ratio. ### Characterization and labeling #### 4.3.1 Domain-specific features for PBO Based on previous work on SAT (Xu et al., 2008) and MaxSAT (Ansotegui et al., 2016), we defined a set of features for our problem. For this selection, considering the anytime nature of our meta-solver, we focus on informative fast-to-compute features. Since our problem has its own characteristics, we also test some other features that are specific to non-linear PBO instances. Therefore, here we use the following \(8\) sets of _domain-specific features_: **Number of constraints:**: Number of constraints in the instance. An equivalent feature for SAT and MaxSAT was used by (Xu et al., 2008) and (Ansotegui et al., 2016). **Number of variables:**: Number of Boolean variables present in the instance. This feature was used by (Xu et al., 2008) and (Ansotegui et al., 2016). **Linearity:**: Identifies if the formula contains non-linear constraints. No similar feature was proposed before. 
**Distribution of the number of terms per constraint:**: We partition the constraints into four classes according to the number of terms they contain: \(1\), \(2\), \(3\), or \(4\) or more terms. The four percentages of the number of constraints in each class out of the total are four features in this set. A similar set of features was used by (Xu et al., 2008). **Term degree:**: Percentage of unary, binary, ternary and quaternary-or-more terms. This is the number of terms with \(1\), \(2\), \(3\), or \(4\) or more variables out of the total number of terms in the instance. No similar feature was proposed before. **Objective function size:**: Percentage of terms that are present in the objective function, out of the total number of terms. No similar feature was proposed before. **Positive terms (Constraints):**: Percentage of positive terms in the constraints. An equivalent feature was used by (Xu et al., 2008) and (Ansotegui et al., 2016). **Positive terms (Objective):**: Percentage of positive terms in the objective function. Inspired by the above, we extend the feature for the objective function. #### 4.3.2 Ground truth labeling for the models As mentioned in Section 4.1, \(7\) different solvers were used to create the meta-solver. We then use the solvers as labels to identify which solver is the best for a given instance-time pair. We include a "no solution" label to indicate the cases where no solver obtains a feasible solution at a given timestep. This can be useful, especially, for hard instances where solvers require a long time to compute the first feasible solution. Also, in practice, a "no solution" label may indicate to the user the need for allocating more computational resources for solving the instance. Nevertheless, this feature of our model was not used for the final evaluation of the meta-solver, since this kind of prediction is not usually considered in the \(\hat{m}\) metric. Despite the potential misalignment between accuracy and \(\hat{m}\) metrics, we have chosen to train multi-label classification models that prioritize accuracy. This decision stems from the requirement of having simple and fast ML models for anytime scenarios. By employing a multi-label classification model, our approach offers the advantage of considering all solvers simultaneously and making a single call to the oracle to make a decision. This stands in contrast to more complex Algorithm Selection Systems that typically involve multiple ML oracles, such as multiple binary classification models for each pair of alternatives or regression models for individual solvers. The use of such complex systems would result in prohibitively long prediction times, which are not suitable for our anytime scenario. ### Machine Learning Models (Lorereggia et al., 2016) proposed a generic ML approach, using Convolutional Neural Networks (CNN), already described in Section 2, for designing an automatic algorithm selector able to work without the need for handcrafted domain-specific features. We test variants of this method against variants of Random Forest, Gradient Boosting and k-Nearest Neighbors based on the domain-specific characterization of Subsection 4.3. #### 4.4.1 Models using domain-specific features In our study, we conducted experiments using three ML algorithms, as outlined in Section 3.1. These algorithms were implemented using the Scikit-learn library (Pedregosa et al., 2011). We utilized various subsets of the 14 domain-specific features described in Section 4.3. 
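A minimal sketch of how the training examples of Section 4.3 could be assembled — one row per instance-timestep pair, labeled with the winning solver or with "no solution" — is shown below; the data layout and helper names are assumptions for illustration.

```python
import math

NO_SOLUTION = "no solution"

def best_solver_label(incumbents):
    """Ground-truth label for one instance-timestep pair (Section 4.3.2).
    incumbents: dict solver -> (objective, time_first_reached), with
    objective = math.inf when the solver has no feasible solution yet.
    Ties are broken in favor of the solver that reached the value first."""
    best = min(obj for obj, _ in incumbents.values())
    if math.isinf(best):
        return NO_SOLUTION
    return min((t, s) for s, (obj, t) in incumbents.items() if obj == best)[1]

def build_dataset(instances, timesteps, features, traces):
    """One row per instance-timestep pair: the instance's domain-specific features
    plus the timestep, labeled with the best solver for that pair."""
    X, y = [], []
    for inst in instances:
        for t in timesteps:
            X.append(features[inst] + [t])
            y.append(best_solver_label(traces[(inst, t)]))
    return X, y

# Example label for a single pair:
print(best_solver_label({"Gurobi": (88, 900.0), "LS-PBO": (88, 450.0),
                         "Clasp": (math.inf, math.inf)}))   # -> "LS-PBO" (earlier tie)
```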
Additionally, hyperparameter tuning was performed to determine the optimal architecture for each model, as well as weighting strategies to compensate for the natural bias induced by the dominating classes of the portfolio. **RF_basic:**: The Random Forest classifier uses only two features: the number of constraints and the number of variables. The hyperparameters used were: n_estimators = 100,max_features = "sqrt",criterion = "gini". **RF_nonlinear:**: The Random Forest classifier uses all the \(8\) sets of domain-specific features. The hyperparameters used were: n_estimators = 100,max_features = "sqrt",criterion="gini". **RF_linear:**: The Random Forest classifier uses features of the linearized version of the PBO instance. Therefore, the features related to non-linearity and term degree are redundant and removed. The hyperparameters used were: n_estimators=100,max_features = "sqrt",criterion = "gini". **GB_basic:**: The Gradient Boosting classifier uses only two features: the number of constraints and the number of variables. The hyperparameters used were: n_estimators = 100, learning_rate = 0.5, max_depth = 3,max_features="sqrt". **GB_nonlinear:**: The Gradient Boosting classifier uses all the \(8\) sets of domain-specific features. The hyperparameters used were: n_estimators = 100, learning_rate = 0.25, max_depth = 3, max_features = "sqrt". **GB_linear:**: The Gradient Boosting classifier uses features of the linearized version of the PBO instance. Therefore, the features related to non-linearity and term degree are redundant and removed. The hyperparameters used were: n_estimators = 100, learning_rate = 0.1,max_depth = 3,max_features = "sqrt". **KNN_basic:**: The \(k-\)Nearest Neighbors classifier uses only two features: the number of constraints and the number of variables. The hyperparameter used was: n_neighbors = 13. **KNN_nonlinear:**: The \(k-\)Nearest Neighbors classifier uses all the \(8\) sets of domain-specific features. The hyperparameter used was: n_neighbors = 21. **KNN_linear:**: The \(k-\)Nearest Neighbors classifier uses features of the linearized version of the PBO instance. Therefore, the features related to non-linearity and term degree are redundant and removed. The hyperparameter used was: n_neighbors = 21. For all the variants, the set of features is augmented with the feature of timestep, which increments the number of the model's input features in one. We note that for the linear versions, we first need to linearize the input instance in order to compute the purely linear features, which is not the case for the nonlinear versions, for which we don't incur in such overhead for the computing of the non-linear features. #### 4.4.2 CNN for Loreggia's Representation As a baseline method, we characterize the instances as images, following the proposal of (Loreggia et al., 2016) (described in Section 2.2). These images are given as input to a Convolutional Neural Network (CNN), which outputs the best solver for every timestep. Hence, we adapt the method to handle anytime scenarios by learning a label for each of the possible \(500\) timesteps. That way, when a prediction for a particular time is needed, we have to inspect the output of the network that corresponds to the closest (smaller or equal) timestep output. For the implementation of the CNN, three different architectures were tested: **VGG16**(Simonyan & Zisserman, 2014), **AlexNet**(Krizhevsky et al., 2017) and **GoogLeNet**(Szegedy et al., 2015). 
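As a rough illustration, the three scikit-learn classifiers described above can be instantiated with the reported hyperparameters as in the sketch below. The feature matrix `X` (domain-specific features plus the timestep as the last column) and the labels `y` are placeholders, and `class_weight="balanced"` is only an assumed way of realizing the class re-weighting mentioned in Section 4.4.1.

```python
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier

# Hyperparameters as reported for the *_nonlinear variants.
models = {
    "RF_nonlinear":  RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                            criterion="gini",
                                            class_weight="balanced"),  # re-weighting: our assumption
    "GB_nonlinear":  GradientBoostingClassifier(n_estimators=100, learning_rate=0.25,
                                                max_depth=3, max_features="sqrt"),
    "KNN_nonlinear": KNeighborsClassifier(n_neighbors=21),
}

# X: one row per instance-timestep pair = [domain-specific features..., timestep];
# y: best-solver label for that pair (placeholders, produced as in the previous sketch).
# for name, clf in models.items():
#     clf.fit(X_train, y_train)
#     print(name, clf.score(X_test, y_test))
# MDI feature importances used later in the evaluation are exposed by the fitted
# tree ensembles, e.g. models["RF_nonlinear"].feature_importances_
```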
### Evaluation Table 2 compares the accuracy and \(\hat{m}\) values (calculated as described in 5.1) of the different combinations of ML models and subsets of features as explained in the previous subsection. As can be seen, GB_linear provides the best performance in accuracy and RF_nonlinear the best performance in the \(\hat{m}\) metric. The ML methods relying on domain-specific features, regardless of the subsets of features considered, outperform in accuracy the deep-learning networks based on the generic representation of (Loreggia et al., 2016), which we take as a baseline. The Deep Learning Network that provides the best accuracy and \(\hat{m}\) values is the one based on the AlexNet architecture. Although not perfect, Table 2 demonstrates an inverse correlation relation between the accuracy and \(\hat{m}\) metrics. This is with the noticeable exception of the best-performing Deep Learning Network, AlexNet, which, in comparison with GB_basic and all KNN models, with a worse accuracy value achieves a better \(\hat{m}\) score. It is important to note that this table does not account for the overhead associated with computing the features or the time required for the models to generate predictions. These factors can significantly impact the practical performance of using these models to build an anytime meta-solver. Therefore, we will further analyze and present results considering the four best-performing combinations of models and sets of features: RF_nonlinear, RF_linear, GB_nonlinear, and GB_linear. Figure 4 depicts the Confusion Matrix for our best models. It is evident that all matrices demonstrate a similar pattern in the behavior of the models. Generally, it can be inferred that the classes were learnable, except for the Clasp class, which has a smaller representation in the dataset. It is natural for these models that the higher the class representation in the dataset, the higher the accuracy for that class. Similarly, a higher class representation increases the likelihood of the model over-predicting that class. To mitigate this issue, we implemented methods to address the bias introduced by the dominant classes. These methods involved assigning, during the training of the models, bigger weights to miss-classifications of less frequent classes, compared to the more dominant ones. The output of Random Forest and Gradient Boosting includes the MDI (Mean Decrease in Impurity) for each feature, which is a proxy for feature importance. The higher the value of the MDI, the more important the feature is. Figure 5 shows the MDI values for the \(10\) features of the linearized instances for both models. It is evident that the importance of features varies depending on the model used. In particular, for the GB model, the timestep feature appears to be less significant compared to other features related to the composition of the PBO formula in making predictions. 
On the other hand, the RF model heavily relies on the timestep feature to make recommendations. Both models consider the _percentage of terms that are present in the objective function_, first proposed here, as a very important feature.

\begin{table} \begin{tabular}{l l l} \hline \hline **ML Oracle** & **Accuracy** & **Metric \(\hat{m}\)** \\ \hline \hline Loreggia’s W VGG & 0.4712 & 1.00 \\ Loreggia’s W AlexNet & 0.5775 & 0.7157 \\ Loreggia’s W GoogLeNet & 0.4931 & 1.3565 \\ RF\_basic & 0.6580 & 0.7108 \\ **RF\_nonlinear** & 0.7106 & **0.5250** \\ **RF\_linear** & **0.7159** & 0.5729 \\ GB\_basic & 0.6379 & 0.8252 \\ GB\_nonlinear & 0.7046 & 0.6198 \\ **GB\_linear** & **0.7184** & 0.6501 \\ KNN\_basic & 0.6225 & 0.8481 \\ KNN\_nonlinear & 0.6407 & 0.8589 \\ KNN\_linear & 0.6621 & 0.7971 \\ \hline \end{tabular} \end{table} Table 2: Accuracy and \(\hat{m}\) values for different Machine Learning models and subsets of characteristics.

Figure 6 provides a visual representation of how our two most accurate models, RF_linear and GB_linear, behave. By examining this figure in conjunction with Figures 4 and 5, and Table 2, we can draw some conclusions. Although GB_linear achieves higher accuracy, this is primarily due to the bias introduced by the four most dominant classes, for which it exhibits superior performance compared to the RF model. Furthermore, it is evident that the GB model places less emphasis on the anytime behavior of the solvers and tends to select the same solver for a given instance, regardless of the timestep. In contrast, the RF model demonstrates a more varied selection of solvers based on the timestep. This observation aligns with the analysis of feature importance in the GB and RF models.

Figure 4: Confusion matrices of PBO meta-solvers based on RF_nonlinear, RF_linear, GB_nonlinear and GB_linear.

Figure 5: MDI (Mean Decrease in Impurity) values for the \(10\) features of the linearized instances under the RF and GB models.

Figure 6: Comparison of ground truth labels with predicted labels of the RF_linear and GB_linear for the test set.

## 5 Results ### Meta-solver's performance In this section, we present the results of the performance of our meta-solver based on the four best models explained in the previous section: the RF_nonlinear, RF_linear, GB_nonlinear, and GB_linear ML oracles. As explained in Subsection 3.3, the best way to measure the performance of a meta-solver is through the \(\hat{m}_{ms}\) metric. For our particular case, \(\hat{m}_{ms}\) was calculated considering Gurobi as the SBS, for all timesteps. For computing the cumulative score \(m_{s}\), for each solver \(s\), as defined in Equation 1, the normalization function \(n(o_{s}(i,t),i,t)\) of \(o_{s}(i,t)\) (the objective value of \(s\) on instance \(i\) at timestep \(t\)) is defined so that its co-domain is in the range \([0,1]\cup\{2\}\). For this, we compute \(o_{min}(i)\), the minimum feasible value (in many cases the optimal value) of the objective function for the instance \(i\), and \(o_{max}(i)\), the maximum feasible value of the objective function for the instance \(i\), both considering all the feasible solutions found by all the solvers. The by-default normalization of a given value \(o_{s}(i,t)\) is computed as follows: \[n^{\prime}(o_{s}(i,t),i,t)=\frac{o_{s}(i,t)-o_{min}(i)}{o_{max}(i)-o_{min}(i)} \tag{4}\] This by-default normalization is not always well defined and some special cases have to be considered.
Considering such cases, we formally define \(n(o_{s}(i,t),i,t)\) as: \[\begin{cases}0&\text{if }o_{s}(i,t)=o_{min}(i)=o_{max}(i)\\ 2&\text{if }o_{s}(i,t)\text{ is undefined}\\ &\text{but }o_{max}(i)\text{ is defined}\\ n^{\prime}(o_{s}(i,t),i,t)&\text{otherwise}\end{cases} \tag{5}\] As \(\hat{m}_{ms}\) compares the meta-solver with SBS, and such solver is not able to use the "no solution" label in its favor, for our meta-solver's evaluation, we decide to only consider instance-time pairs for which \(o_{max}(i,t)\) is defined (i.e. we don't consider the instance-timesteps pairs that correspond to white points on Fig 27). One issue to consider concerning the computational time limit is whether to include the feature computation and prediction times needed by the ML Oracle in addition to running the solver. The prediction requires input preparation for the instance (computing the features for the model, constructing the image for CNN, and linearizing the instances for the models with only linear features) and running the ML model. If we consider the prediction time, there is less time to run the solver and, consequently, the value of our performance metric \(\hat{m}_{ms}\) goes up. We report on the performance of RF_nonlinear, RF_linear, GB_nonlinear, and GB_linear for both cases, when prediction time "overhead" is included or not, for each timestep, in Table 3. It is evident that while the RF_linear model exhibits the better accuracy value, it is the RF_nonlinear one that achieves the best \(\hat{m}\) value. The difference in the \(\hat{m}\) scores between these two models grows even bigger when considering the overhead. This is primarily due to the time impact of linearizing the instances before computing the features in the RF_linear case. This also happens with the GB models, although the \(\hat{m}\) values are less competitive than the ones achieved by the RF models. Figure 7 provides insight into how the \(\hat{m}\) value changes for each timestep for the best-performing RF and GB models, taking into account the overhead. As anticipated, we observe that the overhead has a negative impact on the \(\hat{m}\) value during the initial timesteps. RF consistently demonstrates better \(\hat{m}\) values than GB, suggesting that RF learns more effectively from the anytime data. ### Comparing the meta-solver with the Single Best Solver Recall that Gurobi is the single best solver (SBS). Here we elaborate on where the gain from the meta-solver (MS) comes from. To do this, we consider all _test_ instance-timestep pairs for which a feasible solution is known, \(502258\) in total. In Figure 8, we compare, over this set of instances, the number of instances for which each of the solvers (SBS and the MS based on the RF_nonlinear model) report the best-found incumbent solution (red), a feasible worse-than-the-best-found incumbent solution (orange) or for which no incumbent solution has been computed yet (light blue). 
It is apparent from this figure that the meta-solver MS provides a significant improvement over the use of the SBS by, for many instance-timestep pairs, selecting alternative solvers that are either able to find better solutions than the SBS or that are able to compute an incumbent solution when the SBS is not. This justifies the use of our meta-solver for the PBO problem in practice.

\begin{table} \begin{tabular}{l l l} Model & \(\hat{m}_{ms}\) (no) & \(\hat{m}_{ms}\) (o) \\ \hline \hline RF\_nonlinear & **0.5250** & **0.5318** \\ RF\_linear & 0.5729 & 0.6042 \\ GB\_nonlinear & 0.6198 & 0.6270 \\ GB\_linear & 0.6501 & 0.6712 \\ \end{tabular} \end{table} Table 3: \(\hat{m}_{ms}\) calculated with no overhead time (no) and \(\hat{m}_{ms}\) calculated considering the overhead time (o) for the PBO meta-solvers based on RF_nonlinear, RF_linear, GB_nonlinear and GB_linear. Lower \(\hat{m}_{ms}\) values are better.

Overall, from \(502,258\) test instance-timestep pairs, Gurobi is able to find \(296,103\) best-found solutions and \(142,824\) non-best-found incumbent solutions, and is unable to find feasible solutions for \(63,331\) instance-timestep pairs. The meta-solver finds \(352,462\) best-found solutions and \(116,717\) non-best-found solutions, and is unable to find feasible solutions for \(33,079\) instance-timestep pairs. As we can see, the meta-solver improves on Gurobi's performance by achieving the best-found solution in around \(19\%\) more instance-timestep pairs and by reducing by up to \(47.7\%\) the number of instance-timestep pairs for which a feasible solution is not yet found. ## 6 Conclusions and Future Work We propose here an Anytime meta-solver for the Pseudo-Boolean Optimization problem. Our meta-solver is able to predict and execute a solver that, among 7 different solvers, performs best for a given problem instance and a specified time limit. Our results show that our meta-solver (based on either of the two best models) significantly outperforms all individual solvers, while it also identifies when feasibility cannot be achieved for a given instance. A logical next step is to propose ways of adapting Anytime Algorithm Selection Problems to the scenarios of the Algorithm Selection Library (Bischl et al., 2016), which, currently, are not _anytime_. We will use this as an efficient way of sharing our data. For future work, we plan to explore the application of Graph Neural Networks as a potential ML oracle. This type of Neural Network has recently been shown to perform well on data that can be represented as a graph, which is the case for PBO. We also plan to explore a two-layer meta-solver approach, where the first layer selects a solver from a portfolio while the second layer chooses the most suitable set of parameters for the chosen solver. ## Acknowledgments First, second and last authors are supported in part by AI institute NSF award 2112533.

Figure 7: Values of \(\hat{m}\) across timesteps for GB_nonlinear and RF_nonlinear. Lower \(\hat{m}\) values are better.
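As a quick arithmetic check, the headline improvements over Gurobi quoted in Section 5.2 follow directly from the reported counts:

```python
gurobi_best, gurobi_none = 296_103, 63_331
meta_best,   meta_none   = 352_462, 33_079

print(f"best-found solutions:    +{(meta_best / gurobi_best - 1) * 100:.2f}%")   # ~ +19.0%
print(f"pairs left infeasible:   -{(1 - meta_none / gurobi_none) * 100:.2f}%")   # ~ -47.8%
```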
2309.05602
On the Comparison of AGN with GRMHD Simulations: II. M87
Horizon-scale observations of the jetted active galactic nucleus M87 are compared with simulations spanning a broad range of dissipation mechanisms and plasma content in three-dimensional general relativistic flows around spinning black holes. Observations of synchrotron radiation from radio to X-ray frequencies can be compared with simulations by adding prescriptions specifying the relativistic electron-plus-positron distribution function and associated radiative transfer coefficients. A suite of time-varying simulations with various spins, plasma magnetizations and turbulent heating and equipartition-based emission prescriptions (and piecewise combinations thereof) is chosen to represent distinct possibilities for the M87 jet/accretion flow/black hole (JAB) system. Simulation jet morphology, polarization and variation are then "observed" and compared with real observations to infer the rules that govern the polarized emissivity. Our models support several possible spin/emission model/plasma composition combinations supplying the jet in M87, whose black hole shadow has been observed down to the photon ring at 230 GHz by the Event Horizon Telescope (EHT). Net linear polarization and circular polarization constraints favor magnetically arrested disk (MAD) models whereas resolved linear polarization favors standard and normal evolution (SANE) in our parameter space. We also show that some MAD cases dominated by intrinsic circular polarization have near-linear V/I dependence on unpaired electron or positron content while SANE polarization exhibits markedly greater positron-dependent Faraday effects - future probes of the SANE/MAD dichotomy and plasma content with the EHT. This is the second work in a series also applying the "observing" simulations methodology to near-horizon regions of supermassive black holes in Sgr A* and 3C 279.
Richard Anantua, Angelo Ricarte, George Wong, Razieh Emami, Roger Blandford, Lani Oramas, Hayley West, Joaquin Duran, Brandon Curd
2023-09-11T16:34:54Z
http://arxiv.org/abs/2309.05602v2
# On the Comparison of AGN with GRMHD Simulations: II. M87 ###### Abstract Horizon-scale observations of the jetted active galactic nucleus M87 are compared with simulations spanning a broad range of dissipation mechanisms and plasma content in three-dimensional general relativistic flows around spinning black holes. Observations of synchrotron radiation from radio to X-ray frequencies can be compared with simulations by adding prescriptions specifying the relativistic electron-plus-positron distribution function and associated radiative transfer coefficients. A suite of time-varying simulations with various spins and plasma magnetizations is chosen to represent distinct possibilities for the M87 jet/accretion flow/black hole (JAB) system. We then input turbulent heating and equipartition-based emission prescriptions (and piecewise combinations thereof) in the time-dependent 3D simulations, in which jet morphology, polarization and variation are "observed" and compared with real observations so as to try to infer the rules that govern the polarized emissivity. The models in this paper support a magnetically arrested disk (MAD) with several possible spin/emission model combinations supplying the jet in M87, whose inner jet and black hole shadow have been observed down to the photon ring at 230 GHz by the Event Horizon Telescope (EHT). We also show that some MAD cases that are dominated by intrinsic circular polarization have near-linear \(V/I\) dependence on unpaired electron or positron content while SANE polarization exhibits markedly greater positron-dependent Faraday effects - future probes of the SANE/MAD dichotomy and plasma content with the EHT. This is the second work in a series also applying the "observing" simulations methodology to near-horizon regions of supermassive black holes in Sgr A* and 3C 279. keywords: active galactic nucleus (AGN), general relativistic magnetohydrodynamic (GRMHD) simulation, relativistic jet, very long baseline interferometry (VLBI) ## 1 Introduction Over a century ago, M87 was described as a "curious straight ray" by Heber Curtis (Curtis, 1918) due to its relativistic jet, and nearly three quarters of a century ago, it was identified as a discrete radio source (Bolton et al., 1949). M87 is now the best studied jet/accretion flow/black hole (JAB) system, and the first to be imaged down to the horizon scale by the Event Horizon Telescope (Event Horizon Telescope Collaboration et al., 2019, 2019, 2019, 2019). Throughout the years in which the giant elliptical galaxy M87 has been observed from its lobes to its core, we have learned that it is one of the closest examples of a common physical phenomenon, the production of twin, relativistic jets by accreting, spinning, massive black holes. In recent years, our understanding of how jets form, propagate and radiate has advanced considerably. Much of this progress can be credited to advances in observational capability, throughout the electromagnetic spectrum. In particular, the technique of VLBI (including polarimetry) has been extended to higher frequency, where the angular resolution is finer and depolarization is smaller. Gamma-ray observations have also contributed much. Equivalent progress can be seen in the numerical simulation of non-axisymmetric, general relativistic, hydromagnetic flows where specific models can be evolved dynamically with numerical confidence. The challenge today is to reconcile these two approaches. This reconciliation needs to take place at several levels. 
Radio/mm/submm jets can be imaged down to merely tens of gravitational radii (defined henceforth to be \(M\equiv GM_{\rm H}/c^{2}\), where \(M_{\rm H}\) is the mass of the hole). Structure has been identified down to \(\sim 10M\) (Fish et al., 2016)) and beyond by the Event Horizon Telescope (EHT) project, which has made linearly polarized images with resolution limit \(\sim 5M\) (Event Horizon Telescope Collaboration et al., 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021). Jet structure has now been connected with the emitting ring in 3.5mm intensity maps (Lu et al., 2023). The general relativistic regime has often been simulated under a variety of dynamical assumptions and initial conditions. There has also been progress in adding radiative transfer to these codes to take account of absorption, scattering and Faraday rotation (Dexter et al., 2012). However, in order to achieve the ultimate goal of elucidating jet launch and collimation (Lu et al., 2014) and to measure the spin of the black hole, it is necessary to have a better understanding of how high energy electrons are accelerated. Most jets are observed on the larger scale, where general relativity is unimportant. In addition to radio observations, optical and X-ray observations extend down to \(\sim~{}0.1",\sim~{}1"\) respectively. Long term monitoring, using VLBI, has taught us much about the apparent motion of emission sites within jets and accounting for this is a goal too. It is also necessary to uncover the character of the medium through which the jet is propagating- inflow, outflow or thick, orbiting torus; fluid or hydromagnetic- and the interaction with it. In addition, it is necessary to understand the remarkably rapid variability, notably at \(\gamma\)-ray energies, which cannot originate very close to the black hole and where the jets are totally unresolved. Answering these questions presents an even greater challenge to our current understanding of particle acceleration. Relativistic jets are associated with a large and heterogeneous sample of sources and the distribution of their properties is largely determined by the statistics of relativistic beaming. (We are primarily concerned with AGN here, but the problems we are addressing are also features of gamma-ray bursts and galactic superluminal sources, from which much can also be learned.) The orientation of a specific source is a parameter which can be adjusted to match a simulation to a particular source. However, we also know that black hole spin axes are isotropically distributed in space. Furthermore, the fluxes and images should scale with the jet powers and hole masses. All of this implies specific contributions to the overall distributions of total fluxes, apparent expansion speeds, polarization and so on in a complete sample of sources selected according to well-defined criteria. The overall nonthermal radiative efficiency can be determined observationally (Soltan, 1982). This, too, relates strongly to the particle acceleration. There have been many proposals as to how particles are accelerated under these conditions. Strong shocks are commonly invoked, but these are not efficient accelerators in magnetically-dominated flows and may be too slow to account for many observations. Supersonic jets are surely very noisy, and the associated hydromagnetic turbulence can promote second order acceleration. 
The very existence of fast variability, especially at \(\gamma\)-ray energy, suggests that unstable electromagnetic configurations lead to very rapid particle acceleration where electromagnetic energy is efficiently converted locally to high energy particles by a process we have called "magnetoluminescence" (Blandford et al., 2017). Clearly the program that has just been sketched is a massive undertaking and it is premature to try to execute it in full. In this paper, we limit ourselves to a smaller exercise designed to link plasma microphysics to discrete observable AGN features. Observing JAB simulations has already been carried out for Sagittarius A* in Anantua et al. (2020)b and Anantua et al. (2023), where a turbulent heating model exponentially suppressing emission from high-gas-to-magnetic pressure regions outperformed other distinct phenomenological model classes with respect to image size and spectral constraints anticipating EHT results (Event Horizon Telescope Collaboration et al., 2022)1. Here, we consider a few simulations and apply it to another well-observed source, M87 - with the added flexibility of comparing two turbulent heating prescriptions, using a separate prescription for jet regions, and including positrons. We will pay most attention to varying the particle acceleration/emissivity prescription and explore how to see which choices come closest to reproducing the observations and calculating the underlying physical properties. There is no reason to expect the match to be especially good right now. However, better simulations and observations, both imminent, should allow this approach to be followed more productively, now armed with an arsenal of distinguishable models ranging from M87-like ordered electromagnetic jet and magnetosphere to dense, Faraday-thick turbulent plasma more suitable for other AGN. Footnote 1: The Critical Beta two-parameter model was not among the 6% of models in Sgr A* Paper I passing non-EHT constraints on 86 GHz flux– with the important caveat that it was explored at a single point in model parameter space for limited inclinations relative to the fiducial models. In Section 2, we review observations of the M87 hole, disk and jet, emphasizing those that are most directly connected to the synchrotron emissivity. Section 3 presents GRMHD simulations with varying spin and accretion mode and describes commonalities and differences in their plasma flow structures. In Section 4, we introduce self-consistent prescriptions for emission (and absorption), particle acceleration and dissipation, including the essential effects of positron physics. In Section 5, we describe the global properties of our GRMHD simulations. In Section 6, we apply our emission prescriptions with positrons to the time-dependent simulations to "observe" M87. Our general conclusions and plans for further investigations are collected in Section 7. Synchrotron radiation theory calculations for alternative emission model prescriptions including positrons, are expounded in the Appendix. ## 2 Observations of M87 Located at the heart of the Virgo Cluster \(d=16.7\pm 0.6\) Mpc away (Blakeslee et al., 2009) (cf. Table 1), the bright active galaxy M87 (3C 274) serves as an exemplary laboratory for the investigation of black hole jets. Observations of the jet on all scales suggest that M87 is a FRI misaligned BL-Lac blazar. M87 has a remarkably prominent jet, with an equally remarkably faint disk centered on a large black hole. 
### Black Hole We adopt a black hole mass of \((6.5\pm 0.7)\times 10^{9}\)M\({}_{\odot}\) (Event Horizon Telescope Collaboration et al., 2019), corresponding to length, time, angular and energy scales \(10^{13}\) m, \(9\) hr, \(4\) \(\mu\)as and \(1.2\times 10^{57}\) J, respectively. The associated Eddington luminosity is \(\sim 8\times 10^{40}\) W. A lower bound of \(a>0.2M_{\rm M87}\) has been derived for the spin of the hole (Li et al., 2009), assuming M87's SMBH is surrounded by a prograde, radiatively inefficient accretion flow (RIAF). Hybrid jet/advection-dominated accretion flow (ADAF) models Feng & Wu (2017) have provided an estimate as high as \(a=0.98M_{\rm H}\). We first adopt an intermediate angular frequency of \(\Omega_{\rm H}=0.35/(GM_{\rm H}/c^{3})\) of \(10^{-5}\)s\({}^{-1}\) for M87's black hole. Using the relations \(J_{H}=GM_{\rm H}^{2}a/c\) and \(\Omega_{\rm H}=\frac{a}{r_{+}^{2}+a^{2}}\), where \(r_{+}=\frac{1}{2}(r_{S}+\sqrt{r_{S}^{2}-4(J_{H}/(M_{\rm H}c))^{2}})\) is the radius of the outer Kerr horizon, the corresponding dimensionless black hole spin is \(a/M_{\rm H}=0.94\). We also consider lower spin prograde \(a/M=0.5\) (\(\Omega_{\rm H}=0.13/(GM_{\rm M87}/c^{3})\)) and retrograde \(a/M=-0.5\) cases. Observations of the jets on all scales suggest that they, and by hypothesis, the spin of the hole, are inclined to the line of sight at an angle \(\theta=20^{\circ}\)(Wang & Zhou, 2009; Prieto et al., 2016). The extractable rotational energy and angular momentum are \(\sim 2\times 10^{56}\)J and \(\sim 4\times 10^{61}\)kg m\({}^{2}\)s\({}^{-1}\), respectively - ample to power the jets observed today for a Hubble time without any accretion. Henceforth, we measure all lengths, angles and times in units \(M\) set by M87's black hole, i.e., \(GM_{\rm M87}/c^{2}\), \((GM_{\rm M87}/c^{2})/d\), \(GM_{\rm M87}/c^{3}\), respectively (confer Table 2). When the spin direction is along the general direction of the angular velocity of the orbiting gas (Walsh et al., 2013), it is aligned with the receding jet. ### Radio-mm-submm Observations Following the pioneering observations of Junor & Biretta (1995), there have been many impressive high resolution observations of the inner jet of M87. * A 15 GHz map with beam 600 \(\mu\)as\(\times 1300\)\(\mu\)as \(\equiv~{}150M\times 325M\) measured out to a projected length \(Y\sim 2\times 10^{4}M\)(Kovalev et al., 2007) and therefore a length along the deprojected jet \(z\sim 6\times 10^{4}M\). * Pilot monitoring at 22 GHz with resolution \(\sim 250M\times 250M\) extending out to projected length \(\sim 6000M\) and showing superluminal motion with speed up to \(\sim 1.6c\)(Hada et al., 2017) * A 43 GHz VLBI time sequence- 11 maps made over 210 d with a beam FWHM \(\sim 55M\times 115M\), which extends out to \(z\sim 6000M\) in projected radius (Walker et al., 2008, 2016; Mertens et al., 2016) (Fig. 2). Apparent speeds up to \(\sim 2c\) are observed. * An 86 GHz image with resolution \(\sim 20M\times 60M\) extending out to \(z\sim 3000M\) and exhibiting \(\sim 20\) percent linear polarization and strong Faraday rotation (Hada et al., 2016), and a Global mm-VLBI Array (GMVA) 86 GHz M87 observation exhibiting a limb-brightened jet base (see Kim et al. (2018) Fig. 4). * Event Horizon Telescope (EHT) observational data made with effective beam of size \(\sim 10M\)(Akiyama et al., 2015). 
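To make the black hole scales quoted in Section 2.1 concrete, the sketch below converts the adopted mass into the gravitational length, time and angular units used throughout, and numerically inverts the Kerr horizon angular frequency quoted above to recover the dimensionless spin. The physical constants and the SciPy root finder are illustrative choices, not part of the paper's analysis pipeline.

```python
import numpy as np
from scipy.optimize import brentq

G, C = 6.674e-11, 2.998e8                 # SI constants
M_SUN, PC = 1.989e30, 3.086e16
UAS = np.pi / (180.0 * 3600.0 * 1e6)      # one micro-arcsecond in radians

M = 6.5e9 * M_SUN                         # adopted M87 black hole mass
d = 16.7e6 * PC                           # adopted distance

r_g = G * M / C**2                        # length scale,  ~1e13 m
t_g = G * M / C**3                        # time scale,    ~9 hr
theta_g = r_g / d / UAS                   # angular scale, ~4 micro-arcsec
E_scale = M * C**2                        # energy scale,  ~1.2e57 J
L_edd = 1.26e31 * (M / M_SUN)             # Eddington luminosity, ~8e40 W

# Kerr horizon angular frequency in units of c^3/(G M_H):
#   Omega_H = (a/M) / (r_+^2 + (a/M)^2),  with  r_+ = 1 + sqrt(1 - (a/M)^2)
def omega_h(a):
    r_plus = 1.0 + np.sqrt(1.0 - a * a)
    return a / (r_plus**2 + a * a)

a_94 = brentq(lambda a: omega_h(a) - 0.35, 0.0, 0.999)   # -> a/M ~ 0.94
print(f"r_g = {r_g:.2e} m, t_g = {t_g/3600:.1f} hr, theta_g = {theta_g:.1f} uas")
print(f"L_Edd = {L_edd:.1e} W, Omega_H = {0.35/t_g:.1e} 1/s, a/M = {a_94:.2f}")
```

The same inversion applied to \(\Omega_{\rm H}=0.13\,c^{3}/(GM_{\rm H})\) returns \(a/M\simeq 0.5\), consistent with the lower-spin prograde case considered above.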
Increased coverage with next generation facilities may further bridge the gap between this jet structure and the ringlike EHT 2019 observation around the central supermassive black hole with greater precision, building off of Lu et al. (2023). These observations confirm that the approaching radio jet is modestly relativistic and collimated within \(z\sim 30M\). They are also broadly consistent with there being a self-absorbed innermost jet called the core with constant brightness temperature \(\sim 2\times 10^{10}\) K and flux density \(S_{\nu}\sim 1\) Jy for 3 GHz \(\lesssim\nu\lesssim 300\) GHz. At higher frequency, the entire jet appears to be optically thin. The resolved jet structure accounts for a minority of the flux at all frequencies but is strongly edge-brightened and the mean intensity decays as an inverse square with distance from the axis. There are indications that the southern edge is brighter than the northern edge. The jet is quite variable. The shape of the jet is roughly parabolic for \(30M\lesssim z\lesssim 7000M\) with the separation of the brightened edges roughly given by \(\sim 6z^{1/2}\). At larger radii, the jet expansion is closer to linear. #### 2.2.1 Observational Model We have made a simple analytical model which captures many of the features of the time-averaged observation over the range of frequencies where the inner jet has been resolved. While this does not do full justice to the observations, it is sufficient for our purpose. We introduce a Cartesian coordinate system on the sky with \(Y\) measuring distance along the jet from the hole and \(X\) distance across it (in units of \(M\)). The intensity satisfies \[I=\frac{I_{0}e^{\xi(1-\xi)}}{1+v_{300}^{8/3}Y^{2}}, \tag{1}\] where \(\xi=X^{2}/Y\). The observed radio-mm-submm isotropic power is dominated by mm observations and is \(\sim 10^{34}\) W (Prieto et al., 2016). A beaming corrected guess might be as high as \(\sim 10^{35}\) W. The spectral energy distribution of the central 0.4 arcs (32 pc) of M87 from Prieto et al. (2016) is \(\nu F_{\nu}\approx 10^{11.6}\) Jy Hz for \(10^{10.5}\) Hz \(<\nu<10^{14.5}\) Hz. The radio-to-UV bolometric luminosity is \(3.6\cdot 10^{-6}L_{\rm Edd}=2.7\cdot 10^{35}\) W (Prieto et al., 2016). #### 2.2.2 Event Horizon Telescope Observations In April 2019, the Event Horizon Telescope released the first images resolving the boundary of a black hole (Event Horizon Telescope Collaboration et al., 2019), ushering in the age of direct observation of horizons. The results have already resolved a wide discrepancy in the black hole mass for M87\({}^{\circ}\)- from stellar dynamical measurements of \(6.6\times 10^{9}M_{\odot}\) from Gebhardt et al. (2011), compared to gas dynamical measurements from Walsh et al. (2013) who measured half this mass. Simulations concordant with EHT M87 observations require that the central black hole have nonzero spin in order to explain the presence of the jet powered by the Blandford-Znajek mechanism (Blandford & Znajek, 1977), and polarized observations (Event Horizon Telescope Collaboration et al., 2021, 2021) indicating the hole is supplied vertical magnetic flux further support this interpretation. ### Optical-Infrared Observations The Hubble Space Telescope has provided us with stunning optical band observations of the M87 jets, including knots with superluminal components and flatter spectrum than the rest of the jet (Perlman et al., 2001). 
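As a quick consistency check on the numbers quoted in Section 2.2.1, the following lines convert the spectral energy distribution level \(\nu F_{\nu}\approx 10^{11.6}\) Jy Hz into an isotropic power at the adopted distance; the constants are illustrative, and the result lands near the \(\sim 10^{34}\) W radio-mm-submm power cited there.

```python
import numpy as np

PC = 3.086e16                       # m
JY = 1.0e-26                        # W m^-2 Hz^-1
d = 16.7e6 * PC                     # distance to M87

nu_F_nu = 10**11.6 * JY             # quoted SED level, ~4e-15 W m^-2
L_iso = 4.0 * np.pi * d**2 * nu_F_nu
print(f"{L_iso:.1e} W")             # ~1.3e34 W
```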
The most famous feature, HST-1, \(\sim\) 80 pc from the nucleus, produces blobs that appear to move up to \(6c\) on the observer plane (Biretta et al., 1999). _HST_-1 exhibited 40% variability between 1993 and 1997 (Perlman et al., 2001). The isotropic radiant power of M87 corresponds to a bolometric luminosity of \(L_{\rm bol}\sim 3\times 10^{35}\) W (Prieto et al., 2016). Using the quiescent spectral energy distribution from Prieto et al. (2016), it is inferred that the upper limit to the disk power is \(L_{\rm disk}\leq 3.4\cdot 10^{41}\) erg/s. At 10% efficiency, \(L=\eta\dot{m}c^{2}\) implies an upper bound to the mass accretion rate of \(3.8\cdot 10^{21}\) g/s = \(6\times 10^{-5}M_{\odot}\)/yr. The twofold change in M87's observed bolometric luminosity from its quiescent state value to \(L=5.4\times 10^{42}\) erg/s during its 2005 outburst (Prieto et al., 2016) suggests a Doppler boosting factor of 8-16 given the mass accretion rate upper limit.

### X-ray Observations

M87 has been observed at X-ray wavelengths by _Chandra_ (Wilson & Yang, 2002). The equipartition magnetic field value for the knot HST-1 was found to be \(\sim 3\times 10^{-4}\) G (Owen et al., 1989). The steady jet isotropic luminosity in 2-10 keV X-rays is \(\sim 3\times 10^{34}\) W (Prieto et al., 2016). However, the variable source HST-1 can be roughly ten times brighter including observations at optical wavelengths. As HST-1 also displays features moving towards us exhibiting apparent speeds \(\sim 6c\), we suppose that this is a small part of the flow that, unlike the main body of the jet, is directed along our line of sight. It may not contribute significantly to the true integrated jet bolometric power.

Figure 1: VLBA 2 cm image of M87. The swirling jet substructure suggests magnetic Kelvin-Helmholtz instability, a feature also seen in the simulation movie of the corkscrewing jet [http://richardanantua.com/sample-page/jetaccretion-diskblack-hole-movies/](http://richardanantua.com/sample-page/jetaccretion-diskblack-hole-movies/). Image adapted with permission of Dan Homan, Yuri Kovalev, Matt Lister and Ken Kellermann.

Figure 2: M87 VLA observation on linear intensity scale (Top) and observational movie snapshots at \(t_{\rm Ohk0}\) (Bottom Left) side-by-side with a snapshot at \(t_{\rm Ohk0}+10t_{\rm step}\) (Bottom Right) for \(t_{\rm step}=21\) days \(\approx 56M_{\rm M87}\), monotonically transformed by \((\cdot)^{1/4}\) for visual clarity on the bottom row. The beam dimensions are \(4.3\cdot 10^{-4}\) arcsec \(\times 2.1\times 10^{-4}\) arcsec. These images can be viewed as a movie sans transformation, courtesy of Craig Walker and his collaborators, here: [http://www.aoc.nrao.edu/~cwalker/M87/](http://www.aoc.nrao.edu/~cwalker/M87/).

Figure 3: EHT M87 2017 observational campaign: intensity map (Left) and linear polarization map (Right) (Event Horizon Telescope Collaboration et al., 2021).

M87 resides at the center of the \(\sim 10^{15}\rm M_{\odot}\) Virgo cluster of galaxies, which surrounds M87 with a cooling flow region of X-ray luminosity \(10^{36}\) W (Churazov et al., 2001). The cooling time is short compared with the flow time, suggesting that the hot gas is maintained in rough dynamical equilibrium by mechanical heating associated with the jets (though other possibilities have been widely discussed). If this is the case, then each jet must carry a total power that is significantly larger than this and which is mostly carried off by buoyant bubbles.
On this rather uncertain basis, we estimate a total jet power of \(L_{\rm jet}=5\times 10^{36}\) W, noting that M87 could be in a relatively dormant state right now. ### Gamma-ray Observations The _Fermi_ Large Area Telescope (LAT) has seen M87 as a gamma ray point source with variable TeV emission on yearly timescales (Aharonian, 2006) and no observed yearly or decadal variability at MeV-GeV ranges (Abdo et al., 2009). The observed \(>100\) MeV flux is \(2.54\cdot 10^{-8}\) photons cm\({}^{-2}\)\(s^{-1}\) and the corresponding luminosity is \(4.9\cdot 10^{41}\) erg s\({}^{-1}\)(Abdo et al., 2009). ### Galaxy and Cluster The largest scale observations (Owen et al., 2000; de Gasperin et al., 2012; Forman et al., 2017) show that the M87 jets interact strongly with the surrounding medium. The jet orientation changes at a radius \(\sim 2\) kpc. The jets have inflated large, buoyant bubbles that probably produce enough dissipation to balance radiative cooling loss. M87 has been active but quite variable for several Gyr, although the underlying mass supply rate may have varied significantly over this time. Analyzing the most recent activity leads to a conservative estimate of the current power per jet \(\sim 10^{37}\) W. ### Summary M87 is fairly extreme in many respects. It has the most massive black hole we can study in detail. Its disk luminosity is \(\lesssim 3\times 10^{-7}L_{\rm Edd}\) while it creates jets with total power \(\gtrsim 3\times 10^{-4}L_{\rm Edd}\) which have been shown to be collimated on scales \(\lesssim 10M\). Most importantly, we have recently begun to learn much more about the region within \(\sim 100M\) from EHT observations. This makes it an excellent source to model. ## 3 3D GRMHD Simulations ### Historical Overview We now give a brief synopsis of the development of GRMHD simulations to contextualize the principal simulation used in this work. Koide and Meier pioneered GRMHD simulations (Koide et al., 2000) of jet formation in the magnetosphere of rapidly rotating Kerr black holes. Their code solves the GRMHD equations for conservation of particle number, momentum and energy in a Kerr metric for \(0.75r_{S}<x^{1}<20r_{S}\) (where \(x^{1}\) is the radius \(r\) in Boyer-Lindquist coordinates). The next major advance was the high-accuracy relativistic magnetohydrodynamics (HARM) code of Gammie McKinney and Toth (Gammie et al., 2003). This conservative numerical scheme for integrating the GRMHD equations is guaranteed to obey shock jump conditions at discontinuities in the fluid variables. HARM led to a number of applications, such as Dexter and Fragile's disk simulations of Sgr A* Dexter et al. (2012) and Mishra et al. (2016) and McKinney & Blandford (2009) simulations of stable relativistic jets. Later simulations by Farris and Gold account for strong gravitational curvature near black hole binaries (Farris et al., 2012) or describe models with magnetorotational (MRI) disk instability and magnetically arrested disks (MAD) (Gold et al., 2016). The simulations in Gold et al. (2016) that model the disk - which is governed by distinct emission mechanisms from the jet - require an evolution equation for proton temperature \(T_{p}\) and electron temperature \(T_{e}\). Later simulations by Aloy et al. (Aloy et al., 2015) have merged the Multi-Scale Fluid-Kinetic Simulation Suite with the high resolution 3D RMHD code MR-GENESIS. HARM continues to be the benchmark against which the accuracy of GRMHD simulations is measured (Porth et al., 2016). 
### Fiducial Simulations

#### 3.2.1 Overview

In this work, we use a set of three numerical GRMHD simulations of black hole accretion. The fluid simulations were produced with the KHARMA code, a GPU-based descendant of iharm, a conservative second-order explicit shock-capturing finite-volume code for arbitrary stationary spacetimes (Gammie et al., 2003; Prather et al., 2021). The governing equations of ideal GRMHD can be written as a set of conservation laws; in a coordinate basis, they are \[\partial_{t}\big(\sqrt{-g}\,\rho u^{t}\big) = -\partial_{i}\big(\sqrt{-g}\,\rho u^{i}\big), \tag{2}\] \[\partial_{t}\big(\sqrt{-g}\,T^{t}_{\ \nu}\big) = -\partial_{i}\big(\sqrt{-g}\,T^{i}_{\ \nu}\big)+\sqrt{-g}\,T^{\kappa}_{\ \lambda}\Gamma^{\lambda}_{\nu\kappa},\tag{3}\] \[\partial_{t}\big(\sqrt{-g}\,B^{i}\big) = -\partial_{j}\big(\sqrt{-g}\,(b^{j}u^{i}-b^{i}u^{j})\big), \tag{4}\] along with a no-monopoles constraint \(\partial_{i}\big(\sqrt{-g}\,B^{i}\big)=0\). Here, the rest-mass density of the fluid is \(\rho\), \(u^{\mu}\) is the fluid four-velocity, \(b^{\mu}\) is the magnetic induction four-vector, and the magnetohydrodynamic stress-energy tensor is \[T^{\mu\nu}=\big(\rho+u+P+b^{2}\big)\,u^{\mu}u^{\nu}+\big(P+b^{2}/2\big)\,g^{\mu\nu}-b^{\mu}b^{\nu}, \tag{5}\] where \(u\) and \(P\) are the internal energy of the fluid and its pressure, which is related to the internal energy via an ideal gas law equation of state with constant adiabatic index \(\hat{\gamma}\) via \(P=(\hat{\gamma}-1)\,u\). The effects of spacetime are accounted for in the usual way, with \(g=\mathrm{det}\,g_{\mu\nu}\) the determinant of the covariant metric and \(\Gamma\) a Christoffel symbol encapsulating derivatives of the metric. More detail about the end-to-end simulation procedure can be found in Wong et al. (2022). The simulations all used outflow boundary conditions at both the inner and outer radial edges, located within the event horizon and at \(1,000\,GM_{\rm H}/c^{2}\) respectively. Each simulation was run from \(t=0\,GM_{\rm H}/c^{3}\) until \(30,000\,GM_{\rm H}/c^{3}\) in order to provide a converged characterization of the source, although we use snapshots from the latter \(25,000\,GM_{\rm H}/c^{3}\) of each simulation. In particular, we consider two MAD simulations with dimensionless black hole spins \(a/M_{\rm H}=-0.5\) and \(+0.94\). We also consider one SANE simulation with spin \(a/M_{\rm H}=-0.5\). In MAD accretion, magnetic flux carried by the accretion flow builds up near the event horizon until the magnetic pressure near the hole is large enough to counterbalance the inward ram pressure of the accreting material. MAD accretion thus proceeds in chaotic bursts of isolated, thin plasma streams beginning far from the hole, with the overall flow characterized by occasional violent magnetic eruption events. In contrast, SANE accretion proceeds in a turbulent but consistent, disk-like flow. Further information about the KHARMA GRMHD library can be found in Event Horizon Telescope Collaboration et al. (2022). Figure 4 shows xz- (poloidal) slices of the fiducial simulation snapshots at \(T=25,000M\) for MAD \(a/M_{H}=0.94\), \(a/M_{H}=-0.5\) and SANE \(a/M_{H}=-0.5\) for the key quantity magnetization \(\sigma=\frac{b^{2}}{\rho}\). As expected, the MAD simulations are more highly magnetized, particularly along the polar regions where there is a relativistic outflow.
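For readers who want to manipulate simulation output directly, a minimal sketch of Eq. (5) and the derived magnetization and plasma \(\beta\) at a single zone is given below (in geometrized code units; the adiabatic index is an assumed value, since \(\hat{\gamma}\) is not restated here).

```python
import numpy as np

def ideal_mhd_stress_energy(rho, u, ucon, bcon, gcov, gam=4.0/3.0):
    """Ideal-GRMHD stress-energy tensor T^{mu nu} of Eq. (5) at one zone.

    rho  : rest-mass density
    u    : internal energy density
    ucon : contravariant four-velocity u^mu        (length-4 array)
    bcon : contravariant magnetic four-vector b^mu (length-4 array)
    gcov : covariant metric g_{mu nu}              (4x4 array)
    gam  : assumed adiabatic index (not specified in the text)
    """
    P = (gam - 1.0) * u                                # ideal-gas EOS
    bsq = np.einsum('a,ab,b->', bcon, gcov, bcon)      # b^2 = b_mu b^mu
    gcon = np.linalg.inv(gcov)                         # g^{mu nu}
    w = rho + u + P + bsq                              # enthalpy plus EM terms
    return (w * np.outer(ucon, ucon)
            + (P + 0.5 * bsq) * gcon
            - np.outer(bcon, bcon))

def sigma_and_beta(rho, u, bsq, gam=4.0/3.0):
    """Magnetization sigma = b^2/rho and plasma beta = P_gas / (b^2/2)."""
    P = (gam - 1.0) * u
    return bsq / rho, P / (0.5 * bsq)

# sanity check in flat spacetime with a static fluid and a weak spatial field:
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
T = ideal_mhd_stress_energy(1.0, 0.1, np.array([1.0, 0.0, 0.0, 0.0]),
                            np.array([0.0, 0.3, 0.0, 0.0]), eta)
print(T[0, 0])                       # energy density rho + u + b^2/2 = 1.145
print(sigma_and_beta(1.0, 0.1, 0.3**2))
```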
The columns in Figure 5, representing electron number density, internal energy and magnetic field strength for the same simulations, show that turbulence in the equatorial inflow, such as that driven by the magnetorotational instability, is particularly prominent for the SANE case. The MAD/SANE magnetic substructure and field strength dichotomy is also apparent in slices of plasma \(\beta\) in Fig. 6.

#### 3.2.2 Mass Accretion Rate

The mass accretion rate in the code is an adjustable parameter with which the flux scales. In this work, our target flux for synthetic images is 0.5 Jy. Now that we have described the simulations we are using to test the radiative properties of M87's JAB system, we add the key physics governing energy transfer from the GRMHD plasma to the high energy particles responsible for the observed emission.

## 4 Model Prescriptions for Emission, Particle Acceleration and Dissipation

### General Considerations

We suppose that the radio and mm emission is synchrotron radiation due to particle acceleration arising from a number of different mechanisms discussed here and in the Appendix. We expand upon the synchrotron prescriptions implemented for jet models in Anantua et al. (2018, 2020), where the relativistic electron distribution function is a power law with slope 2, which implies that the emissivity in the comoving (primed) frame \(j^{\prime}_{\nu^{\prime},\Omega^{\prime}}(\nu^{\prime},{\bf n}^{\prime})\propto \tilde{P}_{\rm e}|{\bf B}^{\prime}\times{\bf n}^{\prime}|^{3/2}\nu^{\prime-1/2}\), where \(\tilde{P}_{\rm e}\) is the (presumed isotropic) partial pressure of the electrons emitting at the frequency \(\nu^{\prime}\). The choice to scale near-horizon jet emissivity in terms of magnetic pressure is motivated by the observation that the jet becomes increasingly simple and electromagnetic -- high \(\sigma\) -- as the horizon is approached and as exhibited by the RMHD simulations. However, we note that the assumptions underlying some of the previous simulations make no provision for the particle acceleration and transport resulting in the spatial and temporal variation of \(\tilde{P}_{\rm e}\). In this work, we use a more generic electron pressure formalism distinguishing the highly magnetized outflow from the less magnetized inflow to prescribe emission models. It is the primary objective of this investigation to explore how much this matters and if indeed any prescription for the particle acceleration is compatible with existing and anticipated VLBI imaging.

| Distance from Earth\({}^{1}\) | Schwarzschild radius\({}^{2}\) | Apparent angular width\({}^{3}\) | Jet opening angle\({}^{4}\) | Jet viewing angle\({}^{5}\) |
| --- | --- | --- | --- | --- |
| \((16.7\pm 0.6)\) Mpc | \((6.35\times 10^{-4}\pm 3.69\times 10^{-5})\) pc | \(3.9\,\mu\)as | \(5^{\circ}\) (at 100 pc) | \(10^{\circ}-19^{\circ}\) |
| \((5.15\times 10^{25}\pm 2.78\times 10^{24})\) cm | \((1.94\times 10^{15}\pm 2.09\times 10^{14})\) cm | | \(60^{\circ}\) (at the core) | |

Table 1: M87 Geometry. \({}^{1}\)Blakeslee et al. (2009) \({}^{2}\)Event Horizon Telescope Collaboration et al. (2019) \({}^{3}\)Gebhardt et al. (2011) \({}^{4}\)Doeleman et al. (2012) \({}^{5}\)Wang & Zhou (2009)

Figure 4: Magnetization \(\sigma\equiv b^{2}/\rho\) in azimuthal slices for fiducial simulations MAD \(a=0.94\) (Left), MAD \(a=-0.5\) (Middle) and SANE \(a=-0.5\) (Right).

| Unit type | \(M\) |
| --- | --- |
| Mass | \((6.6\pm 0.4)\times 10^{9}M_{\odot}\) |
| Length | \((3.2\times 10^{-4}\pm 1.8\times 10^{-5})\) pc |
| Angular width at \(d_{\rm M87}\) | \((3.7\pm 0.3)\,\mu\)as |
| Time | \((9.1\pm 0.8)\) h |

Table 2: Code scale \(M\) to physical units for M87.
Perfect MHD defines a set of reference frames in which the plasma, treated as a fluid, is at rest. The motion perpendicular to the magnetic field has a velocity \({\bf v}_{\perp}={\bf E}\times{\bf B}/B^{2}\) in the simulation frame. In general, the component of the fluid velocity resolved along the field \({\bf v}_{\parallel}\) is problematic when the inertia of the plasma is ignorable, as we are implicitly assuming here. It should be emphasized that the minimum charged particle density needed to support the space charge and current (Goldreich & Julian, 1969) is orders of magnitude smaller than what is needed to account for the radio and \(\gamma\)-ray emission. It should also be stressed that efficient and progressive pair production and particle acceleration is to be expected in AGN jets as modeled here. The motional potential difference across an electromagnetic jet near the black hole should be \(\sim 1-300\,\)EV, many orders of magnitude greater than the \(\sim 1\,\)MV minimum needed to create positrons or the \(\sim 1\,\)GV needed to accelerate electrons to the \(\lesssim 1\,\)GeV energies associated with the mm emission. The numerical simulations express none of this physics and, in any case, introduce a "floor" to the electron density for purely numerical reasons. The simulation particle density should not be trusted within the inner jet. The composition of the plasma is also uncertain. Close enough to the event horizon, the plasma must fall inward and be connected to a source, as jets are outflowing at larger radius. The simplest assumption, which we shall adopt, is that pairs are continuously produced in the inner magnetosphere. Plasma can also be entrained from the surrounding medium. This is expected to play a large role in the dynamical evolution and emission of the jet at large radii. The simulations suggest that it is unimportant within, say, \(\sim 1000M\), and we shall suppose that the associated radio emission is from a locally accelerated pair plasma where \(\gamma\)-ray production is balanced by annihilation and other quantum electrodynamical processes.

### Phenomenological Jet Models

Given all this uncertainty, we take a phenomenological approach, adopting a set of simple prescriptions for electron and positron pressure and temperature that can lead to quite different simulated observations. We now develop a compendium of formulas linking plasma variables to radiation phenomenology, starting with jet models for the electromagnetically dominated regions of a force-free, relativistic plasma.

#### 4.2.1 Constant Electron \(\beta\) Model

The simplest prescription is the Constant \(\beta_{e}\) Model, in which the pressure of synchrotron-radiating electrons is a constant fraction of the magnetic pressure \[P_{e}=\beta_{e0}P_{B} \tag{6}\] Close to the hole, the total pressure \(P\) is dominated by the magnetic pressure \(P_{B}=b^{2}/2\mu_{0}\), but at large radius there is gas pressure contributed by the entrained gas.
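A schematic reading of this prescription, combined with the slope-2 power-law synchrotron scaling \(j^{\prime}\propto\tilde{P}_{\rm e}|{\bf B}^{\prime}\times{\bf n}^{\prime}|^{3/2}\nu^{\prime-1/2}\) of Section 4.1, is sketched below. The normalisation and the full synchrotron kernel live in the radiative transfer code, so the function returns only an un-normalised emissivity.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi      # SI permeability

def constant_beta_e_emissivity(B, sin_pitch, nu, beta_e0=0.01):
    """Constant electron-beta jet prescription (Eq. 6) feeding the
    slope-2 power-law synchrotron scaling of Section 4.1.

    B         : comoving magnetic field strength [T]
    sin_pitch : |B' x n'| / |B'|, sine of the angle between field and line of sight
    nu        : comoving emission frequency [Hz]
    """
    P_B = B**2 / (2.0 * MU0)          # magnetic pressure
    P_e = beta_e0 * P_B               # partial pressure of emitting electrons
    return P_e * (B * sin_pitch)**1.5 * nu**-0.5   # un-normalised j'

print(constant_beta_e_emissivity(B=1e-3, sin_pitch=0.8, nu=230e9))
```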
If we adopt \(\beta\lesssim 0.01\), close to the hole, then this is the sub-"equipartition" prescription that is often invoked when interpreting jet observations e.g., Anantua et al. (2020). #### 4.2.2 Current Density (\(j^{2}\)) Model Another type of model by which electromagnetic jets can dissipate power employs currents. This type of model can be implemented using the field gradient -- the current density \(j^{\prime}\) -- and introducing a resistivity \(\eta=\mu_{0}L_{j}\) where \(L_{j}\) is a length scale which we choose to be a fixed fraction of the jet width. The dissipation rate is then \(W^{\prime}=\eta j^{\prime 2}\). This approach is partly motivated by particle-in-cell (PIC) simulations of relativistic reconnection. This model has been compared to the 43 GHz M87 jet in Anantua et al. (2018) as a mechanism for generating limb brightening. Though we do not make images of the Current Density Model here, we Figure 5: Vertical slices for electron number density \(N_{e}\), internal energy \(U\), and magnetic field strength \(|B|\) in cgs units for fiducial simulations MAD \(a=0.94\) (Top) MAD \(a=-0.5\) (Middle) and SANE \(a=-0.5\) (Bottom). Figure 6: Simulation plasma \(\beta\) parameter \(\equiv P_{\rm gas}/P_{\rm mag}\) on azimuthal slices for fiducial models MAD \(a=0.94\) (Left) MAD \(a=-0.5\) (Middle) and SANE \(a=-0.5\) (Right). note the physical significance of jet currents as a spatially inhomogeneous source of dissipation. ### JAB System Models We now consider the entire inflow-outflow structure governed by the supermassive black hole. We have previously described the relativistic polar outflow as a Blandford-Znajek jet. Beyond the outflow or jet funnel, astrophysical plasmas experience discontinuities in pressure and density. When there is a sufficient velocity gradient, Kelvin-Helmholtz instabilities produce gaseous swirling features-notable in M87 up to 1 kpc from the black hole (Pasetto et al., 2021). The enveloping corona is loosely bound to the JAB system. The inflowing disk is supported against its own inertia by magnetic and thermal pressure and momentum transport- the latter of which may lead to the magneto-rotational instability. The property of turbulent heating to preferentially energize electrons in magnetically dominated regions and protons in gas pressure dominated regions has been explored in Howes (2010). There are several ways of parameterizing this behavior (Moscibrodzka et al., 2011; Anantua et al., 2020). #### 4.3.1 \(R-\beta\) Model We start our JAB emission modeling by noting the tendency of plasma turbulence to preferentially heat electrons at low \(\beta\) and ions at high \(\beta\), as was originally conceptualized in the context of the solar corona (Quataert & Gruzinov, 1999; Howes, 2010). Applied to JAB systems, the \(R-\beta\) turbulent heating model takes the form \[R=\frac{T_{i}}{T_{e}}=\frac{\beta^{2}}{1+\beta^{2}}R_{\rm high}+\frac{1}{1+ \beta^{2}}R_{\rm low} \tag{7}\] It is the primary model used by the Event Horizon Telescope (Event Horizon Telescope Collaboration et al., 2019, 2021) and developed by Moscibrodzka et al. (2016). #### 4.3.2 Critical \(\beta\) Model The Critical \(\beta\) Model is an alternative turbulent heating model with an exponential parameter \(\beta_{c}\) controlling the transition between electron- and ion-dominated heating \[\frac{T_{e}}{T_{e}+T_{i}}=fe^{-\beta/\beta_{c}} \tag{8}\] This model was developed in Anantua et al. (2020).b. The models are compared for reasonable parameter values in Fig. 7. 
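The two turbulent-heating prescriptions of Eqs. (7) and (8) can be compared directly as functions of plasma \(\beta\), which is essentially what Fig. 7 shows. A small sketch, using the asymptotic ratios \(R_{\rm low}=1\), \(R_{\rm high}=20\) and the Critical Beta parameters \(f=0.5\), \(\beta_{c}=0.5\) adopted later in Section 6:

```python
import numpy as np

def r_beta_Ti_over_Te(beta, R_high=20.0, R_low=1.0):
    """R-beta model (Eq. 7): returns R = T_i/T_e."""
    b2 = beta**2
    return (b2 * R_high + R_low) / (1.0 + b2)

def critical_beta_Te_fraction(beta, f=0.5, beta_c=0.5):
    """Critical-beta model (Eq. 8): returns T_e / (T_e + T_i)."""
    return f * np.exp(-beta / beta_c)

beta = np.logspace(-2, 2, 9)
Te_Ti_rbeta = 1.0 / r_beta_Ti_over_Te(beta)
frac = critical_beta_Te_fraction(beta)
Te_Ti_crit = frac / (1.0 - frac)           # convert T_e/(T_e+T_i) to T_e/T_i
for b, x, y in zip(beta, Te_Ti_rbeta, Te_Ti_crit):
    print(f"beta = {b:8.2f}   Te/Ti (R-beta) = {x:.3f}   Te/Ti (Crit. beta) = {y:.3f}")
```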
We see that at low \(\beta\), the electron-to-ion temperature ratios in the \(R-\beta\) and the Critical Beta Models have similar asymptotic behavior. However, at high \(\beta\), the Critical \(\beta\) Model \(T_{e}/T_{i}\) exponentially falls to 0, which by preliminary indications may reduce the bremsstrahlung contribution (though we reserve a more extensive investigation modeling emission processes beyond synchrotron for future work). The Critical \(\beta\) Model also has transition behavior at intermediate \(\beta\)'s controlled by an exponential parameter \(\beta_{c}\), leading to a greater variety of intermediate-\(\beta\) emission regions probed between the same range of electron-to-ion temperature ratios compared to the \(R-\beta\) Model.

#### 4.3.3 Multi-Zone Emission Models

We have seen how plasma inertial and electromagnetic properties within and across our JAB simulations differ by orders of magnitude through \(\beta\) and \(\sigma\). Moreover, differences in the plasma velocity towards and away from the black hole lead to plasma mixing regions whose behavior may not be amenable to the existing smooth emission parametric models. We thus refine our JAB system emission models by combining the \(\beta\)-dependent turbulent heating models with jet regions radiating by conversion of magnetic to particle energy. We define the jet region using a transitional value of magnetization \(\sigma\) of 1/2. Armed with these models of particle thermodynamics, we turn to the emitted radiation.

### Radiation

An electromagnetically-dominated jet outflow must be continually converting electromagnetic energy into high energy pairs through a \(-\mathbf{E}\cdot\mathbf{j}\) interaction. These particles will radiate through the synchrotron and Compton processes. We can use the observations to draw some inferences about the variation of the distribution function along the jet. At sufficiently small radius, the jet will become optically thick to synchrotron self-absorption at a frequency given by \[\nu_{\rm SA}=22\left(\frac{L_{\rm EM}}{10^{44}{\rm erg\ s^{-1}}}\right)^{0.1}\left(\frac{S}{1{\rm Jy}}\right)^{0.4}\left(\frac{r}{50M}\right)^{-0.8}\left(\frac{\Gamma}{3}\right)^{-0.4}{\rm GHz} \tag{9}\] Also, at a given radius there is a characteristic frequency where the synchrotron cooling time of the emitting electrons is equal to the expansion time scale \[\nu_{\rm cool}=\left(\frac{L_{\rm EM}}{10^{44}{\rm erg\ s^{-1}}}\right)^{-1.5}\left(\frac{r}{50M}\right)^{-0.2}\left(\frac{\Gamma}{3}\right)^{6}{\rm THz} \tag{10}\] If a power law is accelerated, the local spectrum should break by \(\Delta\alpha=0.5\) at this frequency. However, note the extreme sensitivity to the bulk Lorentz factor \(\Gamma\). This probably controls what is actually observed.

Figure 7: Comparison of R-\(\beta\) (solid lines) and Critical Beta (dashed lines) models for reasonable parameter values.

The entire nuclear spectrum within \(r\sim 3\times 10^{5}M\) has been carefully determined by Prieto et al. (2016). They find a sharp break at \(\nu=\nu_{b}\sim 150\) GHz. Presumably the flatter \(\alpha\sim 0.2\) spectrum at \(\nu\lesssim\nu_{b}\) is attributable to the superposition of a radial sequence of spectra with a frequency to radius mapping as defined above. The lowest frequency considered, \(\sim 3\) GHz, should originate at \(r\sim 1000M\).
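Both break frequencies are simple power-law scalings and can be evaluated, or inverted for the emission radius, in a few lines; the default arguments below are just the fiducial normalisations appearing in Eqs. (9) and (10).

```python
def nu_SA_GHz(L_EM=1e44, S_Jy=1.0, r_over_M=50.0, Gamma=3.0):
    """Synchrotron self-absorption turnover of Eq. (9), in GHz."""
    return 22.0 * (L_EM / 1e44)**0.1 * S_Jy**0.4 \
                * (r_over_M / 50.0)**-0.8 * (Gamma / 3.0)**-0.4

def nu_cool_THz(L_EM=1e44, r_over_M=50.0, Gamma=3.0):
    """Cooling-break frequency of Eq. (10), in THz."""
    return (L_EM / 1e44)**-1.5 * (r_over_M / 50.0)**-0.2 * (Gamma / 3.0)**6

# inverting Eq. (9) for the radius where 3 GHz becomes self-absorbed,
# r/M = 50 * (nu / 22 GHz)^(-1/0.8), gives ~600 M for the fiducial flux and
# Lorentz factor -- within a factor ~2 of the r ~ 1000 M estimate above
r_3GHz = 50.0 * (3.0 / 22.0)**(-1.25)
print(nu_SA_GHz(), nu_cool_THz(), r_3GHz)
```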
### Emission Modeling with Positrons #### 4.5.1 Particle Production and Acceleration Though the vast majority of GRMHD simulations consider ionic plasma without an explicit fluid for electron-positron pairs, it is unknown whether jet matter is typically dominated by an ion-electron plasma or a pair one. In the latter case, black hole-powered jets generally get their initial mass-loading through \(\gamma-\gamma\) pair production. The means of particle acceleration within AGN jets, however, is not understood. There have been several mechanisms invoked, including those involving strong shock fronts, both non-relativistic and relativistic, magnetic reconnection, stochastic acceleration though wave-particle interaction and electrostatic acceleration - either along a magnetic field or perpendicular to the field as a consequence of drift motion. Recent observations, especially when seen in the context of observations of extreme acceleration in pulsar wind nebulae and Gamma Ray Bursts, point to the need for new and more rapid approaches. The need is best exemplified by \(\gamma\)-ray observations which can show that electric fields as strong as magnetic fields (setting \(c=1\)) may be needed in order for electrons to attain the required energies in the face of strong radiative loss through synchrotron emission and Compton scattering. #### 4.5.2 Positron Production Modeling and Radiative Transfer Much of the plasma in the accreting component of the RIAF system is likely a mixture of ionized hydrogen and helium from stellar winds and the interstellar medium. Since the conductivity is high near the event horizon, particles in the plasma are forced to follow magnetic field lines, so the jet, which is canonically magnetically disconnected from the accretion disk, cannot be directly supplied with plasma from the disk. If electron-positron pairs are produced in these regions, then they may be the dominant matter source. Electron-positron pairs may be produced by pair cascades in charge-starved magnetospheres (like in evacuated jet funnels) or in the disk coronae. In the systems we study, electron-positron pairs are produced mainly via the Briet-Wheeler process, i.e., as a result of photon-photon collisions. In order to create a pair, the center-of-momentum energy of the photons must exceed the rest-mass energy of a pair \(\approx 1\,\mathrm{MeV}\approx 2\times\left(1.2\times 10^{20}\right)\mathrm{Hz}\). The cross-section peaks near this threshold value, and the participating photon couples lie over a spectrum of energy ratios: some pairs of photons having approximately the same frequency while others are matched with low/high frequencies. Pair-producing processes are often differentiated based on the photon source and whether the newly created pairs radiate and contribute (non-negligibly) as a new photon source. _Pair drizzle_ occurs when the pairs are produced by photons from the background radiation field (due to synchrotron and bremsstrahlung emission and Compton upscattering) and typically exhibits variation on timescales associated with the plasma fluid. Drizzle pair production has been studied in a variety of scenarios ranging between stellar-mass to supermassive black hole accretion Moscibrodzka et al. (2011); Laurent and Titarchuk (2018); Kimura and Toma (2020); Wong et al. (2021); Yao et al. (2021). In the alternative scenario, high energy photons with frequencies \(\gg 10^{20}\,\mathrm{Hz}\) can interact with background (low energy) photons from the disk to undergo pair production. 
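To make the two-photon threshold quoted above concrete, a few illustrative lines follow; they use only standard constants and are not tied to any particular simulation output.

```python
H = 6.626e-34       # Planck constant, J s
M_E = 9.109e-31     # electron mass, kg
C = 2.998e8         # speed of light, m/s

mec2 = M_E * C**2                  # electron rest-mass energy, ~8.2e-14 J
nu_threshold = mec2 / H            # ~1.2e20 Hz per photon for a symmetric pair

def can_pair_produce(nu1, nu2, cos_theta=-1.0):
    """Two-photon (Breit-Wheeler) threshold:
    2 E1 E2 (1 - cos theta) >= (2 m_e c^2)^2, with cos_theta = -1 head-on."""
    E1, E2 = H * nu1, H * nu2
    return 2.0 * E1 * E2 * (1.0 - cos_theta) >= (2.0 * mec2)**2

print(nu_threshold)                       # ~1.2e20 Hz
print(can_pair_produce(1.3e20, 1.3e20))   # just above the symmetric threshold: True
print(can_pair_produce(1e24, 2e16))       # ~4 GeV photon on a ~80 eV photon: True
```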
Here, the high-energy photons are produced when unscreened electric fields accelerate stray charges (Beskin et al., 1992); when the acceleration is large, the leptons radiate the requisite high energy photons. Often, the newly created pairs are born in the same region with unscreened fields and are thus themselves accelerated, restarting the process in a cascade of pair creation. The short timescales associated with pair cascades means they may explain the ultra-rapid high-frequency radio emission flares from AGN jets. Pair cascades have been studied with a variety of analytic, semi-analytic, and numerical methods, e.g., Fragile and Meier (2009); Broderick and Tchekhovskoy (2015); Parfrey et al. (2019). Positrons not only effect images at the level of emission, but also through radiative cooling, e.g., Fragile and Meier (2009); Yoon et al. (2020). Given the uncertainty in jet positron fraction, we focus on the special cases of a sparse ionic jet with electron number density \(n_{e0}\) and a jet where all sources of pair production result in a plasma with an equal number density of ionic and pair plasma (\(f_{\mathrm{pos}}\equiv n_{\mathrm{pairs}}/n_{e0}=1\)). There are benchmarks for positron content in the Literature, such as the estimate by Ghisellini (2012) of the fraction of a jet (opening angle \(\psi\), distance from black hole \(R_{0}\)) converted to positrons, as \(f=0.1\,\mathrm{min}\{1,\frac{R_{0}}{60}\}\) where the compactness is \(\ell=\frac{\sigma_{T}L_{0}}{\psi R_{0}\mathrm{D}^{\mathrm{tr}}n_{e0}^{-S}}\). More possibilities for painting positrons on jets can be found in Appendix. ## 5 Observing a Time-Dependent Simulation ### Anatomy of a Time-Dependent KHARMA Jet Simulation We have outlined a methodology for combining jet emission prescriptions with detailed, 3D time-dependent simulations. To emphasize the 3D nature, we take transverse slices in the equatorial plane in Figure 8. There, we see even among two MADs (spins \(a/M=0.94,-0.5\)) there are large, azimuthal symmetry-breaking patches of high magnetization emanating in different directions from the black hole. Electron number density exhibits a similar pattern of asymmetry, however, internal energy, magnetic field strength and plasma \(\beta\) are relatively azimuthally symmetric. We now describe the process of ray tracing the resulting emission. ### Radiative transfer with IPOLE: Azimuthal and Polar Variation GRMHD simulations can be ray-traced using general relativistic radiative transfer (GRRT) codes to simulate surveys of sources throughout the sky. In this work, we use the GRRT code ipole(Moscibrodzka & Gammie, 2018, see also Wong et al., 2022) to produce polarimetric images of the GRMHD simulations described in Section 3.2.1. The ipole code solves for the evolution of the polarized intensities at each point along a geodesic with a two-stage operator splitting method. In the first stage, the covariant coherency tensor (which can be written in terms of the invariant Stokes parameters and Pauli matrices) is parallel transported along the direction of the geodesic. In the second stage, the Stokes parameters are updated using the analytic solution to the explicit, general polarized transport with constant emission, absorption, and rotation coefficients, which are computed in the local orthonormal tetrad defined by the fluid and magnetic field. 
Each image comprises a square grid of \(N\mathrm{x}N\) pixels over a \(160\,\mu\mathrm{as}\) field of view, with each pixel reporting the Stokes intensities for \(I,Q,U\), and \(V\). Since producing images requires evaluating transfer coefficients in physical units, it is necessary to specify scales for the mass-density of the accreting plasma and the size of the black hole as well as the orientation of the observer (i.e., the software camera) with respect to the black hole. We list the physical M87 black hole parameters (and references) in Table 1, and in Table 2 we report the "code scale" values corresponding to the black hole mass identified above. Note that GRMHD codes are generally unable to accurately evolve the fluid state in regions with high magnetization \(\sigma\equiv b^{2}/\rho\) and artificially inject mass and energy in these regions. Since the plasma density (and temperature) are therefore typically unphysically high in regions of large \(\sigma\), ray-tracing codes like ipole normally introduce a so-called \(\sigma\) cutoff, where the plasma density in regions with \(\sigma>\sigma_{\mathrm{cutoff}}\) is explicitly set to zero before computing the transfer coefficients. In this work, we set \(\sigma_{\mathrm{cutoff}}=2\), consistent with the typical values used for such flows (see, e.g., Event Horizon Telescope Collaboration et al. (2019). ## 6 Comparison of simulations with observations We now present a suite of 3D-GRMHD simulation images spanning various plasma compositions and prescriptions for electron-positron thermodynamics. JAB systems are often modeled as electron-proton plasmas with a single function linking electron temperature to plasma variables such as \(\beta\) throughout the inflow/outflow system Event Horizon Telescope Collaboration et al. (2019, 2021). We start with models using this approach with the \(R-\beta\) and Critical \(\beta\) turbulent heating prescriptions, and then we refine the models by prescribing jet funnel emission in a region between \(\sigma_{\mathrm{min}}=1/2\) and \(\sigma_{\mathrm{max}}=2\) and adding pairs. Unless otherwise stated, the images are raytraced at 230 GHz and the inclination angle is \(17^{\circ}\). Note that in the tables referenced in this section, comparisons are made with snapshots in the GRMHD space. Due to this, model performance with respect to observations may not hold when comparisons to windows of simulations are made such as those in Event Horizon Telescope Collaboration et al. (2019). The image library presented here greatly expands the \(a=-0.5M\) MAD and SANE snapshots from Anantua et al. (2023) to include the highly prograde spin \(a=0.94\), temporal evolution, extreme positron fractions \(n_{\mathrm{pairs}}/n_{e0}=50,100\), varying frequency from 230 GHz to 86 GHz and varying inclination up to \(40^{\circ}\) viewing angle. Here, the SANE-MAD dichotomy manifest in the image library is also made quantitative by the tabulation of Faraday conversion and rotation depths and comparison to M87 linear polarization data. ### SANE Positron Effects #### 6.1.1 SANE R-\(\beta\) Fig. 9 shows intensity with electric vector polarization angle (EVPA) and circular polarization maps for the SANE \(a=-0.5\) simulation in the \(R-\beta\) model. This model has asymptotic ion-to-electron temperature ratios \(R_{\mathrm{low}}=1\) and \(R_{\mathrm{high}}=20\). 
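Stepping back to the ray-tracing setup of Section 5.2, the \(\sigma\) cutoff and the image-plane grid can be mocked up as follows; the pixel count is an illustrative placeholder, since only the \(160\,\mu\)as field of view and the \(N\times N\) layout are specified above.

```python
import numpy as np

SIGMA_CUTOFF = 2.0          # value adopted in this work
FOV_UAS = 160.0             # field of view in micro-arcseconds
NPIX = 160                  # assumed pixel count N; illustrative only

def apply_sigma_cutoff(rho, bsq, sigma_cutoff=SIGMA_CUTOFF):
    """Zero out the plasma density where sigma = b^2/rho exceeds the cutoff,
    mimicking the treatment of numerically unreliable funnel zones before
    the transfer coefficients are evaluated."""
    sigma = bsq / rho
    return np.where(sigma > sigma_cutoff, 0.0, rho)

# image-plane pixel centres for an N x N grid spanning the 160 uas field of view
pix = (np.arange(NPIX) + 0.5) / NPIX * FOV_UAS - FOV_UAS / 2.0
X, Y = np.meshgrid(pix, pix)    # offsets from the camera centre, in uas
```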
Here, the total flux is greatest near the pho ton ring immediately surrounding the central depression and slowly decreases radially becoming broadly distributed through the equatorial annulus. Polarization oriented at the EVPA is spread throughout the equatorial annulus in the 40\(\mu\)as x 40\(\mu\)as field of view. To the ionic plasma in the Top Panels, an equal number density of pairs as the original electron number density are added in the Bottom Panels (while renormalizing \(m_{\rm unit}\) to maintain a 0.5 Jy image flux). The added positrons significantly rotate EVPAs, as the Faraday rotation measure depends sensitively on the positron fraction for SANEs. #### 6.1.2 SANE Critical \(\beta\) In Fig. 10 we show our other turbulent heating model-the Critical Beta Model. This model controls the transition from preferential electron heating to preferential ion heating through the exponential parameter \(\beta_{c}\), which for higher values smooths the transition by allowing a larger range of betas to include radiating electrons. For model parameters temperature ratio prefactor \(f=0.5\) and \(\beta_{c}=0.5\), total flux is concentrated in a ring at \(\sim 20\mu\)as and regions along lines of sight close to the polar axis, though polarization morphology trends remain similar to the \(R-\beta\) case. Note our \(R-\beta\) models are more linearly depolarized than Critical \(\beta\) Models even with lower contributions to intrinsic emission at high \(\beta\) in the latter models. This is one of several examples of Faraday effects we will see in this Section. #### 6.1.3 SANE R-\(\beta\) with Constant \(\beta_{E}\) Jet In Fig. 11 we add a jet region where the energy of relativistic electrons is directly derived from the magnetic pressure to the \(R-\beta\) model. The emission is extended more broadly and evenly throughout the field of view as it is projected from a broader region of the outflow paraboloid governed by the transitional value of \(\sigma\) separating the constant \(\beta_{e}\) jet from the turbulently heated plasma. #### 6.1.4 SANE Critical \(\beta\) with Constant \(\beta_{E}\) Jet In Fig. 12 we add a jet region of magnetic-to-particle energy conversion to the Critical Beta model. In the SANE case, the jet does not appreciably change the image morphology. Moreover, polarization does not vary monotonically with the addition of positrons across different emission models. ### MAD Positron Effects The MAD images from our fiducial time are nearly indistinguishable when Faraday effects are turned off. In these MAD images, we see a prominent flux tube in a loop extending towards the lower left. In these particular MAD images, whose circular polarization is dominated by the intrinsic, we see another dramatic polarization effect: linear increase in the magnitude of \(V/I\) (confer Section 6.5 for polarimetric quantity definitions) as a function of synchrotron emitters not in pairs (which is maximal for the ionic plasma case). Figure 10: For the \(a=-0.5\) SANE: Top panel: Critical Beta at 230 GHz without positrons Bottom panel: Critical Beta at 230 GHz for an even mix of pair and ionic plasma. Figure 9: For the \(a=-0.5\) SANE at \(T=25,000M\): Top panel: R-Beta at 230 GHz without positrons. Bottom panel: R-Beta at 230 GHz with for an even mix of pair and ionic plasma with ions and positrons each accounting for 1/4 the plasma number density and electrons accounting for the remaining 1/2. #### 6.2.1 MAD R-\(\beta\) Starting in Fig. 
13 with the R-\(\beta\) model, the linear polarization ticks oriented at the EVPA for the 0-positron case remain in their orientations with minimal angular displacement when positrons are added to form the mixed plasma in the ray tracing step. However, in radiative transfer using coefficients for the mixed plasma in which 1/2 the particles are electrons, 1/4 of the particles are positive ions and 1/4 of the particles are positrons (i.e, 2/3 of the synchrotron emitting leptons are paired), we have the degree of circular polarization \(V/I\) diminishing to 1/3 of the positron-free value. The addition of positrons also reverses the polarity of the bottom left portion of the flux eruption loop. #### 6.2.2 MAD Critical \(\beta\) In Fig. 14, the Critical \(\beta\) image and \(V/I\) map mirror the global structure in the \(R-\beta\) case in Fig. 13. They also share similar dependence of the circular polarization dependence on the free electron-positron fraction, and partial reversal of circular polarization sense in the flux eruption loop. #### 6.2.3 MAD R-\(\beta\) with Constant \(\beta_{E}\) Jet In Fig. 15, the R-Beta model with jet maintains the prominent flux eruption loop as the above models. The presence of the Constant \(\beta_{c}\) jets slightly reduced the circular polarization degree both with and without positrons. Figure 11: For the \(a=-0.5\) SANE at \(T=25,000M\): Top panel: R-Beta with \(\beta_{c0}=0.01\) jet at 230 GHz without positrons Bottom panel: R-Beta at 230 GHz for an even mix of pair and ionic plasma. Figure 12: For the \(a=-0.5\) SANE at \(T=25,000M\): Top panel: Critical Beta with \(\beta_{c0}=0.01\) jet at 230 GHz without positrons. Bottom panel: Critical Beta at 230 GHz for an even mix of pair and ionic plasma. Figure 13: For the \(a=-0.5\) MAD at \(T=25,000M\): Top panel: R-Beta at 230 GHz without positrons Bottom panel: R-Beta at 230 GHz for an even mix of pair and ionic plasma. #### 6.2.4 MAD Critical \(\beta\) with Constant \(\beta_{E}\) Jet In Fig. 16 the Critical Beta model with jet exhibits the same trends as its \(R-\beta\) counterpart above. ### Comparison of R-Beta and Critical Beta Models In Fig. 17 we compare Critical Beta Model with \(R-\beta\) Model for MAD \(a/M=0.94\). Like comparisons between Figs. 10 and 11 and between 13 and 14 show, these models are quite degenerate at the level of total intensity morphology. However, the circular polarization of the Critical Beta \(V/I\) map does exhibit more scrambling near the photon ring- where a broad range of \(\beta\)'s may contribute given our shallow exponential parameter \(\beta_{c}=0.5\). ### Extreme Positron Fractions As mentioned in Sec. 6 increasing the pair fraction causes intrinsically emitted circular polarization to decrease, Faraday rotation to decrease, and Faraday conversion to increase. Faraday rotation is essential for depolarizing these accretion flows. Thus, dramatic effects occur when the pair fraction is raised high enough to turn an model from Faraday thick to Faraday thin. In Fig. 18 we show the effects of raising \(n_{\rm pairs}/n_{0}=100\) for a SANE \(a=-0.5\) simulation and contrast with MAD \(a=-0.5\) and \(a=+0.94\) in Figs. 19 and 20, respectively (using the R-\(\beta\) Model). The effect on the MAD simulation is subtle, characterized by a decrease in the intrinsically emitted circular polarization that dominates on large scales. 
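The near-linear dependence of intrinsically emitted circular polarization on unpaired-lepton content described here reduces to a simple bookkeeping relation; the sketch below tabulates the unpaired fraction versus positron fraction, but it is a toy relation, not the radiative transfer result itself.

```python
def unpaired_fraction(f_pos):
    """Fraction of synchrotron-emitting leptons that are unpaired electrons
    when n_pairs = f_pos * n_e0 pairs are added to an initially ionic plasma."""
    return 1.0 / (1.0 + 2.0 * f_pos)

for f_pos in (0.0, 0.5, 1.0, 10.0, 100.0):
    print(f"f_pos = {f_pos:6.1f}   unpaired lepton fraction = {unpaired_fraction(f_pos):.3f}")
```

At \(f_{\rm pos}=1\) the unpaired fraction is 1/3, matching the factor-of-three drop in \(V/I\) reported above for the intrinsic-circular-polarization-dominated MAD images.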
Note Figure 16: For the \(a=-0.5\) MAD at \(T=25,000M\): Top panel: Critical Beta with \(\beta_{c0}=0.01\) jet at 230 GHz without positrons Bottom panel: Critical Beta with \(\beta_{c0}=0.01\) jet at 230 GHz for an even mix of pair and ionic plasma. Figure 14: For the \(a=-0.5\) MAD at \(T=25,000M\): Top panel: Critical Beta at 230 GHz without positrons Bottom panel: Critical Beta at 230 GHz for an even mix of pair and ionic plasma. Figure 15: For the \(a=-0.5\) MAD at \(T=25,000M\): Top panel: R-Beta with \(\beta_{c0}=0.01\) jet at 230 GHz without positrons Bottom panel: R-Beta at 230 GHz for an even mix of pair and ionic plasma. that this is model-specific, and in fact Faraday conversion is the only source of circular polarization in pair plasma jet models in Anantua et al. (2020)a Meanwhile, the effect is much stronger for the SANE simulation, which is intrinsically more Faraday thick. After removing Faraday rotation, the simulation acquires a much more ordered linear polarization pattern. In addition, very large circular polarization fractions are produced in the absence of depolarization by Faraday rotation. ### Comparison of Models with Polarization Constraints In Table 3, we compare observations from Event Horizon Telescope Collaboration et al. (2021) against fiducial model linear polarization: both summed from net (unresolved) \(Q\), \(U\) and \(I\) across the image plane, \[|m|_{\rm net}=\frac{\sqrt{\left(\sum_{i}Q_{i}\right)^{2}+\left(\sum_{i}U_{i} \right)^{2}}}{\sum_{i}I_{i}} \tag{11}\] and its average local (resolved) magnitude \[<|m|>=\frac{\sum_{i}I_{i}P_{i}}{\sum_{i}I_{i}}=\frac{\sum_{i}\sqrt{Q_{i}^{2}+U _{i}^{2}}}{\sum_{i}I_{i}}. \tag{12}\] SANE models tend to be less linearly polarized on net and MAD models more linearly polarized on net than the constraint- though all models exceed the beam/resolution-dependent averaged polarization magnitude constraint. The fiducial models satisfying the net linear polarization constraint for MAD \(a=-0.5\) are the R-Beta, R-Beta with Jet and Critical Beta with Jet, all with maximal positron fractions. For MAD \(a=+0.94\), only the positron-free Critical Beta models (with and without jet) satisfy the \(|m|_{\rm net}\) constraint. In Table 4, we project forward anticipating circular polarization measurements will be performed in the near future to compare with our models. All of our models satisfy the preliminary \(V_{\rm net}\) constraint in Goddi et al. (2021). For the structure parameter \(\beta_{2}\), however, Table 5 shows only MAD \(a=0.94M\) pure turbulent heating models pass. ### Faraday Effects As linear polarization travels through a magnetized plasma, its EVPA is rotated by Faraday rotation, interchanging Stokes \(Q\) and \(U\). Similarly, Faraday conversion exchanges linear and circular polarization, interchanging Stokes \(U\) and \(V\). Both of these effects can be significant in accreting black hole systems. In particular, Faraday rotation is believed to be extremely important for reducing the linear polarization fraction in models of M87* to the observed values. Typically, SANEs have larger Faraday rotation and conversion depths than MADs. This is largely because SANE models require larger mass densities to match the observed flux of M87*. They also have lower temperatures, which increases the efficiency of Faraday effects. Figure 17: R-Beta vs. Critical Beta comparison for MAD \(a=+0.94\). Top panel: R-Beta at 230 GHz. 
Bottom panel: Critical Beta Figure 18: Extreme positron variation comparison for SANE \(a=-0.5\) R-Beta at 230 GHz: fPos = 0 (Top Panel) vs. fPos = 100 (Bottom Panel). #### 6.6.1 Faraday Rotation Table 6 of fiducial model Faraday rotation depths shows a pronounced gap between a marginal effect in MAD simulations relative to the corresponding effect which is 3 orders of magnitude larger in SANE simulations. Varying positron content even at the percent level leads to large EVPA rotational swings for SANE plasmas due to the large absolute response of the Faraday rotation measure to the increased fraction of positrons. This naturally leads to a profoundly discriminating probe of plasma magnetic inflow properties in regions of changing positron fraction. Even when the plasma composition is in steady state, we may identify the rapid spatial variation of circular polarization as a signature of high Faraday rotation depth characteristic of SANE accretion flows. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline &. & SANE & (\(a=-0.5\)) & & & MAD & (\(a=-0.5\)) & \\ & \(R-\)Beta & \(R-\)Beta & Crit. Beta & Crit. Beta & \(R-\)Beta & \(R-\)Beta & Crit. Beta & Crit. Beta \\ & & w./ Jet & & w./ Jet & & w./ Jet & & w./ Jet \\ \hline \(|V|_{\rm net}(f_{\rm pos,min})\) & \(3.93\cdot 10^{-3}\) & \(3.97\cdot 10^{-3}\) & \(8.51\cdot 10^{-3}\) & \(2.26\cdot 10^{-3}\) & \(5.51\cdot 10^{-2}\) & \(4.07\cdot 10^{-2}\) & \(6.93\cdot 10^{-2}\) & \(4.86\cdot 10^{-2}\) \\ \hline \(|m|_{\rm net}(f_{\rm pos,max})\) & \(1.62\cdot 10^{-3}\) & \(2.88\cdot 10^{-3}\) & \(2.73\cdot 10^{-3}\) & \(2.50\cdot 10^{-3}\) & \(\mathbf{3.67\cdot 10^{-2}}\) & \(\mathbf{3.10\cdot 10^{-2}}\) & \(5.21\cdot 10^{-2}\) & \(\mathbf{3.55\cdot 10^{-2}}\) \\ \hline \(<|V|>(f_{\rm pos,min})\) & \(1.29\cdot 10^{-1}\) & \(1.38\cdot 10^{-1}\) & \(1.35\cdot 10^{-1}\) & \(1.44\cdot 10^{-1}\) & \(3.49\cdot 10^{-1}\) & \(3.30\cdot 10^{-1}\) & \(2.71\cdot 10^{-1}\) & \(2.53\cdot 10^{-1}\) \\ \hline \(<|m|>(f_{\rm pos,max})\) & \(1.31\cdot 10^{-1}\) & \(1.45\cdot 10^{-1}\) & \(1.40\cdot 10^{-1}\) & \(1.47\cdot 10^{-1}\) & \(4.20\cdot 10^{-1}\) & \(3.60\cdot 10^{-1}\) & \(3.83\cdot 10^{-1}\) & \(3.46\cdot 10^{-1}\) \\ \hline & & & & & & & MAD & (\(a=+0.94\)) & \\ & & & & & & w./ Jet & & w./ Jet \\ \hline \(|V|_{\rm net}(f_{\rm pos,min})\) & & & & & & & & & \\ & & & & & & & & \\ \hline \(|V|_{\rm net}(f_{\rm pos,max})\) & & & & & & & & & \\ \hline \(|V|>(f_{\rm pos,min})\) & & & & & & & & & \\ \hline \(<|V|>(f_{\rm pos,max})\) & & & & & & & & & \\ \hline \end{tabular} \end{table} Table 3: Linear polarization \(|m|_{\rm net}\) and \(<|m|>\) for fiducial models at \(T=25,000M\). The observational constraints from EHT M87 Paper VII take the form of the polarization ranges \(0.01\leq|m|_{\rm net}\leq 0.037\) and \(0.057<<|m|><0.107\). Note that the bold values refer to fiducial models which satisfy the net linear polarization constraints. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline &. & SANE & (\(a=-0.5\)) & & & MAD & (\(a=-0.5\)) & \\ & \(R-\)Beta & \(R-\)Beta & Crit. Beta & Crit. Beta & \(R-\)Beta & \(R-\)Beta & Crit. Beta & Crit. 
Beta \\ & & w./ Jet & & w./ Jet & & w./ Jet & & w./ Jet \\ \hline \(|m|_{\rm net}(f_{\rm pos,min})\) & \(3.93\cdot 10^{-3}\) & \(3.97\cdot 10^{-3}\) & \(8.51\cdot 10^{-3}\) & \(2.26\cdot 10^{-3}\) & \(5.51\cdot 10^{-2}\) & \(4.07\cdot 10^{-2}\) & \(6.93\cdot 10^{-2}\) & \(4.86\cdot 10^{-2}\) \\ \hline \(|m|_{\rm net}(f_{\rm pos,max})\) & \(1.62\cdot 10^{-3}\) & \(2.88\cdot 10^{-3}\) & \(2.73\cdot 10^{-3}\) & \(2.50\cdot 10^{-3}\) & \(\mathbf{3.67\cdot 10^{-2}}\) & \(\mathbf{3.10\cdot 10^{-2}}\) & \(5.21\cdot 10^{-2}\) & \(\mathbf{3.55\cdot 10^{-2}}\) \\ \hline \(<|m|>(f_{\rm pos,min})\) & \(1.29\cdot 10^{-1}\) & \(1.38\cdot 10^{-1}\) & \(1.35\cdot 10^{-1}\) & \(1.44\cdot 10^{-1}\) & \(3.49\cdot 10^{-1}\) & \(3.30\cdot 10^{-1}\) & \(2.71\cdot 10^{-1}\) & \(2.53\cdot 10^{-1}\) \\ \hline \(<|m|>(f_{\rm pos,max})\) & \(1.31\cdot 10^{-1}\) & \(1.45\cdot 10^{-1}\) & \(1.40\cdot 10^{-1}\) & \(1.47\cdot 10^{-1}\) & \(4.20\cdot 10^{-1}\) & \(3.60\cdot 10^{-1}\) & \(3.83\cdot 10^{-1}\) & \(3.46\cdot 10^{-1}\) \\ \hline & & & & & MAD & (\(a=+0.94\)) & \\ & & & & & \(R-\)Beta & Crit. Beta & Crit. Beta \\ & & & & & w./ Jet & & w./ Jet \\ \hline \(|m|_{\rm net}(f_{\rm pos,min})\) & & & & & \(4.91\cdot 10^{-2}\) & \(4.56\cdot 10^{-2}\) & \(\mathbf{3.35\cdot 10^{-2}}\) & \(\mathbf{3.67\cdot 10^{-2}}\) \\ \hline \(|m|_{\rm net}(f_{\rm pos,max})\) & & & & \(5.17\cdot 10^{-2}\) & \(5.06\cdot 10^{-2}\) & \(4.59\cdot 10^{-2}\) & \(4.82\cdot 10^{-2}\) \\ \hline \(<|m|>(f_{\rm pos,min})\) & & & & \(5.76\cdot 10^{-1}\) & \(5.18\cdot 10^{-1}\) & \(5.18\cdot 10^{-1}\) & \(4.81\cdot 10^{-1}\) \\ \hline \(<|m|>(f_{\rm pos,max})\) & & & & \(5.86\cdot 10^{-1}\) & \(5.25\cdot 10^{-1}\) & \(5.81\cdot 10^{-1}\) & \(5.26\cdot 10^{-1}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Circular polarization \(|V|_{\rm net}\) and \(<|V|>\) for fiducial models at \(T=25,000M\). Note that all models satisfy the \(V_{\rm net}\)=0.008 EHT bound (Event Horizon Telescope Collaboration et al., 2021; Goddi et al., 2021) #### 6.6.2 Faraday Conversion Table 7 shows fiducial model Faraday conversion depths for SANEs are 1-2 orders greater than those for MADs. Faraday conversion depths tend to be lower than Faraday rotation depths: by 3 orders of magnitude for SANEs and 1-2 orders for MADs. However, because Faraday conversion results in the direct production of circular polarization (from linear), it may result in a significant contribution of \(V\). Faraday conversion can produce CP even in a pure pair plasma as long as the magnetic field twists along the line of sight (Wardle and Homan, 2003; Ricarte et al., 2021). ### Frequency and Inclination Dependence In Figs. 21 and 22, we search for extended structure at 86 GHz in the R-\(\beta\) with \(\beta_{\rm e0}=0.01\) Model in the MAD and SANE cases, respectively. In our 86 GHz images, we use the same Munit that normalized the 230 GHz images to.5 Jy, though now we have more flux with a larger field of view and shifting emitting regions at low frequency. The SANE Figure 19: Extreme positron variation comparison for MAD \(a=-0.5\) R-Beta at 230 GHz: fPos = 0 (Top Panel) vs. fPos = 100 (Bottom Panel). \begin{table} \begin{tabular}{l c c c c c c c c} \hline & & SANE & \((a=-0.5)\) & & & MAD & \((a=-0.5)\) & \\ & \(R-\)Beta & \(R-\)Beta & Crit. Beta & Crit. Beta & \(R-\)Beta & \(R-\)Beta & Crit. Beta & Crit. 
Beta \\ & & w./ Jet & & w./ Jet & & w./ Jet & & w./ Jet \\ \hline \(\beta_{2}(f_{\rm pos,min})\) & \(3.51\cdot 10^{-3}\) & \(2.82\cdot 10^{-3}\) & \(3.84\cdot 10^{-3}\) & \(3.80\cdot 10^{-3}\) & \(6.22\cdot 10^{-3}\) & \(6.31\cdot 10^{-3}\) & \(1.17\cdot 10^{-2}\) & \(6.18\cdot 10^{-3}\) \\ \hline \(\beta_{2}(f_{\rm pos,max})\) & \(2.96\cdot 10^{-3}\) & \(3.03\cdot 10^{-3}\) & \(4.24\cdot 10^{-3}\) & \(9.86\cdot 10^{-4}\) & \(1.47\cdot 10^{-2}\) & \(9.83\cdot 10^{-3}\) & \(2.15\cdot 10^{-2}\) & \(1.32\cdot 10^{-2}\) \\ \hline & & & & & & MAD & \((a=+0.94)\) \\ & & & & \(R-\)Beta & \(R-\)Beta & Crit. Beta & Crit. Beta & \(\rm{w./\ Jet}\) \\ \hline \(\beta_{2}(f_{\rm pos,min})\) & & & & & \(3.23\cdot 10^{-2}\) & \(2.42\cdot 10^{-2}\) & \(\bf{3.58\cdot 10^{-2}}\) & \(2.77\cdot 10^{-2}\) \\ \hline \(\beta_{2}(f_{\rm pos,max})\) & & & & \(\bf{3.93\cdot 10^{-2}}\) & \(2.72\cdot 10^{-2}\) & \(\bf{3.66\cdot 10^{-2}}\) & \(2.72\cdot 10^{-2}\) \\ \hline \end{tabular} \end{table} Table 5: Azimuthal structure mode \(\beta_{2}\) for fiducial models at \(T=25,000M\). The observational constraints from EHT M87 Paper VII are in the range \(0.04\leq|\beta_{2}|\leq 0.07\). Note that the bold values refer to fiducial models which satisfy the observational constraints.

Figure 20: MAD \(a=+0.94\) R-Beta at 230 GHz. Top panel: fPos = 0. Bottom panel: fPos = 100.

Fig. 22 in particular shows an upwardly extended feature reminiscent of the M87 jet base in Lu et al. (2023). Changing observer inclination induces further distinctive image morphological features, as shown in the 86 GHz maps in Fig. 23 with inclination angle \(40^{\circ}\) (instead of the default M87 inclination \(17^{\circ}\) used throughout this work). In Fig. 23, the \(R-\beta\) model with \(\beta_{e0}=0.01\) jet has image plane projection horizontally elongated in the MAD case and vertically elongated in the SANE case relative to the default orientation. The jet collimation profile generally broadens as the jet inclination tilts away from the line of sight; the broader jet is more like that observed in Lu et al. (2023). More edge-on morphologies are expected to break degeneracies in face-on images due to general relativistic strong lensing effects.

at \(T=20,000M\) and a smaller extrusion at \(T=30,000M\) in the circular polarization maps.

Figure 26 compares the evolution of circular polarization fraction with positron fraction at fiducial time \(T=25,000M\) with that of a later snapshot of the simulation at \(30,000M\). Fifty-one different positron fractions are used in the series of frames producing these curves representing the variation of \(V/I\) with positron fraction. The fiducial time with a prominent flux ejection loop has \(V/I\) monotonically going from the most negatively polarized to approaching 1/3 this value linearly in the fraction of unpaired emitters (slightly slower than linearly around \(\frac{(n_{e})_{0}}{(n_{e})_{0}+2n_{\rm pairs}}=1/3\), where the plasma is an even mix of electrons, positrons and protons). The flux loop occurrence at fiducial time \(T=25,000M\) of the MAD \(a=-0.5\) simulation may be representative of a broader episodic phenomenon occurring throughout the evolution of the flow. The Fig. 27 time series of mass accretion rate \(\dot{m}\) and horizon-threading flux \(\phi\) from \(10,000M<T<30,000M\)
reveals at \(T=25,000M\) a sharp rise in \(\phi\) accompanied by a sharp decrease in \(\dot{m}\), which accords with the flux eruption scenario where a highly polarized magnetic flux loop is added to a magnetically arrested disk. Similar loop morphologies were observed for \(T=17,730M\) and \(27,110M\), where the simulation time series have similar peaks in \(\phi\) and troughs in \(\dot{m}\) as at \(T=25,000M\).

## 7 Discussion and Conclusions

There have been remarkable advances in imaging and simulating AGN jets over the past couple of decades. Despite this progress, potentially vital components -- the jet composition and relativistic particle acceleration -- remain controversial. Our methodology to address these is to focus on one well-studied source, M87, and one region of the electromagnetic spectrum, radio, millimeter and submillimeter, and to incorporate different phenomenological prescriptions to bridge this divide into the simulations and then "observe" them. The actual observations, especially those from the Event Horizon Telescope, can then be used to discern empirically some of the rules that govern jet formation, collimation, polarization and dissipation. This approach can be extended using more sources, frequencies and simulations, and statistical comparisons can also be conducted. These extensions will be discussed in future publications, including the completion of this series on Sgr A*, M87 and 3C 279.

The GRMHD model that we have used to develop a more generally applicable set of techniques is quite specific in terms of spin (\(a/M_{H}=-0.5,0.94\)) and disposition of the surrounding gas (dense orbiting torus with non-relativistic wind at high latitude outside the jet). The magnetic flux density and polarity were consequences of the conditions of the simulations. Given these boundary conditions, the concentration of horizon-crossing magnetic flux and the formation of an electromagnetic outflow or jet are inevitable. Within the Bondi radius (\(\sim 10^{5}M\)), the jet profile is roughly parabolic, consistent with other simulations, e.g., Penna et al. (2013). The form of the flux and velocity variation across the jet should also be reasonably generic, though the stability properties and entrainment at the jet surface are probably sensitive to the numerical details. In conclusion, we should have a pretty representative suite of simulations of the flow and the field to link to the highest resolution mm-observations.

The "Observing" JAB simulations methodology reproduces a surprising number of observed signatures of M87 morphology and dynamics.
Starting with turbulent heating models including that used by the EHT, R-\(\beta\), and the Critical \(\beta\) model of Anantua et al. (2020), we have the expected ring-like global structure for intensity and EVPAs strongest in local maxima of intensity. Adding equipartition-inspired models, the jet magnetic substructure for Constant Electron Beta models characterized by constant electron-gas-to-magnetic pressure along the jet gives a more broadly distributed profile. In the case of M87, the radio emissivity is not simply a function of the gas and/or the magnetic pressure. So the rule for particle acceleration must depend upon other factors (e.g. \(\beta\) and \(\Gamma\)v). We have implemented models where the emissivity is governed by total plasma \(\beta\) in the turbulent plasma and by conversion of magnetic-to-particle energy (parametrized by the contribution \(\beta_{e}\) of radiating electrons and positrons) in the relativistic jet. Our models also go beyond what is currently directly observable in simulating the effects of incrementally changing positron fraction; however, SANE and MAD produce a sharp enough dichotomy to currently be distinguished. The key finding is that polarization is a sharp cleaver distinguishing SANE and MAD accretion flows. In particular, we find distinct polarized emission signatures that depend on the positron content in radically different ways for SANE and MAD simulations.

In summary, the primary findings of the "observing" simulations methodology applied to M87 include:

* Both \(R-\beta\) and Critical \(\beta\) turbulent heating models produce ring-like intensity profiles, with some MAD cases satisfying linear polarization constraints and all satisfying preliminary circular polarization upper bounds.
* The piecewise addition of a Constant \(\beta_{e}\) jet tends to produce broader annular emission profiles.
* MAD and SANE images with polarization at constant overall flux have markedly different morphological properties. The MAD can exhibit a prominent flux eruption in intensity and linear polarization.
* The Faraday depths of the SANE are 2-3 orders greater than for MAD. The SANE linear polarization is more disordered and the circular polarization structure is completely scrambled.
* The circular polarization degree for MAD maps dominated by intrinsic \(V/I\) exhibits a linear vanishing of \(V/I\) in the fraction of paired emitters.

The AGN environment is certainly a messy and chaotic one, replete with winds, gas, dust and molecular clouds, to name a few. The task of emission modeling jet/accretion flow/black hole systems in such an uncertain setting, on the other hand, is a fertile wonderland for the creation of theoretical models and the discovery of new phenomenology. With few constraints on black hole spin or jet composition, vast libraries of GRMHD simulations remain viable for even the most well-studied sources like M87. The "Observing" JAB Simulations approach embraces this uncertainty by using piecewise models and generic plasma compositions to allow for complex interactions, leading to unexpected results such as the positron-mediated Faraday effects behind the sharp SANE-MAD dichotomy in polarization signatures illustrated in this work. The present application leaves us not only closer to characterizing M87 as a polarized MAD flow near horizon scales, but also closer to narrowing the possible plasma descriptions for other JAB systems, such as the jetted AGN 3C 279 that will be the third work of this series, and the vast universe of future horizons to be discovered.
## 8 Future directions

With our suite of turbulent and sub-equipartition heating models with positrons, we have taken a key step in bridging rapidly advancing GRMHD simulations and observations. The stark SANE-MAD dichotomy found in the spatial distribution and time evolution of polarized intensity presents a key opportunity to rule out SANE models of M87 by comparing variability, e.g., the EVPA rotation rate, with the results of the 2017 M87 observations combined with later observing campaigns. It has been demonstrated that prescriptions involving dissipation as a function of effective magnetic field \(B_{e}=\mathcal{D}|\vec{n}\times\vec{B}|\) exhibit violation of bilateral symmetry across the jet axis both in the stationary, axisymmetric, self-similar semi-analytic model (Anantua et al., 2020a) (with general relativistic ray tracing in Emami et al. (2021)), and in the time-dependent 3D GRMHD simulation in Anantua et al. (2018). Though barely visible in M87 observations, e.g., at 86 GHz (Kim et al., 2018), this is predicted to be a robust, albeit generic, observation for EHT, with details depending on whether EHT sees a jet or disk-jet in the inner few gravitational radii from the hole. Signs of this bilateral asymmetry from "Observing" JAB Simulations have appeared in 230 GHz EHT observations of 3C 279 (Kim et al., 2020). We may implement prescriptions in \(B_{e}\) in future emission modeling work to reproduce bilateral asymmetry, particularly for 3C 279.

In FRI sources the disk wind momentum carries the jet, while in FRII sources the jet momentum carries the wind. Another way jets exchange momentum with their surroundings is through currents. We can apply the current density model (Anantua et al., 2018) to investigate whether current sheets account for limb-brightening past \(100M\). The \(B\)-field alone struggles to remain toroidal past \(100M\) unless it is replenished by the disk. In addition to currents, we may systematically associate the dissipation in JAB systems with a number of plausible physical mechanisms, such as Shakura-Sunyaev momentum transport and Newtonian shear, as outlined in the Appendix. These phenomenological models give firm theoretical intuition behind the physical mechanisms powering jets.

In future work, we will also incorporate positrons in a broader range of emission models. We may use positron production rates from Wong et al. (2021) to evolve the local positron fraction, a key advance over the single-positron-ratio maps used here. The computational expense of a three-fluid \((e^{-},e^{+},p)\) simulation may be mitigated by spatial symmetry and temporal stationarity of some simulated flows.

A key feature of the "Observing" JAB simulations exercise presented here is its generality: a simulation of a general relativistic magnetohydrodynamic flow onto a compact object is flexible enough to model jets from neutron stars, black hole/X-ray binaries and AGN alike. In this work, we started with a suite of simulations fairly representative of an AGN in that it exhibited the commonly occurring combination of a thick ion torus confining electromagnetic flux from a polar outflow from a rotating black hole, then fine-tuned it to M87 to emulate its observed JAB system polarized substructure. Disk emission has been emphasized in other work, starting with Sgr A* at our Galactic Center, replete with new near-horizon observations of photon rings courtesy of EHT. Our models will also be applied to near-horizon emission in future EHT observational targets such as the highly variable quasar 3C 279 in the last work of this trilogy.
## Acknowledgments Richard Jude Anantua was supported by the California Alliance at the outset of this investigation and the Oak Ridge Associated Universities Powe Award for Junior Faculty Enhancement and Simons Collaboration on Extreme Electrodynamics of Compact Sources towards the end. Roman Shcherbakov and Alexander Tchekhovskoy have provided excellent guidance and mentorship at the beginning of this investigation. UTSA undergraduate Noah Heredia has been helpful through graphic-related activities. Daniel Palumbo provided observational guidance. Angelo Ricarte was supported by the Black Hole Initiative at Harvard University, made possible through the support of grants from the Gordon and Betty Moore Foundation and the John Templeton Foundation. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the Moore or Templeton Foundations. Razieh Emami acknowledges the support by the Institute for Theory and Computation at the Center for Astrophysics as well as grant numbers 21-atp21-0077, NSF AST-1816420 and HST-GO-16173.001-A. ## 9 Data availability The data underlying this article will be shared on reasonable request to the corresponding author.
2309.13060
Implementing Learning Principles with a Personal AI Tutor: A Case Study
Effective learning strategies based on principles like personalization, retrieval practice, and spaced repetition are often challenging to implement due to practical constraints. Here we explore the integration of AI tutors to complement learning programs in accordance with learning sciences. A semester-long study was conducted at UniDistance Suisse, where an AI tutor app was provided to psychology students taking a neuroscience course (N=51). After automatically generating microlearning questions from existing course materials using GPT-3, the AI tutor developed a dynamic neural-network model of each student's grasp of key concepts. This enabled the implementation of distributed retrieval practice, personalized to each student's individual level and abilities. The results indicate that students who actively engaged with the AI tutor achieved significantly higher grades. Moreover, active engagement led to an average improvement of up to 15 percentile points compared to a parallel course without AI tutor. Additionally, the grasp strongly correlated with the exam grade, thus validating the relevance of neural-network predictions. This research demonstrates the ability of personal AI tutors to model human learning processes and effectively enhance academic performance. By integrating AI tutors into their programs, educators can offer students personalized learning experiences grounded in the principles of learning sciences, thereby addressing the challenges associated with implementing effective learning strategies. These findings contribute to the growing body of knowledge on the transformative potential of AI in education.
Ambroise Baillifard, Maxime Gabella, Pamela Banta Lavenex, Corinna S. Martarelli
2023-09-10T15:35:47Z
http://arxiv.org/abs/2309.13060v1
# Implementing Learning Principles with a Personal AI Tutor: A Case Study ###### Abstract Effective learning strategies based on principles like personalization, retrieval practice, and spaced repetition are often challenging to implement due to practical constraints. Here we explore the integration of AI tutors to complement learning programs in accordance with learning sciences. A semester-long study was conducted at UniDistance Suisse, where an AI tutor app was provided to psychology students taking a neuroscience course (N=51). After automatically generating microlearning questions from existing course materials using GPT-3, the AI tutor developed a dynamic neural-network model of each student's grasp of key concepts. This enabled the implementation of distributed retrieval practice, personalized to each student's individual level and abilities. The results indicate that students who actively engaged with the AI tutor achieved significantly higher grades. Moreover, active engagement led to an average improvement of up to 15 percentile points compared to a parallel course without AI tutor. Additionally, the grasp strongly correlated with the exam grade, thus validating the relevance of neural-network predictions. This research demonstrates the ability of personal AI tutors to model human learning processes and effectively enhance academic performance. By integrating AI tutors into their programs, educators can offer students personalized learning experiences grounded in the principles of learning sciences, thereby addressing the challenges associated with implementing effective learning strategies. These findings contribute to the growing body of knowledge on the transformative potential of AI in education. **Keywords: Artificial Intelligence, Machine Learning, Learning Sciences, AI tutor, Personalization, Retrieval Practice, Spaced Repetition** ## 1 Introduction Learning is one of the most fundamental human processes. One may even say that what makes us human is our extraordinary ability to learn. Our mastery of fire, tools, language, etc. is the fruit of our learning, transmitted from generation to generation. In recent decades our talent for learning has enabled us to build a technology that is itself able to learn. Machine learning, a subset of artificial intelligence (AI), involves the development of computer programs that are capable of improving themselves through experience, or, in other words, learning (Fleuret, 2023). Central to this progress is the evolution of artificial neural networks, loosely inspired by the architecture of the human brain. These networks consist of interconnected neurons organized into layers, and the learning process involves strengthening specific connections to generate increasingly desirable outputs. For example, a classifier network can be trained to identify cats in images. The question of how we as human learners should view this newly emerging machine learning is the subject of much heated debate, often rooted in deep existential fears (Bengio et al, 2023). In this article, however, we adopt a constructive perspective and consider that the most beneficial application of the irrepressible power of machine learning is to focus it on understanding how we learn and how we could learn even better. The application of AI to education, commonly known as AIEd, has been under active research for nearly three decades (see Crompton and Burke (2023) for a recent review). However, the success of this approach has fallen short of early expectations. 
Researchers attribute this to the failure of AI-powered learning technologies to take into account solid theoretical foundations established by the learning sciences (Bartolome et al, 2018). In a comprehensive review of AIEd, Zawacki-Richter et al (2019) underscored the importance of explicit integration of pedagogical theories in AIEd projects. Indeed, the field of learning sciences has made invaluable advances in understanding the process of human learning. Through empirical and theoretical research, several robust principles have been shown to enhance the effectiveness of learning, including personalization, spaced repetition, and retrieval practice (see for example Kirschner and Hendrick (2020)). In this article, we take a step towards bridging the gap between the learning sciences and AIEd by demonstrating how key pedagogical principles can be effectively implemented using machine learning to improve student performance. During a one-semester course at UniDistance Suisse, we introduced a personal AI tutor app as a complementary learning activity for students. The AI tutor first used large language models to generate relevant microlearning questions from course materials. Based on gradual interactions with students, a neural network then built a predictive model of their dynamic knowledge levels, in order to adapt the learning process to their individual needs and abilities. Our main research question was to assess whether active usage of the AI tutor by students resulted in significantly higher exam grades. We also investigated the reliability of the neural network's modeling of students' knowledge level, as it is a prerequisite for effectively implementing personalized learning. ## 2 Reviews In this section, we review three concepts that are at the core of the empirical research presented in this article: human learning, machine learning, and using machine learning to enhance human learning. ### Human learning The field of learning sciences combines cognitive psychology, neuroscience, and AI to deepen our theoretical understanding of how humans learn and to improve practical learning approaches. Dunlosky et al (2013) systematically discussed ten empirically tested learning techniques and evaluated their utility (see also Weinstein et al (2018)). Here we review some of the most effective techniques identified by the learning sciences: spaced practice, retrieval practice, interleaving, elaboration, and personalization. _Spaced practice_, also known as _distributed practice_, has consistently demonstrated some of the strongest benefits for learning (Benjamin and Tullis, 2010). Research has shown that spacing out learning over time benefits long-term retention more than does massing learning sessions in close succession. The robust advantages of spaced practice have been empirically demonstrated across various settings (Toppino and Bloom, 2002; Roediger and Butler, 2011; Jost et al, 2021), and shown to lead to better long-term retention and understanding of learned material. However, spaced practice requires complex planning and self-awareness, as learning material should be reactivated precisely when it has been partially forgotten. It is indeed believed that letting memory degrade creates a "desirable difficulty" that helps learning in the long term (Bjork and Bjork, 2011). 
Another obstacle when implementing spaced practice is that learning materials are generally arranged in a linear, block-by-block sequence, which encourages massed learning of each section before moving on to the next one unidirectionally. _Retrieval practice_ is a well-established learning strategy that involves actively recalling information from memory rather than passively reviewing it (Karpicke and Aue, 2015; Pan and Rickard, 2018). Retrieval practice enhances long-term memory, promotes meaningful learning, and facilitates knowledge transfer to new contexts. It has proven effective across a wide range of learning situations (Roediger and Butler, 2011). But the benefit of retrieval practice depends on successful retrieval, as excessively high or low success rates are unlikely to improve memory. Moreover, overly difficult retrieval practice exercises may negatively impact students' self-efficacy and confidence (Weinstein et al, 2018). _Interleaving_ enhances learning by alternating between different ideas or concepts, as opposed to the common practice of focusing on a single theme during a learning session (see for example Rohrer and Taylor (2007)). Interleaving promotes broader understanding, discrimination, and the application of efficient strategies. However, caution is necessary when implementing interleaving, as the relevance and relatedness of the interleaved material can play a significant role (Weinstein et al, 2018). _Elaboration_ involves connecting new information to pre-existing knowledge (Reigeluth, 1979). It enhances memory retention and understanding by encouraging deeper processing and organization of concepts. Elaboration also supports the transfer of knowledge to new situations. While elaboration offers substantial benefits, implementing it in practice poses challenges such as how to instruct students to elaborate, how to measure the depth of processing, and how to make elaboration time-efficient (Weinstein et al, 2018). _Personalization_ aims to create a flexible learning experience tailored to meet the unique needs of each individual. The seminal work of Bloom (1984) reported that students who received personalized one-on-one tutoring performed better than 98% of students who received uniform training in a group. Despite high expectations, the realization of personalized tutoring in practice has remained elusive because of its excessive costs. In addition, recent experiments and literature on personalization have yielded inconclusive results and revealed pedagogical gaps in implementing personalized approaches (Pane et al, 2017; Castaneda and Selwyn, 2018). It is worth noting that these techniques can often be advantageously combined. In this context, personalization plays a crucial role, in particular by determining appropriate spacing intervals for each learner, by selecting the desirable levels of difficulty for retrieval practice exercises, and by interleaving and connecting concepts in a way that is adapted to individual progress (Bloom, 1984). Indeed, desirable difficulty depends on individual capacities, preferences, and energy levels, highlighting the significance of personalized learning to enhance the overall learning experience (Bjork and Bjork, 2011). ### Machine learning Artificial intelligence (AI) is a field of research focused on the development and implementation of computer systems capable of performing tasks that typically require human intelligence. 
Recent advances in AI have been largely driven by progress in machine learning (ML), which involves the use of algorithms that can automatically enhance their performance and learn from data. By training on large datasets, ML systems can recognize patterns and correlations, enabling them to extract meaningful insights and make accurate predictions. This approach has led to significant breakthroughs in various domains such as image and speech recognition, natural language processing, and text generation (see Fleuret (2023) for a recent introduction to machine learning). A fundamental component of machine learning is the concept of artificial neural networks, which draw inspiration from the structure and functioning of the human brain. Neural networks consist of interconnected nodes or "neurons," usually organized in layers. Through a process known as training, these networks strengthen specific connections to produce increasingly desirable outputs. This ability to adapt and improve their performance autonomously is a crucial aspect of ML models. A notable advancement in neural network architecture was the emergence of "transformer" networks (Vaswani et al, 2017), which have proven to be powerful models for sequence modeling tasks, such as machine translation and document summarization. Transformers excel in natural language processing and understanding by effectively capturing long-range dependencies and contextual information in complex texts. Large language models, which harness the power of transformers and extensive amounts of pre-existing text data, have demonstrated the ability to generate human-like language responses. An example that has garnered significant attention and acclaim is OpenAI's GPT (Generative Pre-trained Transformer) (Brown et al, 2020; OpenAI, 2023). ### Artificial intelligence in education While the application of AI in education (AIEd) has been the subject of research for over three decades, recent advancements in machine learning have unlocked a wide range of possibilities to enhance education (Hannele Niemi, 2022; Ouyang et al, 2022; Crompton and Burke, 2023). These applications can be grouped into four main categories (Zawacki-Richter et al, 2019). (1) Profiling and Prediction: AI algorithms are used to analyze data and create student profiles, enabling timely interventions and predicting outcomes such as admission, academic achievement, and dropout rates. (2) Assessment and Evaluation: AI-based assessment and evaluation streamline the grading process, offering instant feedback and facilitating comprehensive assessments, including the evaluation of creativity and critical thinking skills. Instructors can also benefit from AI assistance in generating questions and creating tests. (3) Adaptive Systems and Personalization: Adaptive systems and personalization address the limitations of the traditional one-size-fits-all approach. These systems provide learning experiences that cater to individual students' needs and learning abilities, enhancing their effectiveness and engagement. (4) Intelligent Tutoring Systems: Intelligent tutoring systems leverage AI to provide personalized and adaptive instruction (Mousavinasab et al, 2021). These systems simulate one-on-one tutoring experiences, tailoring learning materials and feedback to optimize student engagement, knowledge retention, and collaboration. 
The integration of AI in education presents significant pedagogical opportunities by enhancing student support systems, fostering adaptive learning environments, and improving educational practices and outcomes. AI technologies serve as valuable assistants, providing students with personalized support in their zone of proximal development, where they can rapidly develop with appropriate assistance at any time of the day or week (Vygotsky, 1978).

## 3 Methods

### AI tutor app

The AI tutor used in this study was a mobile and web application developed by MAGMA Learning (MAGMA Learning, 2019). It implemented a personalized approach to retrieval practice and spaced repetition, with the goal of enhancing students' learning effectiveness by consolidating their grasp of key concepts for the long term. Before the start of the course in August 2022, a comprehensive set of 800 questions was generated from lecture materials using GPT-3 (Brown et al, 2020) and other natural language processing techniques. The questions encompassed various formats such as definitions, clozes (fill-in-the-blank), true/false, multiple-choice, image-based, and acronyms. The questions also had varying difficulty levels to accommodate the proficiency distribution among students. Each question was linked to the specific lecture slide that inspired its generation, providing students with the relevant contextual feedback if needed. After generation, the questions were also individually reviewed and validated by the course instructor. Examples of the generated questions can be seen in Figure 1 as well as in Appendix 0.A.

Figure 1: Examples of questions generated by the AI tutor app. From left to right, we see a definition, a question based on an image, and a multiple-choice question with feedback (see also Appendix 0.A for questions in English). On the right we see the "learnet," a visual organization of all key concepts and their grasps by the student (47.7% in this case).

Based on interactions with students and their answers to questions, the AI tutor dynamically predicted the probability of a correct answer--referred to as the "grasp"--for each student and each question. These predictions were made by an artificial neural network trained with input features derived from information on questions, students, and their historical interactions. With this personalized understanding of each individual student's knowledge levels and their evolution (through learning and forgetting), the AI tutor presented the questions considered most relevant and beneficial whenever the student accessed the app. The questions selected by the app aimed to maintain an appropriate level of challenge for each student, avoiding unstimulatingly easy as well as frustratingly hard questions. This approach aligns with the concept of "desirable difficulty" and ensures an engaging learning experience within the student's zone of proximal development (Vygotsky, 1978).

The app provided students with the capability to track their progress through a visual representation of their knowledge called the "learnet" (shown on the right of Figure 1). The learnet presented a three-dimensional organization of all the key concepts from the learning materials and their interrelations. Each point of the learnet corresponded to a specific course concept, with its brightness indicating the student's grasp of that particular concept. Darker learnets served as motivation for students, indicating areas where they still had more to learn or had already forgotten. Conversely, brighter learnets communicated to students their high knowledge levels.
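As a rough illustration of the grasp modeling described above, the sketch below fits a logistic model that maps a few interaction features to a probability of answering correctly. The feature set, the example values, and the model form are hypothetical stand-ins for the app's neural network, whose exact inputs and architecture are not detailed in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical features per (student, question) interaction:
# [question difficulty, days since last review, past accuracy on the concept,
#  number of prior exposures]; label 1 = answered correctly.
X = np.array([
    [0.8, 10.0, 0.40, 1.0],
    [0.3,  1.0, 0.90, 5.0],
    [0.5,  3.0, 0.70, 3.0],
    [0.9, 20.0, 0.20, 0.0],
    [0.4,  2.0, 0.80, 4.0],
    [0.7, 14.0, 0.30, 1.0],
])
y = np.array([0, 1, 1, 0, 1, 0])

# Standardize features so plain gradient descent on the logistic loss behaves well.
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd

w, b, lr = np.zeros(X.shape[1]), 0.0, 0.5
for _ in range(3000):
    p = sigmoid(Xs @ w + b)
    w -= lr * Xs.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

# "Grasp" = predicted probability of a correct answer for a new interaction.
candidate = (np.array([0.6, 7.0, 0.55, 2.0]) - mu) / sd
print(f"predicted grasp: {sigmoid(candidate @ w + b):.2f}")
```

Read this way, the app's question selection amounts to surfacing, at each session, items whose predicted grasp sits near a target success rate, which is one way to operationalize retrieval practice at a "desirable difficulty" and spaced repetition (reactivating concepts whose predicted grasp has decayed).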
### Design

The study took place during the Fall semester of 2022-2023 in the "Neuropsychology and Neurosciences" course offered by UniDistance Suisse as part of the bachelor's curriculum in psychology. A total of 61 students enrolled in the course, which was taught by one of the co-authors (PBL). The course was delivered in a distance learning format, with online materials and a total of five live and recorded webinars. It consisted of 23 lessons grouped into 5 main periods. Learning materials were available online on a learning management platform (Moodle, 2002). Throughout the semester, students had the opportunity to take five quizzes on the Moodle platform, each containing approximately 20 multiple-choice questions, which could be attempted repeatedly for further practice. The course was worth 10 ECTS credits, corresponding to a workload of 250-300 hours for the students. On January 28, 2023, a total of 51 students (86% female; mean age = 36.8, \(SD=9.2\)) took the final exam for the course. The exam consisted of 15 multiple-choice questions. The grading system for the exam ranged from 1 to 6, with a minimum passing grade of 4. In parallel to the "Neuropsychology and Neurosciences" course, most students were also enrolled in the parallel "Neuroanatomy" course, and 47 took the exam for the parallel course on the same day. Both courses had similar formats and quantities of contents. The AI tutor app was provided to students as a complementary learning activity for the main "Neuropsychology and Neurosciences" course, but not for the parallel "Neuroanatomy" course. The instructor periodically reminded students to use it in order to enhance their acquisition of the learning materials. Students could use the AI tutor at any time during the semester, but only had access to the questions relevant to the current and preceding periods. Students were free to choose the extent to which they used the app, including the frequency, duration, and timing.

## 4 Results

Out of the 51 students who took the final exam for the main course, 43 students could be linked to accounts created on the AI tutor app. There were 47 students who took both the "Neuropsychology and Neurosciences" exam and the parallel "Neuroanatomy" exam, and among them 40 students were identified as users on the AI tutor app. On average, students gave 1800 answers on the app (\(SD=2700\)) and spent 7.2 hours (\(SD=10\)) learning on the app, on 26 distinct days (\(SD=31\)). We studied the relationship between learning with the personal AI tutor app and the grades achieved on the final exam. We then explored the causal nature of this relationship by comparing performance with the parallel "Neuroanatomy" course (without AI tutor), as well as with another online learning activity (quizzes on Moodle). In addition, we investigated the correlation between neural-network predictions of students' grasps and their exam grades.

### Enhanced academic performance

Among the cohort of 51 students who took the exam for the main course, 43 students signed up on the AI tutor app and used it to varying extents, while 8 students did not sign up. Our first objective was to determine whether students who were active on the app got better grades on the final exam than inactive students. To define "active" participation, we set a minimum threshold for the number of answers provided on the app, denoted as \(N_{\min}\).
The inactive group was comprised of students who provided fewer than \(N_{\min}\) answers or did not sign up at all. Conversely, active students were defined as those who provided more than \(N_{\min}\) answers. Although results could a priori depend on the choice of threshold \(N_{\min}\), we found a significant increase in average grade for the active group, regardless of the value of the threshold on a large range from 0 to 1000, \(N_{\min}\in[0,1000]\). On a scale of 1 to 6, the average grade for the active group was higher by 0.71 points (\(SD=0.08\)) compared to the inactive group (see the evolution of average grades in the right-hand side of Figure 2). We found a substantial effect size, reflected by an average Cohen's \(d\) of 0.69 (\(SD=0.09\)) over the range \(N_{\min}\in[0,1000]\), as illustrated on the right of Figure 2. In order to compare active and inactive students, we conducted independent \(t\)-tests with 49 degrees of freedom. The evolution of the \(p\)-value with respect to \(N_{\min}\) is depicted in Figure 3. Over the range of \(N_{\min}\in[0,1000]\), all \(p\)-values were found to be below 0.06, with 95% of them below 0.05.

### Comparison to parallel course

As previously mentioned, most students enrolled in the main "Neuropsychology and Neurosciences" course simultaneously took the parallel "Neuroanatomy" course during the Fall semester of 2022-2023. Whereas the AI tutor app was available for the main course, it was not provided for the parallel course. This presented an opportunity to investigate whether students who actively learned with the AI tutor for the main course ranked higher (in terms of exam grades) than they did in the parallel course.

Figure 2: _Left-hand side_: Distribution of students (\(N=51\)) categorized as active or inactive according to the minimum number of answers provided on the app. _Right-hand side_: Average grades for active and inactive students, as distinguished by a minimum number of answers given on the app. The effect sizes (Cohen's \(d\)) are indicated on the right vertical axis.

For the 47 students who took both the main and the parallel exams, we compared their rankings in terms of percentiles between the two courses. Active students were defined as those who provided more than \(N_{\min}=1000\) answers on the AI tutor app (15 active students). Compared to their rankings in the parallel exam, active students gained 18.1 percentile points more than inactive students for the main exam. The independent-samples \(t\)-test gave a \(t\)-statistic of \(t(45)=2.33\) and a \(p\)-value of \(p=0.025\) (Figure 4). The mean percentile gains were \(12.4\%\) for active students and \(-5.7\%\) for inactive students, both with a standard deviation of \(SD_{1}=SD_{2}=25\%\). Cohen's \(d\) was \(0.73\).

As a next step, we examined the number of students within the active and inactive groups based on the threshold \(N_{\min}\), along with the average percentiles gained for each group. The results of independent \(t\)-tests showed a sharp differentiation between the two groups around \(N_{\min}=750\) answers (Figure 5). The significance was confirmed by the \(p\)-values shown on the right of Figure 5. While \(p\)-values for \(N_{\min}<742\) were all above 0.2, for \(N_{\min}\geq 742\) all \(p\)-values were below 0.07, with 82% below 0.05.

Figure 4: Percentile points gained for the main course, compared to the parallel course (\(N=47\)). Active students (who provided more than 1000 answers on the AI tutor app) gained on average \(12.4\%\) percentile points while inactive students lost \(5.7\%\).

Figure 3: \(p\)-values for the \(t\)-tests comparing the average grades of active and inactive students based on the minimum number of answers. The blue dashed line represents the conventional \(p\)-value threshold of \(0.05\).
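As a quick way to reproduce the style of comparison reported in this section, the sketch below recomputes the effect size and the independent-samples \(t\)-test from the summary statistics quoted above for the parallel-course comparison. The use of a pooled (equal-variance) formula and the group sizes of 15 active versus 32 inactive students (out of the 47 who took both exams) are our reading of the text, not values taken from the authors' analysis code.

```python
from math import sqrt
from scipy import stats

# Reported summary statistics for the parallel-course comparison (Section 4.2).
m_active, m_inactive = 12.4, -5.7      # mean percentile gains
sd_active = sd_inactive = 25.0         # standard deviations
n_active, n_inactive = 15, 32          # 47 students took both exams

# Pooled standard deviation, Cohen's d, and the equal-variance t statistic.
sp = sqrt(((n_active - 1) * sd_active**2 + (n_inactive - 1) * sd_inactive**2)
          / (n_active + n_inactive - 2))
d = (m_active - m_inactive) / sp
t = (m_active - m_inactive) / (sp * sqrt(1 / n_active + 1 / n_inactive))
df = n_active + n_inactive - 2
p = 2 * stats.t.sf(abs(t), df=df)      # two-sided p-value
print(f"Cohen's d = {d:.2f}, t({df}) = {t:.2f}, p = {p:.3f}")
# Output lands close to the reported d = 0.73, t(45) = 2.33, p = 0.025.
```

The small differences from the published values are expected, since the paper works from the raw per-student data rather than the rounded summaries used here.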
### Comparison to Moodle quizzes

To further put these results in perspective, recall that students of the main course were provided with 5 quizzes on the Moodle platform. Each quiz consisted of around 20 multiple-choice questions covering the different lessons of the course. Students could take the quizzes multiple times and received feedback with respect to their answers. We evaluated whether students who engaged in active learning using the Moodle quizzes for the main course achieved higher rankings (in terms of exam grades) compared to their performance in the parallel course. As shown in both descriptive and inferential statistics in Figure 6, taking a larger number of quizzes on the Moodle platform did not consistently lead to higher rankings. The percentile improvement compared to the parallel course was not clearly significant, with \(p\)-values that continued to fluctuate across the range of thresholds.

### Validation of grasp prediction

Finally, we investigated the ability of the AI tutor's neural-network models to meaningfully represent each student's grasp of key course concepts. For the AI tutor to effectively personalize the learning experience, it was imperative that these models be realistic. To evaluate the relevance of the AI tutor's predictions, we studied the correlation between the predicted grasps (integrated over the duration of the semester) and students' actual grades on the final exam. We focused on a group of students who consistently engaged with the app on at least 30 different days throughout the semester (14 students). Our analysis revealed a strong positive correlation between predicted grasp and exam grade, as evidenced by a high Pearson coefficient of \(r(14)=0.81\), \(p<0.001\), \(95\%\) CI \([0.479,0.936]\) (Figure 7). These results indicated that the AI tutor's neural-network model indeed effectively represented students' knowledge levels.

Figure 5: _Left-hand side_: Distribution of students (\(N=47\)) categorized as active or inactive based on the minimum number of answers provided on the app. _Right-hand side_: Percentile gain from the parallel exam to the main exam. Inactive students lost around 5 percentile points regardless of the threshold \(N_{\min}\in[0,2000]\) while active students gained between 10 and 15 percentile points for \(N_{\min}>742\). The \(p\)-values are indicated on the right vertical axis.

## 5 Discussion

In this article we evaluated the effectiveness of an artificial intelligence app in implementing learning strategies based on principles validated by the learning sciences. We provided psychology students at UniDistance Suisse with a personal AI tutor app for a semester. We first found that students who actively engaged with the AI tutor app achieved significantly higher grades on the final exam compared to inactive students (section 4.1). This outcome is consistent with previous research underscoring the efficacy of learning principles such as personalization, retrieval practice, and spaced repetition (Dunlosky et al, 2013). These principles were effectively incorporated into the design of the AI tutor app.
Figure 6: _Left-hand side_: Distribution of students (\(N=47\)) categorized as active or inactive based on the minimum number of answers provided on the Moodle quizzes. _Right-hand side_: Percentile gain from the parallel exam to the main exam. Students who took many Moodle quizzes did not get significantly better grades than in the parallel course. Compare with Figure 5.

Figure 7: Strong correlation (\(r=0.81\)) between the integral grasp predicted by the AI tutor and the exam grade, for the group of most regular students (\(N=14\)).

However, it is important to note that this result in itself does not establish a causal relationship between app usage and improved grades. Indeed, it could be that motivated students were more inclined to use the app, and their inherent motivation would have led to superior academic performance in any event, regardless of their app usage (Busato et al, 2000). In other words, it may be that app usage and improved grade were both influenced by other variables, such as an inherent motivation to study and take part in learning activities. Counter-evidence to this possibility was provided by our findings that students who learned with the AI tutor app improved their rankings by up to 15 percentile points relative to the parallel course, for which the AI tutor app was not provided (section 4.2). If active students obtained better grades only because they were more motivated (both to use the app and to pass the exam), then their motivation would have likely resulted in superior performance in the parallel course as well, and therefore their relative improvements would average to zero. Note that further evidence for the positive impact of learning with the AI tutor comes from the fact that significant effects on percentile gains only start to appear above a threshold of around 750 answers provided by students. This is precisely what one could expect to observe if the effect was due to the AI tutor app, since the number of available questions was of the same order (800).

Still, an explanation for why students only improved in the main course (with the AI tutor) could be that these students simply were more engaged in that course than in the parallel course (without AI tutor). If so, their engagement would have been a common cause for their high levels of participation in all learning activities (including using the AI tutor app) as well as for their superior grades in the main course. However, if this were the case, then we would expect that another learning activity, namely Moodle quizzes, should also lead to consistently significant percentile gains, which was not observed (section 4.3).

In sum, all of our findings converge to suggest a beneficial impact of learning with the personal AI tutor app on academic performance. We attribute this success to the capacity of the app's neural networks to model human learning processes in a meaningful way, as suggested by the strong correlation between the predicted grasp and the exam grade (section 4.4). As with all intervention studies, appropriate control groups and conditions are difficult to design and interpret when studying the effectiveness of learning apps. One limitation of our study is that we did not incorporate an active control group with random assignment of participants. We chose not to adopt this approach in our design, as the group of students without access to the app might have felt disadvantaged.
Indeed, academic institutions have the responsibility to provide all of their students with the same learning activities and opportunities. Nevertheless, our statistical analyses allowed us to go beyond simply observing the relationship between app usage and exam grade. We were able to provide additional corroborating evidence by comparing the performances of the same group of students across different courses and online learning activities. One interesting direction for future research would consist in providing a personal AI tutor app to a group of students, but only activating personalized learning functionalities for half of them. A preliminary unpublished work (Alemanno, 2022) showed that the group of students with deactivated personalization performed increasingly poorly on the app and eventually disengaged completely. Research on a larger scale could help to confirm the findings presented here and identify precisely what learning behaviors with the AI tutor app enhance performance the most. We have presented strong arguments in favor of the effectiveness of implementing learning principles using a personal AI tutor, using natural language processing and machine learning. In contrast to most studies on the subject, the study presented here was not reductionist, but rather corresponded to realistic learning conditions that lasted an entire semester. Given the wide range of learning activities in any single course (lectures, reading, quizzes, etc.), it is surprising and promising that the addition of an AI tutor can have such significant and beneficial effects. ## Declarations * Funding This research was funded by UniDistance Suisse. * Conflict of interest One of the co-authors (MG) is the CEO of MAGMA Learning, the company that developed the AI tutor app studied in this article. * Ethics approval Data usage was approved by UniDistance Suisse. Since the data are archival and anonymous, no written informed consent was required. ## Appendix A Examples of microlearning questions **Question:** The envelops in and stamps them with a tag indicating the location where they are to be transported. **Correct answers:** Golgi apparatus, proteins, vesicles. **Distractors:** enzymes, ribosomes, lipids, cytoskeletal elements. **Question:** The four diffuse modulatory systems of the central nervous system are: **Correct answers:** Cholinergic system, Serotonergic system, Noradrenergic system, Dopaminergic system. **Distractors:** Glutamatergic system, Gabaergic system. **Question:** The hormone produced by the neurohypophysis that is implicated in reproduction, coupling, organs, parentality, etc. **Correct answers:** Oxytocin. **Distractors:** Prolactin, Luteinizing hormone, Follicle-stimulating hormone, Oxycontin. Question:The model of associative that is based on long-term describes the of an between distinct stimuli that are perceived simultaneously. Correct answers:learning, potentiation, formation, association. Question:The inhibition of inhibitory interneurons interneurons in the leads to an in the liberation of dopamine in the of the ventral striatum. Correct answers:GABAergic, ventral tegmental area, increase, nucleus accumbens. Distractors:gylcenergic, decrease, amygdala. Question (Figure 1):Information is kept in memory indefinitely, perhaps for life; involves the temporal lobes and primary sensory and motor regions depending on the type of information processed. Correct answers:Long-term memory. Distractors:Timeless memory, Episodic memory, Explicit memory, Working memory. 
Question (Figure 1):Together, the caudate nucleus, the putamen and the nucleus accumbens form this structure: Correct answers:Striatum. Distractors:Entorhinal cortex, Substantia nigra, Thalamus, Globus pallidus. Question (Figure 1):The four diffuse activation systems of the CNS are: Correct answers:The dopaminergic system, The cholinergic system, The noradrenergic system, The serotonergic system. Distractors:The gabaergic system.
2301.00158
Robust Synergistic Hybrid Feedback
Synergistic hybrid feedback refers to a collection of feedback laws that allow for global asymptotic stabilization of a compact set through the following switching logic: given a collection of Lyapunov functions that are indexed by a logic variable, whenever the currently selected Lyapunov function exceeds the value of another function in the collection by a given margin, then a switch to the corresponding feedback law is triggered. This kind of feedback has been under development over the past decade and it has led to multiple solutions for global asymptotic stabilization on compact manifolds. The contributions of this paper include a synergistic controller design in which the logic variable is not necessarily constant between jumps, a synergistic hybrid feedback that is able to tackle the presence of parametric uncertainty, backstepping of adaptive synergistic hybrid feedbacks, and a demonstration of the proposed solutions to the problem of global obstacle avoidance.
Pedro Casau, Ricardo G. Sanfelice, Carlos Silvestre
2022-12-31T08:58:48Z
http://arxiv.org/abs/2301.00158v1
# Robust Synergistic Hybrid Feedback ###### Abstract Synergistic hybrid feedback refers to a collection of feedback laws that allow for global asymptotic stabilization of a compact set through the following switching logic: given a collection of Lyapunov functions that are indexed by a logic variable, whenever the currently selected Lyapunov function exceeds the value of another function in the collection by a given margin, then a switch to the corresponding feedback law is triggered. This kind of feedback has been under development over the past decade and it has led to multiple solutions for global asymptotic stabilization on compact manifolds. The contributions of this paper include a synergistic controller design in which the logic variable is not necessarily constant between jumps, a synergistic hybrid feedback that is able to tackle the presence of parametric uncertainty, backstepping of adaptive synergistic hybrid feedbacks, and a demonstration of the proposed solutions to the problem of global obstacle avoidance. Hybrid Systems, Adaptive Control, Robotics, Uncertain Systems ## I Introduction ### _Background and Motivation_ In this paper, we consider the problem of globally asymptotically stabilizing continuous-time plants of the form \[\dot{x}_{p}=F_{p}(x_{p},u_{p},\theta) \tag{1}\] where \(x_{p}\in\mathcal{X}_{p}\) denotes the state of the plant, \(u_{p}\in\mathcal{U}_{p}\) is the input, and \(\theta\) represents the parameters of the plant. To this end, we propose the following hybrid controller \[\dot{\chi}_{c}\in\hat{F}_{c}(x_{p},\chi_{c},x_{c},u_{c})\] \[\dot{x}_{c}\in F_{c}(x_{p},\chi_{c},x_{c})\] \[\chi_{c}^{+}=\chi_{c}\] \[x_{c}^{+}\in G_{c}(x_{p},\chi_{c},x_{c})\] where \(\chi_{c}\in\hat{\mathcal{X}}_{c}\) and \(x_{c}\in\mathcal{X}_{c}\) represent different components of the state of the controller, \(\hat{F}_{c}\) and \(F_{c}\) are the flow maps associated with \(\chi_{c}\) and \(x_{c}\), respectively, \(C\) denotes the flow set, \(G_{c}\) defines the update law for jumps of \(x_{c}\) and \(D\) is the jump set. The key differences between \(\chi_{c}\) and \(x_{c}\) is the fact that \(\chi_{c}\) does not change its value during jumps and also that the flows of \(\chi_{c}\) depend on a virtual input variable \(u_{c}\in\mathcal{U}_{c}\). More precisely, the goal in this paper is to design a controller that globally asymptotically stabilizes a compact set \(\mathcal{A}\) for the closed-loop system resulting from the interconnection between (1) and (2) both when the parameter \(\theta\) is known, but also when it is only known to belong to a given compact set \(\Omega\). In the presence of topological obstructions, this objective is not attainable via continuous feedback and, even though it might be attainable through discontinuous feedback, the resulting closed-loop system may not be robust to arbitrarily small noise (cf. [1] and [2]). To illustrate these limitations of continuous/discontinuous feedback, let us consider the problem of globally asymptotically stabilizing the point \((1,0)\) for the dynamical system \[\dot{x}_{1}=-x_{2}u_{p},\qquad\qquad\dot{x}_{2}=x_{1}u_{p},\] where \(x_{p}:=(x_{1},x_{2})\in\mathcal{X}:=\)\(\mathsf{S}^{1}:=\{x_{p}\in\mathbb{R}^{2}:|x_{p}|=1\}\) is the state variable and \(u_{p}\in\mathbb{R}\) denotes the input. In this direction, let \(h(x_{p})=(1-x_{1})/2\) for each \(x_{p}\in\mathsf{S}^{1}\). 
The gradient-based feedback law is given by \(u_{p}=\begin{bmatrix}x_{2}&-x_{1}\end{bmatrix}\nabla h(x_{p})\) which represents the projection of the gradient of \(h\) onto the tangent space to \(\mathsf{S}^{1}\) at \(x_{p}\). It follows from standard Lyapunov stability arguments that \((1,0)\) is asymptotically stable for the closed-loop system, but it is not globally asymptotically stable since \(x=(-1,0)\) is also an equilibrium point. It can be argued that the discontinuous feedback law \[u_{p}=\kappa_{p}(x_{p})=\begin{cases}-1&\text{if }x_{1}=-1\\ 0&\text{if }x_{1}=1\\ -\dfrac{x_{2}}{|x_{2}|}&\text{otherwise}\end{cases} \tag{3}\] defined for each \(x_{p}\in\mathsf{S}^{1}\) globally asymptotically stabilizes \((1,0)\) if one considers Carathéodory solutions to the discontinuous closed-loop system because, in this case, \((-1,0)\) is not an equilibrium point. However, due to the discontinuity of the feedback law (3), arbitrarily small noise can induce chattering, a behavior that is elucidated by considering generalized solutions to discontinuous dynamical systems, such as Krasovskii solutions (cf. [3]). These limitations of continuous and discontinuous feedbacks constitute the motivation for the development of synergistic hybrid feedback. If \(\hat{F}_{c}\) in (2) defining the dynamics of \(\chi_{c}\) is given, then \(\chi_{c}\) can become part of the state of (1) and the stated objective can be attained through the design of a hybrid controller \(\mathcal{H}_{c}:=(C,F_{c},D,G_{c})\) with state \(x_{c}\in\mathcal{X}_{c}\) and dynamics \[\dot{x}_{c}\in F_{c}(x,x_{c})\qquad(x,x_{c})\in C\] \[x_{c}^{+}\in G_{c}(x,x_{c})\qquad(x,x_{c})\in D\] assigning \(u:=(u_{p},u_{c})\in\mathcal{U}:=\mathcal{U}_{p}\times\mathcal{U}_{c}\) via a feedback law \((x,x_{c})\mapsto\kappa(x,x_{c})\), where \(x:=(x_{p},\chi_{c})\in\mathcal{X}:=\mathcal{X}_{p}\times\hat{\mathcal{X}}_{c}\) is the state of the system to control with dynamics described by the following differential inclusion \[\dot{x}\in F_{\theta}(x,x_{c},u):=F_{p}(x_{p},u_{p},\theta)\times\hat{F}_{c}(x_{p},\chi_{c},x_{c},u_{c}), \tag{5}\] where \(\theta\) is a constant. This formulation enables the controller design for systems whose dynamics depend on the controller state. For example, given a plant with dynamics \(\dot{x}_{p}=f_{p}(x_{p})+H_{p}(x_{p})u_{p}+W_{p}(x_{p})\theta\) where \(f_{p},H_{p},W_{p}\) are functions with the appropriate dimensions, suppose that the reference trajectory to be tracked is denoted by \(x_{d}\) and that it is generated by the system \(\dot{x}_{d}=f_{p}(x_{d})+H_{p}(x_{d})\xi_{d}\) for some signal \(\xi_{d}\). The tracking error \(x:=x_{p}-x_{d}\) can be taken as the state of the system (5), in which case we have that \(F_{\theta}(x,x_{c},u)=f_{p}(x_{d}+x)-f_{p}(x_{d})-H_{p}(x_{d})\xi_{d}+H_{p}(x_{d}+x)u+W_{p}(x_{d}+x)\theta\) by identifying \(u_{p}\) with \(u\) and by considering \((x_{d},\xi_{d})\) as components of the controller variable \(x_{c}\). More practically, \(x\) can be considered to be the part of the state of the closed-loop system that remains unchanged during jumps. In this paper, we present two novel synergistic hybrid controllers for global asymptotic stabilization of a compact set for a closed-loop system with the plant dynamics in (5). The first controller design considers that the parameter \(\theta\) is known, while the second controller design considers that \(\theta\) is unknown but belongs to a known compact set \(\Omega\). 
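For readers who wish to reproduce the introductory unit-circle example numerically, the following Python sketch simulates the closed loop under the gradient-based feedback and under the discontinuous feedback (3), with small measurement noise injected into the feedback. The forward-Euler discretization, step size, noise level, and initial conditions are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def simulate(kappa, x0, steps=20000, dt=1e-3, noise=0.0, seed=0):
    """Forward-Euler simulation of dx1 = -x2*u, dx2 = x1*u on the unit circle,
    with optional measurement noise fed into the feedback law (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    u_hist = []
    for _ in range(steps):
        x_meas = x + noise * rng.standard_normal(2)
        u = kappa(x_meas / np.linalg.norm(x_meas))
        u_hist.append(u)
        x = x + dt * np.array([-x[1] * u, x[0] * u])
        x = x / np.linalg.norm(x)          # keep the state on S^1
    return x, np.array(u_hist)

def kappa_grad(x):
    # u = [x2, -x1] @ grad h(x) with h(x) = (1 - x1)/2, i.e. u = -x2/2.
    return -x[1] / 2.0

def kappa_disc(x):
    # Discontinuous feedback (3); on S^1 the x2 = 0 cases are x1 = -1 and x1 = 1.
    if x[1] == 0.0:
        return -1.0 if x[0] < 0.0 else 0.0
    return -np.sign(x[1])

# Gradient feedback started exactly at the antipodal point: it remains there.
x_final, _ = simulate(kappa_grad, [-1.0, 0.0])
print("gradient feedback, final state:", x_final)

# Discontinuous feedback with small measurement noise: the input chatters whenever
# x2 is at the noise level, illustrating the sensitivity of (3) to small noise.
x_final, u = simulate(kappa_disc, [-1.0, 0.0], noise=1e-3)
print("discontinuous feedback, sign changes in u:", int(np.sum(np.diff(np.sign(u)) != 0)))
```

In this discretized, noise-driven simulation the state eventually leaves \((-1,0)\) and approaches \((1,0)\), but the large number of sign changes in the input is the chattering phenomenon discussed above; it is precisely this behavior that hysteretic switching between potential functions is designed to avoid.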
### _Literature Review_ Synergistic hybrid feedback is a hybrid control strategy that consists of a collection of potential functions that asymptotically stabilize a given compact set by gradient descent feedback. If, for all equilibria that do not lie within the given compact set, there exists another function in the collection that has a lower value and does not share the same equilibria, then it is possible to achieve global asymptotic stabilization of the given compact set through hysteretic switching (see, e.g., [4]). Synergistic hybrid feedback came to prominence with the work [3] on quaternion-based feedback for global asymptotic attitude tracking, thereby solving the attitude control problem (cf. [5]). The framework of synergistic hybrid feedback provides not only a solution to the problem of attitude control but, more importantly, it provides a robust solution for global asymptotic stabilization on compact manifolds. The works [6] and [7] leverage the concepts at the root of synergistic hybrid feedback and use them to design controllers that are applicable to a broad class of systems. However, most of the contributions on this class of hybrid controllers are on the control of robotic systems, such as pendulum stabilization [8], vector-based rigid body stabilization [9, 10], tracking for marine and aerial vehicles [11, 12], and rigid body tracking through rotation matrix feedback [13, 14]. Within the field of robotics, we single out the problem of obstacle avoidance, which is also addressed in this paper. Obstacle avoidance is an important and longstanding problem that reflects the need to drive the state of a system from one place to another while avoiding obstacles in its way. Several solutions to this problem have been proposed over the last few decades, as highlighted in [15]. In particular, it is possible to find both stochastic [16] as well as deterministic approaches [17] to tackle the obstacle avoidance problem. However, it was shown in [18] that in a "sphere world," there is at least one saddle equilibrium point for each obstacle within the state space, thus precluding global asymptotic stabilization of a setpoint by continuous feedback. To address this limitation, hybrid control solutions to the problem of obstacle avoidance were proposed in [19, 20, 7] and [21]. Though not directly addressed in this paper, the concepts of synergistic hybrid feedback have also been used for observer design, optimization, and control barrier function design (see [22, 23] and [21]). ### _Contributions_ The contributions in this paper are as follows: 1) We develop a dynamic synergistic hybrid feedback controller for global asymptotic stabilization of a broad class of dynamical systems. 
In particular, we consider that the distinguishing feature of synergistic hybrid feedback is the switching logic, thus we depart from earlier works which were limited to controller variables that were constant during flows; 2) We provide a modification to the dynamic synergistic controller that takes into account the presence of parametric uncertainty; 3) We demonstrate how the proposed constructions can be used to develop an adaptive synergistic controller for the stabilization of compact sets for affine control systems under matched uncertainties; 4) We show that the proposed adaptive controller is amenable to hybrid backstepping; 5) We apply the proposed controller designs to the problem of global obstacle avoidance in the presence of parametric uncertainty and illustrate the behavior of the closed-loop system through simulations. The paper is organized as follows: in Section III we present the main assumptions on the plant dynamics. In Section IV-B we provide the conditions under which the closed-loop system is well-posed. In Section IV-C we provide sufficient conditions for global asymptotic stability of a compact set for the closed-loop system. In Section V, we develop the concept of robust synergistic hybrid feedback. In Section VI, we apply the synergistic approach to the development of an adaptive synergistic controller for stabilization of affine control systems subject to matched uncertainties. In Section VII, we apply the proposed controller to the problem of global obstacle avoidance. In Section VIII, we present some concluding remarks. A preliminary version of this paper was presented at the 2019 ACC with a simpler synergistic controller design for global asymptotic stabilization of control affine systems and without the full proofs (cf. [24]). The original version of this paper has been submitted for publication. ## II Notation & Preliminaries ### _Topology, Metric Spaces, Functions, and Set-Valued Maps_ Given a topological space \(X\), a neighborhood of a set \(S\) is any open set that contains \(S\). A topological space \(X\) is said to be Hausdorff if, given any pair of distinct points \(q_{1},q_{2}\in X\), there exist neighborhoods \(U_{1}\) of \(q_{1}\) and \(U_{2}\) of \(q_{2}\) that do not intersect. Any metric space is Hausdorff, hence the Euclidean spaces are Hausdorff. Lemma 4.29 in [29] points out that any closed subspace of a locally compact Hausdorff space is itself locally compact Hausdorff. A set is said to be locally compact if for each point there is a neighborhood which is precompact, i.e., whose closure is a compact set. A topology on a set \(X\) is a collection \(T\) of subsets of \(X\), called open sets, satisfying the following properties: \(X\) and \(\emptyset\) are elements of \(T\); \(T\) is closed under finite intersections; and \(T\) is closed under arbitrary unions. The subspace topology of a subset \(A\) of \(X\) is the collection of subsets of \(A\) that are obtained from the intersection of \(A\) with an open set of \(X\). A subset \(A\) of a topological space \(X\) that is endowed with the subspace topology is said to be a subspace of \(X\). A metric space is a set \(M\) together with a metric \(d\). A set \(S\subset M\) is open in the metric space sense if for each \(x\in S\) there exists \(\epsilon>0\) such that the set of points \(y\in M\) satisfying \(d(x,y)<\epsilon\) is contained in \(S\). The metric topology on \(M\) is the collection of all subsets of \(M\) that are open in the metric space sense (cf. [29, Exercise 2.1]). 
The Cartesian product \(\mathbb{R}^{n}=\mathbb{R}\times\ldots\times\mathbb{R}\) of \(n\) copies of the real line together with scalar multiplication and componentwise addition of vectors is known as \(n\)-dimensional Euclidean space. The Euclidean metric topology is the one induced by the norm \(x\mapsto|x|:=\sqrt{x^{\top}x}\). The \(n\)-dimensional Euclidean space has the topology generated by a countable basis of open balls of the form \(c+\epsilon\mathbb{B}:=\{x\in\mathbb{R}^{n}:|x-c|<\epsilon\}\), where \(c\in\mathbb{R}^{n}\) and \(\epsilon>0\). More generally, given a set \(\Omega\subset\mathbb{R}^{n}\), we define \(\Omega+\epsilon\mathbb{B}:=\bigcup_{c\in\Omega}c+\epsilon\mathbb{B}\). The operators \(\partial S\) and \(\overline{S}\) denote the boundary and the closure of a set \(S\), respectively. Given a function \(f:\mathbb{R}^{m}\to\mathbb{R}^{n}\), the preimage of a set \(U\subset\mathbb{R}^{n}\) through \(f\) is \(f^{-1}(U):=\{x\in\mathbb{R}^{m}:f(x)\in U\}\). Similarly, the image of a set \(W\) through \(f\) is \(f(W):=\{y\in\mathbb{R}^{n}:y=f(x)\text{ for some }x\in W\}\). A set-valued map \(M\) from \(S\subset\mathbb{R}^{m}\) to the power set of some Euclidean space \(\mathbb{R}^{n}\) is represented by \(M:S\rightrightarrows\mathbb{R}^{n}\). The domain of a set-valued map is given by \(\operatorname{dom}M:=\{x\in\mathbb{R}^{m}:M(x)\neq\emptyset\}\). Given a subset \(S\) of \(\mathbb{R}^{m}\), a set-valued map \(M:S\rightrightarrows\mathbb{R}^{n}\) is said to be outer semicontinuous (osc) relative to \(S\) if its graph, given by \(\operatorname{gph}M:=\{(x,y)\in S\times\mathbb{R}^{n}:y\in M(x)\},\) is closed relative to \(S\times\mathbb{R}^{n}\). The set-valued map \(M\) is locally bounded at \(x\in\mathbb{R}^{m}\) if there exists a neighborhood \(U_{x}\) of \(x\) such that \(M(U_{x})\subset\mathbb{R}^{n}\) is bounded. It is locally bounded relative to \(S\) if the set-valued mapping from \(\mathbb{R}^{m}\) to \(\mathbb{R}^{n}\) defined by \(M(x)\) for \(x\in S\) and \(\emptyset\) for \(x\notin S\) is locally bounded at each \(x\in S\). It is convex-valued if \(M(x)\) is convex for each \(x\in S\). A set-valued map \(M:S\rightrightarrows\mathbb{R}^{n}\) is upper semicontinuous (usc) at \(x\) if, for each open set \(V\subset\mathbb{R}^{n}\) that contains \(M(x)\), there exists a neighborhood \(U\) of \(x\) such that \(x^{\prime}\in U\cap S\) implies \(M(x^{\prime})\subset V\). The map \(M\) is lower semicontinuous (lsc) at \(x\) if, for each open set \(V\subset\mathbb{R}^{n}\) satisfying \(M(x)\cap V\neq\emptyset\), there exists a neighborhood \(U\) of \(x\) such that \(x^{\prime}\in U\cap S\) implies \(M(x^{\prime})\cap V\neq\emptyset\). The map \(M\) is continuous at \(x\) if it is both lsc and usc at \(x\). The map \(M\) is usc, lsc, continuous on \(S\) if it is usc, lsc, continuous, respectively, at each \(x\in S\). ### _Differentiability_ The tangent cone to a set \(S\subset\mathbb{R}^{n}\) at a point \(x\in\mathbb{R}^{n}\), denoted by \(\mathsf{T}_{x}S\), is the set of all vectors \(w\in\mathbb{R}^{n}\) for which there exists \(x_{i}\in S\), \(\tau_{i}>0\) with \(x_{i}\to x\), \(\tau_{i}\) convergent to \(0\) from above, and \(w=\lim_{i\to\infty}\frac{x_{i}-x}{\tau_{i}}\). 
Given a differentiable function \(F:\mathbb{R}^{m\times n}\to\mathbb{R}^{p\times q}\), we define \(\mathcal{D}F(X):=\frac{\partial\operatorname{vec}(F)}{\partial\operatorname{vec}(X)}(X)\) for each \(X\in\mathbb{R}^{m\times n}\), where \(\operatorname{vec}(A):=[e_{1}^{\top}A^{\top}\ \ldots\ e_{n}^{\top}A^{\top}]^{\top}\) for each \(A\in\mathbb{R}^{m\times n}\) and \(e_{i}\in\mathbb{R}^{n}\) is a vector of zeros, except for the \(i\)-th component, which is \(1\). If \(F\) has multiple arguments, say \((X,Y)\in\mathbb{R}^{m\times n}\times\mathbb{R}^{k\times d}\), we define \(\mathcal{D}_{X}F(X,Y):=\frac{\partial\operatorname{vec}(F)}{\partial\operatorname{vec}(X)}(X,Y)\) for each \((X,Y)\in\mathbb{R}^{m\times n}\times\mathbb{R}^{k\times d}\). If \(F:\mathbb{R}^{n}\to\mathbb{R}\), then \(\nabla F(x):=\mathcal{D}F(x)^{\top}\) for each \(x\in\mathbb{R}^{n}\). If \(F:\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}\), then \(\nabla_{x}F(x,y):=\mathcal{D}_{x}F(x,y)^{\top}\) for each \((x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{m}\) and \(\nabla_{y}F(x,y):=\mathcal{D}_{y}F(x,y)^{\top}\) for each \((x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{m}\). Clarke's generalized directional derivative of a function \(V:\mathbb{R}^{n}\to\mathbb{R}\) at \(x\) in the direction \(v\), is defined as follows (cf. [26, Eq. (1)]): \(V^{\circ}(x;v):=\limsup_{y\to x,\ \lambda\searrow 0}\frac{V(y+\lambda v)-V(y)}{\lambda}\). ### _Stability of Hybrid Systems_ A hybrid system \(\mathcal{H}\) with state space \(\mathbb{R}^{n}\) is defined in [27] and [28] as \[\begin{array}{rl}\dot{\xi}\in F(\xi)&\xi\in C\\ \xi^{+}\in G(\xi)&\xi\in D\end{array} \tag{6}\] where \(\xi\in\mathbb{R}^{n}\) is the state, \(C\subset\mathbb{R}^{n}\) is the flow set, \(F:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) is the flow map, \(D\subset\mathbb{R}^{n}\) denotes the jump set, and \(G:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) denotes the jump map. A solution \(\xi\) to \(\mathcal{H}\) is parametrized by \((t,j)\), where \(t\) denotes ordinary time and \(j\) denotes the jump time, and its domain \(\operatorname{dom}\xi\subset\mathbb{R}_{\geq 0}\times\mathbb{N}\) is a hybrid time domain: for each \((T,J)\in\operatorname{dom}\xi\), \(\operatorname{dom}\xi\cap([0,T]\times\{0,1,\ldots J\})\) can be written in the form \(\cup_{j=0}^{J-1}([t_{j},t_{j+1}]\times\{j\})\) for some finite sequence of times \(0=t_{0}\leq t_{1}\leq t_{2}\leq\cdots\leq t_{J}\), where \(I_{j}:=[t_{j},t_{j+1}]\) and the \(t_{j}\)'s define the jump times. A solution \(\xi\) to a hybrid system is said to be _maximal_ if it cannot be extended by flowing nor jumping and _complete_ if its domain is unbounded. A set \(S\) is said to be forward pre-invariant for a hybrid system (6) if each maximal solution of (6) starting in \(S\) remains in \(S\). It is said to be forward invariant if it is forward pre-invariant and each maximal solution from \(S\) is complete (see, e.g., [28]). The hybrid system (6) is said to satisfy the hybrid basic conditions if the following hold: (A1) \(C\) and \(D\) are closed subsets of \(\mathbb{R}^{n}\); (A2) \(F:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) is osc and locally bounded relative to \(C\), \(C\subset\operatorname{dom}F\), and \(F(x)\) is convex for every \(x\in C\); (A3) \(G:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) is osc and locally bounded relative to \(D\), and \(D\subset\operatorname{dom}G\). 
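For readers unfamiliar with this solution concept, the following Python sketch mimics the flow/jump semantics of (6) and records samples of a hybrid arc indexed by hybrid time \((t,j)\): it flows by forward Euler while the state is in \(C\) and jumps via \(G\) when the state is in \(D\). The single-valued flow and jump maps, the jump-priority rule, the integration scheme, and the bouncing-ball data used to exercise the simulator are simplifying, illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def simulate_hybrid(f, g, in_C, in_D, x0, t_end=5.0, j_max=20, dt=1e-3):
    """Crude simulator for xdot = f(x) on C, x+ = g(x) on D.
    Returns a list of (t, j, x) samples, i.e. a discretized hybrid arc."""
    t, j, x = 0.0, 0, np.array(x0, dtype=float)
    arc = [(t, j, x.copy())]
    while t < t_end and j < j_max:
        if in_D(x):                      # jumps take priority in this sketch
            x = g(x)
            j += 1
        elif in_C(x):
            x = x + dt * f(x)            # forward-Euler flow step
            t += dt
        else:
            break                        # left C union D: cannot be extended
        arc.append((t, j, x.copy()))
    return arc

# Bouncing ball (textbook illustrative data): x = (height, velocity).
grav, restitution = 9.81, 0.8
f = lambda x: np.array([x[1], -grav])
g = lambda x: np.array([0.0, -restitution * x[1]])
in_C = lambda x: x[0] >= 0.0
in_D = lambda x: x[0] <= 0.0 and x[1] <= 0.0

arc = simulate_hybrid(f, g, in_C, in_D, x0=[1.0, 0.0])
t, j, x = arc[-1]
print(f"final hybrid time (t, j) = ({t:.2f}, {j}), state = {x}")
```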
Given a function \(V\colon\mathbb{R}^{n}\to\mathbb{R}_{\geq 0}\) that is Lipschitz continuous on a neighborhood of \(C\) in (6) and \(u_{c}:\mathbb{R}^{n}\to\mathbb{R}\), we say that the growth of \(V\) along flows of (6) is bounded by \(u_{c}\) if the following holds: \[V^{\circ}(\xi;f)\leq u_{c}(\xi)\ \ \ \ \ \forall\xi\in C,\ \forall f\in F(\xi)\cap\operatorname{T}_{\xi}C. \tag{7}\] If, for some function \(u_{d}:\mathbb{R}^{n}\to\mathbb{R}\), \[V(g)-V(\xi)\leq u_{d}(\xi)\ \ \ \ \ \ \forall\xi\in D,\ \forall g\in G(\xi), \tag{8}\] then we say that the growth of \(V\) along jumps of (6) is bounded by \(u_{d}\). If both (7) and (8) hold, then we say that the growth of \(V\) along solutions to (6) is bounded by \(u_{c},u_{d}\). A compact set \(\mathcal{A}\) is said to be _stable_ for (6) if for every \(\epsilon>0\) there exists \(\delta>0\) such that every solution \(\phi\) to (6) with \(|\phi(0,0)|_{\mathcal{A}}\leq\delta\) satisfies \(|\phi(t,j)|_{\mathcal{A}}\leq\epsilon\) for all \((t,j)\in\operatorname{dom}\phi\); _globally pre-attractive_ for (6) if every solution \(\phi\) to (6) is bounded and, if it is complete, then also \(\lim_{t+j\to+\infty}\left|\phi(t,j)\right|_{\mathcal{A}}=0\); _globally pre-asymptotically stable_ for (6) if it is both stable and globally pre-attractive. If every maximal solution to (6) is complete then one may drop the prefix "pre." ## III Problem Setup Given sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), and \(\mathcal{U}\), we consider a dynamical system with state \(x\in\mathcal{X}\) that is governed by the dynamics (5) where \(x_{c}\in\mathcal{X}_{c}\) is a controller variable, \(u\in\mathcal{U}\) is the input, \(\theta\) is a constant parameter that belongs to a compact set \(\Omega\) and \(F_{\theta}\) is a set-valued map with the following properties. **Assumption 1**.: Given sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), \(\mathcal{U}\), and \(F_{\theta}\) as in (5), the following properties hold: (S1) Each set \(\mathcal{X}\), \(\mathcal{X}_{c}\) and \(\mathcal{U}\) is a closed nonempty subset of some Euclidean space; (S2) The set-valued map \(F_{\theta}\) is outer semicontinuous, locally bounded, and convex-valued. Assumption (S1) allows for the use of the analysis tools for hybrid dynamical systems that are provided in [27] which consider sets as subspaces of Euclidean spaces with the Euclidean metric topology. Assumptions (VC) and (S2) are used to prove that the resulting closed-loop system has nontrivial solutions and that it satisfies the hybrid basic conditions, respectively. **Remark 1**.: Since the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\) and \(\mathcal{U}\) are closed relative to their Euclidean ambient spaces, any of their closed subsets are also closed in the ambient space and locally compact Hausdorff (cf. [29, Lemma 4.29]). In Section IV, we develop a dynamic synergistic controller with the objective of globally asymptotically stabilizing a compact set for the resulting closed-loop system under the assumption that \(\theta\) is known. In Section V, we modify the dynamic synergistic controller to allow for \(\theta\in\Omega\) to be unknown, when \(\Omega\) is known. ## IV Dynamic Synergistic Hybrid Feedback ### _Controller Design_ Dynamic synergistic hybrid feedback (relative to the plant in Section III) is a hybrid control strategy that renders a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\) globally asymptotically stable for the closed-loop system. 
It is comprised of a feedback law \[\kappa:\operatorname{dom}\kappa\to\mathcal{U} \tag{9}\] and of the hybrid dynamics that are described in the sequel. Given a function \[V\colon\operatorname{dom}V\to\mathbb{R}_{\geq 0}\cup\{+\infty\}, \tag{10}\] satisfying \(\mathcal{X}\times\mathcal{X}_{c}\subset\operatorname{dom}V\) with \(\operatorname{dom}V\) open in the Euclidean space containing \(\mathcal{X}\times\mathcal{X}_{c}\),1 and a set-valued map Footnote 1: The function \(V\) maps values in \(\mathcal{X}\times\mathcal{X}_{c}\) to the one-point compactification of \(\mathbb{R}_{\geq 0}\). More generally, given a noncompact, locally compact Hausdorff topological space \(X\) and an object \(\infty\) not in \(X\), the one-point compactification of \(X\) is the topological space \(X^{\star}:=X\cup\{\infty\}\) with the topology \(T=\{\text{open subsets of }X\}\cup\{U\subset X^{\star}:X^{\star}\backslash U\text{ is a compact subset of }X\}\). \[D_{c}:\mathcal{X}\times\mathcal{X}_{c}\rightrightarrows\mathcal{X}_{c} \tag{11}\] we define \[\nu_{V}(x,x_{c}):=\min\{V(x,g):g\in D_{c}(x,x_{c})\}, \tag{12a}\] \[\varrho_{V}(x,x_{c}):=\arg\min\{V(x,g):g\in D_{c}(x,x_{c})\}, \tag{12b}\] \[\mu_{V}(x,x_{c}):=V(x,x_{c})-\nu_{V}(x,x_{c}) \tag{12c}\] for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), under the following assumption: (C1) The optimization problem in (12) is feasible for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), i.e., for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), there exists \(g\in D_{c}(x,x_{c})\) such that \(V(x,g)<+\infty\). Given a set-valued map \(F_{c}\) defined on \(\mathcal{X}\times\mathcal{X}_{c}\) that satisfies the following assumption: (C2) \(F_{c}\) is outer semicontinuous, locally bounded, and convex-valued; we define the hybrid controller dynamics as follows: \[\dot{x}_{c}\in F_{c}(x,x_{c})\qquad(x,x_{c})\in C \tag{13a}\] \[x_{c}^{+}\in\varrho_{V}(x,x_{c})\qquad(x,x_{c})\in D \tag{13b}\] where \[C:=\{(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}:\mu_{V}(x,x_{c})\leq\delta(x,x_{c})\}, \tag{14}\] \[D:=\{(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}:\mu_{V}(x,x_{c})\geq\delta(x,x_{c})\},\] and \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\) is a continuous function. The switching logic in (13) implements the following functionality: if the solutions to the closed-loop system reach a state \((x,x_{c})\) where \(\mu_{V}(x,x_{c})\) is greater than or equal to the predefined value of \(\delta(x,x_{c})\), then the variable \(x_{c}\) is reset to some point \(g\in\varrho_{V}(x,x_{c})\) and the feedback law changes its value from \(\kappa(x,x_{c})\) to \(\kappa(x,g)\). Since the hybrid controller (13) is derived from \(\kappa\), \(V\), \(D_{c}\), and \(F_{c}\), we represent (13) using the \(4\)-tuple \((\kappa,V,D_{c},F_{c})\). The hybrid closed-loop system \(\mathcal{H}:=(C,F_{cl},D,G_{cl})\) resulting from the interconnection between (5) and \((\kappa,V,D_{c},F_{c})\) is given by \[\begin{pmatrix}\dot{x}\\ \dot{x}_{c}\end{pmatrix}\in F_{cl}(x,x_{c}):=\begin{pmatrix}F_{\theta}(x,x_{c},\kappa(x,x_{c}))\\ F_{c}(x,x_{c})\end{pmatrix}\qquad(x,x_{c})\in C \tag{15a}\] \[\begin{pmatrix}x^{+}\\ x_{c}^{+}\end{pmatrix}\in G_{cl}(x,x_{c}):=\begin{pmatrix}x\\ \varrho_{V}(x,x_{c})\end{pmatrix}\qquad(x,x_{c})\in D. \tag{15b}\] 
**Remark 2**.: Notice that if \(\delta(x,x_{c})\geq 0\) for all \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\) then it follows from the construction of the hybrid controller \((\kappa,V,D_{c},F_{c})\) that \(V(x,g)-V(x,x_{c})\leq 0\) for each \((x,x_{c})\in D\) and each \(g\in\varrho_{V}(x,x_{c})\). In other words, if the function \(\delta\) is nonnegative for all \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), then the function \(V\) does not increase during jumps. The controller design presented in this section is informed by many preceding synergistic hybrid feedback controllers. As mentioned in Section I-C, we preserve the switching logic of the synergistic controllers in [28, Chapter 7], in the sense that controller switching is triggered when the difference between the current value of \(V\) and its lowest possible value exceeds a predefined threshold \(\delta>0\). The main difference between the controller design presented in this paper and synergistic controllers in the literature is that, here, \(x_{c}\) does not necessarily belong to a finite set. Instead, the flows of \(x_{c}\) are described more generally by a differential inclusion and we constrain its jumps using a set-valued map \(D_{c}\). In the sequel, we introduce the assumptions on the controller that allow for the global asymptotic stabilization of a compact subset of the state space. ### _Basic Properties of the Closed-Loop System_ In this section, we provide some conditions on (9), (10) and (11) which ensure that the closed-loop system (15) satisfies the hybrid basic conditions of [27, Assumption 6.5] and that maximal solutions to (15) are complete. To this end, we introduce the following definitions. **Definition 1**.: Given a compact subset \(\mathcal{A}\) of \(\mathcal{X}\times\mathcal{X}_{c}\), \(\kappa\), \(V\), \(D_{c}\), and \(F_{c}\), we say that the hybrid controller \((\kappa,V,D_{c},F_{c})\) is a synergistic candidate relative to \(\mathcal{A}\) for (5) if (C1) and (C2) hold and: (C3) \(V\) is continuous, positive definite relative to \(\mathcal{A}\),2 and \(V^{-1}([0,c])\) is compact for each \(c\in\mathbb{R}_{\geq 0}\); Footnote 2: A function \(V:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}_{\geq 0}\) is positive definite relative to \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\) if \(V(x,x_{c})=0\iff(x,x_{c})\in\mathcal{A}\). (C4) The set-valued map \(D_{c}\) is outer semicontinuous, lower semicontinuous, and locally bounded; (C5) The function \(\kappa\) is continuous and \[\{(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}:V(x,x_{c})<+\infty\}\subset\operatorname{dom}\kappa.\] Given a synergistic candidate relative to \(\mathcal{A}\), the property (C3) guarantees that sublevel sets of \(V\) are compact and the properties (C3) and (C4) guarantee that the synergy gap function \(\mu_{V}\) in (12c) is continuous and that \(\varrho_{V}\) is outer semicontinuous, as proved in the next result. **Lemma 1**.: _Given a compact subset \(\mathcal{A}\) of \(\mathcal{X}\times\mathcal{X}_{c}\), if \((\kappa,V,D_{c},F_{c})\) is a synergistic candidate relative to \(\mathcal{A}\) for (5), then the following hold:_ 1. _The function_ \(\nu_{V}\) _in (_12a_) is continuous;_ 2. _The set-valued map_ \(\varrho_{V}\) _in (_12b_) is outer semicontinuous and_ \(\varrho_{V}(x,x_{c})\) _is compact for each_ \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\)_;_ 3. 
_The function_ \(\mu_{V}\) _in (_12c_) is continuous._ Proof.: It follows from (C4) that \(D_{c}\) is outer semicontinuous, hence \(D_{c}(x,x_{c})\) is closed for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). Since \(D_{c}\) is also assumed to be locally bounded in (C4), we have that \(D_{c}(x,x_{c})\) is compact for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). In addition, the outer semicontinuity and local boundedness of \(D_{c}\) imply that \(D_{c}\) is upper semicontinuous (cf. [27, Lemma 5.15]). Since \(D_{c}\) is assumed to be lower semicontinuous in (C4), we have that \(D_{c}\) is continuous. Since \(V\) is continuous by assumption (C3), it follows from [30, Theorem 9.14] that \(\nu_{V}\) is continuous and that \(\varrho_{V}\) is compact-valued and upper semicontinuous. Since \(\mathcal{X}\) is locally compact Hausdorff (cf. Remark 1), it follows from [29, Proposition 4.27] that each point \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\) has a precompact neighborhood \(U_{x}\). Since \(\varrho_{V}\) is compact-valued and upper semicontinuous, it follows from [30, Proposition 9.7] that \(\varrho_{V}(\overline{U_{x}})\) is compact. It follows from the fact that \(\varrho_{V}(U_{x})\) is a subset of the compact set \(\varrho_{V}(\overline{U_{x}})\) that \(\varrho_{V}\) is locally bounded. Since \(\varrho_{V}\) is compact-valued it is, in particular, closed-valued, hence it follows from [27, Lemma 5.15] that \(\varrho_{V}\) is outer semicontinuous. It follows from (C1) that \(\nu_{V}(x,x_{c})<+\infty\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), hence the function \(\mu_{V}\) is continuous because it is the difference of continuous functions. The hybrid basic conditions in [27, Assumption 6.5] are very important to the synthesis of hybrid controllers, because they guarantee that the resulting hybrid closed-loop systems are endowed with nominal robustness to a wide range of perturbations/sensor noise and, in particular, they enable the application of invariance principles for hybrid systems (cf. [27, Chapter 8]). In the following result, we show that these conditions follow directly from the regularity of (12c) and (12) that was proved in Lemma 1. **Corollary 1**.: _Suppose that Assumption 1 holds. Given a compact subset \(\mathcal{A}\) of \(\mathcal{X}\times\mathcal{X}_{c}\), if \((\kappa,V,D_{c},F_{c})\) is a synergistic candidate relative to \(\mathcal{A}\) for (5), then the hybrid closed-loop system (15) satisfies (A1), (A2), and (A3)._ Proof.: Let \(h(x,x_{c}):=\mu_{V}(x,x_{c})-\delta(x,x_{c})\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). Since \(\delta\) is assumed to be continuous and \(\mu_{V}\) is continuous under the given assumptions (cf. Lemma 1), it follows that \(h\) is a continuous function. Continuity of \(h\) implies that the flow and jump sets are closed, because they can be written as the preimage of closed sets through \(h\), as follows: \(C=h^{-1}((-\infty,0])\) and \(D=h^{-1}([0,+\infty))\), respectively (cf. [29, Lemma 2.7]). It follows from the construction of the hybrid controller \((\kappa,V,D_{c},F_{c})\) that \(F_{cl}(x,x_{c})\) is defined for each \((x,x_{c})\in C\). From the continuity of \(\kappa\) in (C5) and the assumption that \(F_{\theta}\) is outer semicontinuous, locally bounded and convex-valued (cf. (S2)), it follows that \(F_{cl}\) in (15) is outer semicontinuous and locally bounded relative to \(C\) and \(F_{cl}(x,x_{c})\) is convex for each \((x,x_{c})\in C\). 
The outer semicontinuity and local boundedness of \(G_{cl}\) in (15) relative to \(D\) follow from Lemma 1. ### _Global Asymptotic Stability of \(\mathcal{A}\)_ In this section, we present further assumptions on the hybrid controller \((\kappa,V,D_{c},F_{c})\) that allow for the global asymptotic stabilization of a compact set \(\mathcal{A}\) for (15). **Definition 2**.: Given a compact subset \(\mathcal{A}\) of \(\mathcal{X}\!\times\!\mathcal{X}_{c}\), we say that a synergistic candidate relative to \(\mathcal{A}\) for (5) with data \((\kappa,V,D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}\) for (5) if: (C6) The function \(V\) is Lipschitz continuous on a neighborhood of \(C\) and the growth of \(V\) along flows of (15) is bounded by \(u_{c}\) with \[u_{c}(x,x_{c})\leq 0\qquad\quad\forall(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c};\] (C7) The largest weakly invariant subset of \[\dot{x}\in F_{\theta}(x,x_{c},\kappa(x,x_{c})),\qquad\dot{x}_{c}\in F_{c}(x,x_{c}) \tag{16}\] in \(\overline{u_{c}^{-1}(0)}\), denoted by \(\Psi\), is such that \[\overline{\delta}_{1}:=\inf\{\mu_{V}(x,x_{c}):(x,x_{c})\in\Psi\backslash\mathcal{A}\}>0. \tag{17}\] If one considers \(V\) as a Lyapunov function candidate, then Assumption (C6) implies that \(V\) is nonincreasing along flows of the closed-loop system (15), implying that there exists a choice of \(\delta\) which renders \(\mathcal{A}\) stable for (15). **Lemma 2**.: _Suppose that Assumption 1 holds. Given a compact subset \(\mathcal{A}\) of \(\mathcal{X}\!\times\!\mathcal{X}_{c}\), if \((\kappa,V,D_{c},F_{c})\) is a synergistic candidate relative to \(\mathcal{A}\) for (5) satisfying (C6) and \(\delta(x,x_{c})\geq 0\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), then each sublevel set of \(V\) is forward pre-invariant for (15). If, for each \((x,x_{c})\in C\backslash D\), (VC) there exists a neighborhood \(U\) of \((x,x_{c})\) such that \(F_{cl}(\xi)\cap\mathsf{T}_{\xi}C\neq\emptyset\) for every \(\xi\in U\cap C\), then each maximal solution to (15) is complete and, consequently, each sublevel set of \(V\) is forward invariant._ Proof.: It follows from the discussion in Remark 2 that the growth of \(V\) along jumps of (15) is bounded by \(u_{d}\) with \[u_{d}(x,x_{c})\leq\begin{cases}-\delta(x,x_{c})&\text{ if }(x,x_{c})\in D\\ -\infty&\text{ otherwise}\end{cases} \tag{18}\] for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). Together with assumption (C6) it follows that the growth of \(V\) along solutions to (15) is bounded by \(u_{c},u_{d}\) satisfying \[u_{c}(x,x_{c})\leq 0,\ \ u_{d}(x,x_{c})\leq 0 \tag{19}\] for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). It follows that each solution \(\phi\) to (15) with initial condition \(\xi\) satisfies \(V(\phi(t,j))\leq V(\xi)\) for all \((t,j)\in\operatorname{dom}\phi\), hence each sublevel set of \(V\) is forward pre-invariant for (15). It follows from Corollary 1 that (15) satisfies the hybrid basic conditions, hence we can use [27, Proposition 6.10] to prove the completeness of each maximal solution to (15). Since \(C\cup D=\mathcal{X}\times\mathcal{X}_{c}\), there are no solutions to (15) starting outside the union between the jump and flow sets. It follows from (VC) that (VC) in [27, Proposition 6.10] is satisfied, hence each maximal solution to (15) either "blows up," leaves \(C\cup D\) in finite time or is complete (cf. conditions _(a),(b)_ and _(c)_ of [27, Proposition 6.10]). 
Since \(G_{cl}(D)\subset C\cup D\), no solution can leave \(C\cup D\) after a jump (hence, condition (c) in [27, Proposition 6.10] does not occur). Since each sublevel set of \(V\) is compact and forward pre-invariant, solutions to (15) do not "blow up" (condition (b) in [27, Proposition 6.10] does not occur). It follows that each maximal solution to (15) is complete. **Lemma 3**.: _Suppose that Assumption 1 holds. Given a compact subset \(\mathcal{A}\) of \(\mathcal{X}\times\mathcal{X}_{c}\), if \((\kappa,V,D_{c},F_{c})\) is a synergistic candidate relative to \(\mathcal{A}\) for (5) that satisfies (C6) and \(\delta(x,x_{c})\geq 0\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), then the set \(\mathcal{A}\) is stable for (15)._ Proof.: Since \(\mathcal{X}\times\mathcal{X}_{c}\subset\operatorname{dom}V\), it follows that \(\mu_{V}(x,x_{c})\) is defined for all \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\); hence, for any given continuous function \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\), at least one of the following conditions holds: 1) \(\mu_{V}(x,x_{c})\geq\delta(x,x_{c})\); 2) \(\mu_{V}(x,x_{c})\leq\delta(x,x_{c})\). It follows from (14) that \(\overline{C}\cup D=\mathcal{X}\times\mathcal{X}_{c}\subset\operatorname{dom}V\). Since \(\operatorname{dom}V\) is also assumed to be open in the Euclidean space containing \(\mathcal{X}\times\mathcal{X}_{c}\), it follows that \(\operatorname{dom}V\) contains a neighborhood of \(\mathcal{A}\cap(C\cup D\cup G(D))\). Positive definiteness of \(V\) with respect to \(\mathcal{A}\) and continuity of \(V\) follow from (C3). From assumption (C6) and from (18), it follows that \(V\) is locally Lipschitz on a neighborhood of \(\overline{C}\) and that the bounds [28, Eqs.(3.18), (3.19)] are satisfied. Since \(\mathcal{A}\) is compact and the hybrid basic conditions are satisfied (cf. Corollary 1), it follows from [28, Theorem 3.19] that \(\mathcal{A}\) is stable for (15). Assumption (C7) guarantees that there exists \(\delta\) satisfying (D1) \(\delta(x,x_{c})>0\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\); (D2) \(\delta(x,x_{c})<\mu_{V}(x,x_{c})\) for each \((x,x_{c})\in\Psi\backslash\mathcal{A}\) with \(\Psi\) defined in (C7). We say that \(\delta\) is positive if it satisfies (D1), and that a hybrid controller \((\kappa,V,D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}\) for (5) with synergy gap exceeding \(\delta\) if it is synergistic relative to \(\mathcal{A}\) for (5) and satisfies (D2). When both conditions (D1) and (D2) are satisfied, all the points in the largest weakly invariant subset of (16) in \(\overline{u_{c}^{-1}(0)}\) that are not in \(\mathcal{A}\) lie in the jump set of (15), allowing us to prove that \(\mathcal{A}\) is globally asymptotically stable for the closed-loop system (15). **Theorem 1**.: _Suppose that Assumption 1 holds. Given a compact subset \(\mathcal{A}\) of \(\mathcal{X}\times\mathcal{X}_{c}\) and a positive function \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\), if \((\kappa,V,D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}\) for (5) with synergy gap exceeding \(\delta\), then the set \(\mathcal{A}\) is globally pre-asymptotically stable for (15). If, for each \((x,x_{c})\in C\backslash D\), (VC) is satisfied, then \(\mathcal{A}\) is globally asymptotically stable for (15)._ Proof.: Stability of \(\mathcal{A}\) is proved in Lemma 3 and completeness of solutions is demonstrated in Lemma 2. 
The global pre-asymptotic stability of \(\mathcal{A}\) for (15) follows from pre-attractivity of \(\mathcal{A}\) for (15), which is demonstrated next through an application of [27, Theorem 8.2]. It follows from Lemmas 3 and 2 that each maximal solution to (15) is precompact and the growth of \(V\) along solutions to (15) is bounded by \(u_{c},u_{d}\) satisfying (19). Therefore, it follows from [27, Theorem 8.2] that every complete solution approaches the largest weakly invariant set \[V^{-1}(r)\cap\left(\overline{u_{c}^{-1}(0)}\cup(u_{d}^{-1}(0)\cap G_{cl}(u_{d}^{-1}(0)))\right) \tag{20}\] for some \(r\) in the image of \(V\). From (18) and the assumption (D1), it follows that \(u_{d}^{-1}(0)=\emptyset\), hence (20) can be rewritten as \[V^{-1}(r)\cap\overline{u_{c}^{-1}(0)}. \tag{21}\] It follows from (C7), (D2) and the definition of \(D\) in (14) that the largest weakly invariant subset of (15) in (21) does not include points that are not in \(\mathcal{A}\) and, consequently, \(\mathcal{A}\) is globally pre-attractive for (15). Global asymptotic stability of \(\mathcal{A}\) for (15) follows from global pre-asymptotic stability if each maximal solution to (15) is complete, which is guaranteed by Lemma 2 under assumption (VC). Note that, if there exists an accumulation point of \(\Psi\backslash\mathcal{A}\) in \(\mathcal{A}\), then \(\overline{\delta}_{1}\) in (17) is equal to \(0\). Therefore, the topology of \(\Psi\) and \(\mathcal{A}\) may preclude global asymptotic stabilization of \(\mathcal{A}\) for (15) since (C7) is not met. Conversely, if one is able to show that \(\overline{\delta}_{1}>0\), then \(\Psi\backslash\mathcal{A}\) does not have accumulation points in \(\mathcal{A}\). With additional conditions on \(\delta\), we are able to show that there exists a neighborhood of \(\mathcal{A}\) contained in \(C\). **Proposition 1**.: _Given a compact subset \(\mathcal{A}\) of \(\mathcal{X}\times\mathcal{X}_{c}\), if the hybrid controller \((\kappa,V,D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}\) for (5) and \(\underline{\delta}:=\inf\{\delta(x,x_{c}):(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\}\) satisfies \(\underline{\delta}\in(0,\overline{\delta}_{1})\), where \(\overline{\delta}_{1}\) is given in (C7), then there exists a neighborhood of \(\mathcal{A}\) that is contained in \(C\)._ Proof.: Selecting \(\epsilon\in(0,\underline{\delta})\), we have that \(\mu_{V}^{-1}((-\epsilon,\epsilon))\) is open because \(\mu_{V}\) is continuous (cf. Lemma 1), contains \(\mathcal{A}\) because \(\mu_{V}(\mathcal{A})=0\) and it is a subset of \(C\) because \(\epsilon<\underline{\delta}\leq\delta(x,x_{c})\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). **Remark 3**.: Note that, if the hybrid controller \((\kappa,V,D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}\) for (5), then \(\delta\) can be chosen as a constant \(\Delta\in\mathbb{R}\) as long as \(\Delta\in(0,\overline{\delta}_{1})\). In this case, the conditions of Proposition 1 hold, thus the fact that \(\delta\) is state-dependent does not constrain the global asymptotic stability results and it provides more flexibility to the design of the hybrid controller. ## V Robust Synergistic Hybrid Feedback In this section, we propose a new kind of synergistic hybrid controller that, unlike the controller of Section IV, is able to handle the case where \(\theta\) is unknown, but belongs to a known compact set \(\Omega\). 
In this direction, let \(\mathscr{A}\!:=\{\mathcal{A}_{\theta}\}_{\theta\in\Omega}\) denote a collection of compact subsets of \(\mathcal{X}\times\mathcal{X}_{c}\) and let \(\mathscr{V}\!:=\{V_{\theta}\}_{\theta\in\Omega}\) denote a collection of functions satisfying the following assumption: (C8) Given a compact set \(\Omega\) and a collection \(\mathscr{V}\!:=\{V_{\theta}\}_{\theta\in\Omega}\) of functions \(V_{\theta}:\operatorname{dom}V_{\theta}\to\mathbb{R}_{\geq 0}\cup\{+\infty\}\) satisfying \(\mathcal{X}\times\mathcal{X}_{c}\subset\operatorname{dom}V_{\theta}\) for each \(\theta\in\Omega\), we assume that \[(x,x_{c},\theta)\mapsto V(x,x_{c},\theta):=V_{\theta}(x,x_{c}) \tag{22}\] is continuous. **Remark 4**.: Note that \(\Omega\) might be uncountable, thus the collections \(\mathscr{A}:=\{\mathcal{A}_{\theta}\}_{\theta\in\Omega}\) and \(\mathscr{V}\!:=\{V_{\theta}\}_{\theta\in\Omega}\) are not necessarily finite nor countable. Using the previous definitions, we propose the following hybrid controller: \[\dot{x}_{c}\in F_{c}(x,x_{c})\qquad(x,x_{c})\in C_{\Omega} \tag{23a}\] \[x_{c}^{+}\in G_{c}(x,x_{c})\qquad(x,x_{c})\in D_{\Omega} \tag{23b}\] where \[C_{\Omega}:=\left\{(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}:\min_{\theta\in\Omega}\mu_{V_{\theta}}(x,x_{c})\leq\delta(x,x_{c})\right\}\] \[D_{\Omega}:=\left\{(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}:\min_{\theta\in\Omega}\mu_{V_{\theta}}(x,x_{c})\geq\delta(x,x_{c})\right\} \tag{24}\] with \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\) continuous, and \[\min_{\theta\in\Omega} \mu_{V_{\theta}}(x,x_{c})=\min_{\theta\in\Omega}\left\{V_{\theta}(x,x_{c})-\nu_{V_{\theta}}(x,x_{c})\right\}\] \[=\min_{\theta\in\Omega}\left\{V(x,x_{c},\theta)-\min_{g\in D_{c}(x,x_{c})}V(x,g,\theta)\right\}\] for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), in accordance with the definitions (12c), (12a) and (22), and \(G_{c}:\mathcal{X}\times\mathcal{X}_{c}\rightrightarrows\mathcal{X}_{c}\) satisfies the following assumptions: (C9) The set-valued map \(G_{c}\) is outer semicontinuous and locally bounded; (C10) For each \(\theta\in\Omega\), we assume that \[V_{\theta}(x,x_{c})-V_{\theta}(x,g)\geq\min_{\theta\in\Omega}\mu_{V_{\theta}}(x,x_{c}) \tag{25}\] for each \((x,x_{c})\in D_{\Omega}\) and each \(g\in G_{c}(x,x_{c})\). **Remark 5**.: Under assumption (C10), we guarantee by construction that \[V_{\theta}(x,x_{c})-V_{\theta}(x,g)\geq\delta(x,x_{c})\] for each \((x,x_{c})\in D_{\Omega}\) and each \(g\in G_{c}(x,x_{c})\), which implies that the function \(V_{\theta}\) does not increase during jumps if \(\delta(x,x_{c})\geq 0\) for all \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\) (cf. Remark 2). Note that the jump map of the hybrid controller (13) is constructed from the data \(V\) and \(D_{c}\) as shown in (12b). On the other hand, the jump map \(G_{c}\) in (23) is left undefined for the sake of generality. It is possible to construct \(G_{c}\) in (23) from the data \(\mathscr{V}\) and \(D_{c}\), but this requires additional assumptions, as shown in the following remark. Owing to the fact that (23) is derived from \(\kappa\), \(\mathscr{V}\), \(D_{c}\), \(F_{c}\), and \(G_{c}\), we refer to (23) using the \(5\)-tuple \((\kappa,\mathscr{V},D_{c},F_{c},G_{c})\). **Remark 6**.: Suppose that \(\Omega\) is compact and convex and that \(D_{c}\) in (11) is convex and compact for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). 
Given \(\mathscr{V}:=\{V_{\theta}\}_{\theta\in\Omega}\), suppose that \(G_{c}\) defined as \[G_{c}(x,x_{c}):=\operatorname*{arg\,max}_{g\in D_{c}(x,x_{c})} \min_{\theta\in\Omega}\left\{V(x,x_{c},\theta)-V(x,g,\theta)\right\}\] \[\forall(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c} \tag{26}\] is outer semicontinuous with \(V(x,x_{c},\theta)=V_{\theta}(x,x_{c})\) for each \((x,x_{c},\theta)\in\mathcal{X}\times\mathcal{X}_{c}\times\Omega\). Furthermore, suppose that, for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), the function \(h(g,\theta):=V_{\theta}(x,x_{c})-V_{\theta}(x,g)\) is continuous, quasi-concave as a function of \(g\), and quasi-convex as a function of \(\theta\).3 Then, it follows from [31, Theorem 3.4] that the \(\min\) and \(\max\) operations in \(\max_{g\in D_{c}(x,x_{c})}\min_{\theta\in\Omega}h(g,\theta)\) commute, yielding Footnote 3: A function \((x,y)\mapsto h(x,y)\) on \(X\times Y\) is quasi-concave as a function of \(x\) if the set \(\{x\in X:h(x,y)\geq c\}\) is convex for each \(y\in Y\) and each \(c\in\mathbb{R}\). The function \(h\) is quasi-convex as a function of \(y\) if the set \(\{y\in Y:h(x,y)\leq c\}\) is convex for each \(x\in X\) and each \(c\in\mathbb{R}\). \[\begin{split}\max_{g\in D_{c}(x,x_{c})}&\min_{\theta\in\Omega}h(g,\theta)=\min_{\theta\in\Omega}\max_{g\in D_{c}(x,x_{c})}h(g,\theta)\\ &=\min_{\theta\in\Omega}\left\{V_{\theta}(x,x_{c})-\min_{g\in D_{c}(x,x_{c})}V_{\theta}(x,g)\right\}.\end{split} \tag{27}\] It follows from (27) and (12c) that \(\max_{g\in D_{c}(x,x_{c})}\min_{\theta\in\Omega}h(g,\theta)=\min_{\theta\in\Omega}\mu_{V_{\theta}}(x,x_{c})\). We conclude that, for each \(\theta\in\Omega\), the following holds: \(V_{\theta}(x,x_{c})-V_{\theta}(x,g)\geq\min_{\theta\in\Omega}\mu_{V_{\theta}}(x,x_{c}),\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\) and each \(g\) belonging to (26), hence condition (25) is verified. The following definition extends the notion of a synergistic controller in order to address the case where \(\theta\in\Omega\) is not known. **Definition 3**.: Given a compact set \(\Omega\), a continuous function \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\), a collection of compact subsets \(\mathscr{A}:=\{\mathcal{A}_{\theta}\}_{\theta\in\Omega}\) of \(\mathcal{X}\times\mathcal{X}_{c}\), and a collection of continuous functions \(\mathscr{V}:=\{V_{\theta}\}_{\theta\in\Omega}\), we say that the hybrid controller \((\kappa,\mathscr{V},D_{c},F_{c},G_{c})\) is _synergistic relative to \(\mathscr{A}\) for (5) with robustness margin \(\Omega\)_ if (C8), (C9), (C10) hold, and if, for each \(\theta\in\Omega\), the hybrid controller \((\kappa,V_{\theta},D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}_{\theta}\) for (5). The assumption that the hybrid controller \((\kappa,\mathscr{V},D_{c},F_{c},G_{c})\) is synergistic relative to \(\mathscr{A}\) for (5) with robustness margin \(\Omega\) ensures that the hybrid closed-loop system \[\begin{pmatrix}\dot{x}\\ \dot{x}_{c}\end{pmatrix}\in F_{cl}(x,x_{c}):=\begin{pmatrix}F_{\theta}(x,x_{c},\kappa(x,x_{c}))\\ F_{c}(x,x_{c})\end{pmatrix}\quad(x,x_{c})\in C_{\Omega} \tag{28a}\] \[\begin{pmatrix}x^{+}\\ x^{+}_{c}\end{pmatrix}\in G_{\Omega}(x,x_{c}):=\begin{pmatrix}x\\ G_{c}(x,x_{c})\end{pmatrix}\qquad(x,x_{c})\in D_{\Omega} \tag{28b}\] satisfies the hybrid basic conditions as proved next. **Lemma 4**.: _Suppose that Assumption 1 holds. 
Given a compact set \(\Omega\) and a collection of compact subsets \(\mathscr{A}:=\{\mathcal{A}_{\theta}\}_{\theta\in\Omega}\) of \(\mathcal{X}\times\mathcal{X}_{c}\), if \((\kappa,\mathscr{V},D_{c},F_{c},G_{c})\) is synergistic relative to \(\mathscr{A}\) for (5) with robustness margin \(\Omega\), then the hybrid closed-loop system (28) satisfies (A1), (A2), and (A3)._ Proof.: The continuity of \(\mu_{V_{\theta}}\) (for a fixed \(\theta\in\Omega\)) is established in Lemma 1. It follows from the continuity of \((x,x_{c},\theta)\mapsto V(x,x_{c},\theta)=V_{\theta}(x,x_{c})\) that is assumed in (C8), compactness of \(\Omega\) and from [30, Theorem 9.14] that the function \[(x,x_{c})\mapsto\min_{\theta\in\Omega}\mu_{V_{\theta}}(x,x_{c}) \tag{29}\] is continuous on \(\mathcal{X}\times\mathcal{X}_{c}\). It follows from the continuity of (29) and of \(\delta\) that \(C_{\Omega}\) and \(D_{\Omega}\) are closed, because they are the preimage of the closed sets \((-\infty,0]\) and \([0,+\infty)\), respectively. It follows from Assumption 1, (C2), and (C5) that the flow map \(F_{\Omega}\) is outer semicontinuous, locally bounded and convex-valued. It follows from (C9) that \(G_{\Omega}\) is outer semicontinuous, and locally bounded relative to \(D_{\Omega}\). In the sequel, we demonstrate that, for each \(\theta\in\Omega\), the set \(\mathcal{A}_{\theta}\in\mathscr{A}\) is globally asymptotically stable under appropriate assumptions on \(\delta\). The next result asserts forward pre-invariance of sublevel sets of \(V_{\theta}\in\mathscr{V}\) for the closed-loop system (28) when \(\delta\) is a continuous and nonnegative function. **Lemma 5**.: _Suppose that Assumption 1 holds. Given a compact set \(\Omega\) and a collection of compact subsets \(\mathscr{A}:=\{\mathcal{A}_{\theta}\}_{\theta\in\Omega}\) of \(\mathcal{X}\times\mathcal{X}_{c}\), if \((\kappa,\mathscr{V},D_{c},F_{c},G_{c})\) is synergistic relative to \(\mathscr{A}\) for (5) with robustness margin \(\Omega\) and if \(\delta(x,x_{c})\geq 0\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), then, for each \(\theta\in\Omega\), each sublevel set of \(V_{\theta}\) is forward pre-invariant for (28). If, for each \((x,x_{c})\in C_{\Omega}\backslash D_{\Omega}\),_ _(VC') there exists a neighborhood \(U\) of \((x,x_{c})\) such that \(F_{cl}(\xi)\cap\mathsf{T}_{\xi}C_{\Omega}\neq\emptyset\) for every \(\xi\in U\cap C_{\Omega}\),_ _then each maximal solution to (28) is complete and, consequently, each sublevel set of \(V_{\theta}\) is forward invariant for (28)._ Proof.: As explained in Remark 5, it follows from (C10) and (24) that \(V_{\theta}(x,x_{c})-V_{\theta}(x,g)\geq\delta(x,x_{c})\) for each \(g\in G_{c}(x,x_{c})\) and each \((x,x_{c})\in D_{\Omega}\). Hence, the growth of \(V_{\theta}\) during jumps of (28) is bounded by \[u_{d,\theta}(x,x_{c}):=\begin{cases}-\delta(x,x_{c})&\text{if }(x,x_{c})\in D_{\Omega}\\ -\infty&\text{otherwise}\end{cases} \tag{30}\] for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). Since \((\kappa,\mathscr{V},D_{c},F_{c},G_{c})\) is synergistic relative to \(\mathscr{A}\) for (5) with robustness margin \(\Omega\), it follows that \((\kappa,V_{\theta},D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}_{\theta}\) for (5) and, due to this assumption, the remainder of the proof follows closely that of Lemma 2. 
From Assumption (C6) it follows that the growth of \(V_{\theta}\) along solutions to (28) is bounded by \(u_{c,\theta},u_{d,\theta}\), with \(u_{c,\theta}(x,x_{c})\leq 0\) and \(u_{d,\theta}(x,x_{c})\leq 0\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). This implies that sublevel sets of \(V_{\theta}\) are forward pre-invariant for (28). The completeness of solutions under (VC') follows closely the proof in Lemma 2, thus it is omitted here. Let \(\Psi_{\theta}\) denote the largest weakly invariant subset of (16) in \(\overline{u_{c,\theta}^{-1}(0)}\), where \(u_{c,\theta}\) is the upper bound on the growth of \(V_{\theta}\) during flows of (28) as defined in (C6). Given a function \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\) and a hybrid controller \((\kappa,\mathscr{V},D_{c},F_{c},G_{c})\) that is synergistic relative to \(\mathscr{A}\) for (5) with robustness margin \(\Omega\), we say that it has synergy gap exceeding \(\delta\) if, for each \(\theta\in\Omega\) and each \((x,x_{c})\in\Psi_{\theta}\backslash\mathcal{A}_{\theta}\), \(\delta(x,x_{c})<\mu_{V_{\theta}}(x,x_{c})\). **Theorem 2**.: _Suppose that Assumption 1 holds. Given a compact set \(\Omega\), a positive function \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\), and a collection of compact subsets \(\mathscr{A}:=\{\mathcal{A}_{\theta}\}_{\theta\in\Omega}\) of \(\mathcal{X}\times\mathcal{X}_{c}\), if \((\kappa,\mathscr{V},D_{c},F_{c},G_{c})\) is synergistic relative to \(\mathscr{A}\) for (5) with robustness margin \(\Omega\) and synergy gap exceeding \(\delta\), then, for each \(\theta\in\Omega\), the set \(\mathcal{A}_{\theta}\) is globally pre-asymptotically stable for (28). If, for each \((x,x_{c})\in C_{\Omega}\backslash D_{\Omega}\), (VC') is satisfied, then \(\mathcal{A}_{\theta}\) is globally asymptotically stable for (28)._ Proof.: For each \(\theta\in\Omega\), it follows from (C3) that each sublevel set of \(V_{\theta}\) is compact and, since it is also forward pre-invariant as shown in Lemma 5, we have that each solution to (28) is bounded. In addition, it follows from the proof of Lemma 5 that the growth of \(V_{\theta}\) along jumps of (28) is bounded by (30) and, since \(\delta(x,x_{c})>0\) by assumption, it follows from [27, Theorem 8.2] that each complete solution to (28) approaches the largest weakly invariant subset of \(V_{\theta}^{-1}(r)\cap\overline{u_{c,\theta}^{-1}(0)}\) for some \(r\) in the image of \(V_{\theta}\), which is to say that each complete solution to (28) approaches \(\Psi_{\theta}\cap C_{\Omega}\). Since each point \((x,x_{c})\in\Psi_{\theta}\backslash\mathcal{A}_{\theta}\) belongs to \(D_{\Omega}\backslash C_{\Omega}\) by Assumption (C7), it follows that each complete solution to (28) converges to \(\mathcal{A}_{\theta}\), which concludes the proof of global pre-attractivity of \(\mathcal{A}_{\theta}\) for (28). The proof of stability of \(\mathcal{A}_{\theta}\) for (28) follows closely the proof of Lemma 3. We conclude that \(\mathcal{A}_{\theta}\) is globally pre-asymptotically stable for (28). Global asymptotic stability of \(\mathcal{A}_{\theta}\) for (28) under assumption (VC') follows directly from global pre-asymptotic stability and completeness of solutions, as shown in Lemma 5. In the next section, we apply the proposed controller to the design of adaptive synergistic feedback control laws for a class of affine systems with matched uncertainties. 
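As an aside, the switching condition in (23)-(24) can be checked numerically by brute force: the Python sketch below approximates \(\min_{\theta\in\Omega}\mu_{V_{\theta}}(x,x_{c})\) using finite samples of \(\Omega\) and of \(D_{c}(x,x_{c})\) and triggers a jump when the result reaches \(\delta\). The toy potential \(V\), the two-element candidate set, the sampling of \(\Omega\), and the threshold \(\delta\) are all illustrative assumptions and do not come from the paper.

```python
import numpy as np

def robust_synergy_gap(V, x, x_c, D_c_samples, Omega_samples):
    """min over theta of mu_{V_theta}(x, x_c) = V(x, x_c, theta) - min_g V(x, g, theta),
    with Omega and D_c(x, x_c) replaced by finite samples (illustrative only)."""
    gaps = []
    for theta in Omega_samples:
        nu = min(V(x, g, theta) for g in D_c_samples)     # nu_{V_theta}(x, x_c)
        gaps.append(V(x, x_c, theta) - nu)                 # mu_{V_theta}(x, x_c)
    return min(gaps)

def should_jump(V, x, x_c, D_c_samples, Omega_samples, delta):
    """Jump condition of (23)-(24): switch x_c when the robust gap reaches delta."""
    return robust_synergy_gap(V, x, x_c, D_c_samples, Omega_samples) >= delta

# Illustrative data: scalar state, two candidate controller values, scalar theta.
V = lambda x, x_c, theta: (x - x_c) ** 2 * (1.0 + 0.1 * theta)  # toy potential, not from the paper
D_c_samples = [-1.0, 1.0]                                        # finite candidate set for x_c
Omega_samples = np.linspace(-1.0, 1.0, 21)                       # sampled parameter set
delta = 0.5

x, x_c = 0.9, -1.0
print("robust gap:", robust_synergy_gap(V, x, x_c, D_c_samples, Omega_samples))
print("jump?", should_jump(V, x, x_c, D_c_samples, Omega_samples, delta))
```

In this example the currently selected candidate \(x_{c}=-1\) has a much higher potential than the alternative for every sampled \(\theta\), so the robust gap exceeds \(\delta\) and a switch is triggered regardless of which parameter value is the true one, which is exactly the rationale behind taking the minimum over \(\Omega\) in (24).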
## VI Adaptive Backstepping of Synergistic Hybrid Feedback for Affine Control Systems ### _Nominal Synergistic Hybrid Feedback_ In this section, we apply the controller design of Section V to the problem of global asymptotic stabilization of a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\) for a control affine system subject to parametric uncertainty, where \(\mathcal{X}\) and \(\mathcal{X}_{c}\) denote the spaces of the state and controller variables, respectively. In this direction, let \(F_{\theta}\) in (5) be given by \[F_{\theta}(x,x_{c},u):=f(x,x_{c})+H(x,x_{c})u+W(x,x_{c})\theta \tag{31}\] for each \((x,x_{c},u)\in\mathcal{X}\times\mathcal{X}_{c}\times\mathcal{U}\), where \(u\) denotes an input variable subject to the constraint \(u\in\mathcal{U}\), and \[\theta\in\Omega:=\{\theta\in\mathbb{R}^{\ell}:|\theta|\leq\theta_{0}\} \tag{32}\] represents the parametric uncertainty of the model, whose norm is assumed to be bounded by a known parameter \(\theta_{0}\in\mathbb{R}_{\geq 0}\). The controller design in this section is applicable under the assumption of matched uncertainties stated next. **Assumption 2**.: There exists a continuously differentiable function \(\widehat{W}\) such that \(W(x,x_{c})=H(x,x_{c})\widehat{W}(x,x_{c})\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). In addition, we assume that we are given a synergistic hybrid controller for the nominal (unperturbed) system as defined next. **Definition 4**.: Given a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\) and a continuous function \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\), the hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) is said to be _nominally_ synergistic relative to \(\mathcal{A}\) for (31) with synergy gap exceeding \(\delta\) if it is synergistic relative to \(\mathcal{A}\) for \[\dot{x}=F_{0}(x,x_{c},u):=f(x,x_{c})+H(x,x_{c})u \tag{33}\] with synergy gap exceeding \(\delta\), and \(V_{0}\) is continuously differentiable on \(\{(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}:V_{0}(x,x_{c})<+\infty\}\). The dynamical system (33) is obtained from (31) by considering that there are no perturbations, i.e., \(\theta=0\). It follows from Theorem 1 that \(\mathcal{A}\) is globally asymptotically stable for the closed-loop system \(\mathcal{H}\) in (15) resulting from the interconnection of (33) and a nominally synergistic controller relative to \(\mathcal{A}\) for (31) when \(\theta=0\). In the next section, we present variations of the nominal synergistic controller so as to deal with nonzero disturbances. ### _Adaptive Synergistic Hybrid Feedback_ In this section, we modify the nominal synergistic controller given in Section VI-A to globally asymptotically stabilize \[\mathcal{A}_{1,\theta}:=\mathcal{A}\times\{\theta\} \tag{34}\] for the closed-loop system when \(\theta\) in (31) is nonzero.4 In this direction, let \(\hat{\theta}\in\mathbb{R}^{\ell}\) denote an estimate of the parameter \(\theta\) that is generated via Footnote 4: As the controller design exploits ideas in the literature of adaptive control, we refer the reader to [32] for an overview of adaptive controller design and backstepping under the influence of model uncertainty. 
\[\dot{\hat{\theta}}=\Gamma_{1}\operatorname{Proj}(W(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c}),\hat{\theta}), \tag{35}\] where \(\Gamma_{1}\in\mathbb{R}^{\ell\times\ell}\) is a positive definite matrix and \(\operatorname{Proj}:\mathbb{R}^{\ell}\times\mathbb{R}^{\ell}\to\mathbb{R}^{\ell}\) is given by \[\operatorname{Proj}(\eta,\hat{\theta}):=\begin{cases}\eta&\text{if }\operatorname{p}(\hat{\theta})\leq 0\text{ or }\nabla\operatorname{p}(\hat{\theta})^{\top}\eta\leq 0\\ \left(I_{\ell}-\frac{\operatorname{p}(\hat{\theta})\nabla\operatorname{p}(\hat{\theta})\nabla\operatorname{p}(\hat{\theta})^{\top}}{\nabla\operatorname{p}(\hat{\theta})^{\top}\nabla\operatorname{p}(\hat{\theta})}\right)\eta&\text{otherwise}\end{cases} \tag{36}\] for each \((\eta,\hat{\theta})\in\mathbb{R}^{\ell}\times\mathbb{R}^{\ell}\), \[\operatorname{p}(\hat{\theta}):=\frac{\hat{\theta}^{\top}\hat{\theta}-\theta_{0}^{2}}{\epsilon^{2}+2\epsilon\theta_{0}} \tag{37}\] for each \(\hat{\theta}\in\mathbb{R}^{\ell}\), with \(\epsilon>0\) and \(\theta_{0}>0\) given in (32), and \(W\) as in (31). The function \(\operatorname{Proj}\) in (36) has the following properties (cf. [33]):

* (P1) \(\operatorname{Proj}\) is Lipschitz continuous;
* (P2) Each solution \(t\mapsto\hat{\theta}(t)\) to \(\dot{\hat{\theta}}=\Gamma_{1}\operatorname{Proj}(\eta(t),\hat{\theta})\), from \(\hat{\theta}\in\Omega+\epsilon\overline{\mathbb{B}}\) with input \(t\mapsto\eta(t)\), satisfies \(\operatorname{rge}\hat{\theta}\subset\Omega+\epsilon\overline{\mathbb{B}}\);
* (P3) Given \(\theta\in\Omega\), \((\theta-\hat{\theta})^{\top}\operatorname{Proj}(\eta,\hat{\theta})\geq(\theta-\hat{\theta})^{\top}\eta\) for each \((\eta,\hat{\theta})\in\mathbb{R}^{\ell}\times\mathbb{R}^{\ell}\);

with \(\epsilon>0\) as in (37). Given a hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \(\mathcal{A}\) for (31) with synergy gap exceeding \(\delta\), and the controller variable \(x_{c,1}:=(x_{c},\hat{\theta})\in\mathcal{X}_{c,1}:=\mathcal{X}_{c}\times(\Omega+\epsilon\overline{\mathbb{B}})\), we define \[\kappa_{1}(x,x_{c,1}):=\kappa_{0}(x,x_{c})-\widehat{W}(x,x_{c})\hat{\theta} \tag{38a}\] \[V_{1,\theta}(x,x_{c,1}):=V_{0}(x,x_{c})+\frac{1}{2}(\theta-\hat{\theta})^{\top}\Gamma_{1}^{-1}(\theta-\hat{\theta}) \tag{38b}\] \[D_{c,1}(x,x_{c,1}):=D_{c}(x,x_{c})\times(\Omega+\epsilon\overline{\mathbb{B}}) \tag{38c}\] \[F_{c,1}(x,x_{c,1}):=\begin{bmatrix}F_{c}(x,x_{c})\\ \Gamma_{1}\operatorname{Proj}(W(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c}),\hat{\theta})\end{bmatrix} \tag{38d}\] for each \((x,x_{c,1})\in\mathcal{X}\times\mathcal{X}_{c,1}\), where \(\widehat{W}\) comes from Assumption 2. The hybrid closed-loop system resulting from the interconnection between (31) and the hybrid controller \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\) is given by \[(\dot{x},\dot{x}_{c,1})\in F_{d,1}(x,x_{c,1})\quad(x,x_{c,1})\in C_{1} \tag{39a}\] \[(x^{+},x_{c,1}^{+})\in G_{d,1}(x,x_{c,1})\quad(x,x_{c,1})\in D_{1} \tag{39b}\] where \[C_{1}:=\{(x,x_{c,1})\in\mathcal{X}\times\mathcal{X}_{c,1}:\mu_{V_{1,\theta}}(x,x_{c,1})\leq\delta(x,x_{c})\}\] \[D_{1}:=\{(x,x_{c,1})\in\mathcal{X}\times\mathcal{X}_{c,1}:\mu_{V_{1,\theta}}(x,x_{c,1})\geq\delta(x,x_{c})\}\] and \[F_{d,1}(x,x_{c,1}):=\begin{bmatrix}F_{\theta}(x,x_{c},\kappa_{1}(x,x_{c,1}))\\ F_{c,1}(x,x_{c,1})\end{bmatrix}\quad\forall(x,x_{c,1})\in C_{1} \tag{40a}\] \[G_{d,1}(x,x_{c,1}):=\begin{bmatrix}x\\ \varrho_{V_{1,\theta}}(x,x_{c,1})\end{bmatrix}\quad\forall(x,x_{c,1})\in D_{1}.
\tag{40b}\] where, for each \((x,x_{c,1})\in\mathcal{X}\times\mathcal{X}_{c,1}\), \[\nu_{V_{1,\theta}}(x,x_{c,1}) =\nu_{V_{0}}(x,x_{c}) \tag{41a}\] \[\varrho_{V_{1,\theta}}(x,x_{c,1}) =\varrho_{V_{0}}(x,x_{c})\times\{\theta\}\] (41b) \[\mu_{V_{1,\theta}}(x,x_{c,1}) =\mu_{V_{0}}(x,x_{c})+\frac{1}{2}(\theta-\hat{\theta})^{\top} \Gamma_{1}^{-1}(\theta-\hat{\theta}) \tag{41c}\] are directly computed from (12a), (12b) and (12c), respectively. **Remark 7**.: For the hybrid controller \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\), the functions (41) are not realizable, because \(\mu_{V_{1,\theta}}\) and \(\varrho_{V_{1,\theta}}\) in (41) depend on the unknown constant \(\theta\). This dependence will be removed when we show that there exists \(G_{c,1}:\mathcal{X}\times\mathcal{X}_{c,1}\rightrightarrows\mathcal{X}_{c,1}\) such that the hybrid controller \((\kappa_{1},Y_{1},D_{c,1},F_{c,1},G_{c,1})\) with \(\mathscr{V}:=\{V_{1,\theta}\}_{\theta\in\Omega}\) is synergistic relative to \(\mathscr{A}_{1}:=\{\mathcal{A}_{1,\theta}\}_{\theta\in\Omega}\) for (31) with robustness margin \(\Omega\). To design (23), we start by showing that the hybrid controller \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\) is synergistic relative to \(\mathcal{A}_{1,\theta}\) for (31). **Proposition 2**.: _Suppose that the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), \(\mathcal{U}\), and the set-valued map \(F_{\theta}\) in (31) satisfy Assumption 1, and that Assumption 2 holds. Given \(\theta\in\Omega\), a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\), and a hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \(\mathcal{A}\) for (31), the controller \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\) given in (38) is a synergistic candidate relative to \(\mathcal{A}_{1,\theta}\) for (31)._ Proof.: The optimization problems in (12) are feasible for each \(x\in\mathcal{X}\), because they are feasible for \(V_{0}\), hence (C1) is satisfied. Since \(V_{1,\theta}\) corresponds to the sum of \(V_{0}\) with \((\theta-\hat{\theta})^{\top}\Gamma_{1}^{-1}(\theta-\hat{\theta})/2\) and both terms are continuous, it follows that \(V_{1,\theta}\) is continuous. Since \(V_{0}\) is positive definite with respect to \(\mathcal{A}\) and \(\hat{\theta}\mapsto(\theta-\hat{\theta})^{\top}\Gamma_{1}^{-1}(\theta-\hat{ \theta})\) is positive definite relative to \(\theta\), it follows that \(V_{1,\theta}\) is positive definite relative to \(\mathcal{A}_{1,\theta}\). It follows from the assumption that \(V_{0}^{-1}([0,c])\) is compact for each \(c\geq 0\) and radial unboundedness of \(\hat{\theta}\mapsto(\theta-\hat{\theta})^{\top}\Gamma_{1}^{-1}(\theta-\hat{ \theta})\) relative to \(\theta\) that \(V_{1,\theta}^{-1}([0,c])\) is compact for each \(c\geq 0\), thus proving that \(V_{1,\theta}\) satisfies (C3). From (38c), we have that \(D_{c,1}(x,x_{c,1})\) is the Cartesian product between \(D_{c}(x,x_{c})\) and \(\Omega+\epsilon\overline{\mathbb{B}}\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\). Since \(D_{c}\) satisfies (C4) by assumption, we have that \(D_{c,1}\) also satisfies (C4). Since \(\kappa_{0}\) satisfies (C5), then \(\kappa_{1}\) also satisfies (C5). **Proposition 3**.: _Suppose that the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), \(\mathcal{U}\), and the set-valued map \(F_{\theta}\) in (31) satisfy Assumption 1, and that Assumption 2 holds. 
Given \(\theta\in\Omega\), a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\), and a hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \(\mathcal{A}\) for (31), the controller \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\) given in (38) satisfies (C6)._

Proof.: It follows from (38b), (40a) and (P3) that, for each \((x,x_{c,1})\in\mathcal{X}\times\mathcal{X}_{c,1}\) and each \(f_{d,1}\in F_{d,1}(x,x_{c,1})\), \[\begin{split}\nabla V_{1,\theta}(x,x_{c,1})^{\top}f_{d,1}\leq\;&\nabla V_{0}(x,x_{c})^{\top}\begin{bmatrix}F_{\theta}(x,x_{c},\kappa_{1}(x,x_{c,1}))\\ f_{c}\end{bmatrix}\\ &-(\theta-\hat{\theta})^{\top}W(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c})\end{split}\] where \(f_{c}\in F_{c}(x,x_{c})\) is the component of \(f_{d,1}\) that describes the dynamics of \(x_{c}\). Since \(W(x,x_{c})=H(x,x_{c})\widehat{W}(x,x_{c})\) by Assumption 2 and \(\kappa_{1}(x,x_{c,1})=\kappa_{0}(x,x_{c})-\widehat{W}(x,x_{c})\hat{\theta}\), we have \(F_{\theta}(x,x_{c},\kappa_{1}(x,x_{c,1}))=F_{0}(x,x_{c},\kappa_{0}(x,x_{c}))+W(x,x_{c})(\theta-\hat{\theta})\), hence the right-hand side above equals \(\nabla V_{0}(x,x_{c})^{\top}\begin{bmatrix}F_{0}(x,x_{c},\kappa_{0}(x,x_{c}))\\ f_{c}\end{bmatrix}\), which is nonpositive by (C6) for the nominal controller. It follows that \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\) satisfies (C6).

Since the hybrid controller \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\) satisfies (C6), we have that \(V_{1,\theta}\) is nonincreasing along solutions to the closed-loop system (39), but satisfying (C7) requires further assumptions on the data, as shown next.

**Proposition 4**.: _Suppose that the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), \(\mathcal{U}\), and the set-valued map \(F_{\theta}\) in (31) satisfy Assumption 1, and that Assumption 2 holds. Given \(\theta\in\Omega\), a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\) and a hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \(\mathcal{A}\) for (31) with synergy gap exceeding \(\delta\), let \(\Psi\) denote the largest weakly invariant subset of_ \[(\dot{x},\dot{x}_{c})\in F_{cl,0}(x,x_{c})=\begin{pmatrix}F_{0}(x,x_{c},\kappa_{0}(x,x_{c}))\\ F_{c}(x,x_{c})\end{pmatrix}\] _on \((x,x_{c})\in\mathcal{E}:=\{(x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}:\nabla V_{0}(x,x_{c})^{\top}f_{cl,0}=0\) for some \(f_{cl,0}\in F_{cl,0}(x,x_{c})\}\) and let \(\Psi_{1,\theta}\) denote the largest weakly invariant subset of_ \[(\dot{x},\dot{x}_{c,1})\in F_{cl,1}(x,x_{c,1})\qquad\quad(x,x_{c,1})\in\mathcal{E}_{1}\] _with \(\mathcal{E}_{1}:=\{(x,x_{c,1})\in\mathcal{X}\times\mathcal{X}_{c,1}:\nabla V_{1,\theta}(x,x_{c,1})^{\top}f_{cl,1}=0\) for some \(f_{cl,1}\in F_{cl,1}(x,x_{c,1})\}\). If the projection of \(\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta}\) onto \(\mathcal{X}\times\mathcal{X}_{c}\) is a subset of \(\Psi\backslash\mathcal{A}\), i.e.,5_

Footnote 5: Given a subset \(S\) of \(X:=X_{1}\times X_{2}\), the projection of \(S\) onto \(X_{1}\) is represented by \(\pi_{X_{1}}(S):=\{x_{1}\in X_{1}:(x_{1},x_{2})\in S\text{ for some }x_{2}\in X_{2}\}\). Similarly, the projection of \(S\) onto \(X_{2}\) is denoted by \(\pi_{X_{2}}(S):=\{x_{2}\in X_{2}:(x_{1},x_{2})\in S\text{ for some }x_{1}\in X_{1}\}\).
\[\pi_{\mathcal{X}\times\mathcal{X}_{c}}(\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta})\subset\Psi\backslash\mathcal{A}, \tag{42}\] _then the hybrid controller \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\) in (38) is synergistic relative to \(\mathcal{A}_{1,\theta}\) for (31) with synergy gap exceeding \(\delta\)._

Proof.: It follows from the definition of \(\mu_{V_{1,\theta}}\) in (41c) that \(\mu_{V_{1,\theta}}(x,x_{c,1})\) is the sum of \(\mu_{V_{0}}(x,x_{c})\) with a quadratic nonnegative term, hence \[\mu_{V_{1,\theta}}(x,x_{c,1})\geq\mu_{V_{0}}(x,x_{c}) \tag{43}\] for each \((x,x_{c,1})\in\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta}\) and, consequently, we have that \[\begin{split}\overline{\delta}_{2}&:=\inf\{\mu_{V_{1,\theta}}(x,x_{c,1}):(x,x_{c,1})\in\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta}\}\\ &\geq\inf\left\{\mu_{V_{0}}(x,x_{c}):(x,x_{c,1})\in\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta}\right\}.\end{split} \tag{44}\] The fact that \((x,x_{c,1})\in\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta}\) implies \((x,x_{c})\in\pi_{\mathcal{X}\times\mathcal{X}_{c}}(\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta})\) together with (44) allows us to derive the following inequality: \(\overline{\delta}_{2}\geq\inf\left\{\mu_{V_{0}}(x,x_{c}):(x,x_{c})\in\pi_{\mathcal{X}\times\mathcal{X}_{c}}(\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta})\right\}\). It follows from (42) that \(\overline{\delta}_{2}\geq\inf\{\mu_{V_{0}}(x,x_{c}):(x,x_{c})\in\Psi\backslash\mathcal{A}\}\) which is greater than zero by the assumption that the controller \((\kappa_{0},V_{0},D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}\) for (33) with synergy gap exceeding \(\delta\). In addition, we have that \(\mu_{V_{1,\theta}}(x,x_{c,1})\geq\mu_{V_{0}}(x,x_{c})>\delta(x,x_{c})\) for each \((x,x_{c,1})\in\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta}\), which proves that the hybrid controller \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\) in (38) is synergistic relative to \(\mathcal{A}_{1,\theta}\) for (31) with synergy gap exceeding \(\delta\).

In the next result, we complete the construction of the robust synergistic controller (23) from the data of a nominally synergistic controller \((\kappa_{0},V_{0},D_{c},F_{c})\), by designing a set-valued map \(G_{c,1}:\mathcal{X}\times\mathcal{X}_{c,1}\rightrightarrows\mathcal{X}_{c,1}\) that is outer semicontinuous, locally bounded and satisfies (25).

**Proposition 5**.: _Suppose that the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), \(\mathcal{U}\), and the set-valued map \(F_{\theta}\) in (31) satisfy Assumption 1, and that Assumption 2 holds. 
Given \(\Omega\) in (32), a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\), and a hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \(\mathcal{A}\) for (31) with synergy gap exceeding \(\delta\), \(\mathscr{A}_{1}:=\{\mathcal{A}_{1,\theta}\}_{\theta\in\Omega}\) with \(\mathcal{A}_{1,\theta}\) in (34), \(\mathscr{V}_{1}:=\{V_{1,\theta}\}_{\theta\in\Omega}\) with \(V_{1,\theta}\) in (38b), then the hybrid controller \((\kappa_{1},\mathscr{V}_{1},D_{c,1},F_{c,1},G_{c,1})\) where_ \[G_{c,1}(x,x_{c,1}):=\varrho_{V_{0}}(x,x_{c})\times\hat{G}(\hat{\theta}) \tag{45}\] _for each \((x,x_{c,1})\in\mathcal{X}\times\mathcal{X}_{c,1}\), and_ \[\hat{G}(\hat{\theta}):=\underset{g\in\Omega+\epsilon\overline{\mathbb{B}}}{\arg\max}\ \min_{\theta\in\Omega}\ (\theta-\hat{\theta})^{\top}\Gamma_{1}^{-1}(\theta-\hat{\theta})-(\theta-g)^{\top}\Gamma_{1}^{-1}(\theta-g)\] _for each \(\hat{\theta}\in\Omega+\epsilon\overline{\mathbb{B}}\), is synergistic relative to \(\mathscr{A}_{1}\) for (31) with robustness margin \(\Omega\) and synergy gap exceeding \(\delta\)._

Proof.: In Proposition 4 we demonstrate that the hybrid controller \((\kappa_{1},V_{1,\theta},D_{c,1},F_{c,1})\) is synergistic relative to \(\mathcal{A}_{1,\theta}\) as required by Definition 3. It remains to be shown that the hybrid controller \((\kappa_{1},\mathscr{V}_{1},D_{c,1},F_{c,1},G_{c,1})\) satisfies assumptions (C8), (C9) and (C10). To prove (C8), one must show that \(\mathcal{X}\times\mathcal{X}_{c,1}\subset\mathrm{dom}\,V_{1,\theta}\). From the definition of \(V_{1,\theta}\) in (38b), we have that \(\mathrm{dom}\,V_{1,\theta}=\mathrm{dom}\,V_{0}\times(\Omega+\epsilon\overline{\mathbb{B}})\). It follows from the assumption that the hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) is nominally synergistic relative to \(\mathcal{A}\) for (31) that \(\mathcal{X}\times\mathcal{X}_{c}\subset\mathrm{dom}\,V_{0}\), hence \(\mathcal{X}\times\mathcal{X}_{c,1}\subset\mathrm{dom}\,V_{1,\theta}\). The function \((x,x_{c,1},\theta)\mapsto V_{1}(x,x_{c,1},\theta):=V_{1,\theta}(x,x_{c,1})\) is continuous because it results from the composition of continuous functions, hence (C8) holds. To prove (C9) and (C10), one must show that \(G_{c,1}\) is outer semicontinuous and locally bounded and that it satisfies (25). In this direction, note that the function being optimized in the definition of \(\hat{G}\), which can be rewritten as \(h(g,\theta)=2\theta^{\top}\Gamma_{1}^{-1}(g-\hat{\theta})-g^{\top}\Gamma_{1}^{-1}g+\hat{\theta}^{\top}\Gamma_{1}^{-1}\hat{\theta}\) for each \((g,\theta)\in(\Omega+\epsilon\overline{\mathbb{B}})\times\Omega\), is quasi-concave as a function of \(g\) and quasi-convex as a function of \(\theta\), hence the \(\max\) and \(\min\) operators commute.

The hybrid closed-loop system resulting from the interconnection between \((\kappa_{1},\mathscr{V}_{1},D_{c,1},F_{c,1},G_{c,1})\) and (31) is given by: \[(\dot{x},\dot{x}_{c,1})\in F_{d,1}(x,x_{c,1})\quad(x,x_{c,1})\in C_{\Omega,1} \tag{47a}\] \[(x^{+},x_{c,1}^{+})\in G_{\Omega,1}(x,x_{c,1})\quad(x,x_{c,1})\in D_{\Omega,1} \tag{47b}\] where \[C_{\Omega,1}:=\left\{(x,x_{c,1})\in\mathcal{X}\times\mathcal{X}_{c,1}:\min_{\theta\in\Omega}\mu_{V_{1,\theta}}(x,x_{c,1})\leq\delta(x,x_{c})\right\}\] \[D_{\Omega,1}:=\left\{(x,x_{c,1})\in\mathcal{X}\times\mathcal{X}_{c,1}:\min_{\theta\in\Omega}\mu_{V_{1,\theta}}(x,x_{c,1})\geq\delta(x,x_{c})\right\}\] and \[G_{\Omega,1}(x,x_{c,1}):=\begin{bmatrix}x\\ G_{c,1}(x,x_{c,1})\end{bmatrix}\quad\forall(x,x_{c,1})\in D_{\Omega,1}.
\tag{48}\]

Global asymptotic stability of \(\mathcal{A}_{1,\theta}\) for (47) follows from the application of Theorem 2 and it is summarized in the next corollary.

**Corollary 2**.: _Suppose that the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), \(\mathcal{U}\), and the set-valued map \(F_{\theta}\) in (31) satisfy Assumption 1, and that Assumption 2 holds. Given \(\Omega\) in (32), a positive function \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\), a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\), and a hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \(\mathcal{A}\) for (31) with synergy gap exceeding \(\delta\), for each \(\theta\in\Omega\), the set \(\mathcal{A}_{1,\theta}\) is globally asymptotically stable for (47)._

Proof.: It follows from (43) that \(\min\{\mu_{V_{1,\theta}}(x,x_{c,1}):\theta\in\Omega\}\geq\mu_{V_{0}}(x,x_{c})\) for each \((x,x_{c,1})\in\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta}\). Since \(\mu_{V_{0}}(x,x_{c})>\delta(x,x_{c})\) for each \((x,x_{c,1})\in\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta}\) as shown in the proof of Proposition 4, and \(\delta\) satisfies (D1), the conditions of Theorem 2 apply and we are able to conclude that \(\mathcal{A}_{1,\theta}\) is globally asymptotically stable for (47).

### _Backstepping_

Given a nominally synergistic controller \((\kappa_{0},V_{0},D_{c},F_{c})\), we extend the dynamics of the controller in Section VI-B to include the input \(u\) as a controller state:6

Footnote 6: Alternatively, one may consider \(u\) as a plant state rather than a controller state, in which case \(u\) would remain constant during jumps. We have included \(u\) as a controller variable because it is an approach less often found in the literature.

\[\dot{x}_{c,2}\in F_{c,2}(x,x_{c,2}):=\left\{\begin{bmatrix}f_{c}\\ \Gamma_{1}\operatorname{Proj}(v(x,x_{c,2}),\hat{\theta})\\ f_{u}(x,x_{c,2})+\mathcal{D}_{x_{c}}(\kappa_{1}(x,x_{c,1}))f_{c}\end{bmatrix}:f_{c}\in F_{c}(x,x_{c})\right\} \tag{49}\] with \(x_{c,2}:=(x_{c,1},u)\in\mathcal{X}_{c,2}:=\mathcal{X}_{c}\times(\Omega+\epsilon\overline{\mathbb{B}})\times\mathbb{R}^{m}\), \(\Gamma_{2}\in\mathbb{R}^{m\times m}\) positive definite, \(k_{u}>0\), \[v(x,x_{c,2}):=W(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c})-W(x,x_{c})^{\top}\mathcal{D}_{x}(\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}(u-\kappa_{1}(x,x_{c,1})) \tag{50}\] for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\), and \[f_{u}(x,x_{c,2}):=-\widehat{W}(x,x_{c})\Gamma_{1}\operatorname{Proj}(v(x,x_{c,2}),\hat{\theta})-k_{u}(u-\kappa_{1}(x,x_{c,1}))-\Gamma_{2}H(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c})+\mathcal{D}_{x}(\kappa_{1}(x,x_{c,1}))F(x,x_{c},u,\hat{\theta}) \tag{51}\] which is defined for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\) assuming that \(\kappa_{0}\) is continuously differentiable and that \(F(x,x_{c},u,\hat{\theta})=F_{\hat{\theta}}(x,x_{c},u)\) denotes the dynamics (31) with \(\theta\) equal to the estimated value \(\hat{\theta}\).
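To make the adaptation mechanism concrete, here is a minimal numerical sketch (not the authors' implementation) of the projection (36)-(37) and of one forward-Euler step of the update law (35); the function names and the Euler discretization are illustrative assumptions.

```python
import numpy as np

# Sketch of the parameter projection (36)-(37) and the adaptation law (35).
# theta0 and eps correspond to the bound in (32) and the margin in (37).

def p(theta_hat, theta0, eps):
    # Convex function (37); its zero sublevel set contains Omega.
    return (theta_hat @ theta_hat - theta0**2) / (eps**2 + 2 * eps * theta0)

def grad_p(theta_hat, theta0, eps):
    return 2.0 * theta_hat / (eps**2 + 2 * eps * theta0)

def proj(eta, theta_hat, theta0, eps):
    # Projection (36): eta is unchanged unless the estimate sits in the
    # boundary layer (p > 0) and eta points outward (grad_p^T eta > 0).
    gp = grad_p(theta_hat, theta0, eps)
    if p(theta_hat, theta0, eps) <= 0 or gp @ eta <= 0:
        return eta
    return eta - p(theta_hat, theta0, eps) * (gp @ eta) / (gp @ gp) * gp

def adapt_step(theta_hat, w_grad_term, Gamma1, theta0, eps, dt):
    # One Euler step of (35), with w_grad_term standing in for W^T grad_x V0.
    return theta_hat + dt * Gamma1 @ proj(w_grad_term, theta_hat, theta0, eps)
```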
Given the compact set \(\Omega\) of possible (unknown) values of \(\theta\) in (32), a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\), and a nominal synergistic controller \((\kappa_{0},V_{0},D_{c},F_{c})\) relative to \(\mathcal{A}\) for (31) with synergy gap exceeding \(\delta\), the main goal of this section is to design a controller of the form (23) that is synergistic relative to \(\mathscr{A}_{2}:=\{\mathcal{A}_{2,\theta}\}_{\theta\in\Omega}\) for (31) with robustness margin \(\Omega\) and synergy gap exceeding \(\delta\), where \[\mathcal{A}_{2,\theta}:=\{(x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}:(x,x_{c,1})\in\mathcal{A}_{1,\theta},\ u=\kappa_{1}(x,x_{c,1})\}. \tag{52}\] In this direction, we define the Lyapunov function \[V_{2,\theta}(x,x_{c,2}):=V_{1,\theta}(x,x_{c,1})+\frac{1}{2}(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}(u-\kappa_{1}(x,x_{c,1})) \tag{53}\] for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\) and the set-valued map \[D_{c,2}(x,x_{c,2}):=\{(g_{c,1},g_{u})\in\mathcal{X}_{c,2}:g_{c,1}\in D_{c,1}(x,x_{c,1}),\ g_{u}=\kappa_{1}(x,g_{c,1})\} \tag{54}\] for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\). The choice \(g_{u}=\kappa_{1}(x,g_{c,1})\) in (54) may seem peculiar, but it turns out that this value minimizes (53) with respect to \(u\), hence it is suitable for the jump logic. From the interconnection between (31) and the hybrid controller \((\kappa_{2},V_{2,\theta},D_{c,2},F_{c,2})\) with \(\kappa_{2}(x,x_{c,2})=u\) for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\), we obtain the hybrid closed-loop system \[(\dot{x},\dot{x}_{c,2})\in F_{cl,2}(x,x_{c,2})\quad(x,x_{c,2})\in C_{2}:=\{(x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}:\mu_{V_{2,\theta}}(x,x_{c,2})\leq\delta(x,x_{c})\} \tag{55a}\] \[(x^{+},x_{c,2}^{+})\in G_{cl,2}(x,x_{c,2})\quad(x,x_{c,2})\in D_{2}:=\{(x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}:\mu_{V_{2,\theta}}(x,x_{c,2})\geq\delta(x,x_{c})\} \tag{55b}\] where \[F_{cl,2}(x,x_{c,2}):=\begin{bmatrix}F_{\theta}(x,x_{c},u)\\ F_{c,2}(x,x_{c,2})\end{bmatrix}\quad\forall(x,x_{c,2})\in C_{2} \tag{56a}\] \[G_{cl,2}(x,x_{c,2}):=\begin{bmatrix}x\\ \varrho_{V_{2,\theta}}(x,x_{c,2})\end{bmatrix}\quad\forall(x,x_{c,2})\in D_{2}. \tag{56b}\] Note that, from the definitions (12b) and (12c), we have the following identities for the hybrid controller \((\kappa_{2},V_{2,\theta},D_{c,2},F_{c,2})\): \[\varrho_{V_{2,\theta}}(x,x_{c,2})=\{(g_{c,1},g_{u})\in\mathcal{X}_{c,2}:g_{c,1}\in\varrho_{V_{1,\theta}}(x,x_{c,1}),\ g_{u}=\kappa_{1}(x,g_{c,1})\},\] \[\mu_{V_{2,\theta}}(x,x_{c,2})=\mu_{V_{1,\theta}}(x,x_{c,1})+\frac{1}{2}\left|\Gamma_{2}^{-\frac{1}{2}}(u-\kappa_{1}(x,x_{c,1}))\right|^{2}\] for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\);7 hence, similarly to (39), the closed-loop system (55) is impossible to implement due to dependence on \(\theta\) in \(C_{2}\), \(D_{2}\), and \(G_{cl,2}\), but, similarly to the controller of Section VI-B, this dependence will be removed with the design of a hybrid controller that is synergistic relative to \(\mathscr{A}_{2}:=\{\mathcal{A}_{2,\theta}\}_{\theta\in\Omega}\) for (31) with robustness margin \(\Omega\) (cf. Remark 7).

Footnote 7: Since \(\Gamma_{2}\in\mathbb{R}^{m\times m}\) is assumed to be positive definite, \(\Gamma_{2}^{-\frac{1}{2}}\) exists and is unique (cf. [34, Section 8.5]).

We are able to prove the following result using arguments similar to those of Proposition 4.
**Proposition 6**.: _Suppose that the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), \(\mathcal{U}\), and the set-valued map \(F_{\theta}\) in (31) satisfy Assumption 1, and that Assumption 2 holds. Given \(\theta\in\Omega\), a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\), and a hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \(\mathcal{A}\) for (31) with synergy gap exceeding \(\delta\), if (42) is satisfied then the hybrid controller \((\kappa_{2},V_{2,\theta},D_{c,2},F_{c,2})\) is synergistic relative to \(\mathcal{A}_{2,\theta}\) for (31) with synergy gap exceeding \(\delta\)._ Proof.: Similarly to the proof of Proposition 4, it is possible to show that properties (C1), (C3) and (C5) follow directly from the fact that \(\mathcal{A}_{2,\theta}\) is compact and from the assumption that \((\kappa_{0},V_{0},D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}\) for (33). It follows from the continuity of \(D_{c,1}\) and \(\kappa_{1}\) that \(D_{c,2}\) is continuous. That \(D_{c,2}\) is compact-valued follows from compactness of \(D_{c,1}\) and continuity of \(\kappa_{1}\), hence (C4) is satisfied. It remains to be shown that properties (C6) and (C7) also hold. It follows from (53) and (56a) that \[\nabla V_{2,\theta} (x,x_{c,2})^{\top}f_{c,l,2}=\nabla V_{0}(x,x_{c})^{\top}\begin{bmatrix} F_{\theta}(x,x_{c},u)\\ f_{c}\end{bmatrix}\] \[-(\theta-\hat{\theta})^{\top}\operatorname{Proj}(\upsilon(x,x_{c, 2}),\hat{\theta})\] \[+(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}\bigg{(}f_{u}(x,x _{c,2})\] (X2) \[-\mathcal{D}(\kappa_{1}(x,x_{c,1}))\begin{bmatrix}F_{\theta}(x,x _{c},u)\\ f_{c}\\ \Gamma_{1}\operatorname{Proj}(\upsilon(x,x_{c,2}),\hat{\theta})\end{bmatrix} \bigg{)}\] for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\) and each \(f_{c,l,2}\in F_{c,l,2}(x,x_{c,2})\), where \(V_{2,\theta}\) is continuously differentiable and \(f_{c}\in F_{c}(x,x_{c})\) is the component of \(f_{c,l,2}\) that describes the dynamics of \(x_{c}\). 
Replacing (51) in (X2), we obtain \[\nabla V_{2,\theta}(x,x_{c,2})^{\top}f_{c,l,2}=\nabla V_{0}(x,x_{c})^{\top}\begin{bmatrix}F_{\theta}(x,x_{c},u)\\ f_{c}\end{bmatrix}\] \[-(\theta-\hat{\theta})^{\top}\operatorname{Proj}(\upsilon(x,x_{c,2}),\hat{\theta})\] \[-k_{u}(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}(u-\kappa_{1}(x,x_{c,1}))\] \[-(u-\kappa_{1}(x,x_{c,1}))^{\top}H(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c})\] \[-(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}\mathcal{D}_{x}(\kappa_{1}(x,x_{c,1}))W(x,x_{c})(\theta-\hat{\theta}).\] (X3) It follows from (P3) and (X3) that \[\nabla V_{2,\theta}(x,x_{c,2})^{\top}f_{c,l,2}\leq\nabla V_{0}(x,x_{c})^{\top}\begin{bmatrix}F_{\theta}(x,x_{c},u)\\ f_{c}\end{bmatrix}\] \[-(\theta-\hat{\theta})^{\top}\upsilon(x,x_{c,2})\] \[-k_{u}(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}(u-\kappa_{1}(x,x_{c,1}))\] \[-(u-\kappa_{1}(x,x_{c,1}))^{\top}H(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c})\] \[-(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}\mathcal{D}_{x}(\kappa_{1}(x,x_{c,1}))W(x,x_{c})(\theta-\hat{\theta}).\] (X4) Replacing (50) in (X4), we obtain \[\nabla V_{2,\theta}(x,x_{c,2})^{\top}f_{c,l,2}\leq\nabla V_{0}(x,x_{c})^{\top}\begin{bmatrix}F_{\theta}(x,x_{c},u)\\ f_{c}\end{bmatrix}\] \[-(\theta-\hat{\theta})^{\top}W(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c})\] (X5) \[-k_{u}(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}(u-\kappa_{1}(x,x_{c,1}))\] \[-(u-\kappa_{1}(x,x_{c,1}))^{\top}H(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c}).\] The control affine structure of (31) allows us to derive the following inequality from (X5): \[\nabla V_{2,\theta}(x,x_{c,2})^{\top}f_{c,l,2}\] \[\leq\nabla V_{0}(x,x_{c})^{\top}\begin{bmatrix}F_{\theta}(x,x_{c},\kappa_{1}(x,x_{c,1}))\\ f_{c}\end{bmatrix}\] (X6) \[-(\theta-\hat{\theta})^{\top}W(x,x_{c})^{\top}\nabla_{x}V_{0}(x,x_{c})\] \[-k_{u}(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}(u-\kappa_{1}(x,x_{c,1})).\] Note that it was proved in Proposition 3 that \[\nabla V_{0}(x,x_{c})^{\top}(F_{\theta}(x,x_{c},\kappa_{1}(x,x_{c,1}))-W(x,x_{c})(\theta-\hat{\theta}))\] \[\leq\nabla V_{0}(x,x_{c})^{\top}F_{0}(x,x_{c},\kappa_{0}(x,x_{c})),\] (X7) thus, from the assumption that \((\kappa_{0},V_{0},D_{c},F_{c})\) is synergistic relative to \(\mathcal{A}\) for (33), we have that \[\nabla V_{2,\theta}(x,x_{c,2})^{\top}f_{c,l,2}\leq\nabla V_{0}(x,x_{c})^{\top}F_{0}(x,x_{c},\kappa_{0}(x,x_{c}))\] \[-k_{u}(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}(u-\kappa_{1}(x,x_{c,1}))\leq 0\] (57) for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\) satisfying \(V_{2,\theta}(x,x_{c,2})<+\infty\) and each \(f_{c,l,2}\in F_{c,l,2}(x,x_{c,2})\), hence property (C6) is satisfied. To verify (C7), let \(\Psi_{2,\theta}\) denote the largest weakly invariant subset of \[(\dot{x},\dot{x}_{c,2})\in F_{cl,2}(x,x_{c,2})\qquad(x,x_{c,2})\in\mathcal{E}_{2}\] with \(\mathcal{E}_{2}:=\{(x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}:\nabla V_{2,\theta}(x,x_{c,2})^{\top}f_{c,l,2}=0\text{ for some }f_{c,l,2}\in F_{c,l,2}(x,x_{c,2})\}\), and let us show that \(\overline{\delta}_{2}:=\inf\{\mu_{V_{2,\theta}}(x,x_{c,2}):(x,x_{c,2})\in\Psi_{2,\theta}\backslash\mathcal{A}_{2,\theta}\}>0\). It follows from (57) that \(\Psi_{2,\theta}\subset\{(x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}:(x,x_{c,1})\in\Psi_{1,\theta},u=\kappa_{1}(x,x_{c,1})\}\) where \(\Psi_{1,\theta}\) is defined in Proposition 4.
It follows from (52) that \[\begin{split}\overline{\delta}_{2}&\geq\inf\{\mu_{V_{2,\theta}}(x,x_{c,2}):(x,x_{c,1})\in\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta},\ u=\kappa_{1}(x,x_{c,1})\}\\ &=\inf\{\mu_{V_{1,\theta}}(x,x_{c,1}):(x,x_{c,1})\in\Psi_{1,\theta}\backslash\mathcal{A}_{1,\theta}\}\end{split} \tag{59}\] which we have shown in Proposition 4 to satisfy \(\overline{\delta}_{2}>0\), under assumption (42). In addition, \(\mu_{V_{2,\theta}}(x,x_{c,2})=\mu_{V_{1,\theta}}(x,x_{c,1})\geq\mu_{V_{0}}(x,x_{c})>\delta(x,x_{c})\) for each \((x,x_{c,2})\in\Psi_{2,\theta}\backslash\mathcal{A}_{2,\theta}\), hence the hybrid controller \((\kappa_{2},V_{2,\theta},D_{c,2},F_{c,2})\) is synergistic relative to \(\mathcal{A}_{2,\theta}\) for (31) with synergy gap exceeding \(\delta\).

To finalize the design of a robust synergistic controller, we provide the construction of the jump map \(G_{c,2}\) in the next proposition.

**Proposition 7**.: _Suppose that the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), \(\mathcal{U}\), and the set-valued map \(F_{\theta}\) in (31) satisfy Assumption 1, and that Assumption 2 holds. Given \(\Omega\) in (32), a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\), and a hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \(\mathcal{A}\) for (31) with synergy gap exceeding \(\delta\), \(\mathscr{A}_{2}:=\{\mathcal{A}_{2,\theta}\}_{\theta\in\Omega}\) with \(\mathcal{A}_{2,\theta}\) in (52), \(\mathscr{V}_{2}:=\{V_{2,\theta}\}_{\theta\in\Omega}\) with \(V_{2,\theta}\) in (53), the hybrid controller \((\kappa_{2},\mathscr{V}_{2},D_{c,2},F_{c,2},G_{c,2})\) where_ \[G_{c,2}(x,x_{c,2}):=\{(g_{c,1},g_{u})\in D_{c,2}(x,x_{c,2}):g_{c,1}\in G_{c,1}(x,x_{c,1})\} \tag{60}\] _for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\) is synergistic relative to \(\mathscr{A}_{2}\) for (31) with robustness margin \(\Omega\) and synergy gap exceeding \(\delta\)._

Proof.: In Proposition 6 we demonstrate that the hybrid controller \((\kappa_{2},V_{2,\theta},D_{c,2},F_{c,2})\) is synergistic relative to \(\mathcal{A}_{2,\theta}\) with synergy gap exceeding \(\delta\) as required by Definition 3. The proof that (C8) is satisfied follows closely the proof of Proposition 5, hence it is omitted here. The outer semicontinuity and local boundedness of \(G_{c,2}\) follows from outer semicontinuity and local boundedness of \(G_{c,1}\) in addition to the continuity of \(\kappa_{1}\), thus (C9) is verified. For each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\) and for each \(g_{c,2}\in\mathcal{X}_{c,2}\), we have that \[\begin{split} V_{2,\theta}&(x,x_{c,2})-V_{2,\theta}(x,g_{c,2})\\ &\geq\min_{\theta\in\Omega}V_{2,\theta}(x,x_{c,2})-V_{2,\theta}(x,g_{c,2}).\end{split} \tag{61}\] From (60), it follows that \(g_{c,2}:=(g_{c,1},g_{u})\) with \(g_{c,1}\) belonging to (45) and \(g_{u}=\kappa_{1}(x,g_{c,1})\).
Replacing (53) in (61) and plugging in the aforementioned values of \(g_{c,1}\) and \(g_{u}\), we have that \[\begin{split} V_{2,\theta}(x,x_{c,2})&-V_{2,\theta}(x,g_{c,2})\\ &\geq\max_{g_{c,1}\in\Omega}\min_{g\in\Omega}V_{1,\theta}(x,x_{c,1})-V_{1,\theta}(x,g_{c,1})\\ &\qquad+\frac{1}{2}(u-\kappa_{1}(x,x_{c,1}))^{\top}\Gamma_{2}^{-1}(u-\kappa_{1}(x,x_{c,1}))\\ &=\max_{g_{c,2}\in D_{c,2}(x,x_{c,2})}\min_{\theta\in\Omega}V_{2,\theta}(x,x_{c,2})-V_{2,\theta}(x,g_{c,2})\end{split} \tag{62}\] for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\) and each \(g_{c,2}:=(g_{c,1},g_{u})\in G_{c,2}(x,x_{c,2})\). Since the \(\max\) and \(\min\) operators in (62) commute as shown in the proof of Proposition 5, it follows that \[V_{2,\theta}(x,x_{c,2})-V_{2,\theta}(x,g_{c,2})\geq\min_{\theta\in\Omega}\mu_{V_{2,\theta}}(x,x_{c,2})\] for each \((x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}\) and each \(g_{c,2}:=(g_{c,1},g_{u})\in G_{c,2}(x,x_{c,2})\), thus verifying (C10).

The hybrid closed-loop system resulting from the interconnection between \((\kappa_{2},\mathscr{V}_{2},D_{c,2},F_{c,2},G_{c,2})\) and (31) is given by: \[(\dot{x},\dot{x}_{c,2})\in F_{cl,2}(x,x_{c,2})\quad(x,x_{c,2})\in C_{\Omega,2} \tag{63a}\] \[(x^{+},x_{c,2}^{+})\in G_{\Omega,2}(x,x_{c,2})\quad(x,x_{c,2})\in D_{\Omega,2} \tag{63b}\] where \[C_{\Omega,2}:=\left\{(x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}:\min_{\theta\in\Omega}\mu_{V_{2,\theta}}(x,x_{c,2})\leq\delta(x,x_{c})\right\}\] \[D_{\Omega,2}:=\left\{(x,x_{c,2})\in\mathcal{X}\times\mathcal{X}_{c,2}:\min_{\theta\in\Omega}\mu_{V_{2,\theta}}(x,x_{c,2})\geq\delta(x,x_{c})\right\}\] and \[G_{\Omega,2}(x,x_{c,2}):=\begin{bmatrix}x\\ G_{c,2}(x,x_{c,2})\end{bmatrix}\quad\forall(x,x_{c,2})\in D_{\Omega,2}.\] The global asymptotic stability of \(\mathcal{A}_{2,\theta}\) for (63) follows from Theorem 2 and it is stated in the next corollary for the sake of completeness. The proof is omitted because it is identical to the proof of Corollary 2.

**Corollary 3**.: _Suppose that the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\), \(\mathcal{U}\), and the set-valued map \(F_{\theta}\) in (31) satisfy Assumption 1, and that Assumption 2 holds. Given \(\Omega\) in (32), a positive function \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\), a compact set \(\mathcal{A}\subset\mathcal{X}\times\mathcal{X}_{c}\), and a hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \(\mathcal{A}\) for (31) with synergy gap exceeding \(\delta\), for each \(\theta\in\Omega\), the set \(\mathcal{A}_{2,\theta}\) is globally asymptotically stable for (63)._

In the next section, we apply the controllers proposed in Sections VI-B and VI-C to global asymptotic stabilization of a setpoint for a two-dimensional system in the presence of an obstacle.

## VII Synergistic Hybrid Feedback for Robust Global Obstacle Avoidance

To demonstrate the applicability of the synergistic adaptive controller of Section VI, we consider the problem of globally asymptotically stabilizing the origin for a vehicle moving on a plane with an obstacle \(\mathcal{N}:=z_{0}+r\overline{\mathbb{B}}\) with \(z_{0}\in\mathbb{R}^{2}\) and \(r>0\) such that the origin is not contained in \(\mathcal{N}\). We consider that the evolution in time of the position \(z\in\mathbb{R}^{2}\backslash\mathcal{N}\) of the vehicle is described by \[\dot{z}=u+\theta \tag{64}\] where \(u\in\mathbb{R}^{2}\) is the input and \(\theta\in\mathbb{R}^{2}\) is an unknown constant.
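As a small illustration (not from the paper's code), the perturbed plant (64) and the obstacle set \(\mathcal{N}\) can be written down directly; the values of \(z_{0}\), \(r\), and \(\theta\) below are those used in the simulation study later in this section, and the Euler step size is an arbitrary choice.

```python
import numpy as np

# Sketch of the planar plant (64), z_dot = u + theta, and the obstacle
# N = z0 + r*B; z0, r and theta follow the simulation setup of Section VII.

z0 = np.array([1.0, 0.0])
r = 0.5
theta = np.array([np.sqrt(2) / 2, np.sqrt(2) / 2])   # unknown to the controller

def in_obstacle(z):
    return np.linalg.norm(z - z0) <= r

def plant_step(z, u, dt=1e-3):
    # Forward-Euler integration of (64); the controller only measures z.
    return z + dt * (u + theta)
```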
We have shown in [7, Section IV] that there exists a diffeomorphism \(\psi\) between \(\mathbb{R}^{2}\backslash\mathcal{N}\) and \(\mathbb{R}\times\mathsf{S}^{1}\), hence global asymptotic stabilization of the origin for (64) is equivalent to the global asymptotic stabilization of \(\psi(0)\) for \[\dot{x}=\mathcal{D}\psi(\psi^{-1}(x))u+\mathcal{D}\psi(\psi^{-1}(x))\theta \tag{65}\] with \(x\in\mathcal{X}:=\mathbb{R}\times\mathsf{S}^{1}\). Before moving to the controller design, we show that Assumption 1 is verified for the particular problem at hand.

**Proposition 8**.: _The sets \(\mathcal{X}:=\mathbb{R}\times\mathsf{S}^{1}\), \(\mathcal{X}_{c}:=\{-1,1\}\), \(\mathcal{U}:=\mathbb{R}^{2}\) and the set-valued map_ \[F_{\theta}(x,x_{c},u):=\mathcal{D}\psi(\psi^{-1}(x))u+\mathcal{D}\psi(\psi^{-1}(x))\theta \tag{66}\] _defined for each \((x,x_{c},u)\in\mathcal{X}\times\mathcal{X}_{c}\times\mathcal{U}\) satisfies Assumption 1 and_

* (\(\star\)) _The intersection between_ \(F_{\theta}(x,x_{c},u)\) _and the tangent space to_ \(\mathcal{X}\) _at_ \(x\) _is nonempty for each_ \((x,x_{c},u)\in\mathcal{X}\times\mathcal{X}_{c}\times\mathcal{U}\)_._

Proof.: To check that the condition (S1) holds, note that the sets \(\mathcal{X}\), \(\mathcal{X}_{c}\) and \(\mathcal{U}\) are closed subsets of \(\mathbb{R}^{3}\), \(\mathbb{R}\) and \(\mathbb{R}^{2}\), respectively. It follows from the fact that \(\psi\) is a diffeomorphism between \(\mathbb{R}^{2}\backslash\mathcal{N}\) and \(\mathbb{R}\times\mathsf{S}^{1}\) that \(\mathcal{D}\psi(\psi^{-1}(x))\) is an isomorphism between the tangent space to \(\mathbb{R}^{2}\backslash\mathcal{N}\) at \(\psi^{-1}(x)\) and the tangent space to \(\mathbb{R}\times\mathsf{S}^{1}\) at \(x\) for each \(x\in\mathcal{X}\) (cf. [29, Proposition 3.6]), thus (\(\star\)) is verified. Since \(\psi\) is a diffeomorphism, it also follows that \(x\mapsto\mathcal{D}\psi(\psi^{-1}(x))\) is continuous, thus \(F_{\theta}\) is also continuous and single-valued, hence it verifies (S2).

**Remark 8**.: The condition (\(\star\)) is pivotal in the verification of the conditions (VC) and (VC') for this particular example, which, in turn, allows us to check the completeness of maximal solutions as shown in Theorems 1 and 2, respectively.

The controller design of Section VI requires the existence of a hybrid controller of the form \((\kappa_{0},V_{0},D_{c},F_{c})\) that is nominally synergistic relative to \[\mathcal{A}:=\{(x,q)\in\mathcal{X}\times\mathcal{X}_{c}:x=\psi(0)\} \tag{67}\] for (66), thus we start by showing that the controller provided in [7, Section IV] satisfies the requirements (C1)-(C7). In this direction, let the controller variable \(x_{c}\) in (31) be a logic variable \(q\) which is either \(1\) or \(-1\) and whose value does not change during flows, i.e., \(x_{c}=q\in\mathcal{X}_{c}:=\{-1,1\}\) and \(\dot{q}=F_{c}(x,q):=0\) for all \((x,q)\in\mathcal{X}\times\mathcal{X}_{c}\), which verifies (C2). Following the controller design of [7, Section IV], let \(\phi_{q}(x):=\begin{bmatrix}x_{1}&\frac{x_{2}}{1-qx_{3}}\end{bmatrix}^{\top}\) for each \(x:=(x_{1},x_{2},x_{3})\in U_{q}:=\{x\in\mathcal{X}:qx_{3}\neq 1\}\) with \(q\in\mathcal{X}_{c}:=\{-1,1\}\). Furthermore, we define \[V_{0}(x,q):=\begin{cases}\frac{1}{2}\left|\phi_{q}(x)-\phi_{q}(\psi(0))\right|^{2}&\text{ if }x\in U_{q}\\ +\infty&\text{ otherwise}\end{cases} \tag{68}\] for each \((x,q)\in\mathcal{X}\times\mathcal{X}_{c}\).
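The following sketch (illustrative, not the repository code) spells out the charts \(\phi_{q}\), the functions \(V_{0}(\cdot,q)\) in (68), and the switching rule \(\mu_{V_{0}}(x,q)=\max\{0,V_{0}(x,q)-V_{0}(x,-q)\}\geq\delta\) that toggles \(q\); here `x_star` stands for \(\psi(0)\) and is assumed to be given.

```python
import numpy as np

# Charts phi_q and synergistic Lyapunov functions V_0(x, q) of (68), with
# x = (x1, x2, x3) and (x2, x3) on the unit circle; x_star plays the role
# of psi(0), which is not computed here.

def phi(x, q):
    x1, x2, x3 = x
    if abs(1.0 - q * x3) < 1e-9:          # outside the chart U_q
        return None
    return np.array([x1, x2 / (1.0 - q * x3)])

def V0(x, q, x_star):
    px, ps = phi(x, q), phi(x_star, q)
    if px is None or ps is None:
        return np.inf
    return 0.5 * np.linalg.norm(px - ps) ** 2

def toggle_logic(x, q, x_star, delta=1.0):
    # Jump (switch chart) whenever the other value of q lowers V_0 by at
    # least delta, i.e. mu_{V_0}(x, q) >= delta.
    mu = max(0.0, V0(x, q, x_star) - V0(x, -q, x_star))
    return -q if mu >= delta else q
```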
Defining \(D_{c}(x,q)=\mathcal{X}_{c}\) for each \((x,q)\in\mathcal{X}\times\mathcal{X}_{c}\) and noting that \(\{(U_{q},\phi_{q})\}_{q\in\mathcal{X}_{c}}\) covers \(\mathbb{R}\times\mathsf{S}^{1}\), we have that the optimization problem in (12) is feasible, hence assumption (C1) is verified. Since each chart \(\phi_{q}:U_{q}\to\mathbb{R}^{2}\) is a diffeomorphism, \(\phi_{q}(x)=\phi_{q}(\psi(0))\) if and only if \(x=\psi(0)\), hence \(V_{0}\) in (68) is positive definite relative to \(\mathcal{A}\) in (67). Moreover, \(V_{0}\) is continuous and \(V_{0}^{-1}([0,c])\) is compact for each \(c\in\mathbb{R}_{\geq 0}\), thus (C3) is verified. Since \(D_{c}\) is constant and equal to the finite set \(\mathcal{X}_{c}\) for each \((x,x_{c})\in\mathcal{X}\times\mathcal{X}_{c}\), we have that \(D_{c}\) is outer semicontinuous, lower semicontinuous and locally bounded, hence (C4) is verified. Condition (C5) is verified for \[\kappa_{0}(x,q)=-\left(\mathcal{D}\psi(\psi^{-1}(x))\right)^{\top}\mathcal{D}\phi_{q}(x)^{\top}(\phi_{q}(x)-\phi_{q}(\psi(0)))\] for each \((x,q)\in\mathrm{dom}\,\kappa_{0}=\{(x,q)\in\mathcal{X}\times\mathcal{X}_{c}:x\in U_{q}\}\). The previous arguments allow us to make the following assertion.

**Proposition 9**.: _Given \(\mathcal{A}\) in (67), the hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) is a synergistic candidate relative to \(\mathcal{A}\) for (65)._

Even though (68) is continuous, it is not Lipschitz continuous everywhere, hence the proof that (C6) holds might not be immediately obvious. From (12c) and using the fact that \(D_{c}(x,q):=\mathcal{X}_{c}:=\{-1,1\}\) for each \((x,q)\in\mathcal{X}\times\mathcal{X}_{c}\), we have that \(\mu_{V_{0}}(x,q)=\max\{0,V_{0}(x,q)-V_{0}(x,-q)\}\) for each \((x,q)\in\mathcal{X}\times\mathcal{X}_{c}\) and, in particular, we have that \(\mu_{V_{0}}(x,q)=+\infty\) for each \((x,q)\) with \(x\not\in U_{q}\), hence, for any function \(\delta\), it follows that each \((x,q)\in\mathcal{X}\times\mathcal{X}_{c}\) satisfying \(x\not\in U_{q}\) does not belong to \(C\). Since \(U_{q}\) is open relative to \(\mathcal{X}:=\mathbb{R}\times\mathsf{S}^{1}\) for each \(q\in\mathcal{X}_{c}\), \(\{(x,q)\in\mathcal{X}\times\mathcal{X}_{c}:x\in\mathcal{X}\backslash U_{q}\}\) and \(C\) are disjoint closed sets, and there exists a neighborhood of \(C\) where \(V_{0}\) is Lipschitz continuous. The generalized derivative of \(V_{0}\) at \((x,q)\) in the direction \(F_{cl,0}(x,q)\) is given by \[V_{0}{}^{\circ}(x,q;F_{cl,0}(x,q))=-\left|\mathcal{D}\psi(\psi^{-1}(x))^{\top}\mathcal{D}\phi_{q}(x)^{\top}(\phi_{q}(x)-\phi_{q}(\psi(0)))\right|^{2}, \tag{69}\] for each \((x,q)\in C\), where \(F_{cl,0}\) is the flow map for the closed-loop system resulting from the interconnection between \((\kappa_{0},V_{0},D_{c},F_{c})\) and (65) with \(\theta=0\) (cf. (15)). It follows from (69) that the growth of \(V_{0}\) along the flows of the closed-loop system is upper bounded by \(0\), hence (C6) is verified. It follows from the fact that \(\psi\) and \(\{\phi_{q}\}_{q\in\mathcal{X}_{c}}\) are diffeomorphisms that Assumption (C7) is satisfied with \(\Psi=\mathcal{A}\) (cf. [7]), thus the following holds.
**Proposition 10**.: _Given \(\mathcal{A}\) in (67) and a continuous function \(\delta:\mathcal{X}\times\mathcal{X}_{c}\to\mathbb{R}\), the hybrid controller \((\kappa_{0},V_{0},D_{c},F_{c})\) is nominally synergistic relative to \(\mathcal{A}\) in (67) for (65) with synergy gap exceeding \(\delta\)._

An additional property of the kind of synergistic feedback presented above is that any function \(\delta\) satisfies (D2) since \(\Psi\backslash\mathcal{A}=\emptyset\). Therefore, any choice of \(\delta\) satisfying (D1) yields global asymptotic stability for the hybrid closed-loop system as proved in Theorem 1. Since Assumption 2 is satisfied, we meet all the requirements for the controller design of Section VI, thus it is only a matter of applying the procedures described therein to obtain an adaptive synergistic hybrid feedback controller that is able to deal with parametric uncertainty. Next, we present some numerical results that illustrate the behaviour of the closed-loop system.

### _Simulation Results_

In this section, we present simulation results of the closed-loop system resulting from the interconnection between (65) and the hybrid controllers that are presented in Section VI, considering that there is an obstacle \(\mathcal{N}:=z_{0}+r\overline{\mathbb{B}}\) with \(z_{0}=\begin{bmatrix}1&0\end{bmatrix}^{\top}\) and \(r=0.5\). Furthermore, we consider that \(\theta=\begin{bmatrix}\sqrt{2}/2&\sqrt{2}/2\end{bmatrix}^{\top}\) and that the controller parameters are \(k_{u}=1\), \(\Gamma_{1}=\Gamma_{2}=I_{2}\), \(\epsilon=1\), \(\theta_{0}=1\), and \(\delta(x,q)=1\) for each \((x,q)\in\mathcal{X}\times\mathcal{X}_{c}\). For this particular choice of \(\Gamma_{1}\), we have that \[\hat{G}(\hat{\theta})=\begin{cases}\hat{\theta}&\text{if }\left|\hat{\theta}\right|\leq\theta_{0}\\ \theta_{0}\frac{\hat{\theta}}{\left|\hat{\theta}\right|}&\text{otherwise}\end{cases}\] for each \(\hat{\theta}\in\Omega+\epsilon\overline{\mathbb{B}}\), which is outer semicontinuous and locally bounded. Figure 1 represents the trajectory of the vehicle starting from rest at \(z(0)=\begin{bmatrix}2&0\end{bmatrix}^{\top}\) for each of the controllers presented in Section VI. It can be verified through both Figure 1 and Figure 2 that the trajectories before and after backstepping are comparable, since the evolution of the distance of the vehicle to the desired setpoint is fairly similar in both cases. The bottom half of Figure 2 depicts the evolution of the estimation error, which has a smaller settling time for the closed-loop system with the controller of Section VI-C than the controller of Section VI-B for this particular simulation. To find out more about the simulation and its implementation, you may explore the source code at [https://github.com/pcasau/synergistic](https://github.com/pcasau/synergistic).

## VIII Conclusion

Synergistic hybrid feedback has taken many forms over the years, depending on the particular dynamical system being studied. The unifying framework for synergistic hybrid feedback that we presented in this paper captures the most salient features of existing synergistic hybrid feedbacks in order to help others distinguish between the particular and the general in different instances of synergistic hybrid feedback across the literature.
In addition, we provided a controller design that starts from an existing synergistic controller and modifies it in order to yield an adaptive controller that is able to compensate for the presence of bounded matched uncertainties in affine control systems. Furthermore, we demonstrated that the proposed controller is amenable to backstepping and can be applied to the problem of global obstacle avoidance.
2309.10569
Task Graph offloading via Deep Reinforcement Learning in Mobile Edge Computing
Various mobile applications that comprise dependent tasks are gaining widespread popularity and are increasingly complex. These applications often have low-latency requirements, resulting in a significant surge in demand for computing resources. With the emergence of mobile edge computing (MEC), it becomes the most significant issue to offload the application tasks onto small-scale devices deployed at the edge of the mobile network for obtaining a high-quality user experience. However, since the environment of MEC is dynamic, most existing works focusing on task graph offloading, which rely heavily on expert knowledge or accurate analytical models, fail to fully adapt to such environmental changes, resulting in the reduction of user experience. This paper investigates the task graph offloading in MEC, considering the time-varying computation capabilities of edge computing devices. To adapt to environmental changes, we model the task graph scheduling for computation offloading as a Markov Decision Process (MDP). Then, we design a deep reinforcement learning algorithm (SATA-DRL) to learn the task scheduling strategy from the interaction with the environment, to improve user experience. Extensive simulations validate that SATA-DRL is superior to existing strategies in terms of reducing average makespan and deadline violation.
Jiagang Liu, Yun Mi, Xinyu Zhang, Xiaocui Li
2023-09-19T12:26:56Z
http://arxiv.org/abs/2309.10569v4
# Task Graph offloading via Deep Reinforcement Learning in Mobile Edge Computing

###### Abstract

Various mobile applications that comprise dependent tasks are gaining widespread popularity and are increasingly complex. These applications often have low-latency requirements, resulting in a significant surge in demand for computing resources. With the emergence of mobile edge computing (MEC), it becomes the most significant issue to offload the application tasks onto small-scale devices deployed at the edge of the mobile network for obtaining a high-quality user experience. However, since the environment of MEC is dynamic, most existing works focusing on task graph offloading, which rely heavily on expert knowledge or accurate analytical models, fail to fully adapt to such environmental changes, resulting in the reduction of user experience. This paper investigates the task graph offloading in MEC, considering the time-varying computation capabilities of edge computing devices. To adapt to environmental changes, we model the task graph scheduling for computation offloading as a Markov Decision Process (MDP). Then, we design a deep reinforcement learning algorithm (SATA-DRL) to learn the task scheduling strategy from the interaction with the environment, to improve user experience. Extensive simulations validate that SATA-DRL is superior to existing strategies in terms of reducing average makespan and deadline violation.

keywords: Mobile edge computing; Task graph; Computation offloading; Reinforcement learning

## 1 Introduction

With the rapid advance of wireless communication technologies and the proliferation of smart mobile devices, such as smartphones, tablets, and wearable devices, these lightweight smart devices have become significant terminals for mobile users (MUs) to access the Internet. Various mobile applications are gaining widespread popularity and are increasingly complex. Moreover, they are often composed of dependent tasks[1] that form a task graph, such as gesture recognition, mobile healthcare, and augmented reality[2; 3]. These applications often have low-latency requirements, resulting in a significant surge in demand for computing resources. Nevertheless, it is difficult for smart devices to effectively support the local execution of these computation-intensive applications due to their naturally lightweight and limited computing resources[4]. The tension between computation-intensive applications and resource-limited smart devices generates a bottleneck for obtaining a high-quality user experience. Mobile edge computing (MEC) has emerged to deploy many small-scale devices, such as micro base stations, edge routers, and roadside units, at the edge of the mobile network in close proximity to users, instead of relying on the remote cloud. Edge computing devices (ECDs) can provide richer computation resources for MUs through wireless access service with reliable low-latency communications[5; 6]. With computation offloading, the tasks of computation-intensive applications may be migrated to ECDs without traversing the core network, so that the processing of these applications can be expedited, resulting in a high-quality user experience. As different tasks in mobile applications involve different workloads and amounts of transferred data, the task scheduling strategy for computation offloading needs to determine which tasks are processed on which ECDs and which tasks are executed locally.
As a result, the task offloading strategy becomes the most significant issue in MEC[7]. Most works have focused on optimal task scheduling strategies for task graph offloading in terms of the application completion time[8], energy consumption[9], and communication service[10]. Due to the NP-hardness of task scheduling optimization, most studies on task graph offloading in MEC are based mainly on heuristic or approximation methods[9; 11; 12; 13]. In real-world scenarios, ECDs in the network are mainly responsible for some specialized functions[14], such as routing, relaying, forwarding, and so on. In this case, ECDs generally only share their idle computing resources with the applications offloaded from MUs. Considering that ECDs are still relatively resource-constrained devices compared to the remote cloud, their idle computing resources are incapable of maintaining persistent performance. The computing capabilities of ECDs may vary with time, making the MEC environment dynamic. However, these aforementioned works rely heavily on expert knowledge or accurate analytical models, so they fail to fully adapt to the dynamic environment, resulting in deadline violations for mobile applications since the actual task execution time may fluctuate frequently[15; 12; 16; 11]. Consequently, a few works have turned to studying computation offloading in MEC with novel techniques that can adapt to the dynamic environment. Deep reinforcement learning (DRL), which combines reinforcement learning (RL) with a deep neural network, is a promising method that adaptively and flexibly makes sequential decisions in a dynamic environment, without the need for expert knowledge. As a result, some researchers have begun to focus on how to solve the problem of computation offloading in dynamic MEC with DRL. A few works investigate task offloading in MEC by utilizing DRL to adapt to environmental variation[17, 18, 19, 20]. He et al. [17] improved the performance of computation offloading for vehicular networks by dynamic orchestration of network, caching, and computing resources. Ning et al. [18] considered the variation of the channel state and the computation capability and proposed a DRL scheme to optimize traffic scheduling and resource allocation in vehicular networks. Chen et al. [19] took into account the time-varying network dynamics to investigate an optimal computation offloading policy in a sliced radio access network. They proposed a DRL-based offloading decision to maximize the long-term utility performance. However, these works focus on coarse-grained task offloading rather than fine-grained task graph offloading[21]. Coarse-grained task offloading regards an application as an indivisible whole and designs the offloading scheme based on applications' requirements on computing resources. This scheme does not divide the mobile application into tasks from its functional point of view and may result in unexpected computation delay due to an unreasonable processing order of tasks with dependency constraints. Many existing works apply DRL to investigate fine-grained offloading in MEC[22, 23, 24]. Lu et al. [22] focused on scenarios with multiple service nodes and task graph applications in heterogeneous MEC. They proposed a fine-grained offloading scheme based on DRL to reduce execution latency, monetary cost, energy consumption, and network usage. Song et al. 
[23] considered the different dynamic preferences between MEC and users to investigate the offloading decision. They proposed a multi-objective optimization strategy based on DRL for task graph applications in MEC. Wang et al.[24] proposed a DRL-based task graph offloading scheme. In their proposal, an off-policy RL empowered by a Sequence-to-Sequence neural network is utilized to make the task offloading plan, while the input of this neural network is the task graph model. Although these above-mentioned works have proposed various task graph offloading strategies, they did not consider the dynamic environment of MEC. This consequently motivates us to consider the time-varying computing capabilities of ECDs to improve the performance of task graph offloading in MEC.

In this paper, we investigate task graph offloading in MEC considering that the computation capabilities of ECDs are finite continuous values varying with time. We first formulate the task graph offloading in MEC as an optimization problem of minimizing the average application completion time. Then, we model the task graph scheduling for computation offloading as a Markov Decision Process (MDP). In this MDP, the characterization of the environment is formulated as the state space, and the task scheduling decisions are abstracted into the action space. The reward with respect to the MDP is defined as the benefit for the agent. To overcome the difficulties caused by the large state space, we design a DRL algorithm based on DQN to learn the task scheduling strategy from the interaction with the environment. Particularly, the major contributions of this paper are summarized as follows.

* The task graph offloading in MEC is formulated as a problem of minimizing the average application completion time.
* The processing of the task graph offloading is abstracted into a sequence of discrete time steps. The interaction between the agent and environment is represented to model an MDP.
* The process of task graph scheduling is modeled as an MDP. According to the processing of the task graph offloading, the state space, the action space, and the reward are formulated, and a DRL algorithm based on DQN is designed to learn the scheduling strategy.
* By conducting extensive simulations, we confirm that the proposed task graph offloading algorithm is superior to existing strategies in terms of reducing average makespan and deadline violation.

The remainder of this paper is organized as follows. Section 2 presents the system model and Section 3 formulates the optimization problem of the task graph offloading in MEC. Section 4 gives the RL-based scheduling mechanism, and Section 5 describes the resource allocation based on RL. Then, the simulation evaluation and conclusions are provided in Section 6 and Section 7, respectively.

Figure 1: Components of MEC system.

## 2 System Model

A MEC system, as shown in Fig. 1, is comprised of many heterogeneous ECDs deployed at the edge of the mobile network. Let \(\mathcal{M}\) express the set of ECDs in the MEC system and \(M=|\mathcal{M}|\) denote the total number of ECDs. Then, we use \(m\) (\(\forall m\in\{1,\ldots,M\}\)) to index the \(m\)-th ECD. These ECDs in \(\mathcal{M}\) communicate with each other via wired links [25]. We indicate the transmission rate between any two ECDs as \(B_{m,m^{\prime}}\), where \(m\neq m^{\prime}\) and \(\forall m,m^{\prime}\in\{1,\ldots,M\}\). Based on the literature [26], \(B_{m,m^{\prime}}\) can be estimated generally.
Due to physical constraints, the computing resources of ECDs are limited. We assume that each ECD has only one processing element (PE), which provides the computing service for task execution in a First Come First Served (FCFS) manner. In practice, the processing capabilities of PEs are finite continuous values varying with time [18]. Moreover, the processing capability state at the next moment is related to that at the previous moment. We assume that the processing capabilities of PEs remain stable over a short period while they are within a level. Therefore, the processing capabilities of PEs can be discretized and quantized into several levels, similar to the literature [18]. Then, let \(\delta_{m}\) express the processing capability of ECD \(m\) in a given state, which can be measured in millions of instructions per second (MIPS). For scenarios in which each ECD has multiple PEs with parallel computing capability, we can add more virtual ECDs and let the data transmission cost among them be 0. Each ECD can periodically broadcast its current status information, such as its own workload and the transmission rate with other ECDs.

In the MEC system shown in Fig. 1, each ECD can provide cellular communication services to many MUs in a specified area. We use \(\mathcal{N}\) to indicate the set of all MUs that the MEC system can serve, and \(N=|\mathcal{N}|\) denotes the total number of MUs. Thus, \(n\) (\(\forall n\in\{1,\ldots,N\}\)) can index the \(n\)-th MU. Each MU needs to process an application composed of multiple dependent tasks. This proposal can also be easily extended to scenarios in which an MU processes multiple applications by adding to the system some new MUs that each process only one application. As a result, \(\mathcal{N}\) also represents the set of all applications and \(n\) can index the \(n\)-th application.1 We can model the application as a directed acyclic graph (DAG) [9]. Then, \(G_{n}(\mathcal{V}_{n},\mathcal{E}_{n})\) indicates a DAG corresponding to application \(n\). \(\mathcal{V}_{n}\) denotes the set of tasks. \(\mathcal{E}_{n}\) is the set of data dependencies, called directed edges, among these tasks. Any task in \(\mathcal{V}_{n}\) is an atomic and indivisible component.

Footnote 1: In this paper, we will interchangeably use three terms, i.e., MU \(n\), \(n\)-th MU and application \(n\), to represent a user application.

There is a decision controller in the MEC system, which resides in some ECDs [25], as shown in Fig. 1. It can periodically collect the real-time status information broadcast from all ECDs. MU \(n\) expects to expedite the execution of its application before the deadline, so it needs to offload application data to the MEC system. To achieve this, MU \(n\) must first send the offloading request to the ECD whose cellular communication service covers MU \(n\). Let \(m_{n}\) indicate the ECD that covers MU \(n\), as shown in Fig. 1. \(B_{n}^{m}\) denotes the transmission rate between ECD \(m_{n}\) and MU \(n\). After ECD \(m_{n}\) accepts this request and receives the application data sent by MU \(n\), the decision controller will extract the tasks ready for execution from ECD \(m_{n}\) and push them into a ready queue \(Q^{r}\). Thereafter, the decision controller assigns the tasks in \(Q^{r}\) according to a scheduling strategy. Moreover, any task can only be executed on one ECD. The result data must be sent back to MU \(n\) after all tasks of application \(n\) are completed in the MEC system.
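To make the notation concrete, the sketch below (illustrative only; the class and field names are not from the paper) encodes an ECD with discretized capability levels \(\delta_{m}\) together with a task graph \(G_{n}(\mathcal{V}_{n},\mathcal{E}_{n})\), anticipating the per-task workloads \(\rho_{ni}\) and edge data sizes \(e_{nij}\) defined below.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Minimal data-structure sketch of the system model: an ECD with quantized
# capability levels (MIPS) and a DAG application with workloads and edge sizes.

@dataclass
class ECD:
    idx: int
    capability_levels: List[float]   # possible values of delta_m, in MIPS
    current_level: int = 0           # index of the level in effect right now

    def delta(self) -> float:
        return self.capability_levels[self.current_level]

@dataclass
class TaskGraph:
    num_tasks: int                           # includes the two dummy tasks
    rho: Dict[int, float]                    # workload of task i (0 for dummies)
    edges: Dict[Tuple[int, int], float]      # e[(i, j)] = data size from v_i to v_j

    def parents(self, j: int) -> List[int]:
        return [i for (i, k) in self.edges if k == j]

    def children(self, i: int) -> List[int]:
        return [k for (j, k) in self.edges if j == i]

# Example: the 8-task graph of Fig. 3 would set rho[0] = rho[7] = 0 and edges
# such as {(0, 1): 3.0, (1, 4): 2.0, ...} (the numbers here are hypothetical).
```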
As MU \(n\) is located within the cellular signal coverage area of ECD \(m_{n}\), the result data needs to be transmitted to MU \(n\) via ECD \(m_{n}\). Any MU \(n\) has the same network topology for task execution, as all ECDs in this system can execute tasks offloaded from MU \(n\). Hence, we only focus on the topology with one MU for simplicity, which is shown in Fig. 2. More MUs connected to the MEC system only increase the number of applications offloaded to this system, which does not affect this study. Built upon this topology with one MU, we can regard MU \(n\) as a fictitious ECD which is the 0-th edge computing device in \(\mathcal{M}\). To model the application data sent by MU \(n\) and the result data received by MU \(n\), we add two dummy tasks to each application in \(\mathcal{N}\). Moreover, the two dummy tasks must be on this fictitious ECD corresponding to MU \(n\). As a result, all tasks in application \(n\) can be offloaded to the MEC system. Considering the above characteristics of the computing offloading, we use the triple elements \(\{r_{n},d_{n},\mathcal{G}_{n}\}\) to indicate the application that MU \(n\) offloads to the MEC system. Here, \(r_{n}\) denotes the time of the offloading request sent by MU \(n\) and \(d_{n}\) is the deadline of application \(n\). In the task graph \(\mathcal{G}_{n}\), \(I_{n}=|\mathcal{V}_{n}|\) expresses the number of tasks in application \(n\). We indicate each task in \(\mathcal{V}_{n}\) as \(v_{ni}\) (\(i\in\{0,\ldots,I_{n}\}\)). Here, \(i\) indexes the \(i\)-th task. For ease of expression, meanwhile, we use \(v_{n0}\) and \(v_{nI}\) to denote the two dummy tasks, respectively. Each task \(v_{ni}\) has a workload \(\rho_{ni}\), which is the number of instructions needed by the PE to process \(v_{ni}\). Generally, \(\rho_{ni}\) can be acquired by profile analytics [27]. Accordingly, \(\rho_{n0}\) and \(\rho_{nI}\) are set to zero since \(v_{n0}\) and \(v_{nI}\) are the dummy tasks added to application \(n\). Due to the dependency constraints among tasks, we represent the directed edge between \(v_{ni}\) and \(v_{nj}\) as \(\varepsilon_{nij}\in\mathcal{E}_{n}\). If there exists output data from \(v_{ni}\) to \(v_{nj}\), we call \(v_{ni}\) the parent of \(v_{nj}\). Accordingly, \(v_{nj}\) is the child of \(v_{ni}\). We use \(p(v_{ni})\) and \(c(v_{ni})\) to define the sets of \(v_{ni}\)'s parents and children, respectively. For the two dummy tasks \(v_{n0}\) and \(v_{nI}\), especially, \(p(v_{n0})\) and \(c(v_{nI})\) are empty sets.

Figure 2: Network Topology.

Under the scheduling strategy made by the decision controller, the output data from a completed task need to be transferred to the ECD where the child tasks are located, due to the data dependencies between tasks. For example, task \(v_{ni}\) is on ECD 2 and \(v_{ni}\)'s child \(v_{nj}\) is on ECD 3 in Fig. 2. After ECD 2 completes task \(v_{ni}\), ECD 2 needs to transmit all the output data to ECD 3 via the communication link since \(v_{ni}\)'s child \(v_{nj}\) is on ECD 3. Let \(e_{nij}\) denote the size of data transfer from \(v_{ni}\) to \(v_{nj}\). Especially, if both \(v_{ni}\) and its child \(v_{nj}\) are on the same ECD \(m\), the transfer data \(e_{nij}\) from \(v_{ni}\) can be delivered directly to \(v_{nj}\) without going through the communication link. To ease the understanding, Fig. 3 shows an example of the task graph model with eight tasks.
This task graph for an application contains eight tasks, where the dotted red circles indicate dummy tasks and the gray circles denote real tasks. The dotted arrows express the application data to be offloaded and the result data to be sent back to the MU, respectively. Specifically, the former are the outgoing arcs attached to \(v_{0}\) and the latter correspond to the incoming arcs attached to \(v_{7}\). Moreover, the numbers attached to the directed arcs, i.e., solid arrows and dotted arrows, denote the size of transfer data.

## 3 Problem Formulation

We define a strategy \(\mathbf{x}_{ni}=(x_{ni}^{0},\ldots,x_{ni}^{M})\) for task \(v_{ni}\) in application \(n\). In particular, the item \(x_{ni}^{m}\) (\(\forall m\in\{0,\ldots,M\}\)) in \(\mathbf{x}_{ni}\) is the binary variable that denotes the scheduling plan. Since each real task \(v_{ni}\) (\(i\notin\{0,I\}\)) can be executed on any real ECD \(m\) (\(m\neq 0\)), we can define \(x_{ni}^{m}\) as follows. \[x_{ni}^{m}=\begin{cases}1,&\text{if ECD $m$ can execute $v_{ni}$},\\ 0,&\text{otherwise}.\end{cases} \tag{1}\] For the dummy tasks \(v_{n0}\) and \(v_{nI}\), they must be on the ECD 0 corresponding to MU \(n\). Hence, we have \[x_{n0}^{0}=1,\quad x_{nI}^{0}=1. \tag{2}\] For the other real tasks \(v_{ni}\), i.e., \(i\notin\{0,I\}\), they need to be offloaded to the MEC system rather than executed locally. Thus, we have \[x_{ni}^{0}=0,\quad i\notin\{0,I\}. \tag{3}\] As any task can only be scheduled to one ECD for execution, we have \[\sum\limits_{m=0}^{M}x_{ni}^{m}=1,\quad\forall i\in\{0,\ldots,I\}. \tag{4}\] If task \(v_{ni}\) is scheduled to ECD \(m\) under the strategy \(\mathbf{x}_{ni}\), we can define the execution time \(E(\mathbf{x}_{ni})\) of \(v_{ni}\) on ECD \(m\) as follows. \[E(\mathbf{x}_{ni})=\begin{cases}0,&\text{if $i\in\{0,I\}$},\\ \sum\limits_{m=1}^{M}\frac{\rho_{ni}}{\delta_{m}}\cdot x_{ni}^{m},&\text{otherwise}.\end{cases} \tag{5}\] Since there exist transfer data \(e_{nij}\) between two tasks \(v_{ni}\) and \(v_{nj}\) with the dependency constraints, we use \(T(\mathbf{x}_{ni},\mathbf{x}_{nj})\) to indicate the data transfer time via the communication link. Considering that \(e_{nij}\) from \(v_{ni}\) can be delivered to \(v_{nj}\) without going through the communication link when \(v_{ni}\) and \(v_{nj}\) are scheduled to the same ECD, we define \(T(\mathbf{x}_{ni},\mathbf{x}_{nj})=0\) when \(\mathbf{x}_{ni}=\mathbf{x}_{nj}\). If \(v_{ni}\) is the task to which the application data offloaded from MU \(n\) is to be transferred, i.e., \(v_{n0}\in p(v_{ni})\), there are two cases of ECD \(m\) that \(v_{ni}\) can be scheduled to. In one case, \(m=m_{n}\), that is, MU \(n\) is in the cellular signal area of ECD \(m\). Thus, MU \(n\) can connect to ECD \(m\) directly. Hence, we have \[T(\mathbf{x}_{n0},\mathbf{x}_{ni})=\frac{e_{n0i}}{B_{n}^{m}},\quad v_{ni}\in c(v_{n0}),\text{ and }m=m_{n}, \tag{6}\] where \(B_{n}^{m}\) is the transmission rate between MU \(n\) and ECD \(m_{n}\), and \(e_{n0i}\) is the size of data transfer from \(v_{n0}\) to \(v_{ni}\). In the other, \(m\neq m_{n}\). The application data offloaded from MU \(n\) must be transferred to ECD \(m\) via ECD \(m_{n}\). Therefore, we calculate \(T(\mathbf{x}_{n0},\mathbf{x}_{ni})\) by \[\frac{e_{n0i}}{B_{n}^{m}}+\sum\limits_{m=1}^{M}\frac{e_{n0i}}{B_{m_{n},m}}x_{ni}^{m},\ v_{ni}\in c(v_{n0})\text{ and }m\neq m_{n}.
\tag{7}\] Furthermore, if \(v_{ni}\) is the task that outputs the result data to MU \(n\), i.e., \(v_{ni}\in p(v_{nI})\), there are also two cases of ECD \(m\). For the first case, \(m=m_{n}\), so that the result output by \(v_{ni}\) on ECD \(m\) can be transferred directly to MU \(n\). Thus, we have \[T(\mathbf{x}_{ni},\mathbf{x}_{nI})=\frac{e_{niI}}{B_{n}^{m}},\quad v_{ni}\in p(v_{nI}),\text{ and }m=m_{n}. \tag{8}\] In the second case, the result output by \(v_{ni}\) on ECD \(m\) must be transferred to MU \(n\) via the ECD \(m_{n}\) such that \(T(\mathbf{x}_{ni},\mathbf{x}_{nI})\) can be defined as \[\frac{e_{niI}}{B_{n}^{m}}+\sum\limits_{m=1}^{M}\frac{e_{niI}}{B_{m,m_{n}}}x_{ni}^{m},\ v_{ni}\in p(v_{nI})\text{ and }m\neq m_{n}. \tag{9}\] Except for the aforementioned cases, \(v_{ni}\) and \(v_{nj}\) are respectively scheduled to real ECDs \(m\) and \(m^{\prime}\) (\(m\neq m^{\prime}\)) which can communicate directly with each other. We have \[T(\mathbf{x}_{ni},\mathbf{x}_{nj})=\sum\limits_{m=1}^{M}\sum\limits_{m^{\prime}=1}^{M}\frac{e_{nij}}{B_{m,m^{\prime}}}\cdot x_{ni}^{m}\cdot x_{nj}^{m^{\prime}},\quad m\neq m^{\prime}. \tag{10}\]

Figure 3: Task Graph Model.

The decision controller can detect the completion time of task \(v_{ni}\) after ECD \(m\) completes \(v_{ni}\). We use \(F(\mathbf{x}_{ni})\) to indicate \(v_{ni}\)'s completion time. If \(v_{ni}\) is the dummy task \(v_{n0}\), i.e., \(i=0\), the time when MU \(n\) sends the application data can be seen as the completion time of \(v_{ni}\). If \(v_{ni}\) is the dummy task \(v_{nI}\), i.e., \(i=I\), moreover, the time when MU \(n\) receives the result data can be regarded as the completion time of \(v_{ni}\). Therefore, we calculate \(F(\mathbf{x}_{ni})\) by \[F(\mathbf{x}_{ni})=\begin{cases}r_{n},&\text{if }i=0,\\ \max\limits_{v_{nj}\in p(v_{nI})}\{F(\mathbf{x}_{nj})+T(\mathbf{x}_{nj},\mathbf{x}_{nI})\}+E(\mathbf{x}_{nI}),&\text{if }i=I.\end{cases} \tag{11}\] For any real task \(v_{ni}\), it can be offloaded to ECD \(m\) (\(\forall m\in\{1,\ldots,M\}\)) for execution. As ECD \(m\) processes \(v_{ni}\) following FCFS, we must take into account the queue delay on the ECD \(m\) and the arrival of input data to \(v_{ni}\). As a result, we can define \(F(\mathbf{x}_{ni})\) as follows. \[F(\mathbf{x}_{ni})=\max\left\{Q(\mathbf{x}_{ni}),\max_{v_{nj}\in p(v_{ni})}A(\mathbf{x}_{nj})\right\}+E(\mathbf{x}_{ni}). \tag{12}\] In Eq. (12), \(Q(\mathbf{x}_{ni})\) expresses the time when ECD \(m\) is ready to execute \(v_{ni}\) under the scheduling plan \(\mathbf{x}_{ni}\). Specifically, \(Q(\mathbf{x}_{ni})=\sum_{m=1}^{M}Q_{m}x_{ni}^{m}\). Here \(Q_{m}\) is the queue delay of ECD \(m\), to which \(v_{ni}\) will be scheduled. Moreover, \(A(\mathbf{x}_{nj})\) is the arrival time of input data to \(v_{ni}\). We can define \(A(\mathbf{x}_{nj})\) as follows. \[A(\mathbf{x}_{nj})=F(\mathbf{x}_{nj})+T(\mathbf{x}_{nj},\mathbf{x}_{ni}),\quad\forall v_{nj}\in p(v_{ni}). \tag{13}\] After all of MU \(n\)'s tasks are completed in the MEC system, the decision controller can obtain the completion time \(\Psi_{n}\) of application \(n\), i.e., the makespan. Therefore, we can calculate \(\Psi_{n}\) based on the maximum completion time among all tasks. Hence, we have \[\Psi_{n}=\max_{v_{ni}\in\mathcal{V}_{n}}F(\mathbf{x}_{ni})-r_{n}. \tag{14}\] All MUs in the MEC system expect to expedite the processing of applications. Therefore, the objective of this paper is to design a scheduling strategy to minimize the average makespan of applications for all MUs.
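The timing model above can be exercised with a compact Python sketch: given a fixed assignment of tasks to ECDs, it propagates Eqs. (5) and (12)-(14) over a topologically ordered DAG. The `Task` class, the callables used for link rates, and the simplification that the FCFS queue of each ECD is filled in the order the tasks are visited are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    tid: int
    workload: float                                  # rho_ni, in instructions
    parents: list = field(default_factory=list)      # [(parent_tid, e_nij), ...]

def makespan(tasks, assignment, capability, link_rate, request_time=0.0):
    """tasks: topologically ordered list of Task; assignment: tid -> ECD index;
    capability: ECD index -> MIPS; link_rate(m, m2): Mbps between two devices."""
    finish, ecd_free = {}, {m: request_time for m in capability}
    for t in tasks:
        m = assignment[t.tid]
        exec_time = t.workload / capability[m] if t.workload > 0 else 0.0   # Eq. (5)
        arrivals = [request_time]
        for p_tid, data in t.parents:                                       # Eq. (13)
            mp = assignment[p_tid]
            hop = 0.0 if mp == m else data / link_rate(mp, m)
            arrivals.append(finish[p_tid] + hop)
        start = max(ecd_free[m], max(arrivals))                             # Eq. (12)
        finish[t.tid] = start + exec_time
        ecd_free[m] = finish[t.tid]                                         # FCFS queue delay
    return max(finish.values()) - request_time                              # Eq. (14)
```

Averaging this quantity over all offloaded applications gives the objective that the scheduling strategy of problem P below tries to minimize.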
We can formulate the optimization problem as follows. \[\mathbf{P}:\ \min_{\{\mathbf{x}_{ni}\}}\frac{1}{N}\sum_{n=1}^{N}\Psi_{n},\quad\text{s.t. constraints (2)--(4)}. \tag{15}\] The calculation of \(\hat{\delta}\) and \(\hat{B}\) does not take into account the devices of mobile users. As the MEC system can serve multiple MUs, each arrived application needs to be calculated by the agent based on Eq. (17). Therefore, we construct a list for each application arriving at different ECDs, in which the application tasks are sorted by the ascending order of the task priority. Obviously, the number of lists is equal to the number of applications to be processed concurrently by the system. We denote the set of the lists as \(\mathcal{H}=\{\xi_{n}\}\) (\(\forall n\in\mathcal{N}\)). As the execution of the task graph is constrained by the dependencies among tasks, the scheduling plan of a task can be made only after the plans of its parents have been made. We call such a task a ready task. When the agent receives the arrival event or the completion event, it will pick out the ready tasks from \(\xi_{n}\) and arrange them into the queue \(Q^{r}\) in an ascending order of their priorities. Subsequently, the agent will fetch the first ready task from \(Q^{r}\), in turn, and make the scheduling plan for it. Built upon such a mechanism, we treat the time when the agent gets each task out of \(Q^{r}\) as a time step. The agent will interact with the environment to plan for task scheduling via a reinforcement learning algorithm at each time step.

### MDP Model of Agent

The MDP model requires that the agent can receive reward signals by observing the state of the environment. The decision controller can periodically collect real-time status information about the system so that the agent can receive reward signals from the state of the environment. The reward signals are used to evaluate the quality of actions available to the agent. Then, the agent selects an action based on the evaluation. Afterward, at the next time step the environment changes into a new state. The agent can receive new reward signals according to the feedback of the environment. The agent chooses actions with the goal of maximizing the sum of the expected reward obtained at each time step. Next, we represent the state space, the action space and the reward function in our MDP as follows.

1) State Space: In an MDP the environment can be abstracted into a state space, which will change from one state to another after an action is selected by the agent. We define the state of the MDP corresponding to the MEC system at time step \(\tau\) as a vector \(\mathbf{s}_{\tau}=(\hat{\hat{B}},B_{n}^{m},\hat{\hat{\delta}},w^{r},w^{m})\), which contains five parameters about computing delay and transmission delay. Specifically, \(\hat{\hat{B}}\) denotes the sum of transmission rates between different ECDs in the network, i.e., \(\hat{\hat{B}}=\sum_{m=1}^{M}\sum_{m^{\prime}=1}^{M}B_{m,m^{\prime}}\); \(\hat{\hat{\delta}}\) expresses the sum of processing capacities of all ECDs in the network, i.e., \(\hat{\hat{\delta}}=\sum_{m=1}^{M}\delta_{m}\); \(w^{r}\) indicates the total workloads of the tasks in the ready queue \(Q^{r}\); \(w^{m}\) is the total workloads of the tasks in the computing queues of all ECDs. Note that all parameters in \(\mathbf{s}_{\tau}\) can be observed by the agent from the real-time status information.

2) Action Space: The MDP uses the action to describe the behavior of an agent.
Specifically, the action is the decision made by the agent after watching the state at a certain time step \(\tau\). We use \(\mathbf{a}_{\tau}\) to denote the action space available to the agent when making scheduling decisions for each task. As each task except for the two dummy tasks can be scheduled to one of the ECDs in the network, \(\mathbf{a}_{\tau}=(a^{0},a^{1},\dots,a^{M})\) expresses the strategy space to schedule task \(v_{ni}\). The item \(a^{m}\) (\(\forall m\in\{0,1,\dots,M\}\)) is the binary variable so that \(\mathbf{a}_{\tau}\) corresponds to the strategy \(\mathbf{x}_{ni}\) defined in Section 3.

3) Reward: The MDP utilizes the reward to express the feedback from the environment to the agent. Specifically, the agent takes an action based on a decision, and then the environment provides feedback in the form of a reward value to the agent. In this work, the reward is the value that the agent calculates based on the state of the environment after it chooses an action for the ready task. The reward is defined as \(\mathbf{r}=\mathcal{U}-\mathcal{D}-\mathcal{P}\). Here \(\mathcal{U}\) is a utility function, \(\mathcal{D}\) is a duration factor, and \(\mathcal{P}\) expresses the penalty factor. We define \(\mathcal{U}\) as follows. \[\mathcal{U}=\beta\cdot\log_{2}\rho_{ni}, \tag{18}\] where \(\beta\) is a weight factor. \(\mathcal{D}\) can be calculated by \[\mathcal{D}=\psi\cdot(\max_{v_{nj}\in p(v_{ni})}A(\mathbf{x}_{nj})+Q(\mathbf{x}_{ni})+E(\mathbf{x}_{ni}))/\rho_{ni}, \tag{19}\] where \(\psi\) is a weight factor. Here, \(A(\mathbf{x}_{nj})\) can be obtained after each of \(v_{ni}\)'s parents \(v_{nj}\) is completed, as \(v_{ni}\) is the ready task fetched from \(Q^{r}\). Moreover, \(\mathcal{P}\) can be defined as follows. \[\mathcal{P}=\eta\cdot(F(\mathbf{x}_{ni})-F_{ni}^{lct})/\rho_{ni}, \tag{20}\] where \(\eta\) is a weight factor.

### Deep Reinforcement Learning

The reinforcement learning algorithm makes the decision based on the quality of actions, i.e., the Q-value, when the agent interacts with the environment. However, in this work, the number of states is so large that it is difficult to express them in a traditional Q-Table. Therefore, we introduce the DQN algorithm to solve the optimal scheduling decision. DQN uses a deep neural network to estimate the Q-value of the optimal decision, which changes the Q-Table updating problem into the calculation of an approximation function. DQN can get similar output actions by the approximate value function after the agent observes the current state. Thus, the shortcoming of traditional Q-Table updating in high-dimensional and continuous problems can be overcome by a function fitting. Particularly, the approximate value function is represented as a parameterized functional form with parameters, as shown in the following formula.
\[\begin{split} Q(\mathbf{s}_{\tau},\mathbf{a}_{\tau},\theta)\gets& Q(\mathbf{s}_{\tau},\mathbf{a}_{\tau},\theta)\\ &+\alpha(\mathbf{r}_{\tau+1}+\gamma\max_{\hat{\mathbf{a}}}Q( \mathbf{s}_{\tau+1},\hat{\mathbf{a}},\theta^{-})\\ &-Q(\mathbf{s}_{\tau},\mathbf{a}_{\tau},\theta)),\end{split} \tag{21}\] where \(\mathbf{s}_{\tau+1}\) denotes the next state after the agent takes the action \(\mathbf{a}_{\tau}\) at time step \(\tau\), \(\mathbf{r}_{\tau+1}\) is the reward that the agent receives at the next time step after taking action \(\mathbf{a}_{\tau}\), \(\hat{\mathbf{a}}\) is the action that maximizes the value at state \(\mathbf{s}_{\tau+1}\), \(\gamma\) is the discount coefficient in the process of accumulating values, and \(\alpha\) is the learning rate. Besides, \(\theta\) and \(\theta^{-}\) are the parameter vectors of the prediction network and the target network, respectively. The DQN algorithm can improve the search speed of traditional Q-Learning by introducing a deep neural network, called the prediction network. The prediction network is responsible for estimating the Q-value at state \(\mathbf{s}_{\tau}\). The algorithm selects an action to make a scheduling decision. To improve the algorithm's stability and diversity, DQN sets up the experience pool and adds the target network. The structure of the target network is consistent with that of the prediction network. The experience pool is the memory space that stores the records of the transition samples captured by the agent from the environment at each time step. Each record consists of four items (\(\mathbf{s}_{\tau},\mathbf{a}_{\tau},\mathbf{r}_{\tau+1},\mathbf{s}_{\tau+1}\)), i.e., the state, the selected action, the received reward and the subsequent state. DQN extracts a batch of samples randomly from the experience pool to reduce the data correlation among samples and mitigate the non-stationary distribution. Specifically, a batch of elements \(\mathbf{s}_{\tau}\) in the transition records is used as the input to the prediction network. Meanwhile, a batch of elements \(\mathbf{s}_{\tau+1}\) in the transition records is transmitted to the target network. During the training process, the parameters of the prediction network are copied to the target network after a certain number of round iterations. In DQN, the Q-value generated by the prediction network during training is called the current Q-value, and that generated by the target network is called the target Q-value. DQN uses the difference between the outputs of the two neural networks to calculate the loss function. Whereafter, the parameters of the prediction network are updated by back-propagation with stochastic gradient descent (SGD). We define the loss function as follows. \[L(\theta)=E[(Q_{tar}-Q_{cur})^{2}], \tag{22}\] where \(Q_{cur}=Q(\mathbf{s}_{\tau},\mathbf{a}_{\tau},\theta)\) is the current Q-value of the state-action pair outputted by the prediction network according to a batch of elements \(\mathbf{s}_{\tau}\) in the samples. \(Q_{tar}\) is the target Q-value defined as follows. \[Q_{tar}=\mathbf{r}_{\tau+1}+\gamma\max_{\hat{\mathbf{a}}}Q(\mathbf{s}_{\tau+1},\hat{\mathbf{a}},\theta^{-}), \tag{23}\] where \(Q(\mathbf{s}_{\tau+1},\hat{\mathbf{a}},\theta^{-})\) is the output of the target network according to a batch of elements \(\mathbf{s}_{\tau+1}\) in the samples.
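As a quick illustration of Eqs. (22)-(23), the following numpy sketch computes the target Q-values and the batch loss; the two callables `predict_q` and `target_q` stand in for the prediction and target networks, and all names are our own assumptions rather than the paper's.

```python
import numpy as np

def dqn_loss(batch, predict_q, target_q, gamma=0.95):
    """batch: iterable of transitions (s, a, r_next, s_next); predict_q / target_q
    map a state to a vector of Q-values over the M + 1 candidate ECDs."""
    errors = []
    for s, a, r_next, s_next in batch:
        q_cur = predict_q(s)[a]                            # Q(s_tau, a_tau, theta)
        q_tar = r_next + gamma * np.max(target_q(s_next))  # Eq. (23)
        errors.append((q_tar - q_cur) ** 2)
    return float(np.mean(errors))                          # Eq. (22) over the batch
```

Minimizing this quantity with SGD only touches the prediction network; the target network is refreshed by copying \(\theta\) into \(\theta^{-}\) every fixed number of steps, as described above.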
## 5 Resource Allocation Based on Reinforcement Learning We proposed a Scheduling strAtegy for the Task grAph based on the Deep Reinforcement Learning, named as SATA-DRL, in order to solve the problem for computing resource allocation in the MEC system. To ease the understanding, we first present the reinforcement learning framework of scheduling strategy, and then introduce the scheduling algorithm for computing resource allocation. ### Reinforcement Learning Framework for Task Graph Scheduling The framework of SATA-DRL is shown in Fig. 4. In this figure the agent is responsible for scheduling the ready tasks in the queue \(Q^{\prime}\) by interacting with the environment of the MEC system. The agent has to be trained iteratively in the environment to determine optimized scheduling decisions. In training the agent first observes state \(\mathbf{s}_{\tau}\) from the environment at the time step \(\tau\), and inputs \(\mathbf{s}_{\tau}\) into the prediction network. According to the Q-value output from the prediction network, an action \(\mathbf{a}_{\tau}\) is chosen for the task scheduling by using the \(\varepsilon\)-greedy exploration. The chosen action \(\mathbf{a}_{\tau}\) is defined as \[\mathbf{a}_{\tau}=\begin{cases}\text{an action chosen randomly},&\text{ with probability }\varepsilon,\\ \arg\max_{\hat{\mathbf{a}}}Q(\mathbf{s}_{\tau},\hat{\mathbf{a}}),&\text{ with probability }1-\varepsilon.\end{cases} \tag{24}\] Whereafter, the environment evolves to the next state \(\mathbf{s}_{\tau+1}\) and outputs the reward \(\mathbf{r}_{\tau+1}\). The agent observes \(\mathbf{s}_{\tau+1}\) and receives \(\mathbf{r}_{\tau+1}\) from the MEC system. The subsequent action is chosen based on the new Q-value after state \(\mathbf{s}_{\tau+1}\) is passed to the prediction network. At the same time, a transition record consisting of the state \(\mathbf{s}_{\tau+1}\), the reward \(\mathbf{r}_{\tau+1}\), the previous \(\mathbf{s}_{\tau}\) and the chosen \(\mathbf{a}_{\tau}\) is deposited into the experience pool. This process works repeatedly. After a few rounds of training, the agent randomly samples a batch of transitions from the experience pool. A batch of elements \(\mathbf{s}_{\tau}\) in these transitions is again transmitted to the prediction network as its inputs. Simultaneously, a batch of elements \(\mathbf{s}_{\tau+1}\) is used as the input to the target network. The agent chooses the maximum Q-value from the output from the target network, which is calculated as \(\max_{\hat{\mathbf{a}}}Q(\mathbf{s}_{\tau+1},\hat{\mathbf{a}})\). For example, the red value 6 in the output table from the target network in Fig 4 is the maximum Q-value. Then, based on Eq. (22), the agent calculates the loss function using the Q-value output from the prediction network and the Q-value output from the target network. Finally, the agent performs gradient descent with the goal of minimizing the variance-error for the parameter updating of the prediction network. ### Resource Allocation Algorithm for Task Graph The proposed algorithm consists of the scheduling algorithm for the task graph, named SATA, and the deep reinforcement learning algorithm, called DRL. SATA algorithm undertakes the preparation for MDP. It takes charge of fetching out all the ready tasks from \(\mathcal{H}\) and pushing them into queue \(Q^{\prime}\) to invoke DRL on receipt of the aforementioned two events. Figure 4: Framework of SATA-DRL. Algorithm 1 shows the processing of the task graph scheduling. 
This algorithm is launched on receipt of the arrival and completion events. In Lines 2-7, when the arrival event triggers, the tasks of an arrived application \(n\) are sorted in list \(\xi_{n}\) by an ascending order of \(F_{ni}^{lct}\), and list \(\xi_{n}\) is appended to \(\mathcal{H}\). This algorithm's Lines 8-19 will be launched when either the arrival event or the completion event triggers. Specifically, Lines 8-11 express that all ready tasks in the set \(\mathcal{H}\) are mapped into \(Q^{\prime}\) by an ascending order of the latest completion time. In Lines 12-19 SATA generates the sequence of discrete time steps for the task graph scheduling by iteratively fetching out a ready task \(v_{ni}\) from \(Q^{\prime}\). SATA obtains the current state \(\mathbf{s}_{\tau}\) from the environment at each time step \(\tau\). Notice that, if the fetched task is the first of all application tasks, the reward is a null value. Otherwise, SATA will calculate the current reward \(\mathbf{r}_{\tau}\) based on Eqs. (18), (19), (20) according to the environment information. In Line 17 SATA passes \(\mathbf{s}_{\tau}\) and \(\mathbf{r}_{\tau}\) to the deep reinforcement learning algorithm, and subsequently receives the action \(\mathbf{a}_{\tau}\) from it. Thus, the task \(v_{ni}\) can be scheduled to the target ECD in light of \(\mathbf{a}_{\tau}\). The SATA algorithm will end when no new applications arrive and no tasks are completed.

```
1: \(\mathbb{O}\leftarrow\varnothing\); \(Q^{\prime}\leftarrow\varnothing\); \(\tau\gets 1\); \(\mathbf{r}\leftarrow\varnothing\);
2: if a new application \(n\) arrives at the MEC system then
3: Add application \(n\) into \(\mathbb{O}\);
4: for each application \(n\in\mathbb{O}\) do
5: Estimate \(F_{ni}^{lct}\) for each \(v_{ni}\) based on Eq. (17);
6: Rank all tasks in list \(\xi_{n}\) by an ascending order of \(F_{ni}^{lct}\);
7: Append \(\xi_{n}\) to \(\mathcal{H}\);
8: for each \(\xi_{n}\in\mathcal{H}\) do
9: if \(v_{ni}\) at the head of \(\xi_{n}\) is a ready task then
10: Map \(v_{ni}\) into \(Q^{\prime}\);
11: Rank \(Q^{\prime}\) by an ascending order of \(F_{ni}^{lct}\);
12: for each \(v_{ni}\) in \(Q^{\prime}\) do
13: Obtain state \(\mathbf{s}_{\tau}\) from the environment at step \(\tau\);
14: if \(\tau>1\) then
15: Calculate \(\mathcal{U}\), \(\mathcal{D}\) and \(\mathcal{P}\) based on Eqs. (18), (19), (20);
16: \(\mathbf{r}\leftarrow\mathcal{U}-\mathcal{D}-\mathcal{P}\);
17: Pass \(\mathbf{s}_{\tau}\) and \(\mathbf{r}\) to the DRL algorithm, and then receive \(\mathbf{a}_{\tau}\) from the DRL algorithm;
18: Schedule \(v_{ni}\) to the target ECD based on \(\mathbf{a}_{\tau}\);
19: \(\tau\leftarrow\tau+1\);
```

**Algorithm 1** SATA (receipt of two events)

The DRL algorithm implements the deep reinforcement learning part. Algorithm 2 shows how SATA makes the scheduling decision based on deep reinforcement learning. This algorithm first constructs two neural networks with stochastic parameters \(\theta\) and \(\theta^{-}\), respectively. Then, it will receive state \(\mathcal{S}\) and reward \(\mathcal{R}\) from the SATA algorithm at each time step. The prediction network outputs the Q-value by inputting state \(\mathcal{S}\). Based on the \(\varepsilon\)-greedy exploration, action \(\mathcal{A}\) is chosen. The above processes are described in Lines 4-7 of this algorithm.
Notice that, if the current time step is processing the first task of all applications to be scheduled, i.e., \(\tau=1\), the algorithm will skip the storage of the experience pool and the computation of the target network, and it will prepare the state and action for storing the experience transition in the next time step. So, Lines 8-16 are skipped and Line 17 is run directly when \(\tau=1\). When \(\tau>1\), the transition consisting of the last state \(\mathbf{s}_{\tau}\) and action \(\mathbf{a}_{\tau}\) as well as the current \(\mathbf{r}_{\tau+1}\) and \(\mathbf{s}_{\tau+1}\) is stored into the experience pool \(\mathbb{D}\), as shown in Line 10. Then, the DRL algorithm will sample a batch \(\mathcal{B}\) of transitions from \(\mathbb{D}\) when the number of time steps is more than the predefined batch size, as shown in Lines 11-12. For each transition in \(\mathcal{B}\), the DRL algorithm needs to calculate the loss function based on the outputs from the prediction network and the target network, respectively, and it finally performs SGD to update the prediction network. These pseudo-codes are shown in Lines 13-16.

```
1: The step size \(K\) for updating the target network; the batch size \(batch\) for sampling transitions;
2: Initialize the prediction network and target network with stochastic parameters \(\theta\) and \(\theta^{-}\), respectively;
3: \(\mathbb{D}\leftarrow\varnothing\);
4: for \(k=1,2,\cdots,K\) do
5: Receive state \(\mathcal{S}\) and reward \(\mathcal{R}\) from the SATA algorithm at step \(\tau\);
6: Get the Q-value in the prediction network using \(\mathcal{S}\);
7: Choose action \(\mathcal{A}\) by utilizing Eq. (24);
8: Pass \(\mathcal{A}\) to the SATA algorithm;
9: if \(\tau>1\) then
10: \(\mathbf{s}_{\tau+1}\leftarrow\mathcal{S}\); \(\mathbf{r}_{\tau+1}\leftarrow\mathcal{R}\);
11: Store transition (\(\mathbf{s}_{\tau}\), \(\mathbf{a}_{\tau}\), \(\mathbf{r}_{\tau+1}\), \(\mathbf{s}_{\tau+1}\)) into \(\mathbb{D}\);
12: if \(k>batch\) then
13: Sample a mini-batch \(\mathcal{B}\) of transitions from \(\mathbb{D}\);
14: for each transition (\(\mathbf{s}_{i}\), \(\mathbf{a}_{i}\), \(\mathbf{r}_{i+1}\), \(\mathbf{s}_{i+1}\)) in \(\mathcal{B}\) do
15: Get \(Q_{cur}\) in the prediction network using \(\mathbf{s}_{i}\);
16: Get \(Q(\mathbf{s}_{i+1},\hat{\mathbf{a}},\theta^{-})\) in the target network using \(\mathbf{s}_{i+1}\), and then calculate \(Q_{tar}\) according to Eq. (23);
17: Perform SGD to update the prediction network;
18: \(\mathbf{s}_{\tau}\leftarrow\mathcal{S}\); \(\mathbf{a}_{\tau}\leftarrow\mathcal{A}\);
19: \(\theta^{-}\leftarrow\theta\);
```

**Algorithm 2** DRL algorithm

The agent can work alone for the task graph applications in the MEC system after the deep neural networks converge. At this moment, the agent can obtain the optimized scheduling decision from the output of the prediction network according to state \(\mathbf{s}_{\tau}\) in each time step. Algorithm 4 shows how the agent makes the task scheduling decision by the output from the prediction network. This algorithm is similar to Algorithm 1 in that it also starts working based on the arrival and completion events. Since the neural network for reinforcement learning has converged, the prediction network can directly output \(Q(\mathbf{s}_{\tau},\hat{\mathbf{a}})\) according to the state \(\mathbf{s}_{\tau}\) obtained from the environment, as shown in Lines 13-14. Then, Algorithm 4 chooses the action \(\mathbf{a}_{\tau}\) that maximizes \(Q(\mathbf{s}_{\tau},\hat{\mathbf{a}})\) for the task scheduling.
```
0: the set \(\mathcal{N}\) of all applications to offload
1: \(\mathbb{O}\leftarrow\varnothing\); \(Q^{\prime}\leftarrow\varnothing\);
2: if a new application \(n\) arrives at the MEC system then
3: Add application \(n\) into \(\mathbb{O}\);
4: for each application \(n\in\mathbb{O}\) do
5: Estimate \(F_{ni}^{lct}\) for each \(v_{ni}\) based on Eq. (17);
6: Rank all tasks in list \(\xi_{n}\) by an ascending order of \(F_{ni}^{lct}\);
7: Append \(\xi_{n}\) to \(\mathcal{H}\);
8: for each \(\xi_{n}\in\mathcal{H}\) do
9: if \(v_{ni}\) at the head of \(\xi_{n}\) is a ready task then
10: Map \(v_{ni}\) into \(Q^{\prime}\);
11: Rank \(Q^{\prime}\) by an ascending order of \(F_{ni}^{lct}\);
12: for each \(v_{ni}\) in \(Q^{\prime}\) do
13: Obtain state \(\mathbf{s}_{\tau}\) from the environment;
14: Get \(\mathbf{a}_{\tau}=\arg\max_{\hat{\mathbf{a}}}Q(\mathbf{s}_{\tau},\hat{\mathbf{a}})\) in the prediction network using \(\mathbf{s}_{\tau}\);
15: Schedule \(v_{ni}\) to the target ECD based on \(\mathbf{a}_{\tau}\);
```

**Algorithm 4** SATA-Work (receipt of two events)

## 6 Performance Evaluation

In this section we evaluate the SATA-DRL algorithm in the simulator built by combining ElasticSim [30] and EdgeCloudSim [31]. Moreover, we compare SATA-DRL with several competing algorithms for the evaluation of task graph scheduling in the MEC system.

### Simulation Setup

There are four ECDs in our simulator, and the processing capability of each MU, i.e., the 0-th edge computing device, is set to 1000 MIPS. In practice, the processing capability of each ECD's PE may change over time. Similar to the literature [18], we divide the processing capability of each ECD's PE into discrete levels: \(\{6000,5500,5000,4500,4000\}\) (MIPS). Each level corresponds to one processing capability state. Thus, the transition of the processing capability level of each ECD can be modeled as a Markov chain, whose state transition probability matrix can be derived as follows [18]. \[\left[\begin{array}{ccccc}0.5&0.25&0.125&0.0625&0.0625\\ 0.0625&0.5&0.25&0.125&0.0625\\ 0.0625&0.0625&0.5&0.25&0.125\\ 0.125&0.0625&0.0625&0.5&0.25\\ 0.25&0.125&0.0625&0.0625&0.5\\ \end{array}\right] \tag{25}\] In our experiments, we let the processing capabilities of the four ECDs transition from one state to another according to this probability matrix after a task is completed on any computing device. Besides, the transmission rate between ECDs is set to 440 Mbps, and that between each MU and ECD \(m_{n}\), i.e., the ECD covering the MU, is set to \(10^{3}\) Mbps. These two transmission rates are similar to the literature [9]. In addition, \(\hat{\delta}\) and \(\hat{B}\) are set to 6000 MIPS and \(10^{3}\) Mbps, respectively. We utilize one type of workflow dataset [32], i.e., Montage with 25 nodes, to simulate the task graph of each MU's application. The workload of a task in an application is set to 500 if the original value of the element _runtime_ recorded in the DAX file is greater than 500, and set to 100 if the original _runtime_ is less than 100. To set the size of transfer data, we need to calculate the average transmission rate \(\overline{B}\) as \((440\times 6+10^{3})/7=520\) (Mbps) in the MEC system. Then, based on the element _size_ recorded in the DAX file, we can calculate a base communication time, named \(bc\). We set \(bc\) to \(size/\overline{B}\) if this value lies within the range \([10^{-3},10^{-2}]\); otherwise, \(bc\) is clipped to \(10^{-3}\) or \(10^{-2}\), respectively.
Based on this, we set the size of the data transfer between tasks in an application to \(bc\cdot\overline{B}\). The size of the data transfer associated with the dummy tasks is set to \(bc\cdot\overline{B}\). Since the offloading requests of MUs are generated randomly, we fetch some applications from Montage workflows with a stochastic time interval, following the Poisson distribution with \(\lambda\).

Figure 5: Cumulative rewards with different \(\lambda\).

Furthermore, to simulate the stochastic arrival of applications in the MEC system, we let MUs randomly generate the offloading request in the cellular communication service area of the four ECDs, following a uniform distribution. In addition, to set the deadline of each application, we assume that each task in a fetched application \(n\) is scheduled to a different ECD with the maximum average processing capability, i.e., \(5000=(6000\times 4+1000)/5\), among the MEC system, and all transfer data among tasks in application \(n\) are ignored. Thus, we can calculate a basic makespan \(MS_{n}\) for each application \(n\). Whereafter, we set the deadline of application \(n\) to \(d_{n}=r_{n}+6\cdot MS_{n}\).

### Setup for Intelligent Agent

There are two neural networks with the same structure in SATA-DRL. In our experiments, these neural networks are fully connected networks consisting of five layers. The numbers of nodes in these networks are set to 128, 64, 32, 16, 5, respectively. The activation function is _linear_ in the first four layers, and it is \(softmax\) in the last layer. We use _Adam_ as the optimization method for the gradient descent and set the learning rate of the agent to 0.0006. Moreover, we set the size of the experience pool for DQN to 200000. In Algorithm 2 the batch size _batch_ is set to 64. \(\beta\) of Eq. (18) is set to 0.6 and \(\psi\) of Eq. (19) is set to 5. Meanwhile, \(\eta\) of Eq. (20) is set to 40. Besides, in Eq. (23), we set the discount factor \(\gamma\) for the calculation of the target Q-value to 0.95.

### Evaluation for Reinforcement Learning

In this section, we use cumulative rewards to evaluate the performance of our reinforcement learning. Considering that MUs can randomly send offloading requests of applications, we evaluate the cumulative rewards of the reinforcement learning with different arrival rates, i.e., \(\lambda=\{5,7,9\}\), when it handles 10 applications. In the experiments, we let 10 applications per episode reach the simulator with different arrival rates, and then launch Algorithm 3 to train the agent and observe the reward values. From Fig. 5, we can observe that the total rewards obtained by the agent are not large at the beginning of the learning process. The total rewards increase as we augment the number of episodes for training. Moreover, the total rewards show relatively stable trends when the agent is trained for more than 600 episodes, which is shown in Fig. 5(c). This indicates that the agent can make a relatively optimized decision in every scheduling step for the task. As a result, the cumulative rewards can converge. The above results demonstrate the convergence performance of SATA-DRL.

### Evaluation for Optimization Objective

To evaluate the optimization performance of SATA-DRL, we compare it with HEFT [33], Zhang's PCP [8] and OnDoc [34]. HEFT is a classic workflow scheduling algorithm for heterogeneous computing. It has been widely used for task graph scheduling in different distributed systems.
Zhang's PCP is a task graph scheduling algorithm for the MEC system, which utilizes the partial critical path to reduce the transfer time between dependent tasks. OnDoc is an online task graph scheduling algorithm for multiple applications in the MEC system. In this group of experiments, we let the four algorithms handle many applications with different arrival rates. We randomly fetch many applications following a Poisson distribution with \(\lambda\) and store them into a file. Each of the four algorithms loads the fetched applications from the file for performance evaluation. To alleviate the randomness of the experimental results, the four algorithms handle the same applications 30 times, respectively. Then, we plot the averages of the results. In the experiments, we evaluate the average makespan and the deadline violation rate for all applications, respectively. The deadline violation rate can reveal how many applications will miss their deadlines. We express the total number of applications as \(|\mathcal{N}|\) and indicate the total number of applications violating deadlines as \(|\mathcal{\overline{N}}|\). Thus, the deadline violation rate is defined as \(|\mathcal{\overline{N}}|/|\mathcal{N}|\times 100\%\). Fig. 6 shows the makespan comparison for handling various applications with \(\lambda=\{5,7,9\}\), respectively. We can observe that SATA-DRL achieves the lowest makespan among the four algorithms. Particularly, the makespans generated by HEFT and Zhang's PCP increase along with the increase of application arrival rates, but SATA-DRL and OnDoc are hardly affected by arrival rates.

Figure 6: Average Makespan Comparison.

This is because HEFT and Zhang's PCP only make the scheduling decision for the tasks of a single application at a time, without considering the arrival of multiple applications. Therefore, the application tasks newly sent from other MUs may still be scheduled to the busy ECDs, resulting in backlogs in the computing queues of ECDs. In addition, the gap between SATA-DRL and OnDoc is relatively small, but the makespan achieved by SATA-DRL is less than that of OnDoc. The reason is that SATA-DRL can learn from the variation of environments and adaptively make more appropriate scheduling decisions for task graph scheduling. Furthermore, we can observe from Fig. 7 that the deadline violation rate of SATA-DRL is much lower than that of the other algorithms. This further illustrates that SATA-DRL can fully orchestrate the computing resources of the system to make scheduling decisions with careful consideration of the variation of environments.

## 7 Conclusions

This paper has investigated the problem of the task graph offloading in MEC, where the computation capabilities of edge computing devices are time-varying. To adapt to environmental changes, we have modeled the task graph scheduling for computation offloading as an MDP. According to the characterization of the environment, we have formulated it as the state space and abstracted the task scheduling decisions into the action space. Moreover, we have defined the reward with respect to the MDP as the benefit for the agent. Built upon the task graph scheduling mechanism, we have designed the SATA-DRL algorithm to learn the task scheduling strategy from the interaction with the environment, improving user experience. Extensive experiment results have also validated the superiority of SATA-DRL, by comparing it with existing algorithms, in terms of reducing average makespan and deadline violation.
## Acknowledgments This work is supported in part by Hunan Provincial Natural Science Foundation of China under Grant 2022JJ50147, in part by the 14-th Five-Year-Plan Project of Hunan Provincial Education Science under Grant ND228128.
2309.07015
Résumé Parsing as Hierarchical Sequence Labeling: An Empirical Study
Extracting information from résumés is typically formulated as a two-stage problem, where the document is first segmented into sections and then each section is processed individually to extract the target entities. Instead, we cast the whole problem as sequence labeling in two levels -- lines and tokens -- and study model architectures for solving both tasks simultaneously. We build high-quality résumé parsing corpora in English, French, Chinese, Spanish, German, Portuguese, and Swedish. Based on these corpora, we present experimental results that demonstrate the effectiveness of the proposed models for the information extraction task, outperforming approaches introduced in previous work. We conduct an ablation study of the proposed architectures. We also analyze both model performance and resource efficiency, and describe the trade-offs for model deployment in the context of a production environment.
Federico Retyk, Hermenegildo Fabregat, Juan Aizpuru, Mariana Taglio, Rabih Zbib
2023-09-13T15:17:29Z
http://arxiv.org/abs/2309.07015v1
# Resume Parsing as Hierarchical Sequence Labeling: An Empirical Study ###### Abstract Extracting information from resumes is typically formulated as a two-stage problem, where the document is first segmented into sections and then each section is processed individually to extract the target entities. Instead, we cast the whole problem as sequence labeling in two levels --lines and tokens-- and study model architectures for solving both tasks simultaneously. We build high-quality resume parsing corpora in English, French, Chinese, Spanish, German, Portuguese, and Swedish. Based on these corpora, we present experimental results that demonstrate the effectiveness of the proposed models for the information extraction task, outperforming approaches introduced in previous work. We conduct an ablation study of the proposed architectures. We also analyze both model performance and resource efficiency, and describe the trade-offs for model deployment in the context of a production environment. Sequence labeling, deep learning, resume parsing + Footnote †: * Casting the task of resume parsing as hierarchical sequence labeling, with line-level and token-level objectives, and presenting an efficient resume parsing architecture for simultaneous labeling at both levels. We propose two variants of this model: one optimized for latency and the other optimized for performance. * A comprehensive set of experiments on resume parsing corpora in English, French, Chinese, Spanish, German, Portuguese, and Swedish, each covering diverse industries and locations. We share our experience in the process of developing such annotations. These experiments compare our proposed system to previous approaches and include an extensive ablation study, examining various design choices of the architecture. * Insights into the process of deploying this model in a global-scale production environment, where candidates and recruiters from more than 150 countries use it to parse over 2 million resumes per month in all these languages. We analyze the trade-off between latency and performance for the two variants of the model we propose. Our empirical study suggests that the proposed hierarchical sequence labeling model can parse resumes effectively and outperform previous work, even with a task definition that involves labeling significantly large text sequences and a relatively large number of entity labels. ## 2 Related Work Our work builds upon prior research on deep learning for sequence labeling, specifically those applying neural networks in combination with Conditional Random Fields (CRFs) to various sequence labeling tasks. Huang et al. (2015) investigated an architecture based on Bidirectional Recurrent Neural Networks (BiRNNs) and CRFs [4]. They use both word embeddings and handcrafted features as initial representations. Lample et al. (2016) extended this architecture by introducing character-based representations of tokens as a third source of information for the initial features [5]. An alternative character-based approach was proposed by Akbik et al. (2018), which uses a BiRNN over the character sequence to extract contextualized representations that are then fed to a token-level BiRNN+CRF [6]. In addition, Devlin et al. (2019) introduce a simple Transformer-based approach that avoids the utilization of CRF. This consists of a pre-trained BERT encoder, which is fine-tuned, followed by a linear classification layer applied to the representation of each token [7]. 
We refer interested readers to the surveys by Yadav and Bethard (2018) and Li et al. (2022) for a more comprehensive review of deep neural networks for sequence labeling [8, 9]. Prior work on parsing resumes usually divides the problem into two tasks, and tackles each separately [1, 2, 3, 10, 11]. The resume is first segmented into sections and groups, and then section-specific sequence labeling models are applied to extract target entities. The early work by Tosik et al. (2015) focuses on the second task only, as they experiment with already-segmented German resumes [1]. They train named entity recognition models for the _contact information_ and _work experience_ sections, each with a small set of labels. The architecture they apply uses word embeddings as direct features for the CRF. Zu et al. (2019) use a large set of English resumes collected from a single Chinese job board to experiment with several architectures for each of the two stages [2]. For segmentation, they classify each line independently (without document context). Then to extract entities, they train different models for each section type. The input to these sequence labeling models is the text of each independent line. While for the line classification task they use manually annotated samples, the sequence labeling models are trained using automatic annotations based on gazetteers and dictionaries. Barducci et al. (2022) work with Italian resumes. They first segment the resume using a pattern-matching approach that relies on a language- and country-specific dictionary of keywords [3]. After this, they train independent sequence labeling models for each section type. The architecture they use for the sequence labeling component is based on the approach described above that uses BERT [7] with a classification layer on top. Finally, Pinzon et al. (2020) work with a small corpus of English resumes [12]. They bypass the segmentation task (ignoring sections and groups) and propose a model that directly extracts entities from the resume text. They use a BiRNN+CRF model for the token-level sequence labeling task. Among the related work we examined, this is the only one that made their dataset public. Nevertheless, a manual examination of the corpus led us to conclude that the sample is far from representative of real-world English resumes and that the labeling scheme they use is limited and inadequate for our scope. We extend the previous work by exploring a joined architecture that predicts labels for both lines and tokens, treating each as a sequence labeling task. Furthermore, as in Pinzon et al. (2020) [12], we unify the extraction of entities for any section. This setup is challenging, since resumes are unusually long compared to typical Information Extraction tasks, and the set of labels for entities is also bigger. But the advantage is the improvement of efficiency in terms of execution time and memory usage, and the simplification of the engineering effort since only one model needs to be trained, deployed, and maintained. Our work is also the first one to study resume parsing in seven languages, with large corpora of resumes selected from many different industry sectors, and using high-quality manual annotations for both the line and token tasks. ## 3 Task Description We cast resume parsing as a hierarchical sequence labeling problem, with two levels: the line-level and the token-level. These two tasks can be tackled either sequentially or in parallel. 
For the first, we view the resume as a sequence of lines and infer the per-line labels that belong to different section and group types. This is a generalization of the task definition used in previous work, where the label (class) for each line is inferred independent of information about the text or the predicted labels of other lines. We assume that section and group boundaries are always placed at the end of a line, which is the case in all the resumes we came across during this project. The label set for this part of the task includes a total of 18 sections and groups, which are listed in Appendix A.1. For the second level, we view the resume as a long sequence of tokens that includes all the tokens from every line concatenated together. We infer the per-token labels that correspond to the different entities. The label set for this part of the task includes 17 entities, which are in turn listed in Appendix A.2. The scope of this paper revolves around the extraction task and therefore we do not focus on the conversion of the original resume (e.g. a docx or pdf file) into plain text format. Rather, the systems studied in this work assume textual input. ## 4 Corpora We built resume parsing corpora in English, French, Chinese, Spanish, German, Portuguese, and Swedish. Some statistics on the corpora are reported in Table 1. For each of these languages, resumes were randomly sampled from public job boards, covering diverse locations and industries. For all but Chinese, we controlled the sampling process in order to enforce diversity in locations. For example, although the English corpus is biased toward the USA, there is a fraction of resumes from other English-speaking countries including the UK, Ireland, Australia, New Zealand, South Africa, and India. Although we did not control for industry variability, we observe a high level of diversity in the selected collections. We then used third-party software to convert into plain text the original files, which came in varied formats such as pdf, doc, and docx. Since this effort is aimed at building a real-world application, annotation quality is highly important. For that purpose, we implemented a custom web-based annotation tool that allows the user to annotate section and group labels for each line of a resume, and to annotate entity labels for each arbitrary span of characters. We developed the annotation guidelines by starting with a rough definition for each label and performing exploratory annotations on a small set of English resumes -a mini-corpus that we later used for onboarding the annotators. The guidelines were then iteratively refined for the whole duration of the project, achieving a stable and rigorous version at the end. In Appendix A we define the section, group, and entity objectives covered in our corpora, and we provide a screenshot of the annotation tool user interface for reference. Each language corpus was managed as an independent annotation project. We recruited 2 or 3 annotators, who are native speakers of the target language and without specifically seeking domain expertise, through an online freelance marketplace. The annotators did not communicate with each other during the process, maintaining the independence of the multiple annotations. Before starting the annotations on the target corpus, we asked each annotator to carefully read the guidelines, and annotate the onboarding English mini-corpus. After reviewing and providing feedback, the annotator was instructed to annotate all the resumes in the target corpus. 
The estimated inter-annotator agreement (IAA) for the corpus in each language, computed as suggested by Brandsen et al. (2020)[13] in terms of \(F_{1}\), ranges from 84.23 to 94.35% and the median is 89.07%. Finally, we adjudicated the independent annotations in order to obtain the gold standard annotations. This process involved resolving any conflicting decisions made by individual annotators through the majority voting method. In cases where a majority decision was not attainable, the adjudicator was instructed to review the decisions of each annotator and apply their own criteria to arrive at a final decision. \begin{table} \begin{tabular}{l r r r} \hline \hline **Corpus** & **Resumes** & **Lines** & **Tokens** \\ \hline English & 1196 & 73.3 & 834.1 \\ French & 1044 & 54.4 & 539.1 \\ Chinese & 1023 & 50.6 & 664.8 \\ Spanish & 846 & 68.6 & 667.4 \\ German & 738 & 80.5 & 608.6 \\ Portuguese & 628 & 73.1 & 773.6 \\ Swedish & 519 & 74.5 & 632.0 \\ \hline \hline \end{tabular} \end{table} Table 1: The number of resumes and the average number of lines and tokens per resume for each language corpus. ## 5 Model Architecture and Training The models we use in this work are based on the BiRNN+CRF architecture. Initial features are first extracted for each token, then combined through bidirectional recurrent layers, and finally passed through a CRF layer to predict the labels. Unless specified otherwise, the input to the model is the entire resume text after applying tokenization. We study two design-decisions: (1) the choice for initial features, and (2) separate models for predicting line and token labels vs. a multi-task model that predicts both jointly. **Initial features**. We explore two alternatives: 1. A combination of FastText [14] word embeddings and handcrafted features, which are detailed in Appendix B. 2. Token representations obtained from the encoder component of a pre-trained T5 [15] model (or an mT5 [16], depending on the language) without fine-tuning. The T5 models are based on the Transformer [17] architecture. For this second case, each line is encoded individually2, and then the token representations for each line are concatenated to obtain the input sequence for the BiRNN+CRF architecture. This is visually described in Figure 2. Preliminary experiments, which are not presented here because of space constraints, showed that avoiding the BiRNN component for this last architecture, i.e. applying CRF directly on the output of the Transformer-based features, obtains markedly worse results. This is because the two layers capture complementary aspects of the context: the Transformer encodes tokens by exclusively considering the context of the current line, while the BiRNN layer on top contextualizes across every line. Because of the typical length of a resume in terms of tokens, we did not explore encoding the whole resume at once with the Transformer encoders used in this work. Footnote 2: Note that résumés are long text sequences, usually longer than 512 tokens (see Table 1). **Single-task vs. Multi-task**. We experiment with: 1. Single-task models that perform either line-level sequence labeling (sections and groups) or token-level sequence labeling (entities). 2. Multi-task models that predict labels for both line-level and token-level tasks simultaneously. Figure 1 illustrates the model variants. The architecture shown in Figure 0(a) is a single-task model for line-level objectives (sections and groups). 
This architecture takes as input the complete sequence of tokens in the resume and predicts one label for each line.

Figure 1: Model variants studied in this work. Blue architecture blocks denote initial features (which are pre-trained, and held fixed during our experiments), while yellow and red blocks denote layers that output a sequence of elements for each token or line.

We train this type of model using only the supervision from the line sequence labeling task. As the diagram shows, a sequence of token representations is transformed into a sequence of line representations, such that the output is expressed in terms of lines, using a pooling operation inspired by Akbik et al. (2018) [6]. In detail, consider the input resume as a long sequence \(\mathbf{X}=(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T})\) of \(T\) tokens, partitioned in \(L\) lines. Each line \(j\) is a subsequence of tokens, starting at \(\mathbf{x}_{a_{j}}\) and ending at \(\mathbf{x}_{b_{j}}\). After extracting the initial features for each token, and feeding these into the token-wise BiRNN layer, we obtain a sequence of token representations \(\mathbf{H}=(\mathbf{h}_{1},\mathbf{h}_{2},\ldots,\mathbf{h}_{T})\), each consisting of a forward and backward component, \(\mathbf{h}_{i}=\overrightarrow{\mathbf{h}}_{i}\oplus\overleftarrow{\mathbf{h}}_{i}\). We then compute the representation for each line \(j\) by concatenating the forward component of the last token with the backward component of the first token: \(\mathbf{r}_{j}=\overrightarrow{\mathbf{h}}_{b_{j}}\oplus\overleftarrow{\mathbf{h}}_{a_{j}}\). The result is a sequence of line representations \(\mathbf{R}=(\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{L})\), which is in turn processed by another BiRNN layer. This aggregation mechanism is depicted in Figure 2. Figure 1b, on the other hand, shows the single-task model for token-level objectives (entities). This second architecture is trained using supervision from the token-level labels only. Finally, a multi-task architecture for predicting both line and token objectives jointly is presented in Figure 1c. It is trained with both supervision signals simultaneously. For this multi-task architecture, the token-level CRF receives as input the concatenation of: (i) the representation of the target token and (ii) the line-level representation of the line in which the token occurs. All the models are implemented using TensorFlow [18].
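As a concrete illustration of the pooling operation just described, the following standalone numpy sketch builds the line representations \(\mathbf{r}_{j}\) from the token-level forward and backward states; the paper's actual models are built in TensorFlow, and the variable names and shapes here are only illustrative assumptions.

```python
import numpy as np

def pool_lines(h_fwd, h_bwd, line_spans):
    """h_fwd, h_bwd: arrays of shape (T, d) holding the forward / backward BiRNN
    states of the T tokens; line_spans: list of (a_j, b_j) inclusive token indices,
    one pair per line. Returns the (L, 2 * d) matrix of line representations r_j."""
    reps = [np.concatenate([h_fwd[b], h_bwd[a]]) for a, b in line_spans]
    return np.stack(reps)

# Toy example: 6 tokens split into a 4-token line and a 2-token line.
T, d = 6, 8
h_fwd, h_bwd = np.random.rand(T, d), np.random.rand(T, d)
print(pool_lines(h_fwd, h_bwd, [(0, 3), (4, 5)]).shape)  # (2, 16)
```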
The \(F_{1}\) scores for the line sequence labeling task (sections and groups) are shown in Table 2b, again for the two single-task models that act only on the line-level task, and the two multi-task models4. Footnote 4: Row 2 of both sub-tables evaluates the same underlying model (but for different tasks), and similarly for row 4 We make some observations. Comparing row 1 with row 3, and also row 2 with row 4, we see that using Transformer-based embeddings yields an improvement of 2.5% in the token-level \(F_{1}\) on English, and a smaller improvement on French, Spanish, Chinese, and Portuguese, but is worse on German and Swedish5. FastText initial features, on the other hand, perform as well or better than Transformer-based features in the line-level task. It is important to consider, though, that the reduced error rate of the Transformer-based model comes at a higher computational cost during inference. This consideration is especially important when the model is deployed in a high-load commercial application where latency is a crucial factor. Footnote 5: Swedish is an outlier, where the Transformer-based models are markedly less accurate. This might be due to the small size of Swedish data used for pre-training mT5. A second important observation is that the multi-task models generally outperform their single-task counterparts for the token sequence labeling task. Additionally, the multi-task model has a significant advantage in a commercial setting. From an operational perspective, the training, testing, integration, and maintenance of a single model is simpler and cheaper than for two models. ### Section-specific Models The simplification of model development and maintenance is even more significant when we contrast the unified multi-task model described above with the typical two-stage approach for resume parsing [1, 2, 3]. The latter requires training and maintaining several models: one for the initial line segmentation task, and then one for entity extraction within each specific section type (e.g. one single-task model for the entities related to contact information, another single-task model for entities related to work experience, etc). Figure 2: Detail of the aggregation of token-level representations into line-level representations (blocks in red), exemplified with a variant using Transformer-based initial features. The BiRNN that contextualizes initial token-level features across every line (blocks in yellow) is needed because a typical résumé does not fit in the maximum input length of the typical Transformer models. By contrast, the unified multi-task model we proposed is used to label all the entities across the whole resume at once, regardless of the section type. This simplification, however, comes at the cost of an increased error rate, since a section-specific model has to decide among a much smaller set of labels and receives a shorter text sequence as input. In this part, we attempt to quantify such degradation. We train section-specific models, i.e. individual models, for the entities for three of the section types: _contact information_, _work experience_, and _education_. Each is trained and evaluated only on the corresponding segment of the resumes. Segmentation is performed using the gold standard annotations for sections, in order to focus our measurements on the token-level task. In Table 3, we report the micro-average \(F_{1}\) scores grouped by the relevant sections, comparing the performance of each section-specific model to the proposed unified, multi-task model.
Results are reported for English, French, and Chinese. We show a loss in \(F_{1}\) ranging from 1% to 5% depending on the section and language. Since the section-specific models benefit from the gold standard segmentation of sections, the results should be considered as an upper bound of the degradation in error rate. A real-world system implemented according to the two-stage approach should expect a compound error carried from the first stage, e.g. the error observed for the Single-task models presented in Table 2b. The aim is to provide the practitioner with a quantifiable assessment of the trade-off between engineering simplicity and task accuracy. ### Analysis and Details on Deployment The results already suggest that the Transformer-based initial features perform generally better for the token-level sequence labeling task. Furthermore, they do not need language-specific handcrafted features, so they can be readily applied to new languages. On the other hand, the alternative set of initial features (the combination of word embeddings and handcrafted features) performs better in the line sequence labeling task for detecting section and group labels. However, in terms of efficiency, our experiments reveal that using word embedding initial features leads to a considerable improvement in time-efficiency during inference, when compared to the Transformer-based features. The inference time for the multi-task model was measured under both feature sets. On a bare-metal server with a single GPU6, we observed a speedup factor of 7 for the FastText models compared to the Transformer-based features. Furthermore, when utilizing CPU-only hardware7, the speedup factor increased substantially to 90. As an example, we note that the multi-task model using FastText initial features, deployed on CPU-only servers via TensorFlow Serving [19], yields a latency of 450 ms per resume without batch processing. \begin{table} \end{table} Table 2: Performance of the model variants for resume parsing in seven languages, expressed as micro-average \(F_{1}\) score in percentage points for the positive labels in the two hierarchical levels of the sequence labeling task: token and lines objectives. For each variant, we report the average of three independent replications using different random seeds. (The single-task model for tokens using FastText features is equivalent to the one proposed by Pinzon et al. (2020) [12].) ### Ablations and Comparison with Previous Work Table 4 presents an ablation study of the proposed architectures in order to empirically support our architectural design choices. Furthermore, some of the ablated variants are re-implementations of systems proposed in previous work and thus act as baselines for the experiments presented above in this section. The first group involves variants that use, as initial features, the combination of FastText word embeddings and handcrafted features. Variant 1 is the multi-task model presented in Table 2a. The first ablation, variant 2, involves replacing the topmost CRF layer with a Softmax layer. Both variants have comparable performance, with a small degradation when Softmax is used. The next ablation, variant 3, removes the BiRNN layer and thus makes the CRF predict the token labels using the initial features directly. This is a re-implementation of the system proposed by Tosik et al. (2015) [1], although they did not share their handcrafted features (and therefore we use those described in Appendix B).
This other ablated variant has a substantial degradation in performance with respect to our proposed model, suggesting that the role played by the BiRNN layer is critical. The second group involves variants that apply frozen Transformers to each line individually, and then concatenate every line to obtain the initial features (this is visually described in Figure 2). Variant 4 is the multi-task model presented in Table 2a. The first ablation, variant 5, involves replacing the T5 (or mT5) encoder with a BERT (or mBERT) encoder [7]. We observe an appreciable degradation in performance, suggesting that the pre-trained T5 family of models produces representations that are more useful for our task. Variants 6 and 7 use T5 and BERT, respectively, but omit the recurrent layer. Both result in a significant degradation of performance with respect to the models including the BiRNN, again showing the importance of the BiRNN for this task. The third group involves variants that also apply Transformers to each individual line, but this time we allow for the Transformer encoder to be fine-tuned with the task supervision. In this case, we do not employ a BiRNN for contextualizing token representations across lines because this would require a much more challenging optimization procedure8 and thus each line is processed independently. Variant 8 involves a BERT encoder (being fine-tuned) that computes representations for each token in the line, and uses a CRF layer to predict their labels. When compared to our proposed model (variant 4), we observe a significant drop in performance, suggesting that the contextualization across different lines in the resume is the critical factor for the performance of the system. Interestingly, when variant 8 is compared to variant 7 --identical, except for fine-tuning-- we do see an improvement in performance, suggesting that without inter-line contextualization, fine-tuning is indeed helpful. Footnote 8: A naive implementation for this procedure would require keeping in memory as many copies of the Transformer as lines in the target resume. In general, the line-wise variants that allow for fine-tuning the Transformer component outperform their frozen-Transformer equivalents, but they are in turn outperformed by our proposed solutions (variants 1 and 4). ## 7 Conclusion Resume parsing is an important task for digitalized recruitment processes, and the accuracy of the parsing step affects downstream recommender systems significantly. In this work, we study resume parsing extensively in seven languages. We formulated it as a sequence labeling problem in two levels (lines and tokens), and studied several variants of a unified model that solves both tasks. We also described the process for developing high-quality annotated corpora in seven languages. We showed through experimental results that the proposed models can perform this task effectively despite the challenges of substantially long input text sequences and a large number of labels. We observed that the joint model is more convenient than the typical two-stage solution in terms of resource efficiency and model life-cycle maintainability, and also found that in some cases the joint model yields better performance. We provided a trade-off analysis of the proposed variants and described challenges for deployment in production environments. The ablation experiments suggest that the BiRNN layer contextualizing across the resume is critical for performance, and that the CRF component further provides a smaller improvement.
Potential directions for future research include the following: using character-based initial features [5, 6] for the FastText variants, as they can complement word embeddings by incorporating information from the surface form of the text and may even offer the opportunity to gradually replace handcrafted features; domain-adapting the Transformer representations with unannotated resumes, considering the reported effectiveness of this technique in enhancing downstream task performance [20]; and building multilingual models to improve sample efficiency for low-resource languages. Furthermore, alternative Transformer architectures designed specifically for long input sequences [21, 22] could be used in order to encode the entire resume in a single pass, while also enabling the possibility to fine-tune the encoder. ## Limitations As discussed in Section 4, despite our best efforts to cover as many locations, industries, and seniority levels, it is not feasible for resume parsing corpora with sizes of up to 1200 resumes to actually contain samples from every subgroup of the population under study. Therefore, we would like to highlight that the findings presented in this work apply specifically to resumes that are similar to those included in the corpora, and may not generalize with the same level of accuracy to other resumes belonging to combinations of location, industry, and work experience that were not seen by the model during training. ## Ethics Statement The system described in this work is intended for parsing resumes of individuals from different backgrounds, located around the globe. Considering the importance of inclusivity in this context, we made a great effort to cover the diversity of the use of language in our corpora with the objective in mind. This helps us to provide high-quality resume parsing for individuals from various industries and locations. Furthermore, the data used for training and evaluating our models consist of resumes that contain sensitive information from real-world individuals. We have taken the necessary privacy and security measures for protecting this information throughout every step of this project. \begin{table} \begin{tabular}{l c c c} \hline \hline **Model variant** & **English** & **French** & **Chinese** \\ \hline _FastText initial features_ & & & \\ 1 IF+BiRNN+CRF & 89.03 & 86.90 & 92.66 \\ 2 IF+BiRNN+Softmax & 88.86 & 86.53 & 92.67 \\ 3 IF+CRF [1] & 65.89 & 64.68 & 67.53 \\ _Transformer initial features_ & & & \\ _(frozen)_ & & & \\ 4 T5+BiRNN+CRF & 90.94 & 88.65 & 92.61 \\ 5 BERT+BiRNN+CRF & 88.91 & 86.34 & 91.79 \\ 6 T5+CRF & 78.65 & 75.40 & 76.91 \\ 7 BERT+CRF & 74.70 & 73.53 & 81.51 \\ _Transformer initial features_, _linewise_ & & & \\ _(fine-tuned)_ & & & \\ 8 BERT+CRF & 83.55 & 85.60 & 86.36 \\ 9 BERT+Softmax [7, 3] & 83.13 & 85.55 & 85.95 \\ 10 T5+Softmax & 84.18 & 85.61 & 86.58 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study. Variants are compared in terms of the micro-average \(F_{1}\) obtained for the token sequence labeling task. Variants 1 and 4 represent the models discussed in the previous part of this section. Other model variants depart from either one of these by changing one aspect at a time. In particular, variant 3 re-implements the system of Tosik et al. (2015) [1], and variant 9 is equivalent to the architecture proposed by Devlin et al. (2019) [7] for other sequence labeling tasks. _IF_ denotes _initial features_. Each result is an average of three independent replications.
2305.19840
BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language
The BEIR dataset is a large, heterogeneous benchmark for Information Retrieval (IR) in zero-shot settings, garnering considerable attention within the research community. However, BEIR and analogous datasets are predominantly restricted to the English language. Our objective is to establish extensive large-scale resources for IR in the Polish language, thereby advancing the research in this NLP area. In this work, inspired by mMARCO and Mr. TyDi datasets, we translated all accessible open IR datasets into Polish, and we introduced the BEIR-PL benchmark -- a new benchmark which comprises 13 datasets, facilitating further development, training and evaluation of modern Polish language models for IR tasks. We executed an evaluation and comparison of numerous IR models on the newly introduced BEIR-PL benchmark. Furthermore, we publish pre-trained open IR models for the Polish language, marking a pioneering development in this field. Additionally, the evaluation revealed that BM25 achieved significantly lower scores for Polish than for English, which can be attributed to high inflection and intricate morphological structure of the Polish language. Finally, we trained various re-ranking models to enhance the BM25 retrieval, and we compared their performance to identify their unique characteristic features. To ensure accurate model comparisons, it is necessary to scrutinise individual results rather than to average across the entire benchmark. Thus, we thoroughly analysed the outcomes of IR models in relation to each individual data subset encompassed by the BEIR benchmark. The benchmark data is available at https://huggingface.co/clarin-knext.
Konrad Wojtasik, Vadim Shishkin, Kacper Wołowiec, Arkadiusz Janz, Maciej Piasecki
2023-05-31T13:29:07Z
http://arxiv.org/abs/2305.19840v2
# BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language ###### Abstract The BEIR dataset is a large, heterogeneous benchmark for Information Retrieval (IR) in zero-shot settings, garnering considerable attention within the research community. However, BEIR and analogous datasets are predominantly restricted to the English language. Our objective is to establish extensive large-scale resources for IR in the Polish language, thereby advancing the research in this NLP area. In this work, inspired by mMARCO and Mr. TyDi datasets, we translated all accessible open IR datasets into Polish, and we introduced the BEIR-PL benchmark - a new benchmark which comprises 13 datasets, facilitating further development, training and evaluation of modern Polish language models for IR tasks. We executed an evaluation and comparison of numerous IR models on the newly introduced BEIR-PL benchmark. Furthermore, we publish pre-trained open IR models for the Polish language, marking a pioneering development in this field. Additionally, the evaluation revealed that BM25 achieved significantly lower scores for Polish than for English, which can be attributed to high inflection and intricate morphological structure of the Polish language. Finally, we trained various re-ranking models to enhance the BM25 retrieval, and we compared their performance to identify their unique characteristic features. To ensure accurate model comparisons, it is necessary to scrutinise individual results rather than to average across the entire benchmark. Thus, we thoroughly analysed the outcomes of IR models in relation to each individual data subset encompassed by the BEIR benchmark. The benchmark data is available at [https://huggingface.co/clarin-knext](https://huggingface.co/clarin-knext). ## 1 Introduction Modern natural language processing (NLP) applications often require support from efficient information retrieval (IR) processes, e.g. in order to efficiently acquire and accurately pre-filter texts. An IR component is necessary in the case of many NLP tasks such as Question Answering, Entity Linking, or Abstractive Summarization. Recently, classic IR models based on lexical matching are typically combined with neural retrievers utilizing large pre-trained language models. The neural language models based on the transformer architecture facilitate solving NLP problems in a multilingual setting due to their cross-lingual alignment originating from pre-training on parallel corpora. Such models have achieved promising results in a zero-shot setting, in which the model is trained on the source language data only and evaluated on the target language. Thus, there is a great need to create cross-lingual evaluation data and benchmarks similar to the monolingual BEIR [29] for many languages. Although existing multilingual evaluation benchmarks try to include as many languages as possible, the Polish language has been much less prominent in IR studies focused on neural models due to the limited availability of Polish IR datasets. The existing studies on dense IR tasks suggest that dense retrieval models outperform lexical matchers, such as the BM25 algorithm [16]. However, measuring the performance gap between lexical retrievers and dense retrieval models is very instructive, as the lexical matchers typically require much fewer computational resources. Moreover, lexical matchers are still a tough baseline for neural retrievers in specific domains.
Our main goal was to create a large scale benchmark dataset for IR in the Polish language, which is especially aimed at zero-shot approaches. Additionally, we wanted to train and evaluate existing IR models known from the literature, which have not yet been thoroughly analysed on Polish datasets, to determine their performance and establish a baseline for future research. Our contributions are as follows. * For the sake of comparison and compatibility, we translated the original BEIR benchmark datasets to the Polish language, a less-resourced language in IR, e.g., Polish was neither covered by the original multilingual MS MARCO dataset [23], nor by the Mr. TyDi benchmark [36], which was a strongly limiting factor for developing dense retrieval models for the Polish language. * We evaluated a BM25-based lexical retriever and demonstrated that lexical retrieval in the Polish language is more challenging when compared to other languages and is not as well-supported. * We fine-tuned five deep neural models for the re-ranking task using different model architectures and sizes that are currently present in the literature. We then compared the performance of these models with a pre-existing multilingual model. * We fine-tuned an unsupervised dense bi-encoder as a retriever on the Inverse Cloze Task (ICT) and compared its performance with an available multilingual sentence embedding model, as well as with lexical BM25 retrieval. * We demonstrated that both the BEIR-PL and original BEIR benchmarks are of heterogeneous nature, and that to accurately compare model performance, it is necessary to closely examine the results of individual datasets rather than relying solely on overall averages across the entire dataset. ## 2 Related work We built upon the idea of the BEIR benchmark dataset [29], as it is precisely focused on the zero-shot evaluation of modern IR systems. Neural IR systems are trained on large datasets such as MS MARCO [4], or synthetic datasets derived from large pre-trained generative language models [3]. MS MARCO has been translated into many different languages [4], but not to Polish, yet. Moreover, even other extensive multilingual benchmarks for IR such as Mr. TyDi [36] - covering many typologically diverse languages - do not include Polish data and so far include only one Slavic language, namely Russian. The two most commonly used and recent benchmarks for the Polish language technology, namely KLEJ [28] and Lepiszcze [1], contain evaluation data for many distinct NLP tasks, but none of them is directly related to IR tasks. ### Passage Retrieval The task of IR is to search for and return documents (i.e. any indexed text objects) that are relevant to a user query from a collection. Collections may consist of millions of documents, which makes the task computationally intensive. Moreover, documents and queries are mostly of significantly different lengths, the language used throughout the documents may vary (e.g., from general to specialized), and the information represented in a collection may cover a broad range of topics. Lexical approaches, e.g., TF.IDF or BM25 [27], have dominated textual IR for many years, mainly due to their manageable computational cost while still offering decent performance. Recently, a strong trend has been observed towards developing neural retriever models that should outperform lexical approaches. Pretrained language models like BERT [8] appeared to be a good basis for dense retrieval approaches.
Bi-encoder architecture as presented in dense passage retriever (DPR) [16] and sentence BERT [26] are commonly used and express high performance, especially on in-domain datasets. The query and document are represented by single vectors, which facilitates applications of fast vector databases i.e. FAISS [15]. The main drawback of such models is their lower performance on out-of-domain data. On the other hand, the BM25 approach achieves better results in such scenario. A potential approach involves utilising a multi-vector representation of the query and document, as exemplified in ColBERT [17]. These approaches utilise the late interaction paradigm combining lexical information with high-level features obtained from different transformer layers. ColBERT encodes documents and queries in multiple vectors, where each output vector corresponds to the input token. During inference time, ColBERT computes the Cartesian product between queries and documents, which can enhance retrieval outcomes, but also necessitates storing a huge index in memory. To improve performance of single vector representations, it was shown that language models have structural readiness, and it is possible to pre-train the model towards bi-encoder structure [10]. Also, we can explicitly aggregate sentence-level representation with token representation, which is obtained using a weight matrix from Mask Language Modeling (MLM) pre-training task and treating all input sentence tokens as a probability distribution over model dictionary [20]. The aggregated tokens representation, in conjunction with CLS vector, is fine-tuned to the retrieval task. ### Unsupervised pretraining Unsupervised methods are mainly aimed at zero-shot schemes. In IR, most methods focus on data augmentation and generation of pseudo queries. Inverse Cloze Task (ICT) [19] resembles finding evidence to answer a question. In contrast to the standard Cloze task - predicting masked text given its context - the ICT requires anticipating the context given a sentence. The unsupervised equivalent of the question-evidence pair is the sentence-context pair - the context of a sentence has semantic meaning and can be used to infer information that is not directly expressed in the sentence. Using ICT pre-training, we can achieve zero-shot evidence retrieval performance sufficient for bootstrapping the latent-variable learning. Another method of building positive query-document pair instances without supervision is dropout as a Positive Instance (DaPI) from SimCSE [11]. To perturb the input sentence representation, dropout is applied to transformer models' fully-connected layers and attention probabilities. As a result, an obtained representation can be treated as a positive instance pair with the same sentence but different hidden dropout mask. The promising performance of both methods and evaluation on English BEIR benchmark was shown in LaPraDor [34]. ### Passage Re-ranking BERT [8] enabled approaches based on cross-encoders, e.g., [26], in which we obtain a joint embedding of a document and an input query, on the token level. In this approach, BERT processes a document and a query simultaneously, scoring their relationship. Due to computational cost, cross-encoders are particularly popular in two-stage retrieval architectures. The first stage extracts the most relevant documents with a light and fast model (e.g., BM25 [27]). Cross-encoders are used in the next stage for re-ranking. 
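As a rough illustration of the two-stage retrieve-then-rerank setup discussed here, the sketch below pairs a BM25 first stage with a cross-encoder re-ranker. The toy corpus and query are invented, the rank_bm25 package is used for brevity (whereas this work relies on Elasticsearch for BM25), and the checkpoint name is the multilingual mMiniLM re-ranker referenced later in the paper; none of this should be read as the exact pipeline used for the reported results.

```python
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

# First stage: fast lexical retrieval with BM25; second stage: a cross-encoder
# re-scores the top-k candidates. Corpus, query and k are illustrative only.
corpus = ["Warszawa jest stolicą Polski.",
          "BM25 to klasyczna metoda dopasowania leksykalnego.",
          "Modele typu cross-encoder oceniają parę zapytanie-dokument łącznie."]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

query = "Jaka jest stolica Polski?"
scores = bm25.get_scores(query.lower().split())
top_k = np.argsort(scores)[::-1][:2]                 # candidate documents

reranker = CrossEncoder("cross-encoder/mmarco-mMiniLMv2-L12-H384-v1")
pairs = [(query, corpus[i]) for i in top_k]
rerank_scores = reranker.predict(pairs)              # joint query-document scores
best = top_k[int(np.argmax(rerank_scores))]
print(corpus[best])
```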
A re-ranker, e.g., a cross-encoder, recomputes document scores from the first stage (see Figure 1). Alternatively, generative sequence-to-sequence language models were also proposed for re-ranking. MonoT5 [24] is an adaptation of the T5 model [25] for the IR task. The model achieved state-of-the-art results in the zero-shot setting. The sequence-to-sequence model is triggered by a prompt containing a query followed by a document. The model is expected to assess their relevance by producing a "true" or "false" token in a generative way. The idea of the two-stage approach is presented in Figure 1. Initially, a set of top K documents is retrieved using techniques such as BM25. Subsequently, the retrieved documents are re-ranked based on the query using a re-ranker. Figure 1: In the retrieval with re-ranking setting, in the first stage, the top@k most relevant documents are retrieved by the fast but inaccurate model. In our case, it was BM25. Afterwards, the documents are re-ranked by a more powerful and more accurate model. ### BEIR benchmark BEIR is a benchmark for zero-shot IR encompassing various tasks - their sizes are shown in Table 2. The authors of the BEIR benchmark aimed at obtaining a large-scale data collection representing diversified IR tasks, with various features of text data and queries, e.g. collecting queries and documents of different lengths and style, also originating from different domains, not only news or Wikipedia articles. Different domains are meant to represent real-world data settings and should be challenging for various methods. Moreover, the datasets were annotated by utilising different annotation strategies, e.g. performed by crowd-workers but also experts in specific cases. The original BEIR benchmark includes the following data subsets: * **MS MARCO** - a large-scale dataset for IR and Question Answering tasks. It consists of over 180K Bing questions and human-generated answers. It is a core dataset for the BEIR benchmark. All models are trained on MS MARCO and evaluated on other datasets. * **Natural Questions**[18] is a dataset consisting of questions from the Google search engine and human-annotated answers based on Wikipedia articles. * **NFCorpus** - a dataset for IR in Medicine, composed of queries from NutritionFacts. * **HotpotQA** - a dataset dedicated to multi-hop Question Answering, which requires reasoning over multiple paragraphs. It is based on Wikipedia articles as the information source. * **TREC-COVID** - a dataset created for the purpose of helping in COVID-19 and enabling easier access to reliable information in case of future pandemics. It contains scientific articles related to COVID-19. * **FiQA** - a financial-aspect-based dataset used in IR and QA tasks. The dataset can also be used in other NLP tasks, such as aspect-based sentiment analysis. * **DBPedia** - an entity search dataset based on the DBpedia knowledge base with queries in natural language. * **CQADupstack** - a dataset based on StackExchange sub-forums. It may be useful for IR and classification tasks. CQADupstack consists of several subsets: Android, English, gaming, GIS, Mathematica, physics, programmers, stats, Tex, Unix, webmasters, WordPress. * **Quora** - this dataset is slightly different. Instead of question-answer pairs, it includes question-question pairs. The main task underlying this dataset is prediction if two questions are duplicates. It is used in the BEIR benchmark as a text comprehension task. * **SciDocs** - a large dataset of scientific documents targeted to be used in various NLP tasks. * **SciFact** - a dataset for fact-checking tasks in the area of scientific claims.
* **Touché-2020** - a conversational argument retrieval dataset, where the assumed task is to retrieve an argument on a socially important topic or everyday personal decision. * **ArguAna** - a dataset focused on retrieving the best counterargument to a given argument. Examples are scraped from an online debate portal. ## 3 Methodology In this section, we present the steps taken to create the BEIR-PL benchmark dataset. As our aim was to build a large-scale benchmark as a reference point for comparing different IR models in Polish, we decided to translate the entire BEIR benchmark using automated Machine Translation. Subsequently, we trained and evaluated baseline models on the newly created resources. Baseline models will be publicly available to the research community. The selection of baseline models was dictated by recent advances in dense information retrieval and reranking models existing in the literature. As the IR field is dominated by bi-encoder-based approaches, we selected several models representing different ways of using the bi-encoder architecture. The reranking models usually rely on large pretrained language models such as cross-encoders and generative sequence-to-sequence models. Thus we also included a few reranking models in our model collection. ### Translation of the datasets To create a large-scale resource for information retrieval, it is necessary to obtain a significant number of annotated query-passage pairs. However, the high cost of the annotation procedure can make this infeasible. Additionally, linguistic translation from foreign languages over millions of documents is both demanding and costly. As a result, machine translation can serve as a cost-effective solution to enrich resources in low-resource languages such as Polish. In order to process and translate the available BEIR benchmark's datasets into the Polish language, we used the Google Translate service. This service has been previously used to translate the mMARCO [4] dataset into various languages, but unfortunately, the Polish language was not included in this study. It has been shown during the development of mMARCO that the results obtained from Google Translate were better than the translation by the available open-source machine translation model, namely Helsinki [30], which can be downloaded from the HuggingFace1 repository. For that reason, we decided to rely on the Google Translate service. Footnote 1: [http://huggingface.co](http://huggingface.co) We keep the data in a format analogous to the format proposed in the original BEIR benchmark, including the existing division into _queries_, _corpus_, and _qrels_. Queries are pre-defined natural language questions used to evaluate the performance of an IR system, while corpus refers to the set of documents that the system searches to find answers to the queries. Qrels, on the other hand, represent the relevance judgments indicating the relationships between the queries and documents in the corpus. Queries and corpus are stored in a JSONL format and qrels in a TSV format. That ensures that our resource can be treated as an extension to and be fully compatible with the multilingual BEIR benchmark in the future. The size of the obtained resource, as illustrated in Table 2, makes its manual verification very laborious. Instead, in order to search for potential problems, we have used a multilingual contextual embeddings model, namely LaBSE, to compare source texts and their translations.
We could observe high similarity across all subdatasets, and selective manual inspection showed that no systematic translation errors could be spotted. In addition, our experimental results on testing various IR models, reported in the further part of the paper, and their comparison with the results obtained for other languages, suggest good quality of the obtained Polish dataset, i.e. the results obtained for Polish are mostly slightly lower, but this can be attributed to the higher complexity of the highly inflectional Polish (like Russian) - as illustrated by the BM25 results - and to the lower performance of the contextual embedding models used as a basis for re-rankers and dense retrieval models. ### Baseline models This section briefly describes the baseline IR models we used for our evaluation study. The main baseline was computed using lexical matching with the BM25 implementation from the Elasticsearch engine2. This is a standard baseline method used in IR, which has demonstrated strong performance and computational efficiency across various domains. It is also typically used in the first stage of retrieval, as shown in Figure 1. The baseline neural models can be divided into the following categories: Footnote 2: [https://www.elastic.co/](https://www.elastic.co/) * **Bi-encoders** - these models can be used as both retriever and reranker models. We evaluated three BERT-only bi-encoder models: the unsupervised bi-encoder based on the ICT technique [19] with HerBERT [22] as its core, and two pre-existing multilingual models (pre-trained on English data), namely LaBSE [9] and mMiniLM [4]. * **Cross-encoders** - these models can be used only as rerankers due to their computational inefficiency. We evaluated two HerBERT-based models and two T5-based models [25]. The models were pre-trained for the Polish language on translated MSMARCO data. Finally, we evaluated one late-interaction reranker based on the ColBERT architecture [17]. #### 3.2.1 Unsupervised dense bi-encoder To evaluate the unsupervised models and check how well they perform on the benchmark data, we decided to fine-tune the HerBERT-base model [22] with the ICT unsupervised task on the BEIR-PL benchmark datasets. For each document, a pseudo-query was generated and used as a positive training instance. We utilized the model as a bi-encoder, in which it encodes queries and documents independently into single dense vector representations. Those vector representations can be compared using cosine similarity and saved to create a dense index of documents. #### 3.2.2 HerBERT based reranker models We further evaluated re-ranker models in a setting where the top 100 search results retrieved by BM25 are presented as input to the model. The model output is the re-ranked order of documents corresponding to the query. We trained HerBERT-base and HerBERT-large re-rankers on the BEIR-PL MS MARCO dataset. #### 3.2.3 Polish T5 based re-ranker models Furthermore, we trained and evaluated sequence-to-sequence MonoT5 re-rankers based on plT5 language models [6], in both base and large variants. We used special tokens _prawda_ ('true') and _fałsz_ ('false') to represent positive and negative relevance between query and passage. This architecture is composed of an encoder and a decoder, which may lead to different performance compared to HerBERT, which is a BERT-based model [8]. ### Late interaction HerBERT based re-ranker model Finally, we trained and evaluated a late-interaction model, ColBERT [17], with HerBERT-base as its core language model.
The maximum document length was set to 180 tokens in our configuration. Creating an index for the retrieval task using ColBERT requires large disk and memory space - the model stores in memory the tokens obtained from corpus encoding (the index size of MSMARCO-PL is estimated to be at least 200GB). Due to that reason, we decided to use ColBERT as a re-ranker, which does not require creating enormous indexes. #### 3.3.1 Pre-existing multilingual models We compared our models with already available multilingual models, to check their performance on Polish language. \begin{table} \begin{tabular}{l r r r r r r r r r r} \hline \hline **Metric** & \multicolumn{1}{c}{\multirow{2}{*}{**Corpus size**}} & \multicolumn{1}{c}{\multirow{2}{*}{**Corpus size**}} & \multicolumn{1}{c}{\multirow{2}{*}{**Avg. Q Len**}} & \multicolumn{1}{c}{\multirow{2}{*}{**Avg. D Len**}} \\ \hline NDCG@10 & PL & 31.5 & 45.0 & 26.9 & 16.0 & 43.6 & 15.3 & 34.2 & 25.9 & 18.5 & 11.4 & 54.6 \\ NDCG@10 & EN & 47.7 & 68.9 & 34.3 & 32.6 & 60.2 & 25.4 & 47.2 & 32.5 & 32.1 & 16.5 & 69.1 \\ \hline Recall @100 & PL & 25.4 & 6.9 & 19.9 & 47.2 & 60.1 & 35.0 & 84.6 & 48.6 & 23.9 & 27.3 & 85.1 \\ Recall @100 & EN & 45.0 & 11.7 & 26.0 & 78.3 & 76.3 & 54.9 & 95.2 & 62.1 & 43.5 & 36.8 & 92.0 \\ \hline \hline \end{tabular} \end{table} Table 1: An overall comparison between Polish (after translation) and English (original) lexical retrievers using BM25 matching on test data from BEIR benchmark. The retrievers were evaluated using NDCG @10 and Recall @100 evaluation metrics. \begin{table} \begin{tabular}{l r r r r} \hline \hline **Dataset** & **\#Test queries** & **Corpus size** & **Avg. Q Len** & **Avg. D Len** \\ \hline MSMARCO & 43 & 8.8M & 5.33 & 49.63 \\ TREC-COVID & 50 & 171K & 9.44 & 137.05 \\ NCPorus & 323 & 3.6K & 3.37 & 205.96 \\ NQ & 3 452 & 2.68M & 7.33 & 66.89 \\ HotpotQA & 7 405 & 5.2M & 15.64 & 38.67 \\ FiQA & 648 & 57K & 9.76 & 113.96 \\ ArguAna & 1 406 & 9K & 168.01 & 142.48 \\ Touche-2020 & 49 & 382K & 7.12 & 125.48 \\ CQADupstack & 13 145 & 547K & 7.86 & 110.76 \\ Quora & 10 000 & 523K & 8.13 & 9.85 \\ DBPedia & 400 & 4.63M & 4.82 & 41.61 \\ SciDocs & 1 000 & 25K & 9.70 & 150.15 \\ SciFact & 300 & 5K & 11.74 & 187.66 \\ \hline \hline \end{tabular} \end{table} Table 2: Number of queries, corpus size, average query and document word length across all datasets in BEIR-PL benchmark. As a bi-encoder, we tested an already pre-trained multilingual LaBSE model. This model was fine-tuned to the sentence embedding task and showed competitive results compared with other fine-tuned multilingual retrievers described on the SentenceTransformers page3. Footnote 3: [https://www.sbert.net/](https://www.sbert.net/) In case of fine-tuned multilingual reranker model, we decided to test mMiniLM model [4] available on Hugging Face platform4. The model was fine-tuned to re-rank queries and documents from the multilingual MS Marco corpora and was trained on 80M training examples. Footnote 4: [https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1) ### Experimental setup Our models (all except ColBERT) were trained on two NVIDIA-RTX3090 GPUs with 24GB of memory each. The unsupervised HerBERT-base ICT bi-encoder was trained for 203 hours with batch size 64 for about 1.8M iterations. The learning rate hyperparameter was set to \(2e^{-4}\). 
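To illustrate how ICT training pairs and the accompanying contrastive objective can be set up for such a bi-encoder, a minimal sketch is given below. The sentence-sampling heuristic, the temperature value and the random embeddings standing in for HerBERT outputs are assumptions for illustration only, not the exact training code used in this work.

```python
import random
import torch
import torch.nn.functional as F

# Sketch of Inverse Cloze Task (ICT) pair construction: a sentence sampled from
# a passage acts as a pseudo-query, and the remaining context is its positive
# document. Encoders and hyper-parameters below are placeholders.

def ict_pair(passage_sentences):
    i = random.randrange(len(passage_sentences))
    pseudo_query = passage_sentences[i]
    context = " ".join(passage_sentences[:i] + passage_sentences[i + 1:])
    return pseudo_query, context

def in_batch_ict_loss(query_vecs, doc_vecs, temperature=0.05):
    """query_vecs, doc_vecs: (batch, dim) bi-encoder embeddings; doc_vecs[j]
    is the positive context for query_vecs[j], the rest act as in-batch
    negatives."""
    q = F.normalize(query_vecs, dim=-1)
    d = F.normalize(doc_vecs, dim=-1)
    logits = q @ d.T / temperature        # cosine similarity matrix
    targets = torch.arange(q.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for HerBERT outputs.
print(ict_pair(["Zdanie pierwsze.", "Zdanie drugie.", "Zdanie trzecie."]))
q_emb, d_emb = torch.randn(8, 768), torch.randn(8, 768)
print(in_batch_ict_loss(q_emb, d_emb).item())
```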
For fine-tuned variants of HerBERT-based re-rankers, we trained the models for \(\approx\)25 hours with a batch size of 32. The HerBERT-base model was trained on 20M examples, and the HerBERT-large model on 3.2M. Both T5-based models were trained for 20 hours with gradient accumulation steps set to 16 and batch size 16. The T5-base model was trained on 5M examples, and the T5-large model on 645K examples. The difference in training size between the base and large models was caused by the computational cost of model training. The ColBERT model was trained on a single NVIDIA GeForce RTX 2080 Ti GPU with 12GB of memory. The model was trained for 102h with batch size 8 and learning rate set to \(3e^{-6}\). Moreover, the maximum number of tokens was set to 180 for documents and 32 for queries. ### Evaluation metrics For the comparison of the models, we applied the most commonly used metrics in IR: * _Mean Reciprocal Rank_ (MRR@k) - the official MS MARCO metric. MRR@k measures the quality of the ranking with regard to the first relevant passage, \[MRR@k=\frac{1}{|Q|}\sum_{i=1}^{|Q|}\frac{1}{rank_{i}}.\] * _Normalised Discounted Cumulative Gain_ (NDCG@k) - reported in the original BEIR benchmark. NDCG@k measures the quality of the ranking considering all relevant passages and their positions among the top k retrieved documents, \[NDCG@k=\frac{\sum_{i=1}^{k\,(\text{predicted ranking})}\frac{Gain_{i}}{\log_{2}(i+1)}}{\sum_{i=1}^{k\,(\text{ideal ranking})}\frac{Gain_{i}}{\log_{2}(i+1)}},\] where \(Gain_{i}\) is equal to 1 if the passage at rank \(i\) is relevant and 0 otherwise. * _Recall_ (Recall@k) cut off at the \(k\) ranking position. Recall@k informs how many of the relevant documents from the collection appear among the top k retrieved documents, \[Recall@k=\frac{|relevant\cap retrieved|}{|relevant|}.\] ## 4 Results and Discussion BM25 - a lexical level retrieval algorithm - performs better for the English language than for Polish, as shown in Table 1. Moreover, it demonstrates the lowest score for Polish when compared with the multilingual MS MARCO dataset results among all different languages, as shown in Figure 2. The main cause of such low performance scores for the Polish language is that Polish is a highly inflected language with a large number of word forms per lexeme (also Proper Names are inflected) and a complex morphological structure. In such a case, lexical matching is less effective than in the case of other languages. Furthermore, the Elasticsearch engine does not support algorithmic stemming for the Polish language in version 8.7 or lower. On the other hand, our results show that the BM25 baseline is a strong baseline for neural models, even in the case of Polish, as shown in Table 3 and Table 4. When compared with bi-encoder models, i.e., the unsupervised ICT model initialised with HerBERT-base and the LaBSE model pre-trained for sentence semantic similarity, BM25 is a better choice for most datasets as is presented in Figure 1. Interesting results have been obtained for the Quora dataset, in which bi-encoders have achieved very high results. The underlying reason for this observation is that the Quora dataset primarily concentrates on the task of determining whether a question has been duplicated. This requires the models to rely on the semantic similarity of sentences, which was indirectly a part of the pre-training procedure for the LaBSE model. The ICT fine-tuning seems to enhance the performance of the model for this particular task effectively, which could be attributed to the model's exposure to similar questions from the corpus with minor modifications during the training procedure. Figure 2: BM25 performance on MS Marco passage retrieval on different languages [4].
The performance of the ICT bi-encoder model is surprisingly low on the Natural Questions (NQ) and HotpotQA datasets, which may be due to the complexity of these datasets. Both datasets are derived from the Question Answering task. Questions might be lexically very distant from the retrieved documents, and the ICT task is an insufficient approach in this case. For those datasets, a deeper understanding of the provided text is essential. The results of the re-ranker models show a significant improvement over BM25 lexical matching, which means neural cross-encoders can re-arrange retrieval results into a better relevance ranking. Only in the case of the ArguAna dataset is the performance of the re-ranker models, except ColBERT, lower than the BM25 results. The goal of the IR task in the ArguAna dataset is to retrieve a counterargument to a given argument. This may not be intuitive for re-ranker models, as the meanings of the compared sentences differ. For that reason, a lower score on this task may actually point to a better model in terms of text comprehension. The sequence-to-sequence T5 models show a slight improvement over the BERT-based re-rankers on most datasets, but there are cases where we can notice a significant drop in performance. We believe that, in the case of the ArguAna dataset, the diminished performance can be attributed to the heightened sensitivity of sequence-to-sequence models to differences in semantic meaning between arguments and counterarguments, as previously mentioned. Furthermore, the results on the Quora dataset are worse after re-ranking than with BM25 alone, indicating that the task is not inherently intuitive for this T5 model class after fine-tuning on MS MARCO, particularly in contrast to the HerBERT-based re-rankers. Also, the T5-large model achieves the best performance on all CQADupstack subsets, except English, as shown in Table 4. The late-interaction ColBERT model demonstrates significant improvements over the BM25 retriever on the majority of datasets. Considering that it is solely a late-interaction model and not a full cross-encoder, one might anticipate its performance to be inferior to that of the HerBERT re-ranker. However, there are instances where the actual performance surpasses expectations, with slightly higher results on certain datasets, namely Quora, TREC-COVID and NFCorpus. This observation could be attributed to the fact that both TREC-COVID and NFCorpus are medical-related datasets, wherein the retrieval task might be simplified to identifying keywords present in both the query and the passage. It is also essential to note that the average performance, as presented in Figure 1, for all re-rankers is very similar. However, we can notice a significant difference in performance on each dataset separately, as shown in Table 3. That is why we believe that, in order to accurately evaluate the model's performance, it is crucial to examine the individual results more closely. ## 5 Conclusions Information Retrieval in the Polish language is still developing and is an intensive research area. Therefore, there is a great need for resources to enable further training and more accurate evaluation of existing and new deep neural IR models. In this work, we introduced the translated BEIR-PL benchmark and showed the results of a broad family of IR baseline models.
We would like to encourage other researchers to participate in further development of Polish and multilingual IR models using our new resource. Our findings revealed that IR models perform differently depending on the dataset's characteristics. In some cases, lexical similarity is the right choice to solve the task, and in other cases, it is beneficial to rely on transformer reranker models. ## Acknowledgements **anonymised...**
2309.11111
PRAT: PRofiling Adversarial aTtacks
Intrinsic susceptibility of deep learning to adversarial examples has led to a plethora of attack techniques with a broad common objective of fooling deep models. However, we find slight compositional differences between the algorithms achieving this objective. These differences leave traces that provide important clues for attacker profiling in real-life scenarios. Inspired by this, we introduce a novel problem of PRofiling Adversarial aTtacks (PRAT). Given an adversarial example, the objective of PRAT is to identify the attack used to generate it. Under this perspective, we can systematically group existing attacks into different families, leading to the sub-problem of attack family identification, which we also study. To enable PRAT analysis, we introduce a large Adversarial Identification Dataset (AID), comprising over 180k adversarial samples generated with 13 popular attacks for image specific/agnostic white/black box setups. We use AID to devise a novel framework for the PRAT objective. Our framework utilizes a Transformer based Global-LOcal Feature (GLOF) module to extract an approximate signature of the adversarial attack, which in turn is used for the identification of the attack. Using AID and our framework, we provide multiple interesting benchmark results for the PRAT problem.
Rahul Ambati, Naveed Akhtar, Ajmal Mian, Yogesh Singh Rawat
2023-09-20T07:42:51Z
http://arxiv.org/abs/2309.11111v1
# PRAT: PRofiling Adversarial aTtacks ###### Abstract Intrinsic susceptibility of deep learning to adversarial examples has led to a plethora of attack techniques with a broad common objective of fooling deep models. However, we find slight compositional differences between the algorithms achieving this objective. These differences leave traces that provide important clues for attacker profiling in real-life scenarios. Inspired by this, we introduce a novel problem of 'PRofiling Adversarial aTtacks' (PRAT). Given an adversarial example, the objective of PRAT is to identify the attack used to generate it. Under this perspective, we can systematically group existing attacks into different families, leading to the sub-problem of attack family identification, which we also study. To enable PRAT analysis, we introduce a large 'Adversarial Identification Dataset' (AID), comprising over 180k adversarial samples generated with 13 popular attacks for image specific/agnostic white/black box setups. We use AID to devise a novel framework for the PRAT objective. Our framework utilizes a Transformer based Global-LOcal Feature (GLOF) module to extract an approximate signature of the adversarial attack, which in turn is used for the identification of the attack. Using AID and our framework, we provide multiple interesting benchmark results for the PRAT problem. The dataset and the code are available at [https://github.com/rahulambati/PRAT](https://github.com/rahulambati/PRAT) ## 1 Introduction Deep learning is currently at the center of many emerging technologies, from autonomous vehicles to numerous security applications. However, it is also well-established that deep networks are susceptible to adversarial attacks [1, 8]. This intriguing weakness of deep learning, which is otherwise known to supersede human intelligence in complex tasks [41], has attracted an ever-increasing interest of the research community in the last few years [9]. This has led to a wide range of adversarial attacks that can effectively fool deep learning. Although adversarial attacks have also led to research in defenses, there is a consensus that defenses currently lack efficacy. Many of them are easily broken, or become ineffective by changing the attack strategy [2]. Consequently, deep learning in practice is still widely open to malicious manipulation through adversarial attacks [8]. It is yet to be seen if this technology can retain its impressive performance while also demonstrating robustness to adversarial attacks. Until an adversarially robust high-performing deep learning framework is developed, practitioners must account for the adversarial susceptibility of deep learning in all applications. These conditions give rise to an important practical problem of 'attacker profiling'. In real-life, understanding the attacker's abilities can allow counter-measures even outside the realm of deep learning. However, the current literature on adversarial attacks is almost completely devoid of any exploration along this line. From the pragmatic viewpoint, the primal question of this potential research is, "_given an adversarial example, which attack algorithm was used to generate it?"_. In this work, we take the first systematic step towards answering this question with PRofiling Adversarial aTtacks (PRAT). Focusing on the _additive adversarial perturbations_, our aim is to explore the extent to which a victim is able to identify its attacker by analyzing only the adversarial input. Figure 1: Despite their imperceptibility, adversarial perturbations contain peculiar patterns. Perturbations generated using the popular methods FGSM, DeepFool, CW and UAP Attacks are shown.
To explore this new direction, it is imperative to curate a large database of adversarial samples. To that end, we introduce the Adversarial Identification Dataset (AID) which consists of over 180k adversarial samples, generated with 13 popular attacks in the literature. AID covers input-specific and input-agnostic attacks and considers white-box and black-box setups. We select these attacks considering the objective of retracing the attacker from the adversarial image. We use AID to explore PRAT with a proposed framework that is built on the intuition that attack algorithms leave their peculiar signatures in the adversarial examples. As seen in Fig. 1, these traces might reveal interesting information that can help in profiling the attacker. Our technique works on the principle of extracting those signatures. At the center of our framework is a signature extractor which is trained to extract input-specific signatures. Unlike random noise, these traces contain global as well as local structure. We design a signature extractor consisting of Global-LOcal Feature extractor (GLOF) modules that combine the CNN's ability to learn local structure [26] and the transformer's capability to capture global information [46, 47, 14]. These signatures contain information which corresponds to the attack algorithm, and we use this signature to identify the attack leveraged to generate the adversarial example. Our contributions are summarized as follows. * We put forth a new problem of PRofiling Adversarial aTtacks (PRAT), aimed at profiling the attacker. We formalize PRAT to provide a systematic guideline for research in this direction. * We propose an effective framework to provide the first-of-its-kind solution to the PRAT problem which consists of a hybrid Transformer network that combines the capabilities of CNNs and attention networks targeted to solve PRAT. * We introduce a large Adversarial Identification Dataset (AID), comprising 180k+ adversarial samples generated with 13 different attacks. AID is used to extensively study PRAT, leading to promising results. ## 2 Related Work Adversarial attacks and defenses are currently a highly active research direction. Our discussion here focuses on the relevant aspects of this direction with representative existing techniques. The discovery of adversarial susceptibility of deep learning was made in the context of visual classifiers [43]. [43] demonstrated that deep models can be fooled into incorrect prediction by adding imperceptible adversarial perturbations to the input. Hence, to efficiently compute adversarial samples (for adversarial training), [16] proposed the Fast Gradient Sign Method (FGSM). Conceptually, the FGSM takes a single gradient ascent step over the loss surface of the model w.r.t. the input to compute the adversarial perturbation. [25] enhanced FGSM to iteratively take multiple small steps of gradient ascent, thereby calling their strategy the Basic Iterative Method (BIM). A similar underlying scheme is adopted by the Projected Gradient Descent (PGD) attack [31], with an additional step of projecting the gradient signals on a pre-fixed \(\ell_{p}\)-ball to constrain the norm of the resulting perturbation signal. All the above attacks must compute the model gradient to compute the perturbations. Hence, we can categorise them as gradient-based attacks.
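To make the gradient-based attack recipe concrete, below is a minimal, self-contained sketch of an FGSM-style perturbation, i.e. a single gradient-ascent step on the classification loss followed by clipping. The toy classifier, the epsilon budget and the input tensors are illustrative assumptions rather than the configuration used to build AID.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a gradient-based (FGSM-style) attack: one gradient-ascent
# step on the loss w.r.t. the input. Model, epsilon and tensors are placeholders.

def fgsm_perturbation(model, image, label, epsilon=8 / 255):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # rho = epsilon * sign(grad): the additive signal analysed in PRAT.
    rho = epsilon * image.grad.sign()
    adversarial = torch.clamp(image + rho, 0.0, 1.0)
    return adversarial.detach(), rho.detach()

# Toy usage with a tiny stand-in classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_adv, rho = fgsm_perturbation(model, x, y)
print(rho.abs().max().item() <= 8 / 255)  # the perturbation stays within the budget
```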
Moreover, the gradient computation normally requires complete knowledge of the model itself, hence such attacks are also categorized as white-box attacks. Other popular gradient-based attacks include the Carlini & Wagner attack [6], DeepFool [34] and the Jacobian Saliency Map Attack (JSMA) [35]. Black-box attacks do not assume any knowledge of the model, except its predictions. The most popular stream of black-box attacks is query-based attacks, which allow the attacker to iteratively refine an adversarial example by sending the current version to the remote model as a query. The model's prediction is used as feedback for improving the adversarial nature of the input. If the attacker only receives the model decision (not its confidence score), then such an attack is called a decision-based attack. Currently, decision-based attacks are more popular in black-box setups due to their pragmatic nature. A few recent representative examples in this category include [37, 40, 15, 27]. With the discovery of adversarial samples, there is an increased interest in devising defences, of which the most popular strategy is adversarial training [16, 22, 31, 45, 48]. The existing literature also covers a wide range of other defense techniques, from augmenting the models with external defense modules [36, 28, 12] to certified defenses [23, 44, 11]. Here, we emphasize that these defenses generally come at considerable computational cost and degradation in model performance on clean inputs. Instead of proposing yet another defense, we take a different perspective on addressing the adversarial susceptibility of deep learning. Assuming a deployed model, we aim at identifying the capabilities of the attacker. Such attacker profiling can help in adversarial defenses outside the realm of deep learning. This is more practical because it can eventually allow deep learning models to disregard intrinsic/appended defensive modules that result in performance degradation, causing deep learning to lose its advantage over other machine learning frameworks. ## 3 The PRAT Problem The PRofiling Adversarial aTtacks (PRAT) problem is generic in nature. However, we limit its scope to visual classifiers in this work for a systematic first-of-its-kind study. Let \(\mathcal{C}(.)\) be a deep visual classifier such that \(\mathcal{C}(\mathbf{I}):\mathbf{I}\rightarrow\boldsymbol{\ell}\), where \(\mathbf{I}\in\mathbb{R}^{m}\) is a natural image and \(\boldsymbol{\ell}\in\mathbb{Z}^{+}\) is the output of the classifier. For attacking \(\mathcal{C}(.)\), an adversary seeks a signal \(\boldsymbol{\rho}\in\mathbb{R}^{m}\) to achieve \(\mathcal{C}(\mathbf{I}+\boldsymbol{\rho})\rightarrow\boldsymbol{\tilde{\ell}}\), where \(\boldsymbol{\tilde{\ell}}\neq\boldsymbol{\ell}\). To ensure that the manipulation to a clean image is humanly imperceptible, the perturbation \(\boldsymbol{\rho}\) is norm-bounded, e.g., by enforcing \(||\boldsymbol{\rho}||_{p}<\eta\), where \(||.||_{p}\) denotes the \(\ell_{p}\)-norm of a vector and '\(\eta\)' is a pre-defined scalar. More concisely, the adversary seeks \(\boldsymbol{\rho}\) that satisfies \[\mathcal{C}(\mathbf{I}+\boldsymbol{\rho})\rightarrow\boldsymbol{\tilde{\ell} }\ \ \text{s.t.}\ \boldsymbol{\tilde{\ell}}\neq\boldsymbol{\ell},||\boldsymbol{\rho}||_{p}<\eta. \tag{1}\] The above formulation underpins the most widely adopted settings for the adversarial attacks, where \(\boldsymbol{\rho}\) is a systematically computed additive signal.
From our PRAT perspective, we see this signal as a function \(\mathbf{\rho}(\mathcal{A},\{\mathbf{I}\},\mathcal{C})\), where \(\mathcal{A}\) identifies the algorithm used to generate the perturbation and \(\{\mathbf{I}\}\) indicates that \(\mathbf{\rho}\) can be defined over a set of images instead of a single image. In practice, the targeted model \(\mathcal{C}\) must already be deployed and the input \(\mathbf{I}\) fixed during an attack, leaving \(\mathcal{A}\) as the point of interest for the PRAT problem. For clarity, we often refer to \(\mathcal{A}\) directly as 'attack' in the text. To abstract away the algorithmic details, we can conceptualize \(\mathcal{A}\) as a function \(\mathcal{A}(\{\mathbf{\varphi}\},\{\mathbf{\psi}\})\), where \(\{\mathbf{\varphi}\}\) denotes a set of abstract design hyper-parameters and \(\{\mathbf{\psi}\}\) is a set of numeric hyper-parameters. To exemplify, the choice of the scope of the adversarial objective, e.g., universal vs image-specific, is governed by an element in \(\{\mathbf{\varphi}\}\). Similarly, the choices of '\(\eta\)' or '\(p\)' values in Eq. (1) are overseen by the elements of \(\{\mathbf{\psi}\}\). Collectively, both sets contain all the hyper-parameters available to an attacker to compute \(\mathbf{\rho}\). We are particularly interested in the design choices made under \(\{\mathbf{\varphi}\}\). In the considered settings, \(\{\mathbf{\varphi}\}\) is a finite set because each of its elements, i.e., \(\varphi_{i}\in\{\mathbf{\varphi}\}\), governs a choice along a specific design dimension under the practical constraint that the attack must achieve its fooling objective. Nevertheless, in this work, we are not after exhaustively listing the elements of \(\{\mathbf{\varphi}\}\). Instead, we specify only three representative elements to demonstrate the possibility of attack profiling. These three elements are 1) \(\varphi_{1}\)_-model gradient information_, 2) \(\varphi_{2}\)_-black-box prediction score information_, and 3) \(\varphi_{3}\)_-attack fooling scope_. It is possible to easily extend the above list to incorporate further design choices. The criterion for a parameter to be enrolled in \(\{\mathbf{\varphi}\}\) is that a single choice should cover a range of existing attacks. For instance, \(\varphi_{1}\) can be true for a family \(\mathcal{F}_{1}^{a}\) of gradient-based attacks and false for the non-gradient based attack family \(\mathcal{F}_{1}^{b}\). Similarly, when \(\varphi_{2}=\texttt{true}\), we get an attack family \(\mathcal{F}_{2}^{a}\) of score-based black-box attacks [30, 20], and \(\varphi_{2}=\texttt{false}\) yields \(\mathcal{F}_{2}^{b}\) that represents decision-based attacks [4, 3, 10]. Similarly, \(\varphi_{3}=\texttt{true}\) results in \(\mathcal{F}_{3}^{a}\) representing universal attacks and \(\varphi_{3}=\texttt{false}\) corresponds to the family \(\mathcal{F}_{3}^{b}\) consisting of input-specific attacks. In the above formalism, \(\mathcal{F}_{i}^{x}\cap\mathcal{F}_{i}^{y}=\emptyset\) always holds for the resulting attack families. However, we must allow \(\mathcal{F}_{i}^{x}\cap\mathcal{F}_{j}^{x}\neq\emptyset\) because an attack family resulting from \(\varphi_{i}\) may still make choices for \(\varphi_{j\neq i}\) without any constraint. Let \(\mathcal{F}_{i}=\{f_{1}^{i},f_{2}^{i},...,f_{Z}^{i}\}\) denote the \(i^{\text{th}}\) attack family with '\(Z\)' adversarial attacks that are formed under \(\varphi_{i}\) such that all \(f_{z}^{i}\in\mathcal{F}_{i}\) satisfy the constraint in Eq. (1).
Then, \(f_{z}^{i}(\mathbf{I})\rightarrow\tilde{\mathbf{I}}\) s.t. \(\mathcal{C}(\tilde{\mathbf{I}})\rightarrow\tilde{\mathbf{\ell}}\neq\mathbf{\ell},||\bm {\rho}||_{p}\)\(<\eta\). In this setting, the core PRAT problem is a reverse mapping problem that computes \(\Psi(\tilde{\mathbf{I}})\to f_{z}^{i}\), given a set of '\(N\)' attack families \(\mathcal{F}=\{\mathcal{F}_{1},\mathcal{F}_{2},...,\mathcal{F}_{N}\}\). We must seek \(\Psi(.)\) to solve this. ## 4 Adversarial Identification Dataset (AID) To investigate the PRAT problem, we develop Adversarial Identification Dataset (AID). Below, we detail different attacks \(\mathcal{A}\), attack families \(\mathcal{F}\) and their design and numeric hyper-parameters \((\{\mathbf{\varphi}\},\{\mathbf{\psi}\})\) considered in AID. Most of the existing literature on adversarial attacks concentrates on devising novel attack schemes or robustifying models against the attacks. Multiple existing adversarial attack libraries are available to generate adversarial samples on-the-fly. However, for our problem, it is imperative that we store the generated adversarial perturbations to analyze them for reverse engineering. This motivates the curation of Adversarial Identification Dataset (AID) that comprises perturbations generated by leveraging different attack strategies over a set of images targeting different pre-trained classifiers. In line with our PRAT problem, AID consists of 3 different attack families (_gradient-based_, _decision-based_, and _universal_) with 13 different attack strategies resulting in over 180k samples. We discuss these families next. ### Attack Families **Gradient based attacks:** Gradient based attacks are able to exploit the gradients of the target model to perturb input images. Since the attacker needs access to the gradients, these attacks are typically white box in nature. Our gradient-based attack family consists of _Fast Gradient Sign Method (FGSM)_[16], _Basic Iterative Method_[25], _Newton-Fool_[21], _Projected Gradient Descent(PGD)_[31], _Deep-Fool_[32], _Carlini Wagner (CW)_[7] attacks. **Decision based attacks:** Decision-based attacks are applied in black-box setups where the attacker only has access to the decision of the target model. The attacker repeatedly queries the target model and utilizes the decision of the model to curate the perturbation. We consider _Additive Gaussian Noise_, _Gaussian Blur_, _Salt & Pepper Noise_, _Contrast Reduction_, and _Boundary Attack_[5] for this family. **Universal attacks:** Universal attacks generalize across a dataset. A single perturbation is sufficient to fool the network across multiple images with a desired fooling probability. Most common approaches to generate universal perturbations either iteratively compute perturbations by gradually computing and projecting model gradients over input batches, or use generative modeling to compute image agnostic perturbations. We consider _Universal Adversarial Perturbation (UAP)_[33], _Universal Adversarial Network (UAN)_[17] for the universal attack family. ### Dataset creation **Benign samples:** We require clean images to create an adversarial perturbation. We utilize ImageNet2012 [39] validation set consisting of 50k images spanning across 1000 classes. We split the validation set into two exclusive parts, forming training and test partitions of AID. The training set of perturbed images for AID is generated by randomly choosing 4k images per network per attack from the training partition. 
Similarly, the test set of perturbed images is generated by randomly choosing 800 images per network per attack from the test partition. Note that each attack image can be computed with different networks, i.e., target models. We discuss these in the following section.

**Target models:** We consider three target models: ResNet50 [18], DenseNet121 [19] and InceptionV3 [42]. Using multiple models ensures that the adversarial samples are not model-specific.

**Attack settings:** In practice, there can be variations in the perturbation norm for an attack - a hyper-parameter from \(\{\mathbf{\psi}\}\). This variation is incorporated in AID by sampling \(\eta\) from a range of values. For attacks constructed under the \(l_{\infty}\) norm, we consider a range of \(\{1,16\}\), and \(\{1,10\}\) for \(l_{2}\)-norm based attacks. The procedure for generating the entire dataset as well as the summary statistics are further detailed in the supplementary material of the paper. We also summarize the considered attacks, their families, and the used perturbation norm-bounds in Table 1.

## 5 Proposed Approach

Here, we discuss the design choices we consider for solving the PRAT problem \(\Psi(\tilde{\mathbf{I}})\to f_{z}^{i}\). A simple approach to solve PRAT could be to build a classifier \(C(\tilde{\mathbf{I}})\to f_{z}^{i}\) that identifies the attack leveraged to generate the adversarial input \(\tilde{\mathbf{I}}\). In such a scenario, the underlying patterns in the perturbation \(\rho\) are closely intertwined with the benign sample \(\mathbf{I}\), thus making the problem much harder. To solve it, we design a signature extractor \(\Omega(\tilde{\mathbf{I}})\rightarrow\tilde{\mathbf{\rho}}\) that generates a signature \(\tilde{\mathbf{\rho}}\) from the adversarial input s.t. it lies close to the original perturbation \(\mathbf{\rho}\) while preserving patterns helpful in identifying the attacker. The objective of the signature extractor is \[\Omega(\tilde{\mathbf{I}})\rightarrow\tilde{\mathbf{\rho}},\ ||\tilde{\mathbf{\rho}}-\mathbf{\rho}||_{2}=\mathbf{\delta},\ \ min(\mathbf{\delta}). \tag{2}\] While the objective draws similarities with existing problems like denoising/deraining, signature extraction is relatively complex. The noise/rain pertaining to those tasks is largely localized in nature and visually perceptible in most cases, which is not the case for PRAT. This makes the problem more challenging and requires methods beyond standard techniques aimed at denoising and other low-level computer vision tasks. The extracted signature is utilized to train a classifier \(C\) that identifies the attack. The objective of the classifier is \[C(\tilde{\mathbf{\rho}})\to f_{z}^{i},\ \ \text{where}\ f_{z}^{i}(\mathbf{I})\rightarrow\tilde{\mathbf{I}}, \tag{3}\] where \(\tilde{\mathbf{\rho}}\) is the generated signature and \(f_{z}^{i}\) is the \(z^{\text{th}}\) attack from the \(i^{\text{th}}\) toolchain family. Figure 2 shows an overview of the proposed approach highlighting the signature extractor and the attack classifier.

**Signature Extractor:** It serves the purpose of extracting a signature with patterns specific to the attack. As shown in Fig. 2, the signature extractor has two streams of information flow progressing through a series of GLOF modules. Each stream is designed to capture local or global features, along with feature sharing across them. A GLOF module utilizes convolutional layers to extract local features, while an attention mechanism applied over image patches helps in attaining global connectivity.
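Before continuing with the extractor's internals, the following is a minimal sketch of the two objectives stated in Eqs. (2) and (3), assuming PyTorch tensors for the adversarial image, the ground-truth perturbation, and the attack label; the module names are placeholders used only for illustration.

```python
import torch
import torch.nn.functional as F

def prat_losses(signature_extractor, attack_classifier, adv_image, true_rho, attack_label):
    """Eq. (2): keep the extracted signature close to the true perturbation (L2).
    Eq. (3): classify the attack from the extracted signature (cross-entropy)."""
    rho_hat = signature_extractor(adv_image)          # Omega(I~) -> rho~
    signature_loss = F.mse_loss(rho_hat, true_rho)    # minimize ||rho~ - rho||_2
    logits = attack_classifier(rho_hat, adv_image)    # signature fused with the input
    attack_loss = F.cross_entropy(logits, attack_label)
    return signature_loss, attack_loss
```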
The conjunction of global and local features helps reconstruct a rectified image that lies in the neighborhood of the clean image. Subtracting the rectified image from the adversarial image yields the signature. The input adversarial image \(\tilde{\mathbf{I}}\in\mathbb{R}^{H\times W\times 3}\) (\(H\), \(W\) correspond to image height and width and 3 corresponds to the RGB channels) is split into a series of patches. The patches are flattened and projected onto the embedding space of dimension \(\mathbf{D_{1}}\). Similar to [14], we add positional embeddings to the patch embeddings. The resulting patch embedding is termed \(\mathbf{T_{0}}\in\mathbb{R}^{N\times D_{1}}\) (0 referring to the initial feature level and \(N\) referring to the number of patches). Alongside, the input image is projected to an embedding dimension \(\mathbf{D_{2}}\) by applying a \(3\times 3\) Conv with \(D_{2}\) features. We term these features \(\mathbf{Z_{0}}\in\mathbb{R}^{H\times W\times D_{2}}\) (0 refers to the initial feature level). Features extracted from the previous level (\(l-1\)) are passed on to the next GLOF module. \[\mathbf{T_{l}},\mathbf{Z_{l}}=GLOF(\mathbf{T_{l-1}},\mathbf{Z_{l-1}});\ \ \ \ l=1...L \tag{4}\] where \(L\) is the number of GLOF modules. The output of the final GLOF module corresponding to the convolutional arm, \(\mathbf{Z_{L}}\), is transformed to RGB space by applying a \(3\times 3\) Conv with 3 feature maps, resulting in the rectified image \(\mathbf{I_{r}}\in\mathbb{R}^{H\times W\times 3}\). Finally, to extract the signature, the difference between the adversarial and the rectified image is computed as \(\tilde{\mathbf{\rho}}=\tilde{\mathbf{I}}-\mathbf{I_{r}}\).

\begin{table} \begin{tabular}{c|c|c|c|c} \hline **label** & **Attack Method** & **Family** & **Setup** & **NB** \\ \hline \hline 0 & PGD [31] & Grad. & WB & \(l_{\infty}\) \\ \hline 1 & BIM [25] & Grad. & WB & \(l_{\infty}\) \\ \hline 2 & FGSM [16] & Grad. & WB & \(l_{\infty}\) \\ \hline 3 & DeepFool [32] & Grad. & WB & \(l_{\infty}\) \\ \hline 4 & NewtonFool [21] & Grad. & WB & \(l_{2}\) \\ \hline 5 & CW [7] & Grad. & WB & \(l_{2}\) \\ \hline 6 & Additive Gaussian [38] & Dec. & BB & \(l_{2}\) \\ \hline 7 & Gaussian Blur [38] & Dec. & BB & \(l_{\infty}\) \\ \hline 8 & Salt\&Pepper [38] & Dec. & BB & \(l_{\infty}\) \\ \hline 9 & Contrast Reduction [38] & Dec. & BB & \(l_{\infty}\) \\ \hline 10 & Boundary [5] & Dec. & BB & \(l_{2}\) \\ \hline 11 & UAN [17] & Uni. & WB & \(l_{\infty}\) \\ \hline 12 & UAP [33] & Uni. & WB & \(l_{\infty}\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Summary of the attacks in AID**. Grad., Dec. and Uni. denote Gradient-based, Decision-based and Universal attacks. BB and WB denote Black- and White-box attacks. NB is the norm bound on perturbation.

**GLOF Module:** Standard convolutional layers are good at extracting local patterns [24]. On the other hand, transformers are known to be extremely powerful in learning non-local connectivity [13]. As seen in [14], vision transformers fail to utilize the local information [29]. Overcoming these limitations, we propose the Global-LOcal Feature extractor (GLOF) module to combine a CNN's ability to extract low-level localized features and a vision transformer's ability to extract global connectivity across long-range tokens. A detailed schematic of the GLOF module is given in Fig. 2. The GLOF module at any level receives the local and global features from the previous level.
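Before detailing each arm, the following is a minimal PyTorch-style skeleton of one GLOF level as we read the description above; the embedding sizes, patch handling, and Token-to-Image merge are simplified assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GLOFModule(nn.Module):
    """One GLOF level: a convolutional (local) arm and an attention (global) arm."""
    def __init__(self, d_tokens=768, d_conv=64, n_heads=12, n_patches=196):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_tokens, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_tokens, d_tokens), nn.GELU(),
                                 nn.Linear(d_tokens, d_tokens))
        self.conv = nn.Sequential(nn.Conv2d(d_conv, d_conv, 5, padding=2),
                                  nn.BatchNorm2d(d_conv), nn.ReLU(),
                                  nn.Conv2d(d_conv, d_conv, 5, padding=2),
                                  nn.BatchNorm2d(d_conv))
        # Token-to-Image (T2I): project tokens to a feature map and merge with the conv arm.
        self.t2i = nn.Conv2d(d_tokens, d_conv, 1)
        self.n_patches = n_patches

    def forward(self, tokens, feats):
        t = tokens + self.attn(tokens, tokens, tokens)[0]   # global connectivity
        t = t + self.ffn(t)
        z = feats + self.conv(feats)                        # local structure (ResNet-style)
        grid = int(self.n_patches ** 0.5)
        t_map = t.transpose(1, 2).reshape(t.size(0), -1, grid, grid)
        t_map = nn.functional.interpolate(self.t2i(t_map), size=z.shape[-2:])
        return t, z + t_map                                 # passed on to the next level
```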
_Local features:_ Embedded 2D image features from the previous layer \(\mathbf{Z_{l-1}}\) are fed to a ResNet block [18] with convolutional, batch norm and activation layers. _Global features:_ Embedded tokens are fed to an attention mechanism [46]. The series of tokens from the previous layer \(\mathbf{T_{l-1}}\) is passed through a multi-head attention layer which calculates the weighted sum. A feed-forward network is applied over the attention output; it consists of two dense layers applied to individual tokens separately, with GELU applied over the output of the first dense layer [14]. _T2I Block:_ Features from the attention arm corresponding to the global connectivity are merged with the convolutional arm. **Token to Image (T2I)** is responsible for rearranging the series of tokens to form a 2D grid. This transformed grid is passed to a series of convolutional layers to obtain a feature map with the desired depth and is merged with the features from the convolution arm of the GLOF module. The merged features as well as the learned token embeddings are passed to consecutive GLOF modules.

**Attack Classifier:** The generated signature is specific to the input. Since the input contains contextual information, we complement the extracted signature with the adversarial input and feed it to the attack classifier. The fusion is done by applying a series of convolutional layers over the signature and the input separately and concatenating them.

**Training objective:** We utilize two learning objectives in our framework. We use an \(L_{2}\) loss to minimize the distance of the generated signature \(\tilde{\mathbf{\rho}}\) to the raw perturbation \(\mathbf{\rho}\). Alongside, the attack classifier is modelled with a cross-entropy loss to generate probability scores over a set of classes.

## 6 Experiments

We evaluate the performance of the proposed approach on AID under various settings and also present extensive ablations that support the design choices.

**Implementation details:** The signature extractor comprises 5 GLOF modules with an attention-arm embedding dimension of 768 and a convolutional-arm embedding dimension of 64. The T2I block consists of two convolutional layers with kernel size 5, each followed by batch normalization. We use a patch size of 16x16 and 12 attention heads. Each convolutional arm in the GLOF module consists of a ResNet block with 2 convolutional layers of kernel size 5, batch norm and a skip connection. We use DenseNet121 [19] as the attack classifier. The final layers of the attack classifier are adjusted to compute probabilities over 13 classes for attack identification and 3 classes for attack family identification.

**GLOF Variants:** The standard GLOF module consists of convolution and attention arms. We introduce variants of GLOF that exclusively contain either of the arms, allowing us to study the contribution of local and global features separately. We term the GLOF module with only the convolutional arm GLOF-C, and the module containing only the attention arm GLOF-A.

Figure 2: (**Left**) Schematics of the proposed approach. (**Right**) GLOF module architecture. An input adversarial image is passed through a series of GLOF modules; each GLOF module has two arms, one capturing global dependencies and the other capturing local features. The extracted signature is fused with the adversarial image and fed to the attack classifier.

**Experimental Setup:** We employ a two-stage training strategy to train the overall pipeline. In the first stage, the signature extractor is trained to produce the rectified image. Benign samples corresponding to the adversarial inputs are used as the ground truth. The Adam optimizer and \(L_{2}\) loss are used to pre-train the signature extractor.
In the second stage, the overall pipeline with the pre-trained signature extractor is further trained. We use a cross-entropy loss to train the network with the Adam optimizer, a learning rate of \(1e^{-4}\) and momentum rates of 0.9 and 0.999. We use an exponential decay strategy to decrease the learning rate by \(5\%\) every 1k iterations. All experiments are conducted on an NVIDIA V100 GPU with a batch size of 16. Two-stage training helps in faster convergence of the overall network, allows the signature extractor to learn better, and removes the need to retrain it if novel attacks are included.

**Evaluation metrics:** Since the main objective of PRAT is classification, we use accuracy to compare across several techniques. We also evaluate the performance of the signature extractor using PSNR and SSIM scores calculated over the rectified image and the benign sample.

**Baselines:** Since the PRAT problem is first-of-its-kind, we develop several baselines and compare our technique against them. As PRAT at its core is a classification problem, we look at existing visual classifier models and train them accordingly for the PRAT problem. We consider variants of ResNet [18], DenseNet [19], Inception [42] and different versions of the Vision Transformer [14] (ViT-B, ViT-L) as baselines. In line with the original work, ViT-B refers to the Base version of ViT with 12 encoder layers and ViT-L is the Large version with 24 encoder layers. We analyze patch sizes of 16x16 and 32x32 for both variants.

### Results

**Attack Identification:** Table 2 reports the results on the PRAT problem evaluated on AID under two settings: identifying the attack as well as the attack family. Our approach with the pre-trained signature extractor, feature fusion and the attack classifier achieves **80.14%** accuracy on attack identification and **84.72%** on attack family identification.

**Comparison with baselines:** Table 2 compares the performance of our network against the other baselines. The top performing compared method, DenseNet121 [19], is surpassed by our technique in both categories, by a margin of 6.94% in attack identification and 0.51% in attack family identification. In general, variants of ResNet [18] and Inception [42] underperform when compared with DenseNet [19] versions. Compared with versions of ViT [14], CNNs have fewer parameters and perform much better in both settings. One reason for this is that ViT requires large amounts of training data. We also observe a drop in accuracy with an increase in patch size from 16x16 to 32x32, suggesting that ViT [14] struggles to accurately capture the local intrinsic properties as the patch becomes bigger. We also evaluate the performance of Wiener filtering combined with a classifier. This setting achieves 67.55% compared to 80.14% by the GLOF based model. It is evident that identifying the attack family is simpler than identifying the specific attack.

\begin{table} \begin{tabular}{l|c|c|c} \hline \hline \multirow{2}{*}{**Method**} & **Attack** & **Attack Family** & **no. of** \\ & **Identification** & **Identification** & **params** \\ \hline \hline ResNet50 [18] & 68.27\% & 80.11\% & 24.7M \\ ResNet101 [18] & 71.03\% & 80.38\% & 43.8M \\ ResNet152 [18] & 67.03\% & 78.48\% & 59.5M \\ DenseNet121 [19] & 73.20\% & 84.21\% & 8.2M \\ DenseNet169 [19] & 72.22\% & 84.10\% & 14.3M \\ DenseNet201 [19] & 73.07\% & 81.69\% & 20.2M \\ InceptionV3 [42] & 69.96\% & 81.91\% & 22.9M \\ ViT-B/16 [14] & 63.91\% & 75.89\% & 85.8M \\ ViT-B/32 [14] & 54.61\% & 72.34\% & 87.4M \\ ViT-L/16 [14] & 67.28\% & 78.25\% & 303M \\ ViT-L/32 [14] & 55.23\% & 72.62\% & 305M \\ \hline **Ours** & **80.14\%** & **84.72\%** & 47.8M \\ \hline \hline \end{tabular} \end{table} Table 2: **Performance of different methods on AID focusing on identifying 13 different attacks and 3 attack families.**

Figure 3: **Confusion matrix: the labels of the classes are in accordance with the order in Table 1.**

### Ablations

Table 3 presents the ablation study on the proposed network. Full model refers to the complete pipeline with the pre-trained signature extractor and a classifier accepting fused features from the signature and the input, which yields 80.14% on AID. **Effect of pre-training:** Transformers are known to work well with pre-training. Without pre-training of the signature extractor, the accuracy drops to 79.20%. **Effect of GLOF module:** A signature extractor with exclusively the GLOF-C variant yields an accuracy of 78.66%, while its counterpart with the GLOF-A variant (without CNN blocks) only achieves 73.61%, indicating the importance of both components for a better performance on PRAT. **Effect of Fusion:** The fusion module combines the features from the extracted signature and the adversarial image. Removing this fusion module and relying only on the extracted signature results in an accuracy of 78.87%. **Effect of Signature Extractor:** While the signature extractor acts as the crux of the overall pipeline, removing it is no different than the baseline DenseNet121 [19] from Table 2, which yields 73.20%.

### Analysis and Discussion

**Confusion Matrix:** We analyze class-wise scores and the confusion matrix of the predictions from the proposed approach in Fig. 3. From the confusion matrix, we observe the common trend of relatively high scores for all decision-based attacks except for the boundary attack. With scores close to 1, these attacks have distinctive patterns which are easily identified by the signature extractor. The boundary attack does not always have specific patterns because of the way it is generated: it performs a random walk on the decision boundary, minimizing the amount of perturbation. Similarly, universal attacks generate discernible patterns, making them easier to detect. Major confusion occurs among the gradient-based attacks NewtonFool, DeepFool and CW. These attacks, being highly powerful, are targeted at generating nearly imperceptible perturbations specific to the input image, making it difficult for the method to identify and distinguish them.

**Signature Extraction:** Table 4 investigates the performance of the signature extractor under various settings. Standard GLOF achieves higher PSNR and SSIM scores than GLOF-C and GLOF-A, indicating that global and local connectivity used in conjunction help in better reconstruction. We also report the variation in reconstruction scores when the number of heads \(m\) in multi-head attention is increased. GLOF modules with 12 heads achieve the highest scores of **31.55** PSNR and **0.89** SSIM.

**Number of GLOF modules:** We analyze the performance of the network by varying the number of GLOF modules. A signature extractor with as low as a single GLOF module achieves 79.20% (+6% over the baseline), indicating its effectiveness.
Employing 5 GLOF modules yields the best accuracy of 80.14%.

**Cross model attack identification:** We analyze the performance of our network on cross-model attack identification. AID consists of attacks generated by targeting 3 different networks. For this experiment, we split AID into three subsets containing perturbations related to the corresponding target model. AID-R, AID-D, and AID-I refer to the subsets of AID containing perturbations corresponding to ResNet50 [18], DenseNet121 [19] and InceptionV3 [42] as target networks. Each subset is further split into train-test sets. Table 6 details the results on cross-model attack identification of several baselines compared against our technique. In general, we observe that the networks perform well when trained and tested on the same subsets of AID. The proposed technique performs better in all cases compared to the other baselines. This experiment suggests that perturbations from different target models also contain similar traces that can be leveraged to profile the attacker.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{\begin{tabular}{c} **Train** \\ **Set** \\ \end{tabular} } & \multicolumn{3}{c}{\begin{tabular}{c} **Performance on** \\ **different test sets** \\ \end{tabular} } \\ \cline{3-5} & & **AID-R** & **AID-D** & **AID-I** \\ \hline \hline \multirow{2}{*}{\begin{tabular}{c} ResNet50 \\ [18] \\ \end{tabular} } & AID-R & 71.46\% & 65.74\% & 62.90\% \\ & AID-D & 66.15\% & 66.88\% & 61.46\% \\ & AID-I & 59.69\% & 65.22\% & 66.96\% \\ \hline \multirow{2}{*}{\begin{tabular}{c} DenNet121 \\ [19] \\ \end{tabular} } & AID-R & 70.01\% & 66.89\% & 58.46\% \\ & AID-D & 55.77\% & 73.71\% & 53.83\% \\ & AID-I & 63.3\% & 66.96\% & 69.54\% \\ \hline \multirow{2}{*}{\begin{tabular}{c} InceptionV3 \\ [42] \\ \end{tabular} } & AID-R & 66.35\% & 60.51\% & 61.29\% \\ & AID-D & 63.02\% & 66.05\% & 62.54\% \\ & AID-I & 59.21\% & 60.03\% & 68.72\% \\ \hline \multirow{2}{*}{ \begin{tabular}{c} **Proposed** \\ **Approach** \\ \end{tabular} } & AID-R & **75.41\%** & 73.56\% & 69.76\% \\ & AID-D & 70.46\% & **74.42\%** & 67.42\% \\ \cline{1-1} & AID-I & 69.95\% & 69.88\% & **73.12\%** \\ \hline \hline \end{tabular} \end{table} Table 6: **Cross Model Attack Identification. AID-R, AID-D, and AID-I refer to the subsets of AID containing perturbations corresponding to the target models ResNet50 [18], DenseNet121 [19] (abbreviated as DenNet121) and InceptionV3 [42] respectively.**

\begin{table} \begin{tabular}{l|c|c|c|c|c} \hline \hline **\# GLOF** & \(n=1\) & \(n=3\) & \(n=5\) & \(n=7\) & \(n=9\) \\ \hline **Accuracy** & 79.20\% & 79.65\% & **80.14\%** & 79.22\% & 79.90\% \\ \hline \hline \end{tabular} \end{table} Table 5: Effect of the number of GLOF modules \(n\) on the performance of attack identification.

\begin{table} \begin{tabular}{l|c} \hline \hline **Method** & **Accuracy** \\ \hline \hline Full model & **80.14\%** \\ \hline without pre-training & 79.20\% \\ without global connectivity - GLOF-C & 78.66\% \\ without local connectivity - GLOF-A & 73.61\% \\ without Feature Fusion & 78.87\% \\ without Signature Extractor & 73.20\% \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablation study for Attack Identification. Full model has a pre-trained signature extractor and a classifier accepting the fused product of the signature and input features.**
**Success rate vs. Identifiability:** While the stronger attacks like PGD have a 100% fooling rate, the weaker black-box attacks have a success rate of at least 65% for the samples considered in AID. We also study identifiability vs. success rate for the FGSM class and find that our technique achieves 74.9% accuracy for an epsilon as low as 2 and 94.5% for an epsilon of 16. We observe an increasing trend as the epsilon increases, indicating an increasing level of perceptibility of the patterns.

**Identifying unseen attacks:** With the increasing threat to neural networks, it is likely for the PRAT problem to encounter novel/unseen attacks. To examine the effectiveness of the proposed network, we devise an experiment that involves identifying the toolchain family of an unseen attack. For this, we split AID into two different sets containing mutually exclusive attack categories. We retrain the overall pipeline and test it on the unseen classes, which achieves an accuracy of 57.2%. We extend our approach to register novel attacks with a minimal training set using toolchain indexing (discussed in the supplementary material). Identifying open-set novel attacks under the PRAT scenario remains challenging due to the fact that the unseen perturbations are nearly imperceptible and are difficult to distinguish.

**Feature visualization:** We study the separability of the extracted features by analyzing the t-SNE plots of a set of features extracted from the penultimate layer of the attack classifier. Fig. 4 shows the three toolchain families forming separate clusters. Due to their 'universality' constraint, universal perturbations form a clear cluster and are easily distinguishable. While gradient-based attacks share similar techniques, decision-based attacks have distinctive approaches based on the decision of the network. Hence, we observe the overlap between gradient- and decision-based attacks. Fig. 4 also shows the t-SNE plots over specific classes. The Boundary Attack has the maximum overlap with other attacks. Among gradient-based attacks, DeepFool, NewtonFool and CW overlap with each other, indicating that they generate similar patterns, thus making it difficult to distinguish them.

**Reconstructions:** Fig. 5 depicts the adversarial images, corresponding perturbations and the signatures extracted by our method. In general, the extracted signatures have patterns highlighting the object from the image. This is due to the fact that extracting these nearly imperceptible perturbations accurately is always challenging. These patterns, along with the patterns pertaining to the attacker, help in training the attack classifier to identify the attacker.

## 7 Conclusion

We presented a new perspective on adversarial attacks indicating the presence of peculiar patterns in the perturbations that hint back to the attacker. We formulate the PRAT problem - given the adversarial input, profile the attack signature to identify the attack used to generate the sample. We develop the Adversarial Identification Dataset (AID) and compare several baseline techniques on the proposed dataset. Targeting PRAT, we propose a framework that combines a CNN's capability to capture local features and a Transformer's ability to encode global attention to generate signatures containing attack-specific patterns, which are used by an attack classifier to identify the attack. Extensive experiments showcase the efficacy of our framework and support the credibility of the proposed PRAT problem.

Figure 4: **Visualizations of features learned by the attack classifier.
(a) t-SNE for specific attack categories. Labels are according to Table 1. (b) t-SNE for attack families. Labels {0,1,2} refer to {gradient, decision, universal} attacks.** Figure 5: Adversarial images, their perturbations (normalized) and the corresponding signatures (normalized) extracted by the proposed approach.
2305.20019
Monotonic Location Attention for Length Generalization
We explore different ways to utilize position-based cross-attention in seq2seq networks to enable length generalization in algorithmic tasks. We show that a simple approach of interpolating the original and reversed encoded representations combined with relative attention allows near-perfect length generalization for both forward and reverse lookup tasks or copy tasks that had been generally hard to tackle. We also devise harder diagnostic tasks where the relative distance of the ideal attention position varies with timestep. In such settings, the simple interpolation trick with relative attention is not sufficient. We introduce novel variants of location attention building on top of Dubois et al. (2020) to address the new diagnostic tasks. We also show the benefits of our approaches for length generalization in SCAN (Lake & Baroni, 2018) and CFQ (Keysers et al., 2020). Our code is available on GitHub.
Jishnu Ray Chowdhury, Cornelia Caragea
2023-05-31T16:48:06Z
http://arxiv.org/abs/2305.20019v1
# Monotonic Location Attention for Length Generalization

###### Abstract

We explore different ways to utilize position-based cross-attention in seq2seq networks to enable length generalization in algorithmic tasks. We show that a simple approach of interpolating the original and reversed encoded representations combined with relative attention allows near-perfect length generalization for both forward and reverse lookup tasks or copy tasks that had been generally hard to tackle. We also devise harder diagnostic tasks where the relative distance of the ideal attention position varies with timestep. In such settings, the simple interpolation trick with relative attention is not sufficient. We introduce novel variants of location attention building on top of Dubois et al. (2020) to address the new diagnostic tasks. We also show the benefits of our approaches for length generalization in SCAN (Lake & Baroni, 2018) and CFQ (Keysers et al., 2020). Our code is available on GitHub1.

## 1 Introduction

Neural seq2seq (Sutskever et al., 2014) is a powerful generic framework for the task of transforming an input sequence of arbitrary length into an output sequence of arbitrary length. Although seq2seq models can perform impressively in a great variety of tasks (Raffel et al., 2020; Lewis et al., 2020), they can still struggle in out-of-distribution generalization (e.g., systematic generalization or length generalization), and sometimes, even in simple algorithmic tasks (Kim et al., 2022; Dubois et al., 2020; Dehghani et al., 2019; Lake & Baroni, 2018; Liska et al., 2018). Even after extensive pre-training, neural models can show mixed results in such forms of generalization (Kim et al., 2022; Anil et al., 2022). In this paper, we focus on length generalization, i.e., the ability of a model to generalize to sequences of unseen (and typically higher) lengths. Particularly, we concentrate on enhancing the interlayer attention mechanism in seq2seq encoder-decoder models for improved length generalization. Similar to Csordas et al. (2022), we take a _bottom up_ approach to model development and explore the effects of different strategies of increasing complexities on a range of controlled synthetic probing tasks--each targeting a narrowly defined model behavior or phenomenon--to investigate which strategy works, to what extent, and why it does or does not work; thus, each task precisely pinpoints the models' capabilities as well as their limitations. Such a thorough investigation in a natural language domain can be difficult for at least the following reasons: (1) it can be hard to isolate the exact reasons of failure in natural language due to its complexities and diversity; (2) often there can be exploitable heuristics like emphasis on recency that may improve the overall length generalization performance but preserve systematic issues leading to failures in cases where the heuristics do not apply. Such failures may not be reflected in the overall evaluation if the heuristics apply in the majority of the samples. Besides these factors, the simple synthetic tests that we consider here can still be fairly challenging for neural models. We believe they offer an initial step toward the design of more general-purpose models. To achieve the above desideratum and evaluate the length generalization capability of different interlayer attention mechanisms, we set up ten synthetic probing tasks (see Table 1 and §2).
Following prior work (Graves et al., 2014; Dehghani et al., 2019; Liang et al., 2021), we first consider the task of simply copying source texts in both forward and backward (reverse) directions. Following Dubois et al. (2020), we also consider the compositional lookup table task (Liska et al., 2018) in both directions. However, as we will show in §2, in these tasks the ideal attention position can be trivially determined from the decoding timestep alone--a condition (let us call it C1) that simply allows the relative positional attention (Shaw et al., 2018; Dai et al., 2019) to perform perfectly given the right implementation. Thus, we propose new probing tasks involving repeated copy (ReCopy) and its variants to create settings where C1 is not satisfied. While there are already more synthetic tasks where C1 is not satisfied, our proposed tasks (ReCopy and its variants) are intended to be _small_ extensions over simple copy tasks such that the exact cause of model failure can be clearer compared to more complex tasks. Not only do we propose new probing tasks, but we also propose new strategies to tackle them. Prior models (Dehghani et al., 2019; Dubois et al., 2020) already struggled in reverse copy or reverse lookup tasks. We introduce a technique of interpolating forward and reversed encoded representations to handle the reverse direction even with simple relative attention (the technique is universally applicable to any seq2seq architecture). Moreover, we also propose new attention models, OneStep attention and monotonic location attention (our full model), to handle the proposed probing tasks on which the prior models fail. We also show that our models maintain comparable performance on the SCAN (Lake and Baroni, 2018) (a dataset for translating simple commands into sequences of actions) and CFQ (Keysers et al., 2020) length splits (a dataset for query-to-SQL translation).

## 2 Probing Tasks

We now describe the ten2 probing tasks used in this paper. We present examples for each task in Table 1.

Footnote 2: Twelve including tasks in Appendix A.

**Copy:** The copy task requires copying input tokens into the output tokens. In this case, the encoder-decoder network has to simply learn an identity function \((x=f(x))\). For this task we use a vocabulary of natural number tokens from \(0\)-\(9\) (see Table 1 for an example). We generated \(10,000\) training samples with sequence length in the range \(5\)-\(10\). For the development set, we generated \(2,000\) samples of sequence length \(10\)-\(15\). For the test sets, we generated a split with sequence length \(15\), another split with sequence length \(30\), and another with sequence length \(100\). Each test split has \(2,000\) samples.

**Reverse Copy:** In the reverse copy task, the model has to copy the input as above but in the reverse direction (see Table 1 for an example). This task is generated with the same parameters as the copy task.

**Lookup:** Lookup represents the "Long Lookup Tables" task (Liska et al., 2018) as made available in the code.3 For any input like "\(001\) \(t1\) \(t2\) \(t3\).", the output for this task will be "\(v1\) \(v2\) \(v3\) \(v4\)" where \(v1=001\), \(v2=t1(001)\), \(v3=t2(t1(001))\), and \(v4=t3(t2(t1(001)))\). Here, \(t1\), \(t2\), and \(t3\) are functions, each corresponding to some lookup table such that \(t_{i}:\{0,1\}^{3}\rightarrow\{0,1\}^{3}\) (for any natural number \(i\)).
The task is generated using the supplied code.3 The code generates a training split of approximately \(9,000\) samples of lengths \(\leq 6\). We consider three generated test splits that are of sequence lengths \(7\), \(9\), and \(11\). The first test split has approximately \(4,500\) samples whereas the others have approximately \(5,000\) samples. The development split consists of about \(500\) samples of sequence length \(\leq 6\) and approximately \(500\) samples of length \(7\).

Footnote 3: [https://github.com/i-machine-think/machine-tasks](https://github.com/i-machine-think/machine-tasks)

**Reverse Lookup:** Reverse Lookup represents the "Long Lookup Tables Reverse" task (Liska et al., 2018) as can be generated from the code.3 For any input like "\(t1\) \(t2\) \(t3\) \(001\).", the output for this task will be "\(v1\) \(v2\) \(v3\) \(v4\)" where \(v4=001\), \(v3=t3(001)\), \(v2=t2(t3(001))\), and \(v1=t1(t2(t3(001)))\). Here, \(t1\), \(t2\), and \(t3\) are lookup functions as before. The splits of this task are created similarly to those of the Lookup task described above.

\begin{table} \begin{tabular}{l|l|l} \hline **Task** & **Input** & **Output** \\ \hline Copy & 4 7 9 8 & 4 7 9 8 \\ Reverse Copy & 4 7 9 8 & 8 9 7 4 \\ Lookup & 010 t3 t4 t2 t6 t1 . & 010 011 01 01 001 \\ Reverse Lookup & t1 t6 t2 t4 t3 010 . & 010 01 01 01 01 001 \\ ReCopy & 4 7 9 8 & 4 4 4 7 7 7 7 7 9 9 9 9 9 8 8 8 8 8 \\ Reverse ReCopy & 4 7 9 8 & 8 8 8 8 8 9 9 9 9 9 7 7 7 7 7 4 4 4 \\ Inv ReCopy & 4 4 4 7 7 7 7 7 9 9 9 9 9 8 8 8 8 8 & 4 7 9 8 \\ Inv Reverse ReCopy & 8 8 8 8 8 9 9 9 9 9 7 7 7 7 7 4 4 4 & 4 7 9 8 \\ SCAN & look and run right & I\_LOOK I\_TURN\_RIGHT I\_RUN \\ \hline \end{tabular} \end{table} Table 1: Input-output examples for each task (except CFQ).

**ReCopy:** There is one thing that is common in the above tasks. For the forward tasks (Lookup, Copy), assuming that the encoder can keep the information of position \(i\) at position \(i\) after encoding with the necessary contextualization (e.g., composition of previous functions in case of Lookup), the ideal encoding position to attend to during decoding will always remain at the same constant distance from the decoding timestep. This is also true for the reversed tasks (Reverse Copy, Reverse Lookup) if the encoding is reversed. For example, to copy "4 7 9 8", at timestep \(1\) the model has to attend to position \(1\) to print \(4\). Next, at timestep \(2\) the model has to attend to position \(2\) to print \(7\). Thus, more generally, for any timestep \(t\) the model has to attend to an encoding position \(i\) such that \(i-t=c\) (where \(c\) is some constant; in this example, \(c=0\)). Even more generally, in all these tasks, the ideal position to attend to can be determined just based on the decoding timestep \(t\). For instance, for the above tasks, the ideal position of attention \(i\) can be defined as a function over the timestep as \(i=f(t)=t+c\). However, such a happy situation will not be maintained in more complex tasks.
Thus, we decide to create a new set of diagnostic/probing tasks that are close to the previous tasks but preclude the possibility of determining the position of attention just from the timestep. With this motivation, first, we introduce the task ReCopy (Repeated Copy). In this task, the vocabulary includes natural numbers in the range \(0\)-\(9\). If the input for the task is "4 7 9 8", then the corresponding output will be "4 4 4 7 7 7 7 7 9 9 9 9 9 8 8 8 8 8". Effectively, in this task, the model has to learn to not just copy but also to repeat the copied item a certain number of times before copying the next item. There is a specific set of rules behind how many times an item should be repeated. That is, if the item is a number \(\leq 3\) the model should print it once, if the item is a number \(x\) such that \(3<x\leq 6\) the model should print it three times, and for any other number \(>6\), the model should print it five times. The splits and sample sizes for this task are similar to those of the copy task. Our intention here is to make a small extension of the copy task that avoids determination of the attention position purely from the timestep but without introducing any additional sources of difficulty so that the causes of failures can be disentangled more easily. For instance, if a model succeeds in the copy task but fails in ReCopy, we can now reasonably infer that its cause of failure is the specific difficulty introduced in ReCopy. Note that if we made ReCopy a bit simpler by requiring each number to be copied and repeated with a uniform frequency, then the determination of the ideal position for attention would again become trivially possible just from the decoding timestep; thus ReCopy requires repeating with varying frequency depending on which number is being copied.

**Reverse ReCopy:** The Reverse ReCopy task is similar to the ReCopy task in all aspects other than the fact that the copying takes place in the reversed direction (see example in Table 1). The task splits are generated in the same way as in the Copy task.

**Inv ReCopy:** The Inv ReCopy task (Inverse ReCopy) is similar to the ReCopy task in all aspects other than the fact that the inputs and outputs are inverted (see example in Table 1). The task splits are generated in the same way as in the Copy task.

**Inv Reverse ReCopy:** The Inv Reverse ReCopy task (Inverse Reverse ReCopy) is similar to the Reverse ReCopy task in all aspects other than the fact that the inputs and outputs are inverted (see example in Table 1). The task splits are generated in the same way as in the Copy task.

**SCAN:** SCAN is a popular dataset used for testing systematic generalization (Lake and Baroni, 2018). It involves the task of translating simple commands into a sequence of actions. We explore its original length generalization split.

**CFQ:** CFQ is a realistic semantic parsing dataset (Keysers et al., 2020) proposed for evaluating compositional generalization. We explore its length generalization split. We also propose and explore two additional probing tasks (**DeDupe** and **PosRetrieve**) in Appendix A.
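A minimal sketch of how ReCopy-style data can be generated under the repetition rule stated above; this is our own illustrative code, not the authors' released data generator.

```python
import random

def recopy_output(tokens):
    """Repeat each copied token according to the ReCopy rule:
    x <= 3 -> once, 3 < x <= 6 -> three times, x > 6 -> five times."""
    out = []
    for x in tokens:
        reps = 1 if x <= 3 else 3 if x <= 6 else 5
        out.extend([x] * reps)
    return out

def make_recopy_sample(min_len=5, max_len=10):
    src = [random.randint(0, 9) for _ in range(random.randint(min_len, max_len))]
    return src, recopy_output(src)

print(recopy_output([4, 7, 9, 8]))
# [4, 4, 4, 7, 7, 7, 7, 7, 9, 9, 9, 9, 9, 8, 8, 8, 8, 8]
```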
## 3 Seq2Seq General Framework A seq2seq model can be formalized as a function \(F_{seq2seq}:\mathbb{N}^{s}\rightarrow\mathbb{N}^{z}\) that maps some input sequence \(x_{1:s}=(x_{1},x_{2},\ldots,x_{s})\) of length \(s\) to an output sequence \(y_{1:z}=(y_{1},y_{2},\ldots,y_{z})\) of length \(z\). Here each element in \(x_{1:s}\) and \(y_{1:z}\) is a natural number that indexes some distinct token from a vocabulary. \(F_{seq2seq}\) constitutes two major components: an encoder function (\(F_{enc}\)) and a decoder function (\(F_{dec}\)). The encoder \(F_{enc}:\mathbb{N}^{s}\rightarrow\mathrm{I\!R}^{s\times d}\) maps the initial token indices \(x_{1:s}\) to a sequence of hidden state representations \(e_{1:s}=(e_{1},e_{2},\ldots,e_{s})\) (where any \(e_{i}\in\mathrm{I\!R}^{d}\)). The decoder \(F_{dec}:\mathbb{N}^{s}\times\mathbb{N}\rightarrow\mathbb{N}\) generates the output sequence \(y_{1:z}\) recursively one token at a time, typically in an autoregressive manner. That is, at each timestep \(t\) (beginning at \(t=1\)), \(F_{dec}\) takes as input the history of all previously generated tokens \(H^{t-1}=(go,y_{1},y_{2},\ldots,y_{t-1})\) and the last generated token index \(y_{t}\) and outputs \(y_{t+1}\). \(H^{0}\) is initialized with \((go)\) where \(go\) represents the index of a special token that marks the beginning of the generation. One salient component within the decoder is an interlayer (cross) attention function (Bahdanau et al., 2015) that allows the decoder to interact and retrieve encoded state information. The decoder, in timestep \(t\), will typically create history-contextualized representation \(h^{t-1}\in\mathrm{I\!R}^{d}\) (compressing \(H^{t-1}\)). Let query \(q_{t-1}=f_{q}(h_{t-1})\), keys \(k_{i}=f_{k}(e_{i})\), and values \(v_{i}=f_{v}(e_{i})\) (\(\forall i\in\{1,\ldots,s\}\)) where \(f_{q},f_{k}\), and \(f_{v}\) are linear transformations (\(f_{q|k|v}:\mathrm{I\!R}^{d}\rightarrow\mathrm{I\!R}^{d}\)). In the attention layer, the query interacts with the keys to score each corresponding value. A weighted sum of values based on those scores is then computed as the result of the attention function. This allows the decoder to dynamically and softly retrieve information from any position in the encoded representations \(e_{1:s}\) at any timestep. For our work, we explore a variety of cross-attention functions which we discuss below. ## 4 Prior Approaches to Cross-Attention ### Content Attention As a baseline, we use the popular scaled inner dot-product query-key based attention as used in Vaswani et al. (2017): \[c_{ti}=\frac{<q_{t},k_{i}>}{\sqrt{d}}, \tag{1}\] \[a_{ti}=\frac{\exp(c_{ti})}{\sum_{j=1}^{s}\exp(c_{tj})},\;o_{t}=f_{o}(\sum_{j=1}^ {s}a_{tj}\cdot v_{j}), \tag{2}\] where \(f_{o}:\mathds{R}^{d}\rightarrow\mathds{R}^{d}\) is a linear transformation, \(c_{ti},a_{ti}\in\mathds{R}\) and \(o_{t}\in\mathds{R}^{d}\). Note that this is a fully _content-based attention_ because it does not explicitly use any position or distance-related information about the query or keys. ### Relative Attention As another baseline, we use the relative attention mechanism4 as used in Dai et al. (2019). Effectively, a sinusoidal positional encoding (Vaswani et al., 2017; Dai et al., 2019) is first used to create embeddings for each relative distance. Let \(pe_{k}\in\mathds{R}^{d}\) represent the embedding for the distance \(k\in\mathbb{Z}\). 
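A minimal sketch of how such distance embeddings \(pe_{k}\) can be produced with the standard sinusoidal scheme; this is a common construction under our assumptions, and the exact implementation used by the authors may differ.

```python
import torch

def sinusoidal_embedding(distances, d):
    """Sinusoidal embeddings pe_k for (possibly negative) relative distances k."""
    k = distances.float().unsqueeze(-1)                   # (..., 1)
    i = torch.arange(0, d, 2, dtype=torch.float)          # even dimensions
    freq = torch.exp(-torch.log(torch.tensor(10000.0)) * i / d)
    angles = k * freq                                      # (..., d/2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (..., d)

# e.g., embeddings for relative distances i - t in [-4, 4] with d = 8
pe = sinusoidal_embedding(torch.arange(-4, 5), d=8)
```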
Then, the relative position attention creates a query-key score sequence based on the corresponding relative distances between the query and the keys: Footnote 4: Initially the idea was introduced in Shaw et al. (2018). \[r_{ti}=\frac{<(q_{t}+b_{2}),pe_{i-t}>}{\sqrt{d}} \tag{3}\] where \(b_{2}\in\mathds{R}^{d}\) is a bias for position attention and \(r_{ti}\in\mathds{R}\). This is integrated with content-based attention by modifying Eqn. 1 in SS4.1 as: \[c_{ti}=\frac{<(q_{t}+b_{1}),k_{i}>}{\sqrt{d}}+r_{ti} \tag{4}\] \(b_{1}\in\mathds{R}^{d}\) is a bias for content-based attention. Everything else is kept the same as was for content-based attention. ### Location Attention Location attention, as introduced in Dubois et al. (2020), is primarily a form of attention based only on the positions of the encodings \(e_{1:s}\); however, it is more expressive than the relative positional attention. Here we discuss the details of location attention with some refinements. Dubois et al. (2020) introduced a method to determine the locational "center of focus" for attention which is made to resemble human attention in visual search in how even when it focuses on a specific part of the input, it also perceives neighboring parts due to the eccentricity effect (Carrasco et al., 1995). Let \(\mu_{t}\in\mathds{R}\) represent the center of focus such that positions close to \(\mu_{t}\) get higher attention than those farther away. With such \(\mu_{t}\), an attention spread can be modeled by using \(\mu_{t}\) as a mean in a Gaussian distribution: \[\lambda_{ti}=\exp\left(-\frac{(i-\mu_{t})^{2}}{2\cdot\sigma_{t}^{2}}\right) \tag{5}\] Here \(\sigma_{t}\) is the standard deviation, which determines the spread of the attention focus. However, using raw values of \(i\) and \(\mu_{t}\) is not ideal because the range of values (especially of \(i\)) can differ based on sequence length. This becomes more problematic for unseen length generalization. Thus, the formalism is modified as follows: \[\lambda_{ti}=\exp\left(-\frac{(norm(i)-clamp(\mu_{t}))^{2}}{2\cdot\sigma_{t}^ {2}}\right) \tag{6}\] where: \[norm(i)=\frac{i-1}{max(1,s-1)} \tag{7}\] \[clamp(\mu_{t})=max(0+m\cdot\mu_{t},min(1+m\cdot\mu_{t},\mu_{t})) \tag{8}\] Note that the encoder position index ranges from \(1\) to \(s\) where \(s\) is the sequence length. The \(norm()\) function squeezes any position index \(i\) into the range \([0,1]\) no matter the sequence length. Further, the \(clamp()\) function enforces \(\mu_{t}\) to be approximately within \([0,1]\) which is the possible range of positions that can be attended. Following Dubois et al. (2020), \(m\) in \(clamp()\) acts as a negative slope (\(m=0.01\)) to add some "leakiness" similar to LeakyReLU. Note that the result is a PDF over the whole real number set whereas only the discrete positions of the encoding matter. Thus, \(\lambda_{ti}\) can be further normalized to get a discretized probability measure over only the relevant positions: \[\lambda_{ti}^{\prime}=\frac{\lambda_{ti}}{\sum_{j=1}^{s}\lambda_{tj}} \tag{9}\] This gives the location attention. Below we discuss how to obtain \(\mu_{t}\) and \(\sigma_{t}\). First, a transformation over the decoder hidden state \(h_{t}\) is created as \(l_{t}=f_{l}(h_{t})\) where \(f_{l}:\mathds{R}^{d}\rightarrow\mathds{R}^{d}\) is a linear transformation.5 Next, \(\sigma_{t}\) is computed as: Footnote 5: Dubois et al. (2020) used a GRU for \(f_{l}\). However, in our experiments we removed it because we did not find it effective. 
\[\sigma_{t}=\frac{ReLU(f_{\sigma_{t}}(l_{t}))+min_{\sigma_{t}}}{s} \tag{10}\] Here \(f_{\sigma_{t}}:\mathds{R}^{d}\rightarrow\mathds{R}\) is a linear transform and \(min_{\sigma_{t}}\) is the minimum value for \(\sigma_{t}\). Next, \(\mu_{t}\) is computed by taking some steps (in either forward or backward direction) with respect to some reference position ref\({}_{t}\). Given that the \(norm(.)\) function will squeeze any discrete position index into a continuous number in \([0,1]\), ref\({}_{t}\) can also be treated to be in \([0,1]\). Formally, \(\mu_{t}\) is computed as: \[\mu_{t}=\text{ref}_{t}+\text{stepsize}\cdot\text{steps}_{t} \tag{11}\] Here, stepsize is \(\frac{1}{max(1,s-1)}\). For the reference point \(\text{ref}_{t}\), different possible choices can be considered. One possible choice is the previous attended position \(pa_{t-1}\) which is computed as \(pa_{t-1}=\sum_{i=1}^{s}\alpha_{t-1i}\cdot norm(i)\) where \(\alpha_{t-1i}\) represents the interlayer attention at the previous timestep (\(t-1\)) to the encoding position \(i\). Essentially, with this setup, the attention model can move left or right with respect to previously attended position. Another choice for the reference point is to simply make a neural network-based logistic prediction to choose any arbitrary position \(b_{t}\) in \([0,1]\) as \(b_{t}=sigmoid(f_{b}(l_{t}))\) where \(f_{b}:\mathbbm{R}^{d}\rightarrow\mathbbm{R}\) is a linear transform. Here \(b_{t}\) can also help initialize \(\text{ref}_{t}\) to adaptively learn to start attending from the beginning of the encoding (\(i=1\)) or the end of the encoding (\(i=s\)) (or even somewhere in-between if needed) based on the task. Ultimately, we can allow the model itself to learn to choose or combine both \(pa_{t-1}\) and \(b_{t}\) as needed: \[\text{ref}_{t}=g_{t}\cdot pa_{t-1}+b_{t} \tag{12}\] with \(g_{t}\) being a real scalar in \([0,1]\) functioning as a gate that decides to keep or ignore \(pa_{t-1}\). It is computed as \(g_{t}=sigmoid(f_{g}(l_{t}))\) where \(f_{g}:\mathbbm{R}^{d}\rightarrow\mathbbm{R}\) is a linear transform. Next the steps to take (i.e., \(\text{steps}_{t}\)) with respect to the reference point are determined as follows: \[\text{steps}_{t}=\text{softstair}(f_{step}(l_{t})) \tag{13}\] where \(f_{step}:\mathbbm{R}^{d}\rightarrow\mathbbm{R}\) is again a linear transform and softstair is an activation function that pushes the output towards an integer: \[\text{softstair}(x)=\lfloor x\rfloor+sigmoid(\tau\cdot(x-\lfloor x\rfloor-0.5)) \tag{14}\] \(\tau\) is a temperature hyperparameter which is set to \(20\) like in Dubois et al. (2020). Last, the attention is computed as a convex combination of content attention and location attention: \[a_{ti}=mix_{ti}\cdot\left(\frac{\exp(c_{ti})}{\sum_{j=1}^{s}\exp(c_{tj})} \right)+(1-mix_{ti})\cdot\lambda_{ti}^{\prime} \tag{15}\] \[mix_{ti}=sigmoid(\beta f_{mix}(h_{t})) \tag{16}\] Here \(f_{mix}:\mathbbm{R}^{d}\rightarrow\mathbbm{R}\) is a linear transform and \(c_{ti}\) corresponds to the content-based pre-normalized attention scores as computed in Eqn. 1. In some cases, we might want to ignore the content attention focusing purely on location attention. In such cases, we can set \(a_{ti}=\lambda_{ti}^{\prime}\). ## 5 Proposed Approaches to Cross-Attention In this section, we first present the limitations of the prior approaches discussed above and then present (in a _bottom-up_ manner) our proposed changes that address them. 
### Limitations of Prior Approaches **Limitation 1 (handling reverse tasks):** As noted earlier (see ReCopy task description in SS2), in some tasks like Copy or Lookup, the target cross-attention position is always at the same constant relative distance from the timestep. In such cases, the inductive bias from the relative attention (SS4.2) can be especially fruitful. However, this setup is not maintained by default (without reversing the encoding or the input in the model), in the reverse directions of the tasks (Reverse Copy or Reverse Lookup). Consider transforming "\(4\)\(7\)\(9\)\(8\)" to "\(8\)\(9\)\(7\)\(4\)". In this case to print \(8\) in timestep \(t=1\), the model needs to attend to encoding position \(i=4\). Thus, the relative distance will be \(i-t=3\). However, for printing \(9\) in timestep \(t=2\), the model needs to attend to the encoding position \(i=3\). Then the relative distance will be \(i-t=1\). Thus, the ideal relative distance can vary with timestep and also depends on the source sequence length. These facts make it a struggle for relative attention, by default, to work on reverse tasks. In theory, location attention is equipped to handle reverse tasks - it has to just initialize \(b_{t}\) as \(1\) and \(g_{t}\) as \(0\) when \(t=1\). This will set \(ref_{t}=1\), i.e., the reference position will be the end of the input sequence. From that point location attention can learn to take steps backward one by one using previous attention (\(pa_{t-1}\)) as the reference position if needed. However, in practice, location attention still tends to be empirically brittle and have been shown to fail the reverse lookup task (Dubois et al., 2020). **Limitation 2 (handling ReCopy and beyond):** As discussed in SS2 (see ReCopy description), tasks like ReCopy, Reverse ReCopy, or their inverted variants are specifically designed to serve as settings in which the ideal attention position can vary from timestep to timestep (no matter if the encoding positions are reversed or not). Thus, this setting becomes hard for relative attention. Location attention, again, can theoretically address these situations given its flexibility to keep track of past attended position and ability to take any arbitrary steps in reference to past attended position dependent on the decoder state. Nevertheless, as mentioned earlier, in practice location attention turns out to be relatively brittle. Moreover, its use of soft sigmoid-based gating for making decisions at different stages of the model can lead to higher error accumulation and lower robustness to increasing lengths of data. ### Bidirectional Relative Attention First, we propose a simple approach to extend relative attention in a manner that addresses limitation 1. We note that if the task is, e.g., reverse copy, we can simply reverse the encoded sequence representations after encoding. Once done so, from the perspective of the decoder, the task becomes equivalent to forward copy. Nevertheless, in practice, we will not know ahead of time whether we are facing a task where forward version of the encoding is more ideal or the reversed version of the encoding. 
Thus, we use a gating mechanism that interpolates (makes a convex combination of) the two directions of encoding so that the model can adaptively decide whether to reverse the encodings or not: \[e_{1:s}^{rev}=\text{reverse}(e_{1:s}),\;\alpha_{dir}=\text{sigmoid}(\beta\cdot f_{dir}(e_{cls})) \tag{17}\] \[\forall i\in\{1,\dots,s\}\;\;e_{i}^{dir}=\alpha_{dir}\cdot e_{i}+(1-\alpha_{dir})\cdot e_{i}^{rev} \tag{18}\] \(\beta\) is a scalar (acting as a temperature), \(f_{dir}:\mathbbm{R}^{d}\rightarrow\mathbbm{R}^{1}\) is a linear layer, and \(e_{cls}\in\mathbbm{R}^{d}\) is a vector representation of the whole sequence \(e_{1:s}\) - it can be implemented in multiple ways (we explain our implementation in Appendix E). After this, we use the same strategy as in §4.2 but with key and value transformations over \(e^{dir}\) instead of \(e\). This trick can also be useful in more practical tasks like translation, where the source and target language may have different reading orders. Note that \(e^{dir}\) is different from the outputs of models like bidirectional RNNs. Unlike here, in a bidirectional RNN the encoded tokens remain in the same positions; only the contextualizing information comes from different directions. Also, note that this strategy is as general purpose as introducing bidirectionality to RNNs. Moreover, we allow the neural network to dynamically predict the direction for a given input through the gating mechanism, thus avoiding the infusion of task-specific knowledge about the ideal direction of attention. Footnote 6: While this strategy may appear obvious, it has not been explored so far to our knowledge. Moreover, theoretical motivation does not always translate well to empirical performance. For instance, Location Attention struggles in reverse tasks despite having the theoretical capacity for reverse attention as discussed in §5.1. So the empirical benefit of this strategy is not a priori obvious and deserves the investigation that we do here.
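As a concrete reference, the following is a minimal PyTorch sketch of the direction gate in Eqns. 17-18. It is an illustrative reimplementation rather than the code used in our experiments: tensors are unbatched, the mean of the encodings stands in for \(e_{cls}\), and the temperature value is an arbitrary assumption.

```python
import torch
import torch.nn as nn

# Sketch of the direction gate of Eqns. 17-18 (illustrative only).
# e is an (s, d) tensor of encoder states; e_cls a (d,) summary vector.
class DirectionGate(nn.Module):
    def __init__(self, d, beta=5.0):          # beta: assumed temperature value
        super().__init__()
        self.f_dir = nn.Linear(d, 1)
        self.beta = beta

    def forward(self, e, e_cls):
        e_rev = torch.flip(e, dims=[0])                        # reverse(e_{1:s})
        alpha = torch.sigmoid(self.beta * self.f_dir(e_cls))   # Eqn. 17
        return alpha * e + (1 - alpha) * e_rev                 # Eqn. 18

gate = DirectionGate(d=8)
e = torch.randn(6, 8)                  # s = 6 encoder states
e_dir = gate(e, e.mean(dim=0))         # mean pooling as a stand-in for e_cls
```

Relative attention then simply builds its key and value transformations from `e_dir` instead of `e`.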
### OneStep Attention As discussed in §5.1, fixing limitation 1 by reversing the encodings (as in §5.2) still does not address limitation 2. Concerned with limitation 2, we move away from simple relative positional attention and instead seek to make adjustments over location attention to address its potential issues (see §5.1). As a result, we propose a new attention model - OneStep attention. Below we enumerate the main adjustments over location attention (from §4.3): 1. OneStep attends to key-value transformations of \(e_{1:s}^{dir}\) instead of \(e_{1:s}\), similar to §5.2. 2. The computation of \(\text{ref}_{t}\) is simplified as: \(\text{ref}_{t}=pa_{t-1}\) 3. The activation function in Eqn. 13 is changed from softstair to sigmoid: \(\text{steps}_{t}=\text{sigmoid}(f_{step}(l_{t}))\) **First Change:** The first change follows from §5.2 and is motivated to address limitation 1. **Second Change:** The second change is motivated by a number of factors. First, due to the incorporation of the first change, the role of \(b_{t}\) from Eqn. 12 is severely undermined. It is no longer necessary for \(b_{t}\) to initialize the starting position of attention to handle reverse tasks. Besides that, the usefulness of \(b_{t}\) can be limited. It was motivated by percentile attention in Dubois et al. (2020), which may not be as relevant here or can be accommodated by content attention mixing (Eqn. 15). So we removed it. To reduce error accumulation, we also remove the gating \(g_{t}\) over \(pa_{t-1}\), thus ultimately setting \(\text{ref}_{t}=pa_{t-1}\). It removes the model's capacity for attending to some specific absolute position from the beginning/end, but this capacity is also lacking from relative attention and is not currently required by most of our tasks. We leave a better incorporation of this capacity to future work. Currently, absolute positional encoding in the encoder combined with content attention mixing can still compensate for this lack to an extent. Footnote 7: It can still be useful in special cases when the model has to attend to some position \(x\) from the end in one timestep and some position \(y\) from the beginning in another. **Third Change:** In the third change, we replace softstair with a sigmoid for the step computation. The sigmoid function forces the model to softly choose between taking a single step forward (\(\text{steps}_{t}=1\)) or none (\(\text{steps}_{t}=0\)). We added this change because giving unrestricted freedom in determining the steps can make it harder for the model to learn the right function. Particularly, in most of our current diagnostic tasks, it is sufficient to learn to make bounded steps in \([0,1]\) with respect to the past attended position. While this choice is perhaps not ultimately ideal, it helps us evaluate the breaking points of the Location Attention framework better. Regardless, even after this restriction, OneStep can still be powerful enough to simulate a windowed version of relative attention (if it takes a single step in every timestep) (Shaw et al., 2018). Moreover, a sufficiently powerful encoded representation can, in theory, always reorganize or permute the input information to accommodate this restriction. Besides, content attention mixing (Eqn. 15) can break the monotonicity of OneStep and make it more flexible. Footnote 8: By itself, without content attention mixing, OneStep is monotonic because in it, the center of focus can only move forward with time. ### Monotonic Attention In some tasks, it can be easier to learn to take bigger steps at the level of interlayer attention instead of expecting the encoder to permute the source input appropriately. So, we create another attention function where we relax the constraints in OneStep by changing the steps computation as: \[\text{steps}_{t}=g\cdot\text{sigmoid}(f_{step}(l_{t}))+(1-g)\cdot ReLU(f_{step}(l_{t})) \tag{19}\] Here, \(g=\text{sigmoid}(p)\) where \(p\in\mathds{R}\) is a model parameter. As we can see, with this setup we allow the model itself to learn to prefer either taking controlled steps with a sigmoid or possibly bigger steps with a ReLU. We still use the ReLU activation to keep the attention monotonic (i.e., the attention mechanism can only make forward steps), similar to OneStep, for the reasons discussed in §5.3 (in Third Change). Footnote 9: In the future, it can be better to have \(g\) dependent on the input encoding such as \(e^{cls}\) in case we want a multi-tasking model.
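To make the contrast between the two step rules concrete, here is a small illustrative sketch; tensor shapes, initialization, and the class name are assumptions that simply mirror the symbols in the text, not the experimental code.

```python
import torch
import torch.nn as nn

# Sketch of the step rules: OneStep uses a sigmoid (steps in (0,1)),
# monotonic attention uses the gated sigmoid/ReLU combination of Eqn. 19.
# In both, the reference position is the previously attended position pa_{t-1}.
class StepRule(nn.Module):
    def __init__(self, d, monotonic=False):
        super().__init__()
        self.f_step = nn.Linear(d, 1)            # f_step in the text
        self.p = nn.Parameter(torch.zeros(1))    # scalar parameter behind g
        self.monotonic = monotonic

    def forward(self, l_t, pa_prev, s):
        x = self.f_step(l_t)
        if self.monotonic:
            g = torch.sigmoid(self.p)                                  # Eqn. 19
            steps = g * torch.sigmoid(x) + (1 - g) * torch.relu(x)
        else:
            steps = torch.sigmoid(x)              # OneStep: at most one step
        return pa_prev + steps / max(1, s - 1)    # new center of focus mu_t

mu_t = StepRule(d=8, monotonic=True)(torch.randn(8), torch.tensor(0.2), s=10)
```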
## 6 Experimental Setup Similar to Dubois et al. (2020), we use a Bidirectional GRU (Chung et al., 2014) based seq2seq model as the base for all the attention mechanisms. We explain more architectural details and hyperparameters in Appendix E. **Nomenclature:** In Tables 2 and 3, we use the term _Content_ to refer to content attention (§4.1), _Relative_ to refer to relative attention (§4.2), and _Bi-Relative_ for bi-directional relative attention (§5.2). We use the terms _LocAttn_, _OneStepAttn_, and _MonoAttn_ for location attention (§4.3), OneStep attention (§5.3), and monotonic attention (§5.4), respectively, if they are used without mixing in content attention (i.e., replacing Eqn. 15 with \(a_{ti}=\lambda^{\prime}_{ti}\)). Otherwise, we use the terms _Mix LocAttn_, _Mix OneStepAttn_, and _Mix MonoAttn_ when mixing with content attention is done (i.e., Eqn. 15 is kept as described). We generally use the unmixed variants on the simpler diagnostic tasks (Lookup, Copy, or ReCopy-based tasks) because position-based attention is what is mainly relevant for these tasks. **Evaluation:** We calculate the sequence-level accuracy of our models. Any generated output gets a score of \(1\) if and only if it matches exactly with the given target output. **On the EOS problem:** The EOS token is a special marker that a model needs to generate to signify the end of a sequence. In similar contexts, some prior works have tried to make the evaluation less stringent (Dubois et al., 2020; Newman et al., 2020) by terminating the model generation based on the oracle EOS position or by truncating the oracle sequence based on the predicted EOS position. We do not modify the evaluation in any such non-standard manner. Generally, we do not find EOS prediction to be a problem. If the inductive bias is suitable for the task, our models learn to generalize near perfectly without us needing to incorporate any separate mechanism to predict EOS properly. ## 7 Experimental Results In Table 2 we show the results of our different attention strategies on all our diagnostic tasks except SCAN and CFQ. The results are close to what we would expect a priori. Pure content attention (Content) without more explicit guidance from any positional information suffers in all the tasks. Relative attention (Relative) does well in the forward copy and lookup tasks, but it fails in the reversed tasks for the reasons discussed in §5.2. It also fails in the ReCopy-based tasks. This is consistent with the limitations of prior works discussed in §5.1. Also consistent with this discussion, we find our implementation of location attention (LocAttn) to struggle in all the tasks. Bidirectional relative attention (Bi-Relative) succeeds on both forward and reverse directions of the copy and lookup tasks. This is aligned with our motivation for designing it (§5.2). However, bidirectional relative attention still does not alleviate the second limitation (§5.1) and thus fails in the ReCopy-based tasks. OneStep attention (OneStepAttn) succeeds on nearly all tasks except the inverted variations of the ReCopy tasks. The Copy and Lookup tasks are easy to learn for OneStep attention because in either task it simply has to learn to take one step forward relative to the past attended position in every timestep. The ReCopy and Reverse ReCopy tasks are slightly more complicated but still not too hard to learn. In these cases, the model has to learn to wait (predict \(steps_{t}=0\)) while the decoder is repeating previous generations. The attention model then has to predict \(steps_{t}=1\) to move one step forward in the encoding positions after the repetition of the content from the past attended position is complete. Thus, the OneStep strategy is suitable for the ReCopy and Reverse ReCopy tasks as well. However, the OneStep strategy faces an issue for the inverted versions of the tasks. Consider an Inv ReCopy sample where the input is "4 4 4 7 7 7 7 7 9 9 9 9 8 8 8 8 8" and the output is "4 7 9 8".
In this case, one way to solve this would be for the encoder to radically re-organize the positions of the input information. But if the encoder fails to do that and keeps the encoded information close to its original position, OneStep attention, by itself, is ill-equipped for the task. In the given example, after printing \(4\) from encoding position \(1\), in the next timestep it has to take not just one but three steps forward. OneStep attention cannot do that because its number of steps is constrained by a sigmoid. In contrast to OneStep attention, monotonic attention (MonoAttn) is more flexible, allowing bigger steps when needed. As such, monotonic attention is able to solve the Inv ReCopy tasks that OneStep could not. It also performs perfectly on the copy tasks and ReCopy tasks in both directions. However, it fails on the lookup tasks. It seems that its increased flexibility (loosened inductive bias) and its ability to make more uncontrolled steps (which are unnecessary for the lookup tasks) at the same time make it harder for the model to learn the lookup tasks in a length-generalizing manner. Ultimately, both OneStep attention and monotonic attention perform better than any of the other attention models. Both solve \(6\) out of the \(8\) tasks in Table 2 with \(100\%\) accuracy. However, we also discover a trade-off - the restricted steps of OneStep attention preclude it from solving the inverted versions of the ReCopy tasks, whereas the more unconstrained steps of monotonic attention manage the inverted ReCopy tasks but at the cost of the lookup tasks. In Table 3, we present the results on SCAN. We find that location attention and our extensions of it (OneStep attention or monotonic attention) generally also perform better on the task of translating simple commands into sequences of actions than other forms of interlayer attention, even though they are not designed explicitly with the structure of the SCAN task in mind. OneStep attention (Mix OneStepAttn) performs notably better than the others on SCAN. In the same table, we also present the results on CFQ. Interestingly, the basic position-encoding-less version of interlayer attention does the best here. However, both OneStep and monotonic attention keep up with it better than the others, such as location attention or unidirectional relative attention. ### Additional Analyses **Ablations:** In Table 4, we show some of the main ablations of OneStep attention. \(-\)Step \(2\) represents using the more sophisticated location attention variant of the \(\text{ref}_{t}\) computation (Eqn. 12) instead of the proposed \(ref_{t}=pa_{t-1}\) change in step \(2\) of §5.3. \(-\)Step \(3\) represents using the softstair activation for the step computation (Eqn. 13) from location attention instead of the proposed sigmoid activation in the step \(3\) change of OneStep (§5.3). \(-\)Sigmoid represents removing the activation function from Eqn. 13 altogether. As the ablation shows, both of our proposed changes are important to succeed in most of the tasks. Interestingly, we find here that having no activation at all in Eqn. 13 generally serves us better than having the softstair activation. Besides that, the ablation results support our original motivation for proposing Step \(2\) and Step \(3\) in OneStep attention. We show several more ablation tests in Appendix B. **Additional Tasks:** In Appendix A, we introduce and explore two additional tasks - **DeDupe** and **PosRetrieve**.
**Alternate Evaluation:** In Appendix C we evaluate the models on edit distance instead of exact match accuracy. Edit distance serves as a more fine-grained evaluation. \begin{table} \begin{tabular}{l|l l l|l l l|l l l|l l l} \hline \hline **Model** & \multicolumn{4}{c|}{**Copy**} & \multicolumn{4}{c|}{**Reverse Copy**} & \multicolumn{4}{c|}{**Lookup**} & \multicolumn{4}{c}{**Reverse Lookup**} \\ (Length Splits) & \(15\) & \(30\) & \(100\) & \(15\) & \(30\) & \(100\) & \(7\) & \(9\) & \(11\) & \(7\) & \(9\) & \(11\) \\ \hline Content & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(33.3\) & \(0\) & \(0\) & \(3.7\) & \(0\) & \(0\) \\ Relative & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(0\) & \(0\) & \(0\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(78.2\) & \(0.8\) & \(0.4\) \\ LocAttn & \(99.8\) & \(0\) & \(0\) & \(0.7\) & \(0\) & \(0\) & \(\mathbf{100}\) & \(9.4\) & \(0\) & \(13.3\) & \(0\) & \(0\) \\ \hline Ours & & & & & & & & & & & & \\ \hline Bi-Relative & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) \\ OneStepAttn & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) \\ MonoAttn & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(98\) & \(29.9\) & \(28.5\) & \(0\) & \(0\) \\ \hline **Model** & \multicolumn{4}{c|}{**ReCopy**} & \multicolumn{4}{c|}{**Reverse ReCopy**} & \multicolumn{4}{c|}{**Inv ReCopy**} & \multicolumn{4}{c}{**Inv Reverse ReCopy**} \\ (Length Splits) & \(15\) & \(30\) & \(100\) & \(15\) & \(30\) & \(100\) & \(15\) & \(30\) & \(100\) & \(15\) & \(30\) & \(100\) \\ \hline Content & \(19.1\) & \(0\) & \(0\) & \(25\) & \(0\) & \(0\) & \(0.05\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ Relative & \(43.1\) & \(0\) & \(0\) & \(0.1\) & \(0\) & \(0\) & \(75.9\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ LocAttn & \(79.6\) & \(0\) & \(0\) & \(19.7\) & \(0\) & \(0\) & \(99.4\) & \(58.8\) & \(0\) & \(97.9\) & \(0.3\) & \(0\) \\ \hline Ours & & & & & & & & & & & \\ \hline Bi-Relative & \(33.4\) & \(0\) & \(0\) & \(35.3\) & \(0\) & \(0\) & \(69.8\) & \(0\) & \(0\) & \(71.3\) & \(0\) & \(0\) \\ OneStepAttn & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(0.1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ MonoAttn & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{100}\) & \(\mathbf{98.8}\) & \(\mathbf{100}\) & \(\mathbf{99.9}\) & \(\mathbf{98.3}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy of the models on different length generalization splits in different algorithmic diagnostic / probing tasks. We present the median of five runs on different seeds. We bold the best results. 
\begin{table} \begin{tabular}{l|l|l} \hline \hline **Model** & **SCAN (Len.)** & **CFQ (Len.)** \\ \hline Content & \(17.61\pm 4.07\) & \(\mathbf{62},\mathbf{14}\pm\mathbf{0.88}\) \\ Relative & \(19.21\pm 5.52\) & \(56.64\pm 1.84\) \\ Mix LocAttn & \(20.74\pm 5.69\) & \(44.83\pm 9.45\) \\ \hline Ours & & & & \\ \hline Bi-Relative & \(8.41\pm 1.21\) & \(59.48\pm 1.54\) \\ Mix OneStepAttn & \(\mathbf{29.51\pm 9.46}\) & \(60.65\pm 3.74\) \\ Mix MonoAttn & \(21.08\pm 7.17\) & \(60.32\pm 3.58\) \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy on SCAN length split and CFQ length split. We report the mean and standard deviation of \(5\) runs for SCAN and of \(3\) runs for CFQ. We bold the best results. **Examples:** In Appendix D we present some example failure cases of OneStep attention and monotonic attention. ## 8 Limitations First, although OneStepAttn and MonoAttn perform better than LocAttn in general, they are also more restricted. Nevertheless, OneStepAttn and MonoAttn show the potential of the LocAttn-like framework with restrained room for error accumulation and slightly stronger inductive biases. Ideally, we want to improve upon them in the future to get both higher flexibility and good performance. Moreover, when building specific modelling capacities (say attending to absolute positions), we should also consider building appropriate synthetic tasks for sanity checking in a similar spirit as done in this paper. In Appendix A, we propose PosRetrieve which can be a sanity check for absolute position attention capability for future developments. Second, our experiments are limited to mainly synthetic tasks most of which require purely location-based attention10 but no complex synergy between content-based attention and position-based attention. More synthetic tasks for sanity checking such capacities can be built. Footnote 10: Although, we should note that despite their simplicity, the tasks still have been difficult to solve perfectly (Dubois et al., 2020; Dehghani et al., 2019; Liang et al., 2021) Third, our exploration is currently limited to RNN-based seq2seq models. One reason for focusing on RNNs is because vanilla non-pretrained Transformers encoders can struggle to solve tasks like lookup table for decoder to do its job without specific changes (Csordas et al., 2022). Moreover, integration of location attention into Transformers is complicated by the fact that they use multiple layers of cross-attention in each timestep introducing additional variables to consider (the problem is not that our methods cannot be integrated with Transformers but that there are many ways to do so). Given these added variables, we leave investigations with Transformers for future work. ## 9 Conclusion We introduce several new probing tasks - ReCopy and its variants (some others in Appendix A) to enable additional diagnoses of length generalization performance of neural models. Although our proposed tasks are simple, this very simplicity can allow better isolation of failure cases and provide sanity checks for locational reasoning skills. Moreover, the new tasks are still challenging enough that none of the models explored here succeed in all of them. We propose a way to softly switch between the forward encodings and its reversed version to get near perfect performance in reverse variants of copy and lookup tasks that have been previously challenging to solve. 
We illuminate the limits of location attention and show how certain modifications in the form of OneStep attention and monotonic attention can bring massive improvement. Although, the modifications bring stronger inductive biases than location attention, they can still simulate windowed relative attention and empirically demonstrate more stable performance across datasets including more realistic ones like CPQ. Monotonic attention or OneStep attention can also be more broadly applicable in any context requiring _list traversal_ i.e. monotonic traversal through a list of items in a backpropagation-friendly manner -- for example, one application can be skill selection with a dynamic time horizon instead of a fixed one (Garg et al., 2022). OneStep attention is suitable if the only relevant choice during the traversal is to either stay at a position or move to the next position by a single step. Monotonic attention is suitable if we also want to allow the model to skip positions during traversal. \begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{\begin{tabular}{l} **Model** \\ (Length Splits) \\ \end{tabular} } & \multicolumn{3}{c|}{**Copy**} & \multicolumn{3}{c|}{**Reverse Copy**} & \multicolumn{3}{c|}{**Lookup**} & \multicolumn{3}{c}{**Reverse Lookup**} \\ & \(15\) & \(30\) & \(100\) & \(15\) & \(30\) & \(100\) & \(7\) & \(9\) & \(11\) & \(7\) & \(9\) & \(11\) \\ \hline \hline \multirow{2}{*}{\begin{tabular}{l} OneStepAttn \\ \(-\)Step \(3\) \\ \end{tabular} } & **100** & **100** & **100** & **100** & **100** & **100** & **100** & **100** & **100** & **100** & **100** & **100** & **100** \\ & \(1.4\) & \(0\) & **100** & \(98.6\) & \(0\) & \(99.2\) & \(2.34\) & \(0\) & \(99.8\) & \(0\) & \(0\) \\ & \(6.9\) & \(0\) & \(0\) & \(0\) & \(0\) & \(41.9\) & \(0\) & \(0\) & \(22.9\) & \(0\) & \(0\) \\ & **100** & **100** & 99.8 & **100** & **100** & **100** & **100** & 74.3 & \(0.3\) & \(19.1\) & \(0\) & \(0\) \\ \hline \hline \multirow{2}{*}{\begin{tabular}{l} **Model** \\ (Length Splits) \\ \end{tabular} } & \multicolumn{3}{c|}{**ReCopy**} & \multicolumn{3}{c|}{**Reverse ReCopy**} & \multicolumn{3}{c|}{**Inv ReCopy**} & \multicolumn{3}{c}{**Inv Reverse ReCopy**} \\ & \(15\) & \(30\) & \(100\) & \(15\) & \(30\) & \(100\) & \(15\) & \(30\) & \(100\) & \(15\) & \(30\) & \(100\) \\ \hline \hline \multirow{2}{*}{ \begin{tabular}{l} OneStepAttn \\ \(-\)Step \(3\) \\ \end{tabular} } & **100** & **100** & **100** & **100** & **100** & **100** & \(0.1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ & \(15.9\) & \(0\) & \(0\) & \(16.9\) & \(0\) & \(0\) & \(\mathbf{95.5}\) & \(0\) & \(0\) & \(\mathbf{96.2}\) & \(0\) & \(0\) \\ & **100** & **100** & **100** & **100** & **100** & \(99.9\) & \(40.3\) & \(0\) & \(0\) & \(45\) & \(0\) & \(0\) \\ & **100** & **100** & **100** & **100** & **100** & **100** & \(0\) & \(0\) & \(0\) & \(22\) & \(0\) & \(0\) \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy of ablations over OneStepAttn in different length generalization splits in different algorithmic diagnostic/probing tasks. We present the median of five runs on different seeds. We bold the best results. ## Acknowledgments This research is supported in part by NSF CAREER award #1802358, NSF IIS award #2107518, and UIC Discovery Partners Institute (DPI) award. Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF or DPI. We thank our anonymous reviewers for their constructive feedback.
2305.19698
Investigation of the Robustness of Neural Density Fields
Recent advances in modeling density distributions, so-called neural density fields, can accurately describe the density distribution of celestial bodies without, e.g., requiring a shape model - properties of great advantage when designing trajectories close to these bodies. Previous work introduced this approach, but several open questions remained. This work investigates neural density fields and their relative errors in the context of robustness to external factors like noise or constraints during training, such as the maximal available gravity signal strength due to a certain distance, exemplified for 433 Eros and 67P/Churyumov-Gerasimenko. It is found that models trained on a polyhedral and on a mascon ground truth perform similarly, indicating that the ground truth is not the accuracy bottleneck. The impact of solar radiation pressure on a typical probe affects training only negligibly, with the relative error being of the same magnitude as without noise. However, limiting the precision of measurement data by applying Gaussian noise hurts the obtainable precision. Further, pretraining is shown to be practical in order to speed up network training. Hence, this work demonstrates that training neural networks for the gravity inversion problem is appropriate as long as the gravity signal is distinguishable from noise. Code and results are available at https://github.com/gomezzz/geodesyNets
Jonas Schuhmacher, Fabio Gratl, Dario Izzo, Pablo Gómez
2023-05-31T09:43:49Z
http://arxiv.org/abs/2305.19698v1
# Investigation of the Robustness of Neural Density Fields ###### Abstract Recent advances in modeling density distributions, so-called neural density fields, can accurately describe the density distribution of celestial bodies without, e.g., requiring a shape model - properties of great advantage when designing trajectories close to these bodies. Previous work introduced this approach, but several open questions remained. This work investigates neural density fields and their relative errors in the context of robustness to external factors like noise or constraints during training, such as the maximal available gravity signal strength due to a certain distance, exemplified for 433 Eros and 67P/Churyumov-Gerasimenko. It is found that models trained on a polyhedral and on a mascon ground truth perform similarly, indicating that the ground truth is not the accuracy bottleneck. The impact of solar radiation pressure on a typical probe affects training only negligibly, with the relative error being of the same magnitude as without noise. However, limiting the precision of measurement data by applying Gaussian noise hurts the obtainable precision. Further, pretraining is shown to be practical in order to speed up network training. Hence, this work demonstrates that training neural networks for the gravity inversion problem is appropriate as long as the gravity signal is distinguishable from noise. Code and results are available at [https://github.com/gomezzz/geodesyNets](https://github.com/gomezzz/geodesyNets) ## 1 Introduction In recent years, small bodies in our solar system have been of increasing interest as mission targets. Various past, present, and future missions have targeted them and even collected samples, allowing the study of their composition and properties. For example, NEAR visited the asteroid 253 Mathilde and later soft-landed on 433 Eros in February 2001, and Rosetta, with its lander Philae, visited the comet 67P/Churyumov-Gerasimenko in 2014. There are also several upcoming missions such as Hera, ZhenHe, and Psyche [1]. Hera, the complementary mission of ESA to NASA's DART mission, will analyze the latter's impact with the smaller moon Dimorphos, which revolves around the asteroid Didymos, with the goal of studying the process of asteroid deflection. It is scheduled to launch in 2024 and rendezvous with the binary system in 2026 together with two accompanying cubesats [2]. With these prospects, it is even more crucial for guidance, navigation, and control to have accurate and precise models of these target bodies. However, this is especially difficult with small bodies because of their irregular shape. To complicate matters, the density distribution of these bodies is also rarely homogeneous but rather heterogeneous in nature [3, 4, 5]. These facts make modeling these bodies with the three established models difficult: spherical harmonics, mascon models, and polyhedral gravity models. The former struggles with convergence inside the Brillouin sphere, and the irregular shapes slow down convergence, making it unsuitable for asteroids and comets [1, 6, 7]. The minimum Brillouin sphere is the sphere centered around the origin of the body that still envelops all of the body's mass. The latter two require knowledge of the shape model of the target body, and their application comes with constraints like discretization [8] in the case of the mascon model or the assumption of homogeneous density in the case of polyhedral gravity models [9, 10].
With the recent advances in neural networks, a new approach was formulated, requiring no previous assumptions but only existing data originating from measurements or synthetic sources. With this data, one can train a neural network to solve the gravity inversion problem. Multiple approaches exist for creating such a network, as presented in Section 2. This paper focuses on geodesyNets --a network learning the density distribution of a body whose integration leads to gravity [1]. Previous work showed acceptable error boundaries for models trained with a synthetic ground truth based on a mascon model. In addition, this approach was effectively used to explain the density of asteroid (101955) Bennu, with evidence from the OSIRIS-REx mission. However, the results were affected negatively due to noise in the input signal [11]. Thus, this work aims to determine if this approach is still practical given that multiple sources of perturbations can pollute the gravity signal used for training, ranging from non-gravitational forces like solar radiation pressure to measurement inaccuracies. Further, the utilized gravity signal might be weaker due to safety distance when on a trajectory far from the celestial body, further hindering the network from learning correctly. Last, it is investigated if pretraining on a prior shape model could improve the network performance under these constraints and if it reduces the amount of training iterations needed, thus reducing computational cost in a hypothetical onboard scenario. This work uses a polyhedral gravity model and a mascon model for conducting these experiments. It demonstrates the applicability of geodesyNets trained with a noisy input signal, as long as the induced error is reasonably sized compared to the gravity signal. This means that an error relative to the magnitude of the gravity signal in the training data set still leads to acceptable errors. On the other hand, a large absolute limit on measurement accuracy can render any training useless. This circumstance exemplifies in the conducted study about the sampling distance utilized for training the network. Finally, it is shown how pretraining can reduce the number of iterations required for training. All results and code are made publicly available via GitHub. ## 2 Related Work ### Polyhedral Gravity Models Polyhedral gravity models can calculate the full gravity tensor, including potential, acceleration, and second derivatives for an arbitrary given point \(P\) around a polyhedral source. They provide an analytical solution given homogeneous density distributions and given a shape model. One of the approaches to implementing a polyhedral gravity model is given by Tsoulis et al. [9], [10]. They use a line integral approach to convert the triple integral into a nested summation, which, thanks to the introduction of dedicated singularity terms avoids potential singularities that can affect these models [12], [13]. In this work, we use Tsoulis model to compare the performance of a geodesyNet trained with a mascon model to a network trained with a polyhedral model, but also to determine how the mesh granularity affects the achievable precision. Furthermore, we perform a detailed study of the achievable precision at close range since the high accuracy, even within the Brillouin sphere, and the analytical formulation does not require numerical integration methods. ### Mascon Models Mascon models are the second approach for modeling gravity utilized in this work. 
Here, the body is represented as a combination of multiple point masses, so-called mascons (short for "mass concentrations" [14]) filling up its volume. The mascon elements do not need to be of uniform size. Instead, various approaches exist, combining mascons of different weights [8]. Its simplicity and ability to model irregular shapes and density distributions as they appear for small bodies make the model appealing. Regardless, high accuracy of the gravity field can only be achieved with an extensive number of mascons for non-spherical / irregular objects. Even then, the field's accuracy near the body's surface remains challenging due to the discretized mass distribution [8]. ### Gravity Modeling with Neural Networks Artificial intelligence emerging in almost any area in recent years provides new opportunities in modeling traditionally expensive computational processes in physics [15], [16]. One such domain is modeling the gravity of irregularly shaped bodies and mapping positions to accelerations. In this context, one can distinguish between neural networks representing the actual body, enabling an indirect mapping to gravitational accelerations, and those directly mapping positions around a given body to gravitation. Thus, the former can also be referred to as neural fields since they parameterize the bodies [16]. A geodesyNet initially described by [1] represents a body through its neural density field. The inspiration for geodesyNets originates from Neural Radiance Fields (NeRF) introduced by [17]. Their network learns to represent a three-dimensional object from two-dimensional pictures. Thus, their network maps a 5D input vector for position and view angle to a 2D color and volume density vector. Similarly how they solve the inverse problem of image rendering, a geodesyNet solves the inverse problem of gravity inversion mapping a cartesian vector to a candidate body density [1]. The neural field of a geodesyNet represents the body's density distribution. The subsequent Subsection 3.2 will describe it in detail since it is the object of study, whereas this section provides a brief look over alternatives from the first category of networks regressing acceleration on positions. Generally, all of the following presented networks have in common that they directly map coordinates to potentials or accelerations and are trained on a polyhedral ground truth. However, the concrete approaches vary a lot. Furfaro et al. [7] use Single-Layer Feedforward Networks using Extreme Learning, reducing the number of required iterations to fine-tune their model to map the relationship between spacecraft points and gravitational acceleration. They report relative errors (relRMSE) below \(5\%\) for asteroid 25143 Itokawa and only slightly above \(5\%\) for comet 67/P Churyumov-Gerasimenko. They conducted two experiments. One time the global gravity field was learned using target points sampled from a near-range sphere around the body. The second time a local field was learned using target points from a cylinder above a given landing zone. Their approach contrasts this work as here the robustness is examined when also only sampling from far away. Cheng et al. [18] use deep neural networks (DNN) with the aim of generating trajectories in a computationally cheap way for a soft landing on small bodies. They show that the median error of their model is \(0.33\%\) when taking the mean over the three axial directions. 
Given that \(1\%\) deviations are usually acceptable in traditional gravity models [18], they conclude that their DNN is a practical approach. Similarly to the geodesyNet employed later, they normalize the inputs and outputs to the range \((-1,1)\). Further, they do not train the network with sample points from the near field but instead with points some kilometers away from the body. In their case study for 433 Eros, points are collected from the spherical shell between \(3\,km\) and \(60\,km\). Finally, Gao et al. [19] model the gravity field of multiple sample bodies with a Gaussian Process Regression. They report a relative mean error of \(1.27\%\) for their bodies when validating inside the sampling area. They only train their networks with samples including the near range of the bodies. However, in contrast to the study on geodesyNets presented later, their approach shows strong generalization difficulties when further away from the body. Thus, the relative error increases beyond \(60\%\) in the case of 433 Eros at \(40\ km\) distance. For comparison: training was done here with points up to \(20\ km\) distance. Martin et al. [20, 21, 22] also present a continuously improved model that maps Cartesian coordinates to gravity. However, their approach differs from the usual training in the sense that the network is bound to additional constraints beyond the usual loss to guarantee physical correctness. These constraints are integrated into the loss function and include, for example, the satisfaction of Laplace's equation. The training thus penalizes not only inaccuracy but also physical violations of the underlying differential equations. They also train on 433 Eros with sampling up to a distance of three times the radius. The network generalizes well to larger distances, but has difficulties predicting closer to the asteroid. ## 3 Methods ### Ground-Truth Models Equation 1 shows the derived formalism for computing the gravitational acceleration around a polyhedral source at the origin, with \(G\) as the gravitational constant and \(\rho\) as the constant density, using the polyhedral gravity model by Tsoulis et al. [9, 10]. It consists of two summations, with the outer one summing over the polyhedral faces \(p\) and the inner one iterating over the segments \(q\) of each face. For a detailed description, refer to Tsoulis et al. [9, 10]. \[\overrightarrow{a}=G\rho\cdot\sum_{p=1}^{n}\overrightarrow{N_{p}}\cdot\left[\sum_{q=1}^{m}\sigma_{pq}h_{pq}LN_{pq}+h_{p}\sum_{q=1}^{m}\sigma_{pq}AN_{pq}+\text{sing}_{\mathcal{A}_{p}}\right] \tag{1}\] This polyhedral gravity model was recently implemented in modern C++ in a work preceding this one [23]. In the following experiments, this model will be contrasted with the mascon model, whose formula is displayed in Equation 2 for a generic point \(\overrightarrow{r_{i}}\) and a set of mascons \(\mathcal{M}=\{(x_{j},y_{j},z_{j})\ |\ j=1..n\}\). The aim is to determine the achievable precision with the knowledge that the mascon model, used in the original paper, has difficulties in the immediate proximity to the surface due to discretization. In addition to the calculation of the ground truth, the trained model needs to be evaluated. For this purpose, the neural density field is integrated using Equation 4. Since the models were always trained with vectorial accelerations and not scalar potentials, only the acceleration formulas are displayed here. \[\overrightarrow{a}(\overrightarrow{r_{i}})=-G\sum_{j=1}^{n}\frac{m_{j}}{r_{ij}^{3}}\overrightarrow{r_{ij}} \tag{2}\]
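As an illustration of Equation 2, the following is a minimal NumPy sketch of the mascon acceleration. The random point masses are placeholders, the function and variable names are our own, and the sign convention simply absorbs the minus sign of Equation 2 by taking the difference from the field point to the mascons.

```python
import numpy as np

G = 6.67430e-11  # gravitational constant [m^3 kg^-1 s^-2]

def mascon_acceleration(r, mascon_points, mascon_masses):
    """Acceleration at field point r (3,) from mascons (n, 3) with masses (n,)."""
    diff = mascon_points - r                    # x_j - r_i, absorbs the minus sign
    dist = np.linalg.norm(diff, axis=1)         # |r_ij|
    return G * np.sum((mascon_masses / dist**3)[:, None] * diff, axis=0)

# Placeholder example: 100 random mascons inside the normalized unit cube
rng = np.random.default_rng(42)
a = mascon_acceleration(np.array([2.0, 0.0, 0.0]),
                        rng.uniform(-0.5, 0.5, size=(100, 3)),
                        np.full(100, 1e10))
```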
### GeodesyNets - Neural Density Fields A neural density field, also referred to as geodesyNet [1, 15], is a fully-connected neural network trained with a gravity signal from an arbitrarily shaped body with the aim of learning the body's density distribution. Thus, it solves the gravity inversion problem and provides a fully differentiable expression mapping a Cartesian point to a candidate body density (see Equation 3) compatible with the observed gravity signal. \[f(x,y,z)\rightarrow\rho \tag{3}\] The appeal of this method is that it does not necessarily require a shape model, as is the case with the aforementioned gravitational models, but that the gravitational signal can come from any source. Further, it converges even inside the Brillouin sphere and can learn heterogeneous density distributions. Using the trained model, a numerical integration over the neural density field can be performed to calculate the potential or the acceleration. This procedure is displayed in Equation 4. \[\overrightarrow{a}=G\int_{x\in V}\frac{\rho(x)}{|\overrightarrow{r}-\overrightarrow{x}|^{3}}(\overrightarrow{r}-\overrightarrow{x})\;dV \tag{4}\] Previous studies utilized this training approach in combination with the mascon model as ground truth, demonstrating its theoretical applicability [1]. The measured relative error was always less than \(1\%\), even near the bodies. However, it was not considered how robust the network would be if the sample points were at a distance of several kilometers, as they have been in previous missions to minimize the risk of collision. GeodesyNets have also been looked at from a practical point of view with measured gravity signals obtained from the OSIRIS-REx mission, demonstrating the practical applicability of the given approach while also giving first insights into limitations of the approach given the noise in a measured gravity signal [11]. This work now provides a detailed study of the disruptive factors which could lead to wrong model behavior, including noise in the ground-truth measurements or errors in the numerical integration, as well as the influence that the distance has on the achievable precision. ### Sampling Points for Training To begin with, all meshes are normalized so that the bodies fit in the hypercube \([-1,1]^{3}\). The spherical envelopes from which points were sampled for training were \((0,1)\), \((1,3)\), and \((3,5)\). The former covers the close range around the bodies, which the original study used for sampling. Here, two more sampling distances are included. The mid-range \((1,3)\) and the far-range \((3,5)\) represent realistic distances an orbiter could reach around the body. The latter distances are the more realistic scenarios with respect to onboard training, as they maintain a certain safety distance for the spacecraft. In contrast, the close distance \((0,1)\) would only be practical in a gravity-model-based (pre-)training. In the following, we refer to \((0,1)\), \((1,3)\), and \((3,5)\) as close-, mid-, and far-range. Figure 1: Training and validation toolchain: Sampling with noise and from different distances; Training for several thousand iterations; Comparing to the actual ground truth and calculation of relative error.
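To make the sampling envelopes concrete, the following is a small NumPy sketch of drawing target points uniformly (by volume) in such a spherical shell. It is illustrative only; the actual sampler in the released repository may differ, and the test excluding points inside the body is omitted since it depends on the mesh representation.

```python
import numpy as np

def sample_shell(n, r_min, r_max, seed=None):
    """Draw n points uniformly (by volume) in the shell r_min < |x| < r_max."""
    rng = np.random.default_rng(seed)
    direction = rng.normal(size=(n, 3))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    # radius drawn from the inverse CDF of the r^2-weighted distribution
    u = rng.uniform(size=(n, 1))
    radius = (r_min**3 + u * (r_max**3 - r_min**3)) ** (1.0 / 3.0)
    return direction * radius

close_range = sample_shell(1000, 0.0, 1.0)   # (0,1), close-range shell
far_range = sample_shell(1000, 3.0, 5.0)     # (3,5), far-range shell
```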
The utilized mesh for Eros comes from Gaskell [24], and the one for Churyumov-Gerasimenko from the European Space Agency [25]. Eros consists of 7374 vertices and 14744 triangular faces, whereas Churyumov-Gerasimenko comprises 9149 vertices and 18294 triangular faces. Both are based on the measurements of the probes that visited the bodies. These original meshes are referred to as \(100\%\) resolution meshes in the following. The mascon models used for both bodies are from Izzo & Gomez [1] and are derived from the \(100\%\) resolution meshes. For this purpose, a centroid with mass \(m_{j}\) is placed in each tetrahedron of the Delaunay-tetrahedralized version. In the course of this study, additional meshes were constructed. These are downsampled versions with \(10\%\), \(1\%\), or \(0.1\%\) of the vertices and faces. The purpose of this is to investigate to what extent a model can be (pre-)trained even with a low-resolution model, as may be available from astronomical observations. Figure 2 illustrates the employed input meshes for the polyhedral gravity model. ### Addition of Noise to the Ground Truth In addition to the ground-truth model, mesh resolution, and sampling distance, this study has one more parameter: noise. This characteristic is applied to the ground truth during the training. It thus represents that the measurements, i.e., the accelerations used for training, are subject to perturbations and measurement noise. Three different types are investigated in the course of this study: a constant bias, additive Gaussian noise, and multiplicative Gaussian noise. The constant bias represents the possible effects of solar radiation pressure (SRP) on the measurement results. In a previous study [3], [11], this was a significant factor influencing the quality of the results. However, to subtract the SRP, it is necessary to know the mass and size of the object, along with its surroundings, such as the area facing towards the sun. Here, we will determine to what extent SRP influences the training. Equation 5 [26] shows the employed approach for calculating the acceleration affecting the spacecraft, with \(P\) as the solar radiation pressure, \(c\) as the speed of light, \(G_{SC}\) as the solar constant, and \(R\) as the distance from the sun in astronomical units \([AU]\). Equation 6 calculates the acceleration \(a\) given the area-to-mass ratio of the spacecraft \(A/m\). \[P=\frac{G_{SC}}{c\cdot R^{2}} \tag{5}\] \[a=P\cdot\frac{A}{m} \tag{6}\] Figure 2: The original high resolution mesh denoted with \(100\%\) and downscaled to meshes with respectively only \(10\%\), \(1\%\) or \(0.1\%\) of vertices and faces. These equations are a simplified version of the calculation. Theoretically, the reflectivity, as given in Yousef et al. [27], could also be included in the calculation. For this work, however, having only a baseline of the magnitude is sufficient. The calculated acceleration \(a\) is then applied in one Cartesian direction, assuming that the sun always shines from this direction. In the following experiments, an SRP value was used as it could realistically affect a probe analogous to Rosetta. The mass was assumed to be \(1422\,kg\) (roughly Rosetta's dry mass and payload) and the area \(69.88\,m^{2}\). For the distance, the semi-major axis of the bodies was used in each case. It should be added that the SRP for, e.g., a cubesat, would have a value of the same magnitude due to its smaller mass but equally smaller area.
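A back-of-the-envelope sketch of Equations 5 and 6 with the Rosetta-like values quoted above; the solar constant and the semi-major axis of 433 Eros of roughly \(1.46\,AU\) are assumed example values, not taken from the text.

```python
G_SC = 1361.0        # solar constant [W/m^2]
C = 299_792_458.0    # speed of light [m/s]

def srp_acceleration(distance_au, area_m2, mass_kg):
    pressure = G_SC / (C * distance_au**2)   # Eqn. 5
    return pressure * area_m2 / mass_kg      # Eqn. 6

# Rosetta-like probe near 433 Eros (semi-major axis ~1.46 AU, assumed here)
a_bias = srp_acceleration(distance_au=1.46, area_m2=69.88, mass_kg=1422.0)
print(f"SRP bias: {a_bias:.2e} m/s^2")       # on the order of 1e-7 m/s^2
```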
Further, an additive Gaussian noise \(X\sim\mathcal{N}(0,\sigma^{2})\) is used during the training to simulate an absolute error in the measurement of the gravitational signal. The standard deviation \(\sigma\) is chosen in a way that it simulates an absolute error of \(10^{-4}\frac{m}{s^{2}}\) or \(10^{-5}\frac{m}{s^{2}}\). Since the meshes are normalized, this standard deviation is also normalized. The value \(10^{-5}\) is the accuracy of GOCE (Gravity field and steady-state ocean circulation explorer) [28]. In addition to this value, \(10^{-4}\) was chosen as the absolute precision to get an insight into how strongly one additional error magnitude affects the result, accounting for even lower precision on spacecraft. Further, a multiplicative Gaussian noise \(X\sim\mathcal{N}(1,\sigma^{2})\) is used during the training to simulate a relative error in the measurement of the gravitational signal. The assumption is that the magnitude of the gravitational signal is known, but the value can only be determined to a certain point, for example, that the precision is limited relative to the magnitude of the accelerations to \(10^{-x}\) with \(x\in\{1,2,3\}\). ### Training The toolchain used for training and performance evaluation is shown in Figure 1. This summarizes the characteristics of the training iterations. Overall, the same configuration is used for all other parameters as in the original paper [1]. In particular, this includes the Mean Absolute Error (MAE) calculated from the ground truth \(y\) and the model predictions \(\hat{y}\) as shown in Equation 7. Here, \(\kappa\) (see Equation 8) is a scaling parameter to normalize the mass and restrict training to learning the volume rather than finding the absolute mass [1]. \[\mathcal{L}_{\kappa\,MAE}=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\kappa\hat{y_{i}}| \tag{7}\] \[\text{with }\kappa=\frac{\sum_{i=1}^{n}\hat{y_{i}}y_{i}}{\sum_{i=1}^{n}y_{i}^{2}} \tag{8}\] ### Validation For validation, the relative root mean squared error is used, which is calculated as shown in Equation 9. It is calculated for each model at the end of training for a range of distances from the body's surface. Comparisons are made against the polyhedral ground truth using the \(100\%\) mesh resolution. The altitude sampler employed for this purpose uses the outward normal of the mesh faces to sample points at the appropriate altitude. So the distance on the validation plot's x-axis is scaled by the altitudes from the surface, not the altitudes from the mathematical origin of the bodies. \[relRMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{|y_{i}-\kappa\hat{y_{i}}|}{||\hat{y_{i}}||_{2}}\right)^{2}} \tag{9}\]
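For concreteness, a small PyTorch sketch mirroring the training loss and validation metric as printed in Equations 7-9. The tensor shapes, the component-wise reading of \(|\cdot|\) in Equation 7, and the random placeholder data are assumptions; the released repository may differ in detail.

```python
import torch

def kappa(y_hat, y):
    return (y_hat * y).sum() / (y * y).sum()              # Eqn. 8

def kappa_mae_loss(y_hat, y):
    return (y - kappa(y_hat, y) * y_hat).abs().mean()     # Eqn. 7

def rel_rmse(y_hat, y):
    k = kappa(y_hat, y)
    rel = (y - k * y_hat).norm(dim=1) / y_hat.norm(dim=1) # Eqn. 9
    return torch.sqrt((rel ** 2).mean())

# y: ground-truth accelerations, y_hat: accelerations integrated from the
# neural density field (placeholder values here)
y, y_hat = torch.randn(1000, 3), torch.randn(1000, 3)
print(kappa_mae_loss(y_hat, y).item(), rel_rmse(y_hat, y).item())
```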
## 4 Results For each scenario presented here, ten training runs have been performed with different seeds. The graphs show the mean value of the validation results and the standard deviation in a slightly transparent way. The batch size has been set to \(1000\) points, and every ten training iterations, new points for training are sampled. For this purpose, a spherical point sampler randomly generates points in a spherical shell that do not lie within the sample body. ### Polyhedral vs. Mascon Model Figure 3 compares the relative errors of the models trained with a mascon model and a polyhedral model for the two studied bodies. Spherical sampling was performed in a shell \(\in(0,1)\), but the results are similar for sampling with more distant shells. No qualitative difference can be found between models trained with the mascon or polyhedral models, not even in the close range around the body, as one would have expected due to the polyhedral model's improved accuracy in the close range. For Churyumov-Gerasimenko, the relative errors are of similar size. E.g., for normalized distance \(0.01\), the models' errors are close at \(2.2\%\), whereas for distance \(0.001\), the mascon-trained models have a slightly larger mean error of \(6.1\%\) compared to the polyhedral-trained models' relative mean error of \(5.8\%\). For Eros, the average relative error is slightly lower for the polyhedral model: the relRMSE of the models trained with the polyhedral model is around \(50\%\) to \(70\%\) of the relRMSE of the mascon-trained models across all validation distances. ### Sampling Distance Note that the blue lines in Figure 3 and Figure 5 are the same as in the left column of Figure 4. They show the average relative error of ten models trained with the polyhedral ground truth, respectively. Figure 3: Relative errors of mascon-trained models and polyhedral-trained models based on the \(100\%\) resolution mesh, with sampling in the range \((0,1)\) during training. In Figure 4, the lines show how the relative error changes for a given height above the target body when sampling only in the normalized range \((1,3)\) or \((3,5)\) during training. Immediately, it is noticeable that if the network only gets points from mid or far away during the training, the relative error in the close range increases to \(100\%\) and above. However, the network performs well for the mid to far range, and the mean relative error remains below \(4.3\%\) from a normalized distance of \(1\) outwards for models trained in the mid or far range. Hence, the network always performs well in the sampling range or further away in the observed cases. It generalizes to ranges more distant than those used during training. It is also interesting to note that in the range where geometry does not play such a large role, the network is able to generalize and scale the predictions when getting closer to the body. This is reflected in the plots on the right side of Figure 4. The network has been trained with points from the region \((3,5)\), but the average relative error always remains below the \(4.3\%\) mark, even for the \((1,3)\) region. ### Robustness to Noise This section refers again to Figure 4, which also displays the results when some noise has been applied to the ground truth for learning. Starting with the constant bias, models trained with a constant bias approximately equal to the magnitude of the solar radiation pressure that would act on a probe such as Rosetta or a cubesat show an almost identical relative error in validation as models trained with the original ground truth. In the case of additive Gaussian noise, which corresponds to an absolute error of \(10^{-5}\frac{m}{s^{2}}\) or \(10^{-4}\frac{m}{s^{2}}\) in non-normalized units, respectively, the network is most disturbed during learning. The mean relative errors are far beyond \(5\%\) for an absolute error of \(10^{-4}\frac{m}{s^{2}}\), even within the training range. However, the results differ for Churyumov-Gerasimenko and Eros. The network shows smaller relative errors for Eros than for Churyumov-Gerasimenko. The multiplicative Gaussian noise that impacts the ground truth by \(\pm 0.1\%\), \(1\%\), or \(10\%\), respectively, settles in the middle of the computed relative errors. Training in the close range with a relative Gaussian noise of \(1\%\) still results in relative errors below \(4.7\%\), even down to a normalized distance of \(0.005\) for training in \((0,1)\).
However, with \(\sigma=10\%\), the relative error increases to at most \(11.0\%\) for models trained in \((0,1)\). ### A posteriori: Using Pretraining After the previous sections have shown that noise and a long sampling distance increase the relative error, we will now investigate whether these results can be improved by pretraining. The idea is to perform the pretraining in the near range \((0,1)\) on an imprecise, low-resolution navigation model, such as would be available from observation before launch. In the second step, the fine-tuning happens on the \(100\%\) resolution mesh in the range \((3,5)\) for a limited number of iterations, effectively simulating a redefined, improved mesh. Even though we show results without noise terms here, experiments with noise yielded comparable results. Figure 5 shows the achievable precision for the polyhedral model depending on the precision of the mesh used. A model trained with a mesh with only one-tenth of the vertices and faces achieves performance comparable to the full \(100\%\) resolution mesh. Also interesting is that a geodesyNet trained with the \(0.1\%\) resolution mesh still shows a solid relRMSE of only \(6.3\%\) (Churyumov-Gerasimenko) and \(2.5\%\) (Eros) for a normalized range of \(1\). The models displayed in Figure 5 are utilized as a pretraining base. Figure 6 shows the results. Pretraining was performed for \(10000\) iterations, and fine-tuning (or training for the non-pretrained models) was performed for \(10\) or \(100\) iterations. The pretrained models correspond to those in Figure 5. Figure 4: Effects of Noise and Sampling Shell on obtainable precision. Figure 6 shows that models pretrained on the near-range data (green and orange curves) overall outperform the models without any pretraining in the near range. Further, the more precise the model utilized for pretraining, the better the obtainable precision in the near range. As a second observation, it is noticeable that the relRMSE increases in the near range for the pretrained models if the fine-tuning is conducted for 100 rather than 10 iterations (e.g., \(0.1\%\) pretrained resolution, distance \(0.1\): \(28.1\%\to 43.2\%\)), while the relRMSE decreases for the far field (e.g., \(0.1\%\) pretrained resolution, distance \(5.0\): \(1.3\%\to 0.09\%\)). As the previous sections have shown, the model adapts to the far range when fine-tuning. To summarize, pretraining allows better-performing models with fewer iterations. However, one has to be careful when fine-tuning with far-field data to maintain the generalization capabilities learned during pretraining in the near field. ## 5 Discussion ### Polyhedral vs. Mascon Model No clear qualitative difference could be found in the comparison between the mascon and polyhedral models in Subsection 4.1. In both cases, the relative error is similar even for close distances. A possible reason could be that the imprecision due to the number of vertices/edges in the mesh may be as limiting as the number of mascons. Furthermore, the numerical accuracy could be limited during training. Especially the numerical integration could be crucial and limit the maximum achievable precision. In addition, only single precision is used, but the polyhedral model provides a more precise result, i.e., precision that cannot be learned in this sense since it cannot be represented in single precision. Mao et al. [29], for example, could improve accuracy using double precision in their physics-informed neural network.
Figure 5: The performance of the models trained for \(10000\) iterations depending on the utilized mesh resolution. The validation was done based on the \(100\%\) polyhedral ground truth. ### Sampling Distance Regarding the results presented in Subsection 4.2 on sampling, one could expect that the network could not predict the geometry and associated accelerations only by training in the mid-range to far-range. However, a geodesyNet is able to generalize and scale measurements as a function of distance successfully. This is clearly shown in the right plots in Figure 4. Conversely, a network is able to generalize successfully from the conducted sampling ranges up to \(50\) times the distance. A possible reason for that could be the employed normalization and loss strategy involving the continuously re-calculated mass normalization factor. Furthermore, the geodesyNet learns the shape of the body. Hence, it automatically satisfies the Laplace equation and thus represents a valid gravity field generated by the learned shape - a property enabling excellent generalization capability. In that sense, geodesyNets contrast with the regression-based approaches listed in Subsection 2.3. The experiments also showed that the models could generalize from the far to the mid-range. Figure 6: This figure compares the relative errors of pretrained models versus non-pretrained models. The pretraining was always conducted in the range \((0,1)\), marked in dark gray, for \(10000\) iterations, and the fine-tuning (training for the non-pretrained models) was conducted in the range \((3,5)\), marked in light gray, for \(10\) or \(100\) iterations. Here, no noise was applied. As soon as the shape of the body strongly impacts the measured gravity signal, the generalization approaching closer to the body does not work anymore. This feature could also be beneficial in a hypothetical onboard scenario, as it allows a probe to get closer step by step and improve the model accordingly. As observed here, the model makes a solid prediction a little closer than the actual sampling range used during training.
Further, the results show that pretraining reduces the model error and allows for obtaining a better performance with fewer gravity measurements. For example, Figure 6 shows that the relRMSE in the case of Churyumov-Gerasimenko is approximately half as large with pretraining on the \(1\%\) resolution mesh (\(11.9\%\)) as without any pretraining (\(21.3\%\)) for the same number of iterations. In an onboard scenario, where power consumption is of critical interest, pretraining on Earth, albeit with a lower resolution ground truth, is thus advantageous. Moreover, it must be added in this context that the scenario of both pretraining and fine-tuning on the range \((0,1)\) has not been considered here. Instead, the mixed-range sampling scenario was considered, as it would be conceivable for a mission: pretraining before launch on a simple navigation model based on remote observation and slowly improving the model when approaching the target while re-training. ## 6 Conclusion and Future Work In summary, this work studied the robustness of geodesyNets. The variables of interest were the underlying training ground truth, the sampling distance for the training points, the effects of different types of noise, and whether pretraining can reduce the number of training iterations. There were only minor differences when the geodesyNet was trained with the mascon or the polyhedral gravity model, probably due to limitations of the numerical precision and the relation between the mascon ground truth and the polyhedral mesh. The geodesyNet cannot learn the geometry of an irregular body properly with only distant measurements. However, the neural density field generalizes well and yields robust results even in areas where no training has been performed, as long as they are not in the close range. In order to be able to predict the acceleration in the close range, near the surface, training in this region needs to be conducted. Noise negatively impacts the training results. However, this depends strongly on the magnitude of the noise compared to the input gravity signal. If there exists an absolute boundary for the measurable precision of the acceleration and thus the actual gravity signal is no longer distinguishable from the measurement error, the training is unproductive. However, if the magnitude of the gravity signal is known, solid training results are achievable even with relative measurement deviations of up to \(10\%\). Pretraining allows more precise results in an onboard scenario with less sampling and is preferable, even if the ground truth is of low resolution. Future work could consider other forms of sampling, such as sampling with regard to realistic trajectories. Such an experiment requires a set of efficient trajectories maximizing the gravity signal - something currently being conducted in a related work by Marak et al. [30].
2309.08440
Beware of CaBER: Filament thinning rheometry does not always give `the' relaxation time of polymer solutions
The viscoelastic relaxation time of a polymer solution is often measured using Capillary Breakup Extensional Rheometry (CaBER) where a droplet is placed between two plates which are pulled apart to form a thinning filament. For a slow plate retraction protocol, required to avoid inertio-capillary oscillations for low-viscosity liquids, we show experimentally that the CaBER relaxation time $\tau_e$ inferred from the exponential thinning regime is in fact an apparent relaxation time that may increase significantly when increasing the plate diameter and the droplet volume. Similarly, we observe that $\tau_e$ increases with the plate diameter for the classical step-strain plate separation protocol of a commercial (Haake) CaBER device and increases with the nozzle diameter for a Dripping-onto-Substrate (DoS) method. This dependence on the flow history before the formation of the viscoelastic filament is in contradiction with polymer models such as Oldroyd-B that predict a filament thinning rate $1/3\tau$ ($\tau$ being the model's relaxation time) which is a material property independent of geometrical factors. We show that this is not due to artefacts such as solvent evaporation or polymer degradation and that it can only be rationalised by finite extensibility effects (FENE-P model) for a dilute polymer solution in a viscous solvent, but not for semi-dilute solutions in a low-viscosity solvent.
Antoine Gaillard, Miguel Angel Herrada Gutierrez, Antoine Deblais, Jens Eggers, Daniel Bonn
2023-09-15T14:39:35Z
http://arxiv.org/abs/2309.08440v4
Beware of CaBER: Filament thinning rheometry doesn't give 'the' relaxation time of polymer solutions ###### Abstract The viscoelastic relaxation time \(\tau\) of a polymer solution is often measured using Capillary Breakup Extensional Rheometry (CaBER) where a droplet is placed between two plates which are pulled apart to form a thinning filament. For a slow plate retraction protocol, required to avoid inertio-capillary oscillations for low-viscosity liquids, we show experimentally that the CaBER relaxation time inferred from the exponential thinning regime is in fact an apparent relaxation time that increases significantly when increasing the plate diameter and the droplet volume. Similar results are obtained with a Dripping-onto-Substrate (DoS) method. This dependence on the flow history before the formation of the viscoelastic filament is in contradiction with polymer models such as Oldroyd-B that predict a filament thinning rate \(1/3\tau\) which is a material property independent of geometrical factors. We show that this is not due to artefacts such as solvent evaporation or polymer degradation and that it cannot be universally explained by the finite extensibility of polymer chains. keywords: Viscoelasticity; Polymers; Capillary flows + Footnote †: journal: Physical Review Fluids ## 1 Introduction When polymers are added to a low-viscosity solvent such as water, the extensional rheology of the resulting solution is usually measured by indirect techniques where the (extensional) strain and strain rate are not controlled, unlike for high-viscosity polymer solutions or melts for which reliable extensional rheometers are available, e.g. Meissner's RME (Rheometric Melt Elongation rheometer) and FiSER (Filament Stretching Extensional Rheometer). Most indirect techniques for low-viscosity polymer solutions aim at forming a liquid filament undergoing capillary-driven thinning. Historically, this was first achieved by placing a drop of liquid between two horizontal plates which are then separated beyond the stability limit of a stable liquid bridge [1, 2, 3], a technique now known as CaBER (Capillary Breakup Extensional Rheometry). Alternative techniques, also based on the Rayleigh-Plateau instability, were proposed to avoid inertio-capillary oscillations of the end-drops in the original CaBER step-strain (rapid) plate separation protocol which prohibits measurement of very short relaxation times [4]. This is achieved by separating the plates at a constant low velocity (Slow Retraction Method or SRM) [5], by dripping a droplet in air from a nozzle at a low flow rate [6] or, in a more recent technique, by slowly bringing a solid substrate in contact with a drop hanging steadily from a nozzle (Dripping-onto-Substrate or DoS) [7]. In all these techniques, after an initial inertial and/or viscous regime, an elastic regime emerges where the elastic stresses arising from the stretching of polymer chains dominate and give rise to a cylindrical filament that thins exponentially in time for a wide range of dilute and semi-dilute polymer solutions. This is consistent with the Oldroyd-B model that predicts [8, 9]: \[h=h_{1}\exp\left(-\frac{t-t_{1}}{3\tau}\right), \tag{1}\] where \(h\) is the minimum filament radius, \(\tau\) the viscoelastic relaxation time of the polymer solution, \(t_{1}\) the time marking the onset of the elastic regime, and \(h_{1}=h(t_{1})\) the filament radius at that time. 
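In practice, equation 1 is used by fitting the logarithm of the measured minimum radius over the exponential window; a minimal sketch is given below, where the arrays and the choice of the fitting window are placeholders for the actual image-analysis output rather than the authors' code.

```python
import numpy as np

def caber_relaxation_time(t, h, t_start, t_end):
    """Fit h = h1 * exp(-(t - t1)/(3*tau)) (equation 1) on [t_start, t_end]
    and return the relaxation time tau and the radius at the window start.
    t (s) and h (m) are the measured time and minimum-radius arrays; the
    window must lie inside the exponential (elastic) regime."""
    mask = (t >= t_start) & (t <= t_end)
    slope, intercept = np.polyfit(t[mask], np.log(h[mask]), 1)
    tau = -1.0 / (3.0 * slope)               # since d(ln h)/dt = -1/(3*tau)
    h_start = np.exp(intercept + slope * t_start)
    return tau, h_start
```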
For a step-strain CaBER protocol, in which polymer molecules do not relax during the fast plate separation, the model predicts \(h_{1}=(Gh_{i}^{4}/2\gamma)^{1/3}\) where \(G=\eta_{p}/\tau\) is the elastic modulus, \(\eta_{p}\) the polymer contribution to the shear-viscosity, \(\gamma\) the surface tension and \(h_{i}\) the radius of the initial liquid column [10]. It is generally accepted that (i) for a polymer solution with a spectrum of relaxation times, the longest one dominates [2] and that (ii) as polymer chains unravel during the exponential regime, they ultimately approach their finite extensibility limit, causing the filament to break after a terminal regime which can be described by, e.g., FENE models (P or CR) [2, 9, 11]. The general consensus is that geometrical parameters such as the size of the system can only influence \(h_{1}\) (via \(h_{i}\)) but not the thinning rate \(|\dot{h}/h|=1/3\tau\) of the filament (where the dot means \(\mathrm{d}/\mathrm{d}t\)) since \(\tau\) is a material property. In particular, Bazilevsky et al. [1] and Miller et al. [12] checked that the filament thinning rate was independent of the sample volume and of the plate separation speed and, in a step-strain plate separation protocol, on the final plate separation distance. This suggests that it is independent of the history of the polymer deformation prior to the elastic regime. However, Rajesh et al. [13] recently tested polymer solutions of different solvent viscosities with a dripping method and reported a larger thinning rate for a smaller nozzle radius. In this communication, we investigate the role of the drop size on the thinning dynamics of CaBER filaments with a slow plate separation protocol and show that CaBER does not measure the relaxation time properly; rather it yields an apparent relaxation time that depends on the way the viscoelastic filament is formed, and hence on the specific geometry used for the experiments. ## 2 Materials and methods We use three different liquids: two solutions of poly(ethylene oxide) (PEO) of molecular weight \(M_{w}=4\times 10^{6}\) g/mol, one in water with concentration 500 (w)ppm, referred to as PEO\({}_{\rm aq}\), and one in a more viscous solvent with concentration 25 (w)ppm, referred to as PEO\({}_{\rm visc}\), and a 1000 ppm solution of poly(acrylamide/sodium acrylate) (HPAM) [70:30] of molecular weight \(M_{w}=18\times 10^{6}\) g/mol in water with 1 wt% NaCl to screen electrostatic interactions and make the chain flexible instead of semi-rigid. Both polymers were provided by Polysciences (ref. 04030-500 for PEO and 18522-100 for HPAM). For the PEO\({}_{\rm visc}\) solution, the solvent is an aqueous Newtonian 30 wt% \(20,000\) g/mol PEG solution. The different concentrations were chosen to ensure that all three liquids have comparable filament thinning rates. After slowly injecting the polymer powder to a vortex generated by a magnetic stirrer, solutions were homogenised using a mechanical stirrer at low rotation speed for about 16 hours. For the PEO\({}_{\rm visc}\) solution, PEG was added after mixing PEO with water. The shear viscosity \(\eta\) of these solutions was measured at the temperature of filament thinning experiments with a MRC-302 rheometer from Anton Paar equipped with a cone plate geometry (diameter 50 mm, angle \(1^{\circ}\) and truncation gap 53 \(\mu\)m). 
The PEO\({}_{\rm visc}\) solution is a Boger fluid with a constant shear viscosity while the two others are shear-thinning and are well described by the Carreau law \(\eta(\dot{\gamma})=\eta_{0}(1+(\dot{\gamma}/\dot{\gamma}_{c})^{2})^{(n-1)/2}\) where \(\eta_{0}\) is the zero-shear viscosity, \(n\) is the shear-thinning exponent and \(\dot{\gamma}_{c}\) is the shear rate marking the onset of shear thinning. These values, along with the solvent viscosity \(\eta_{s}\), the density \(\rho\) and the surface tension \(\gamma\) measured with a pendant drop method, are reported in table 1. For the PEO\({}_{\rm aq}\) (500 ppm) solution, viscosity measurements for other PEO concentrations gave an intrinsic viscosity \([\eta]=2.87\) m\({}^{3}\)/kg and hence a critical overlap concentration \(c^{*}=0.77/[\eta]=0.268\) kg/m\({}^{3}\) (268 ppm). Assuming that the PEO\({}_{\rm visc}\) solution (25 ppm) is dilute, \(\eta_{p}\) should increase linearly with the concentration as \(\eta_{p}=[\eta]\eta_{s}c\), from which \([\eta]\) and \(c^{*}\) are estimated using this single PEO concentration. Values of \(c/c^{*}\) are presented in table 1.

| Name | \(\rho\) (kg/m\({}^{3}\)) | \(\gamma\) (mN/m) | \(\eta_{s}\) (mPa s) | \(c\) (ppm) | \(c/c^{*}\) | \(\eta_{0}\) (mPa s) | \(\eta_{p}\) (mPa s) | \(n\) | \(1/\dot{\gamma}_{c}\) (ms) | \(\tau_{m}\) (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PEO\({}_{\rm aq}\) | 998 | 62.5 | 0.92 | 500 | 1.86 | 3.0 | 2.08 | 0.93 | 120 | 240 |
| PEO\({}_{\rm visc}\) | 1048 | 56.0 | 245 | 25 | 0.018 | 248 | 3.3 | 1 | – | 110 |
| HPAM | 998 | 72.0 | 0.92 | 1000 | – | 15 | 14 | 0.78 | 410 | 100 |

Table 1: Properties of the three polymer solutions. \(\rho\) is the density, \(\gamma\) the surface tension, \(\eta_{s}\) the solvent viscosity, \(c\) the polymer concentration, \(c^{*}\) the critical overlap concentration, \(\eta_{0}\), \(n\) and \(\dot{\gamma}_{c}\) the Carreau fitting parameters of the shear viscosity, \(\eta_{p}=\eta_{0}-\eta_{s}\), and \(\tau_{m}\) the maximum CaBER relaxation time measured for the largest plates.

In our home-made CaBER setup, a droplet of volume \(V\) is placed on a horizontal plate of radius \(R_{0}\) and the motor-controlled top plate of the same radius is first moved down until it is fully wetted by the liquid, i.e., until the liquid bridge between the plates has a quasi-cylindrical shape. The top plate is then moved up slowly (at about 0.5 mm/s) and stopped at a plate separation distance \(L_{p}\) where the liquid bridge is still stable, like in the left inset image of figure 1(a), but close to the bridge instability threshold. Then, instead of moving the top plate at a constant (lower) velocity, i.e. like in SRM [5], we move it by 10 \(\mu\)m \(L_{p}\)-increment steps, waiting about one second between each step (longer than the solution's relaxation time), which is long enough to ensure that polymers are at equilibrium (no pre-stress) before each new step. At a certain step, the bridge becomes unstable and collapses under the action of surface tension, transiently leading to the formation of a nearly cylindrical filament which is the signature of viscoelastic pinch-off, as shown in the right inset image of figure 1(a). We stop moving the top plate once capillary-driven thinning starts. The CaBER setup is placed in a plastic box where the relative humidity is kept above 80% using wet paper to minimise evaporation.
The aluminium plates are plasma-treated before each new experiment to increase their hydrophilicity and minimise dewetting. The process is recorded by a high-magnification objective mounted on a high-speed camera (Phantom TMX 7510) and images are analysed by a python code. A typical time-evolution of the minimum bridge / filament radius \(h\) is shown in figure 1(a). The purpose of this step-by-step plate separation protocol is to extract the value of the last stable bridge radius \(h_{0}\) which, since steps are small, can be considered as the initial bridge radius at the onset of capillary thinning. Our image resolution is up to 1 pixel per micrometer for the smallest drops, corresponding to the smallest plates, and our time resolution is \(15,000\) images per second to capture the fast bridge collapse from radius \(h_{0}\) to the radius \(h_{1}\) marking the onset of the elastic regime, see figure 1(a). The critical aspect ratio \(\Lambda=L_{p}/(2R_{0})\) at which the liquid bridge becomes unstable depends on the liquid volume \(V\) and on the Bond number \(\mbox{\it Bo}=\rho gR_{0}^{2}/\gamma\), where \(g\) is the gravitational acceleration [14]. In our experiments, we vary both the plate diameter \(2R_{0}\), between 2 and 25 mm, and the non-dimensional droplet volume \(V^{*}=V/R_{0}^{3}\) and we find that the last stable bridge radius \(h_{0}\) increases with both \(R_{0}\) and \(V^{*}\). ## 3 Results Figure 1(b) compares the time-evolution of the minimum bridge / filament radius \(h\) for the PEO\({}_{\rm aq}\) solution tested with plate diameters \(2R_{0}\) between 2 and 7 mm with a fixed non-dimensional droplet volume \(V^{*}=V/R_{0}^{3}\approx 2.4\). Although all filaments thin exponentially in time at the beginning of the elastic regime, as suggested by the fairly straight curves for \(t>t_{1}\) (before the terminal regime), they thin faster for smaller plates. This is in apparent contradiction with the Oldroyd-B model, which predicts a rate of exponential thinning \(|\dot{h}/h|=1/3\tau\) (see equation 1) which should be the same for all filaments, provided that the liquid does not change so that the (longest) relaxation time \(\tau\) of the polymer solution is the same. As we show in Appendix A, similar results are found in DoS (Dripping-onto-Substrate) with filament thinning rates increasing with decreasing nozzle size. To quantify these differences, we introduce an apparent (or effective) relaxation time \(\tau_{e}\) such that \(|\dot{h}/h|=1/3\tau_{e}\) in the exponential part of the elastic regime. It is plotted in figure 2(a) as a function of the initial bridge radius \(h_{0}\) for all polymer solutions, plate diameters and droplet volumes, data points of the same colour corresponding to the same \(R_{0}\) with different \(V^{*}\). We observe that \(\tau_{e}\) increases with both \(R_{0}\) and \(V^{*}\) and that all data points for a given solution collapse on a single curve when plotted against \(h_{0}\), which is itself an increasing function of both \(R_{0}\) and \(V^{*}\). In other words, a given solution tested with two different \((R_{0},V^{*})\) sets but with the same \(h_{0}\) yields the same \(\tau_{e}\), as some examples show in figure 2(a). This suggests that \(h_{0}\) is in fact the only relevant geometrical parameter of the problem. 
This is in agreement with the accepted idea that polymer deformations during capillary thinning are only influenced by the local extensional flow in the bridge / filament of maximum extension rate \(\dot{\epsilon}=-2\dot{h}/h\) at its thinnest point, while the top and bottom end droplets act as passive liquid reservoirs, their size not directly influencing the pinch-off dynamics. Figure 1: (a) Time evolution of the minimum bridge / filament radius \(h\) in the step-by-step plate separation protocol for the PEO\({}_{\text{aq}}\) solution for a plate diameter \(2R_{0}=3.5\) mm and a droplet volume \(V^{*}=V/R_{0}^{3}\approx 2.4\). Inertio-capillary oscillations are visible after each step. Inset images correspond to a stable liquid bridge (left) and to a thinning filament (right) of the PEO\({}_{\text{aq}}\) solution with \(2R_{0}=7\) mm and \(V^{*}=2.4\). (b) \(h(t)\) in log-lin for the PEO\({}_{\text{aq}}\) solution tested with plate diameters \(2R_{0}=2\), \(3.5\), \(5\) and \(7\) mm, with \(V^{*}\approx 2.4\). Inset images correspond to three times labelled \(1\) to \(3\) indicated on the \(h(t)\) curve plus a fourth later time where \(h\) is below our spatial resolution for \(2R_{0}=7\) mm. The time reference \(t_{1}\) marks the onset of the elastic regime. The apparent relaxation time varies significantly (up to a factor 4) within the typical range of plate diameters used for CaBER experiments, see figure 2(a). However, \(\tau_{e}\) cannot increase indefinitely with increasing \(h_{0}\). In order to observe the expected saturation of \(\tau_{e}\) for larger \(h_{0}\) values, we had to move to much larger plate diameters, up to \(2R_{0}=25\) mm. For plate diameters \(2R_{0}\geq 10\) mm, the top end-drop does not cover the top plate fully because of gravity, as shown in the inset image of figure 2(b). In fact, there is always a thin liquid film covering the top plate due to the plasma treatment. For such large plates, the top end-drop is not at the centre of the top plate since the two plates are not perfectly parallel. In spite of this lack of full coverage for large plates, we find that the critical minimum bridge radius \(h_{0}\) marking the onset of the Rayleigh-Plateau instability increases with \(R_{0}\), allowing us to explore a wider range of \(h_{0}\) values, as shown in figure 2(b) where the apparent relaxation time \(\tau_{e}\) seems to saturate to a maximum value \(\tau_{m}\), reported in table 1, at large \(h_{0}\). Since no clear plateau is observed, especially for the PEO\({}_{\rm aq}\) solution, the value of \(\tau_{m}\) is only an estimation. In figure 2(b), we only show one data point for each plate diameter, with \(V^{*}\) between 2.4 for the smallest plates and 0.88 for the largest plates. Figure 2: Apparent relaxation time \(\tau_{e}\) against \(h_{0}\) for all three solutions for plate diameters \(2R_{0}=2\), 3.5, 5 and 7 mm (a) and for \(2R_{0}=2\), 3.5, 5, 7, 10, 12.5, 15, 20 and 25 mm (b). In (a), data points of the same colour correspond to the same \(R_{0}\) with different droplet volumes \(V^{*}=V/R_{0}^{3}\approx 1.3\), 2.4 and 3.2. In (b), a single volume is used for each plate diameter (\(V^{*}\approx 2.4\) for the smallest plates and 0.88 for the largest plates). The inset images show stable liquid bridges for \(2R_{0}=2\) mm (a) and 20 mm (b). The linear fit is for the PEO\({}_{\rm aq}\) solution for \(h_{0}<2\) mm. Note that no change
in behaviour is observed in the \(\tau_{e}(h_{0})\) curves in figure 2(b) at the transition between fully covered (\(2R_{0}\leq 7\) mm) and not fully covered top plates (\(2R_{0}\geq 10\) mm), around \(h_{0}\approx 1.3\) mm, strengthening the claim that the top and bottom end-drops are passive liquid reservoirs whose size and shape don't affect the filament thinning dynamics. As shown by the inset images in figure 2(a,b), the bottom end-drop becomes increasingly larger than the top one as \(R_{0}\) increases since the Bond number \(\textit{Bo}=\rho gR_{0}^{2}/\gamma\) ranges between 0.16 and 25. However, the thinning dynamics is not driven by gravity since the "filament" Bond number \(\textit{Bo}_{f}=\rho gL_{f}h_{1}/\gamma\), comparing the typical capillary pressure \(\gamma/h_{1}\) in the filament to the hydrostatic pressure \(\rho gL_{f}\) over the filament length \(L_{f}\), is only up to 0.1 for the largest plates. This is also evident from the fact that filaments are not thicker at their base, see, e.g., the right inset image in figure 1(a). Note that the PEO solutions used in figures 2(b) are not the same as the ones used in figure 2(a) and have apparent relaxation times about 30% larger for \(2R_{0}=7\) mm, while the shear viscosity was only up to 10 % larger, meaning that the shear rheology parameters in table 1 are representative of both solutions. These differences are due to slightly different preparation protocols, e.g. agitation times, for a given recipe. These extra solutions were prepared because, by the time we had realised much larger plates were needed to observe the saturation of \(\tau_{e}\), the previous solutions had considerably aged, i.e., had lower \(\tau_{e}\) values. ## 4 Interpretations The apparent disagreement between experiments and equation 1 implies that either the liquid changes, becoming less elastic for lower values of \(h_{0}\), or that the Oldroyd-B model, from which equation 1 is derived, misses some important features of polymer dynamics in extensional flows. We now consider some possible explanations. ### Evaporation and degradation First, although the relaxation time measured in filament thinning is known to increase with polymer concentration [6; 11], solvent evaporation cannot explain the observed increase of the apparent relaxation time with increasing droplet size. Indeed, the bulk polymer concentration would increase quicker for smaller droplets due to their larger surface to volume ratio, leading to larger concentrations, and hence larger \(\tau_{e}\), for smaller droplets. Besides, repeating an experiment several times over the course of 10 minutes does not lead to a monotonic increase or decrease of \(\tau_{e}\) over time, beyond small variations of less than 5%. The latter observation also argues against polymer degradation as a possible explanation. Moreover, \(\tau_{e}\) is observed to increase with \(h_{0}\) for both PEO and HPAM solutions, even though HPAM is less fragile than PEO. Therefore, if the liquid is in fact the same for each experiment, the Oldroyd-B model fails to describe the full polymer dynamics in the bridge / filament. In particular, differences in the history of polymer deformations for different drop sizes could lead to different "initial" states of polymers at the onset of the elastic regime, which could result in different filament thinning rates. We now discuss whether finite extensibility of polymer chains, as described by the FENE-P model, can account for such differences. 
### Elasto-capillary balance with FENE-P Following Wagner et al. [15], for a uniaxial extensional flow, the polymeric part of the normal stress is \(\sigma_{p,zz}=G(fA_{zz}-1)\) in the flow direction \(z\) where \(G\) is the elastic modulus and \(A_{zz}\) is the normal part of the conformation tensor \(\mathbf{A}\) which follows \[\dot{A}_{zz}-2\dot{\epsilon}A_{zz}=-\frac{fA_{zz}-1}{\tau}, \tag{2}\] where \(\dot{\epsilon}\) is the extension rate, \(\tau\) the relaxation time and \(f=(1-\mathrm{tr}(\mathbf{A})/L^{2})^{-1}\) where \(L\) is the ratio of the fully unravelled chain size to its equilibrium size. In this model the stress diverges as \(A_{zz}\) approaches its limit value \(L^{2}\). In the elastic regime (\(t\geq t_{1}\)), we assume that polymers are far from equilibrium and that the axial stress dominates over the radial stress, i.e. \(A_{zz}\gg 1>A_{rr}\). Assuming negligible inertia and solvent viscosity in the elastic regime, we use the elasto-capillary force balance equation \[(2X-1)\frac{\gamma}{h}=\sigma_{p,zz}=GfA_{zz}\,, \tag{3}\] with \(f=(1-A_{zz}/L^{2})^{-1}\) where \(X=3/2\) in the Oldroyd-B limit [16]. Assuming a small correction due to finite extensibility, combining equations 2 and 3 with \(\dot{\epsilon}=-2\dot{h}/h\) leads to the ordinary differential equation \((3+A_{zz}/L^{2})\dot{A}_{zz}=A_{zz}/\tau\) which has an implicit solution \[\frac{t-t_{1}}{\tau}=\frac{A_{zz}-A_{1}}{L^{2}}+3\ln\left(\frac{A_{zz}}{A_{1}}\right), \tag{4}\] where \(A_{1}=A_{zz}(t_{1})\) quantifies the amount of polymer stretching at the onset of the elastic regime at time \(t_{1}\). The filament radius can be computed by noticing that \(hfA_{zz}\) is a constant according to equation 3, i.e. \[\frac{h}{h_{1}}=\frac{f_{1}A_{1}}{fA_{zz}}\,, \tag{5}\] where \(f_{1}=(1-A_{1}/L^{2})^{-1}\) and \(h_{1}=h(t_{1})\) is the filament radius at the onset of the elastic regime. \(h\) depends on three parameters: \(\tau\), \(h_{1}\) and the ratio \(A_{1}/L^{2}\) quantifying how far chains are from being fully extended at the onset of the elastic regime. Indeed, according to equations 4 and 5, \(h\) is unchanged upon multiplying both \(A_{1}\) and \(L^{2}\) by the same quantity. In the Oldroyd-B limit \(L^{2}\rightarrow\infty\) (\(f=1\)), we recover the expected exponential trends \(A_{zz}=A_{1}\exp\left((t-t_{1})/3\tau\right)\) and \(h=h_{1}\exp\left(-(t-t_{1})/3\tau\right)\). For a finite extensibility, the exponential regime holds until \(A_{zz}\) approaches \(L^{2}\), where finite extensibility effects arise. Ultimately, the stress diverges and \(h\to 0\) in finite time when \(A_{zz}\) saturates to \(L^{2}\), which occurs sooner as \(A_{1}/L^{2}\) is closer to one. In particular, if \(A_{1}/L^{2}\) is only slightly less than 1, meaning that polymer chains are already almost fully extended at the onset of the elastic regime, finite extensibility effects are never negligible and equation 1 is never valid. In that case, increasingly larger filament thinning rates are observed as \(A_{1}/L^{2}\) increases, and equations 4 and 5 predict that the apparent relaxation time \(\tau_{e}\) is well approximated by \(\tau_{e}/\tau\approx 1-A_{1}/L^{2}\). This theory is tested in figure 3(a) where the elastic regime (\(t\geq t_{1}\)) of the PEO\({}_{\text{aq}}\) solution, tested with different plate diameters, is compared with the predictions of equations 4 and 5, where we have chosen the maximum relaxation time \(\tau_{m}\) measured at large \(h_{0}\) as the relaxation time of the FENE-P model.
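The implicit solution of equations 4 and 5 is straightforward to evaluate numerically; the sketch below parametrises the elastic regime by the ratio \(r=A_{zz}/L^{2}\), which, as noted above, is the only combination that matters besides \(\tau\) and \(h_{1}\). The function name and the parameter values are illustrative; in practice they would come from the fits discussed next.

```python
import numpy as np

def fene_p_thinning(tau, h1, A1_over_L2, n=2000):
    """Evaluate equations 4 and 5 along the elastic regime.
    r = A_zz/L^2 runs from its initial value A1/L^2 up to (almost) 1;
    returns the times t - t1 and the filament radii h."""
    r1 = A1_over_L2
    r = np.linspace(r1, 1.0 - 1e-6, n)
    # equation 4: (t - t1)/tau = (A_zz - A_1)/L^2 + 3 ln(A_zz/A_1)
    t_minus_t1 = tau * ((r - r1) + 3.0 * np.log(r / r1))
    # equation 5: h/h1 = (f1 * A1)/(f * A_zz) with f = 1/(1 - r)
    h = h1 * (r1 / (1.0 - r1)) * (1.0 - r) / r
    return t_minus_t1, h
```

For \(A_{1}/L^{2}\ll 1\) this reduces to the exponential of equation 1, while for \(A_{1}/L^{2}\) close to one the apparent thinning rate is larger, consistent with \(\tau_{e}/\tau\approx 1-A_{1}/L^{2}\).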
We use \(A_{1}/L^{2}\) and \(h_{1}\) as fitting parameters to obtain a good agreement between model and experiments. Most importantly, we have to impose that \(A_{1}/L^{2}\) gets closer to one as \(h_{0}\) decreases to capture the observed thinning rates, all larger than \(1/3\tau_{m}\). For \(2R_{0}=2\) mm for example, we need \(A_{1}/L^{2}=0.93\), meaning that polymers are almost fully extended at the onset of the elastic regime. We emphasise here that in previous studies, where \(\tau_{e}\) was believed to not vary with the plate size, comparisons with the FENE-P model were performed using \(\tau_{e}\) as the model's relaxation time, and it is quite remarkable that when using a larger value \(\tau_{m}\), one can still obtain a somehow exponential-looking trend with the right thinning rate by tuning \(A_{1}/L^{2}\) for \(\tau_{e}<\tau_{m}\). Figure 3: (a) \(h(t)\) for the PEO\({}_{\text{aq}}\) solution tested with different plate diameters. The elastic regime (\(t\geq t_{1}\)) is fitted by equations 4 and 5 with \(\tau=\tau_{m}\), using \(h_{1}\) and \(A_{1}/L^{2}\) as fitting parameters. (b) Time-evolution of \(h\) from experiments and of \(A_{zz}\) calculated from equation 2 with \(\dot{\epsilon}=-2\dot{h}/h\) using experimental values of \(h\), with \(\tau=\tau_{m}\) and various values of \(L^{2}\), for the PEO\({}_{\text{aq}}\) solution tested with a plate diameter \(2R_{0}=10\) mm, using \(A_{zz}=1\) at \(h=h_{0}\) as the initial condition. Equally good fits can be obtained for the PEO\({}_{\rm visc}\) and HPAM solutions and the corresponding values of the fitting parameter \(A_{1}/L^{2}\) are plotted against \(h_{0}\) in figure 4(a) (light purple). These results suggest that the maximum relaxation time \(\tau_{m}\) measured at large plate sizes could be the 'true' relaxation time, lower apparent values \(\tau_{e}\) being a consequence of polymers being too close to their finite extensibility limit at the onset of the elastic regime to display their 'natural' (far from full extension) relaxation behaviour. ### Calculating \(A_{zz}(t)\) from experimental \(h(t)\) with FENE-P Although promising, this explanation is only valid if the FENE-P model indeed predicts that polymers are more stretched at the onset of the elastic regime for smaller drop sizes, i.e., if \(A_{1}\) increases as \(h_{0}\) decreases. To test this, we use equation 2 to calculate \(A_{zz}(t)\), using the experimental values of \(h(t)\) for \(\dot{\epsilon}(t)=-2\dot{h}/h\), although this expression is only valid at the thinnest bridge radius. In other words, we calculate the predictions of the model for the experimental history of extension rates. In particular, the extension rate history in the Newtonian regime (\(t<t_{1}\)) sets \(A_{1}\). Hence, we do _not_ assume large polymer deformations (\(A_{zz}\not\gg 1\)) since, as our plate separation protocol is designed to ensure, polymers are at equilibrium at the onset of capillary thinning, i.e. \(A_{zz}=1\) when \(h=h_{0}\). We use \(f=(1-A_{zz}/L^{2})^{-1}\) since, when \(f\) is not close to 1 anymore, the axial stress dominates over the radial stress. In order to circumvent the issue of calculating \(\dot{h}\) from experimental values of \(h\), we introduce a function \(y(t)\) such that \(A_{zz}=y/h^{4}\), which gives \(\dot{A}_{zz}+4(\dot{h}/h)A_{zz}=\dot{y}/h^{4}\), so that equation 2 becomes \(\tau\dot{y}=h^{4}-y/(1-y/(h^{4}L^{2}))\) which does not involve \(\dot{h}\) anymore.
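This substitution can be integrated directly from a measured \(h(t)\) curve; the following sketch anticipates the spline-plus-ODE-solver procedure described in the next paragraph, with the data arrays, solver choice and tolerances being illustrative placeholders rather than the authors' implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import solve_ivp

def azz_from_experiment(t_data, h_data, tau, L2):
    """Integrate tau * dy/dt = h^4 - y / (1 - y/(h^4 L^2)) with y = A_zz h^4,
    using a spline of the measured minimum radius h(t).
    Initial condition: A_zz = 1 at the first point, i.e. y(t0) = h_data[0]**4."""
    h_of_t = CubicSpline(t_data, h_data)

    def rhs(t, y):
        h4 = h_of_t(t) ** 4
        f = 1.0 / (1.0 - y[0] / (h4 * L2))        # FENE-P factor
        return [(h4 - f * y[0]) / tau]

    sol = solve_ivp(rhs, (t_data[0], t_data[-1]), [h_data[0] ** 4],
                    t_eval=t_data, method="LSODA",
                    rtol=1e-8, atol=1e-6 * h_data[0] ** 4)
    A_zz = sol.y[0] / h_of_t(sol.t) ** 4
    return sol.t, A_zz
```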
To solve this equation, we use a standard ODE solver, using spline interpolation to create a \(t\to h(t)\) function based on experimental values of \(h\). This equation can be integrated analytically in the Oldroyd-B limit, as shown by Bazilevsky et al. [17]. The results are shown in figure 3(b) for the PEO\({}_{\rm aq}\) solution tested with a plate diameter \(2R_{0}=10\) mm, with \(\tau=\tau_{m}\) for the relaxation time of the FENE-P model, along with various values of \(L^{2}\), including the Oldroyd-B limit \(L^{2}\to+\infty\). As expected, the values of \(A_{zz}\) calculated from FENE-P coincide with Oldroyd-B until they saturate upon reaching \(L^{2}\). In particular, the value of \(A_{1}\) becomes independent of \(L^{2}\) when \(L^{2}\) is sufficiently large and becomes indistinguishable from the values predicted by the Oldroyd-B model. ### Comparing fitting and calculated \(A_{1}\) values The values of \(A_{1}=A_{zz}(t_{1})\) calculated in §4.3 from the experimental values of \(h(t)\) in the Newtonian regime (\(t<t_{1}\)), using the FENE-P model with \(\tau=\tau_{m}\), are plotted in figure 4(a) against \(h_{0}\) for all three solutions (dark blue). More precisely, we plot the ratio \(A_{1}/L^{2}\) indicating how close polymer chains are to being fully extended at the onset of the elastic regime (full extension corresponding to \(A_{1}/L^{2}=1\)). This is because we want to compare these calculated values (dark blue) with the values of \(A_{1}/L^{2}\) used as a fitting parameter (light purple) in §4.2 to fit the elastic regime with equations 4 and 5 (see figure 3(a)). We hence need to choose a value of \(L^{2}\). For each liquid, we choose \(L^{2}\) such that, at the largest \(h_{0}\), the calculated value (dark blue) of \(A_{1}/L^{2}\) coincides exactly with the fitting value (light purple). We obtain \(L^{2}=4.9\times 10^{4}\) for the PEO\({}_{\rm aq}\) solution, \(L^{2}=2.0\times 10^{4}\) for the PEO\({}_{\rm visc}\) solution and \(L^{2}=6.2\times 10^{3}\) for the HPAM solution. The order of magnitude is consistent with the microscopic formula [11] \[L^{2}=3\left[\frac{j\sin^{2}{(\theta/2)}M_{w}}{C_{\infty}\,M_{u}}\right]^{2(1-\nu)}\,, \tag{6}\] which gives \(L^{2}\) between \(4.5\times 10^{4}\) and \(1.3\times 10^{5}\) for PEO of molecular weight \(M_{w}=4\times 10^{6}\) g/mol, for typical solvent quality exponents \(\nu\) between 0.55 and 0.5 (theta solvent) found for PEO in water-based solvents, where \(M_{u}\) is the monomer molecular weight, \(\theta=109^{\circ}\) the C-C bond angle, \(j=3\) the number of bonds of a monomer and \(C_{\infty}=4.1\) the characteristic ratio [18]. We find that, while the fitting values of \(A_{1}\) (light purple) increase towards \(L^{2}\) as \(h_{0}\) decreases, the calculated values of \(A_{1}\) (dark blue) only do so for the PEO\({}_{\rm visc}\) solution, for which a good agreement is found with the fitting values, and do not for the PEO\({}_{\rm aq}\) and HPAM solutions for which \(A_{1}\) is fairly constant. For these last two, no other value of \(L^{2}\) can lead to a better agreement since decreasing \(L^{2}\) would just shift all calculated values towards the upper limit \(A_{1}/L^{2}=1\). Figure 4: (a) Values of \(A_{1}/L^{2}\) used as fitting parameters (light purple, see e.g. figure 3(a)), and values calculated from the FENE-P model (dark blue, see e.g. figure 3(b)) for \(\tau=\tau_{m}\) and values of \(L^{2}\) discussed in the text, against \(h_{0}\) for all liquids and plate diameters. (b) Same values of \(A_{1}\) calculated from the FENE-P model (dark blue), compared with \((h_{0}/h_{1})^{4}\) (light orange). All values of \(A_{1}\) calculated from FENE-P are the same as in Oldroyd-B except for the PEO\({}_{\rm visc}\) solution at low \(h_{0}\) where Oldroyd-B values are shown with empty blue symbols.
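As a quick arithmetic check of equation 6, plugging in the values quoted above reproduces the quoted range; the monomer molecular weight \(M_{u}\approx 44\) g/mol for the ethylene-oxide repeat unit is an assumption, as it is not stated explicitly in the text.

```python
import numpy as np

# Equation 6 for PEO with Mw = 4e6 g/mol, j = 3, theta = 109 deg, C_inf = 4.1;
# M_u = 44 g/mol (ethylene-oxide monomer) is an assumption.
Mw, Mu, j, C_inf, theta = 4.0e6, 44.0, 3, 4.1, np.deg2rad(109.0)
x = j * np.sin(theta / 2.0) ** 2 * Mw / (C_inf * Mu)
for nu in (0.5, 0.55):
    print(f"nu = {nu}: L^2 = {3.0 * x ** (2.0 * (1.0 - nu)):.2e}")
# roughly 1.3e5 for nu = 0.5 and 4.5e4 for nu = 0.55, matching the quoted range
```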
In order to better understand this, we compare these calculated values of \(A_{1}\) (dark blue) with their upper limit \((h_{0}/h_{1})^{4}\) (light orange) in figure 4(b). This upper limit corresponds to a relaxation time \(\tau\) so large that polymer relaxation (right hand side of equation 2) is always negligible in the Newtonian regime (\(t<t_{1}\)), a case where equation 2 (with \(\dot{\epsilon}=-2\dot{h}/h\)) can be integrated as \(A_{zz}h^{4}=h_{0}^{4}\). Differences in values of \((h_{0}/h_{1})^{4}\) among the three polymer solutions are due to differences in \(h_{1}\) stemming from different elastic moduli \(G\), as will be discussed in a separate paper where we show that the Oldroyd-B model gives \(h_{1}\propto(GH^{4}/\gamma)^{1/3}\) where \(H\to h_{0}\) for large relaxation times. We find in figure 4(b) that \(A_{1}\) is very close to the \((h_{0}/h_{1})^{4}\) limit for the PEO\({}_{\rm aq}\) and HPAM solutions at low \(h_{0}\), meaning that polymer relaxation is indeed negligible, and that the ratio between the two increases as \(h_{0}\) increases, meaning that relaxation becomes more important. This is consistent with the fact that the Deborah number \(\mbox{\it De}=\tau_{m}/\tau_{R}\) decreases from 500 (\(\gg 1\), i.e., negligible relaxation) at the lowest \(h_{0}\) to 5 at the highest \(h_{0}\), where \(\tau_{R}=\sqrt{\rho h_{0}^{3}/\gamma}\) is the Rayleigh time scale relevant for the thinning dynamics of such low-viscosity liquids with Ohnesorge numbers \(\mbox{\it Oh}=\eta_{s}/\sqrt{\rho\gamma h_{0}}\) up to \(7\times 10^{-3}\ll 1\). The greater difference between \(A_{1}\) and \((h_{0}/h_{1})^{4}\) for the PEO\({}_{\rm visc}\) solution in figure 4(b) is due to the slower thinning dynamics in the Newtonian regime, caused by a larger solvent viscosity with Ohnesorge numbers 270 times larger than for the two other solutions. Indeed, since all three solutions have comparable relaxation times, slower thinning dynamics means that polymer relaxation is more important, i.e., \(A_{1}<(h_{0}/h_{1})^{4}\). Moreover, the Newtonian thinning dynamics gets slower as \(h_{0}\) increases since the visco-capillary time scale \(\tau_{\rm visc}=\eta_{s}h_{0}/\gamma\) increases, which is the reason why \(A_{1}\) decreases with \(h_{0}\) for the PEO\({}_{\rm visc}\) solution. ### Numerical simulations Hence, the FENE-P model can only explain the increase of the apparent relaxation time \(\tau_{e}\) with \(h_{0}\) for the PEO\({}_{\rm visc}\) solution, which is the most dilute of our three solutions and has the highest solvent viscosity. This is checked further by numerical simulations of the axisymmetric problem (with gravity) using the full FENE-P constitutive equation. We use as fixed model parameters the values of \(\eta_{s}\) and \(\eta_{p}\) from the shear rheology, the high-\(h_{0}\) limit \(\tau=\tau_{m}\) for the relaxation time and the value of \(L^{2}=2.0\times 10^{4}\) used in §4.4 (figure 4) for which the simplified analytical model in §4.2 and §4.3 could rationalise the apparent relaxation times \(\tau_{e}\). The equations to be solved are the same as in Rubio et al. [19] and the numerical methods are detailed in Appendix B.
The initial condition is established by starting from a stable liquid bridge with a plate-to-plate distance \(L_{p}\) just below the instability threshold value and slightly increasing \(L_{p}\) to trigger the pinch-off. The results are shown in figure 5 for the three smallest plates and the corresponding experimental droplet volumes in terms of the time evolution of the minimum bridge / filament radius. Simulations are found to start at a bridge radius close to \(h_{0}\), which validates the numerical method used to set the initial condition. We find that simulations are able to capture the Newtonian regime quite well and provide a reasonable agreement with experiments in the elastic regime. In particular, the filament thinning rate varies with the plate diameter, consistent with experiments, while the Oldroyd-B model would give the same (constant) thinning rate \(1/3\tau_{m}\). Simulations could not be continued far enough to compare with the full experimental time window. Like in figure 3(a), figure 5 also features the analytic solution of equations 4 and 5 for the elastic regime (\(t>t_{1}\)). Figure 5: \(h(t)\) from experiments and simulations for the PEO\({}_{\text{visc}}\) solution for different plate diameters. Simulations are performed for \(2R_{0}=2\), \(3.5\) and \(5\) mm only, using the FENE-P model with \(\tau=\tau_{m}\) and \(L^{2}=2.0\times 10^{4}\). The elastic regime is fitted by equations 4 and 5 with \(\tau=\tau_{m}\) where, like in figure 3(a), \(A_{1}/L^{2}\) and \(h_{1}\) are used as fitting parameters. ## 5 Conclusions and discussion We have shown experimentally that the thinning rate of filaments of various polymer solutions is not just a material property but depends on the size of the system in both CaBER (with slow plate separation) and DoS experiments, consistent with previous observations for dripping experiments [13]. Although all filaments are observed to thin exponentially, as predicted by the Oldroyd-B model (see equation 1), we show that the inferred apparent relaxation time \(\tau_{e}\) in CaBER increases with the minimum bridge radius \(h_{0}\) marking the onset of capillary thinning, which is an increasing function of both the plate diameter and droplet volume, and seems to saturate only at large \(h_{0}\) values corresponding to plate diameters \(>10\) mm, significantly larger than typical CaBER plates. These observations hence suggest that CaBER relaxation times reported in the literature are not universal since testing a given polymer solution with different plate diameters and droplet volumes yields different results, with typical variations by a factor \(>2\) within the range of standard plate diameters \(2R_{0}=2-7\) mm, at least for slow-plate-retraction CaBER protocols. The fact that Bazilevsky et al. [1], who used both fast and slow-retraction CaBER methods, reported no variation of \(\tau_{e}\) with the drop volume \(V\) (without providing the data to support their claim) might be due to the fact that its dependence on \(V\) is weak (weaker than its dependence on \(R_{0}\), see figure 2(a)) and that, for a given plate diameter, \(V\) can only be varied up to a critical value above which the drop does not fit on the plate. We demonstrate that the variation of \(\tau_{e}\) with \(h_{0}\) is not caused by solvent evaporation or polymer degradation and cannot be universally explained by finite extensibility effects described by the FENE-P model.
These observations suggest that the single-mode Oldroyd-B and FENE-P models miss some important features of polymer dynamics in extensional flows. The FENE-P model could only explain the variation of \(\tau_{e}\) for the most dilute solution with the most viscous solvent, which is consistent with the fact that (i) the FENE-P model is derived for dilute solutions and that (ii) inertio-capillary oscillations are absent for this solution. However, since the value of the finite-extensibility parameter was chosen to optimise the agreement with experiments, we don't exclude that this agreement may also be a coincidence. A physical interpretation for this deformation-history-dependent filament thinning rate is still needed, strengthening the already established need for better constitutive equations. Other shortcomings of the FENE-P model, such as coil-stretch hysteresis and the increase of \(\tau_{e}\) with the polymer concentration in the 'dilute' regime (\(c<c^{*}\)), were previously explained by a Conformation-Dependent Drag (CDD) model accounting for the action of both chain stretching and intermolecular hydrodynamic interactions on the friction coefficient [20, 21]. Future works will determine if such models are also able to capture the variations of the effective relaxation time \(\tau_{e}\) with the drop/plate size observed here. Experiments with other filament-thinning methods such as the classical step-strain CaBER protocol, as well as ROJER [22, 23], are needed to assess the universality of the dependence of the apparent relaxation time on the size of the system identified here. **Acknowledgements**. We thank Louison Laruelle and Carmen van Poelgeest for preliminary experimental work. **Declaration of Interests**. The authors report no conflict of interest. ## Appendix A Dripping-onto-substrate (DoS) In DoS experiments, a horizontal substrate (here, a plasma-treated aluminium plate) is moved slowly upward until it is in contact with a liquid droplet hanging steadily from a nozzle. As shown in the image sequence in figure 6, a fast spreading of the liquid on the plate leads to the pinch-off of the bridge connecting the substrate to the nozzle. This transiently leads to the formation of an exponentially thinning filament, as shown by the time-evolution of the minimum bridge / filament radius \(h\) in figure 6, where the PEO\({}_{\text{aq}}\) solution is tested with four different nozzle diameters. As in CaBER experiments, the apparent relaxation time extracted from the filament thinning rate increases with the droplet size, here quantified by the nozzle diameter. This apparent relaxation time \(\tau_{e}\) is plotted in the inset of figure 6 for both CaBER and DoS experiments against the filament radius \(h_{1}\) marking the onset of the elastic regime which, unlike \(h_{0}\) in CaBER, is easily definable in both methods. The relatively good collapse of the data points on a single curve suggests a universal physical mechanism for the dependence of the apparent relaxation time on the size of the system, independent of the exact method used. We checked that \(\tau_{e}\) also increases with the nozzle diameter when the droplet spreads on a 'small' plate (about two times larger than the nozzle), where spreading stops before the viscoelastic filament is formed. Figure 6: \(h(t)\) from DoS experiments with different nozzle diameters for the PEO\({}_{\text{aq}}\) solution. We report the values of the maximum diameter \(2R_{0}^{*}\) of the top end-drop which is between the inner and outer nozzle diameter. Top images correspond to the steadily hanging drop (left) and to four times labelled 1 to 4 indicated on the \(h(t)\) curve for \(2R_{0}^{*}=3.27\) mm. Inset: \(\tau_{e}\) vs. \(h_{1}\) from CaBER and DoS for the PEO\({}_{\text{aq}}\) solution.
## Appendix B Numerical method The FENE-P model was solved with a variation of the method described by Herrada & Montanero [24]. The physical domain occupied by the liquid is mapped onto a rectangular domain through a coordinate transformation. Each variable and its spatial and temporal derivatives appearing in the transformed equations were written as a single symbolic vector. Then, we used a symbolic toolbox to calculate the analytical Jacobians of all the equations with respect to the symbolic vector. Using these analytical Jacobians, we generated functions that could be evaluated in the iterations at each point of the discretised numerical domain. The transformed spatial domain is discretised using \(n_{\eta}=11\) Chebyshev spectral collocation points in the transformed radial direction. We used \(n_{\xi}=801\) equally spaced collocation points in the transformed axial direction \(\xi\). The axial direction was discretised using fourth-order finite differences. Second-order backward finite differences were used to discretise the time domain. We used an automatic variable time step based on the norm of the difference between the solution calculated with a first-order approximation and that obtained from the second-order procedure. The nonlinear system of discretised equations was solved at each time step using the Newton method. The method is fully implicit.
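The authors' solver relies on symbolic Jacobians and is not reproduced here; purely as an illustration of the Chebyshev spectral collocation mentioned above, a textbook (Trefethen-style) construction of the collocation points and differentiation matrix, as could be used for the radial direction, is sketched below.

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] and the corresponding
    spectral differentiation matrix (n + 1 points); the paper uses
    n_eta = 11 collocation points in the radial direction."""
    if n == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(n + 1) / n)              # collocation points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))       # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                           # diagonal: negative row sums
    return D, x

# Example: D @ f(x) approximates f'(x) with spectral accuracy.
D, x = cheb(10)                                           # 11 points, as in the paper
max_err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
```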
2309.07773
Spoken Humanoid Embodied Conversational Agents in Mobile Serious Games: A Usability Assessment
This paper presents an empirical investigation of the extent to which spoken Humanoid Embodied Conversational Agents (HECAs) can foster usability in mobile serious game (MSG) applications. The aim of the research is to assess the impact of multiple agents and illusion of humanness on the quality of the interaction. The experiment investigates two styles of agent presentation: an agent of high human-likeness (HECA) and an agent of low human-likeness (text). The purpose of the experiment is to assess whether and how agents of high humanlikeness can evoke the illusion of humanness and affect usability. Agents of high human-likeness were designed by following the ECA design model that is a proposed guide for ECA development. The results of the experiment with 90 participants show that users prefer to interact with the HECAs. The difference between the two versions is statistically significant with a large effect size (d=1.01), with many of the participants justifying their choice by saying that the human-like characteristics of the HECA made the version more appealing. This research provides key information on the potential effect of HECAs on serious games, which can provide insight into the design of future mobile serious games.
Danai Korre, Judy Robertson
2023-09-14T15:02:05Z
http://arxiv.org/abs/2309.07773v3
**Spoken Humanoid Embodied Conversational Agents in Mobile Serious Games: A Usability Assessment** ## Abstract This paper presents an empirical investigation of the extent to which spoken Humanoid Embodied Conversational Agents (HECAs) can foster usability in mobile serious game (MSG) applications. The aim of the research is to assess the impact of multiple agents and illusion of humanness on the quality of the interaction. The experiment investigates two styles of agent presentation: an agent of high human-likeness (HECA) and an agent of low human-likeness (text). The purpose of the experiment is to assess whether and how agents of high human-likeness can evoke the illusion of humanness and affect usability. Agents of high human-likeness were designed by following the ECA design model that is a proposed guide for ECA development. The results of the experiment with 90 participants show that users prefer to interact with the HECAs. The difference between the two versions is statistically significant with a large effect size (d=1.01), with many of the participants justifying their choice by saying that the human-like characteristics of the HECA made the version more appealing. This research provides key information on the potential effect of HECAs on serious games, which can provide insight into the design of future mobile serious games. ## 2 Introduction The latest generation of mobile devices has the capabilities of supporting more complex applications in terms of technical and interactive features. The portability and wireless access to the internet makes mobile devices a tool of great potential for formal and informal edification. However, there is a lack of studies regarding the use and the effectiveness of mobile devices for this purpose (Y.-T. Sung et al., 2016).The multi-touch nature of the mobile interaction along with the smaller screen size and the human fingertip call for a more compact information architecture (IA) with cleaner user interfaces and a smaller number of steps (Doumanis et al., 2015). The way users interact with mobile devices is changing again since the latest generation of mobile devices includes voice-driven virtual assistants (Siri, Google now, S voice) (Santos-Perez et al., 2013). Embodied conversational agents (ECAs) are virtual characters with the ability to converse with a human through verbal (speech) and/or non-verbal communication (text and/or gestures) (Cassel et al., 2000). There are many theoretical advantages in favour of ECAs and spoken dialogue systems (systems that use speech as input) and it is assumed they provide a more "natural interaction" (Weiss et al., 2015, Takeuchi and Naito 1995). They are often considered anthropomorphic entities due to the linguistic, extra-linguistic and non-verbal information they convey. The anthropomorphism of interfaces evokes an illusion of humanness from the user's behalf that can affect the interaction and subsequently the usability. Increased believability and perceived trustworthiness are a major goal in ECA research. To achieve that, human-like virtual agents are often developed; this human-like aspect makes ECAs subject to social conventions (Gris Sepulveda, 2015). According to some studies, the interaction with spoken dialogue systems, either in the form of an embodied agent or not, is still inferior compared to other approaches that allow a direct manipulation of the system to which the user responds instinctively, despite the theoretical advances of ECAs and dialogue systems (Weiss et al., 2015). 
However, the year 2016 has been a tipping point for conversational interfaces with major companies investing heavily in this technology (McTear, 2017). Furthermore, conversational agents are expected to be important modes of interaction in Virtual Reality (VR) environments (Beilby and Zakos, 2014) Thus, the importance of understanding the quality attributes and the issues that are affiliated with the development of high-quality and usable conversational agents, increases exponentially (Radziwill and Morgan, 2017). The introduction of an ECA without taking into consideration the context of use and the purpose of the system could lead to a poor performance by the user as the ECA might act as a distraction rather than a helpful element and the interaction may be frustrating for the user (Doumanis et al., 2015). Therefore, whether usability and quality are to be enhanced by using an ECA in a multimodal human-machine interface must be decided for each application anew (Weiss et al., 2015). This is also a strong reason to examine if and how ECAs enhance usability over current interaction paradigms in serious game (SG) environments, even more so in mobile devices as there is a recent trend towards mobile serious games (MSG) and empirical evidence is limited (Gamelearn, 2015; Adkins, 2020; Doumanis, 2015). Numerous aspects of ECAs (physical, behavioural etc.) have been evaluated empirically for several years. ECAs' interdisciplinary nature allows for further investigation on how they can create highly usable interfaces, as they rely heavily on technological advances such as the processing power, rendering techniques, graphic cards that are ever changing thus making previous research dated or even obsolete. As technology and computational techniques advance, ECAs remain a thriving research topic; in the last year with the HCI community new ECA developments across contrasting domains were published - for example an ECA tutor for teaching fractions to children (Krishna, Pelachaud, & Kappas, 2020), an animated daisy plant companion for older adults (Simpson, Gaiser, Mac'ik, & Bressgott, 2020), a virtual agent designed to increase productivity at work (Grover, Rowan, Suh, McDuff, & Czerwinski, 2020) and an ECA to assist in the area of substance use counselling (Olafsson, Wallace, & Bickmore, 2020). Although the evidence for the adoption of ECAs is encouraging, much experimental work related to the media equation and ECAs has been conducted on desktop computers. Mobile users may have a different reaction towards ECAs as mobile devices are most commonly used in places with ambient noise and crowds. Another issue is that research on mobile ECAs dates to early 2000 while the mobile and computer graphics technology has seen tremendous changes in recent years and most users are more technology literate. Based on these observations, further research on mobile ECAs is necessary. This paper presents an empirical investigation of the extent to which spoken Humanoid Embodied Conversational Agents (HECAs) can foster usability in mobile serious game (MSG) applications. The aim of the research is to empirically assess the impact of multiple agents and illusion of humanness on the quality of the interaction. The rest of the paper is organized as follows: in the background section, a literature review is given which includes theories on the use of ECAs, the proposal of the ECADM model for categorisation of ECAs and the experimental interface design. 
The evaluation section reports results of a qualitative and quantitative analysis of a user study with 90 participants which compares two styles of agent presentation: an agent of high human-likeness (HECA) and an agent of low human-likeness (text). The results of the study are then discussed, with consideration of future work and implications for developers. ### 2.1.Background #### 2.1.1 Embodied conversational agents (ECAs) The term "Embodied Conversational Agent" was coined by Justine Cassell in 2000 and is defined as follows: "computer interfaces that can hold up their end of the conversation, interfaces that realise conversational behaviours as a function of the demands of dialogue and as a function of emotion, personality, and social conversation" (Cassell et al., 2000). According to Cassell, these embodied conversational agents (ECAs) are virtual characters with the ability to converse with a human through verbal (speech) and/or non-verbal communication (text and/or gestures). Interface agents such as ECAs are agents that have some form of a graphical/visual representation on the interface and are capable of autonomous actions without explicit directions from the user (Doumanis and Smith, 2015). The terms mostly used interchangeably with ECAs are: virtual character, intelligent agent or social agent (Veletsianos and Miller, 2008). Another term related to ECAs is that of virtual humans. Virtual humans are the result of the emergence of different fields around computer science such as artificial intelligence, computer animation, computer graphics, human-computer interaction and cognitive science (Kasap and Magnenat-Thalmann, 2008). These characters can play the role of the guide, the trainer, the teammate, the rival or a source of motion in virtual space (Brogan et al., 1998). However, virtual humans along with their complexity, can vary diametrically as each of them has a specific role and purpose depending on the goal of the application. The main difference between ECAs and virtual humans is that virtual humans always have the appearance of a human and they do not necessarily possess any intelligence or communication skills. An example is the non-interactive characters in games that are used to populate a scene. When virtual humans are combined with ECAs, the result is a Humanoid Embodied Conversational Agent (HECA). Embodied conversational agents need to possess the following abilities which comply to the modelling of regular autonomous agents. First, they should perceive verbal and/or nonverbal input from the user and the user's environment. Second, they should translate the inputs' meaning and respond appropriately through verbal and/or nonverbal actions. Last, those actions should be performed by an animated computer character in a virtual environment (Huang, 2018). According to De Vos, (2002), ECAs share the following features: anthropomorphic appearance (human, animal or fantasy figure); a virtual body that is used for communication purposes; natural communication protocols; multimodality and performing a social role. This last feature is of particular interest for the research reported in this paper. Embodied conversational agents are different from other computer systems in the sense that they try to emulate human-to-human interaction in a believable manner and, therefore, have a social standing. The concept of believability is described by Bates, (1994) as "one that provides the illusion of life, and thus permits the audience's suspension of disbelief". 
In ECA research, the concept of believability is approached in two ways. One way is that higher believability can result by implementing more NL functions (Cassell and Stone, 1999). The other way is that believability is more a matter of personality and emotions supported by the significant roles that portrayal of emotions plays in creating "believable" characters by Disney (Bates, 1994). The work presented in this paper uses ECAs that express personality and emotions as part of the "illusion of humanness" of the system. #### 2.1.2 The illusion of humanness One of the major theoretical foundations of virtual character and ECA research is the media equation. Nass, et al. (1994) proposed the "Computers as Social Actors (CASA)" approach that is now known as the media equation theory. It implies that people tend to interact with computers and media in an inherently social way. Even though the users know that the computer is a medium rather than a human being, they treat it in a social way as they would in human-human interaction (Nishida et al., 2014). Experimental demonstration of this effect was carried out by Reeves and Nass (1996) showing that humans treated computers and media in an inherently social way although not consciously. The users rated seemingly "polite" computers as more favourable even though computers are not capable of expressing politeness. As a result, human-like interfaces such as virtual agents, pedagogical agents and ECAs would also be in principle subjected to social rules (Veletsianos, 2010). According to Kramer et al., (2015) the effects of ECAs can be described as "social" if they can evoke to the participant similar emotional, cognitive or behavioural reactions to the ones evoked by other humans. Further research by Nass and colleagues (Nass et al., 1997; Nass and Moon, 2000) used the term "Ethopoeia" to describe the phenomenon that occurs during the interaction between a human and a virtual agent. The "Ethopoeia" explanation suggests that people unconsciously apply social rules when interacting with a virtual agent in a similar way they would with other humans. Additionally, they reject the hypothesis that people consciously anthropomorphished computers thus they replied consciously as participants, but when asked denied doing so. The explanation according to Nass and colleagues can be found in the way the human brain has evolved to automatically recognise emotive reactions from humans (Kramer et al., 2015). Studies supporting this notion have provided evidence that users/participants ethnically identify with virtual agents, respond politely and apply gender stereotypes to them (Scott et al., 2015). For the purposes of this research the illusion of humanness is defined as _the user's perception that the system possesses human attributes and/or cognitive functions_. The illusion of humanness is not to be confused with anthropomorphism which is more related with the attribution of human properties to non-human entities or humanoid which almost always refers to having the appearance of a human. When it comes to anthropomorphism, "attribution" is a key term as it implies that giving human characteristics to non-human agents is a conscious action from humans' side while "the illusion of humanness" is an involuntary reaction to a humanoid and anthropomorphic interface. 
The illusion of humanness is an extension of the "ethopoeia" explanation and persona effect but not limited to the unconscious application of social rules or an affective impact on learning but rather a determining factor on users' performance and perceived usability. It refers more specifically to systems that present information by utilising one or more human-like attributes (ex. voice, gaze, gestures, body) thus giving an illusion of "humanness" to the user. These attributes can be presented in textual, auditory and/or visual form. These attributes can be in the form of: gesturing, facial expression, eye gaze, human-like movement, voice, embodiment and behaviour (ex. using pronouns, personality, politeness, humour). The illusion of humanness is related to anthropomorphism but is not synonymous. Anthropomorphism is the attribution of human characteristics to non-human entities and is a combination of the Greek words for human and form/appearance (\(\tilde{\alpha}\nu\theta\rho\alpha\pi o_{5}+\mu op\dot{\eta}\)). The word "anthropomorphism" etymologically is more relevant to the appearance, but it has also been used in the past to describe human-like behaviour in the field of HCI or even an umbrella term for human-like interfaces therefore it will be briefly explored as the factor the evokes the "illusion of humanness" effect. The psychology of anthropomorphism was examined by Adam Waytz (Harvard University) and Nicholas Epley (University of Chicago). This neuroscience research revealed that when people think of humans and non-human entities, the same brain areas are activated. This result is an indication that anthropomorphism utilises the same processes as the ones used when thinking of other people. Thus, anthropomorphism can evoke a certain mental response (illusion of humanness) where people think of non-human entities as human consequently render them worthy of consideration or moral care (Waytz et al., 2014). Anthropomorphism can take on various forms at the user interface. The simplest form is _textual_, another form concern using _auditory cues_ while _visual cues_ of multiple manifestations can be used and typically would involve using text and/or voice audio (Murano, 2006) (Table 1). The anthropomorphished aspect of textual feedback is the way text is written on the screen, i.e. using pronouns such as "I". Some chatbots are also an example of displaying textual anthropomorphism or personification. Auditory cues or "voice" are usually expressed in the form of Text-to-speech (TTS) technology or dynamically loaded voice clips of humans. The system may also use pronouns such as "I" to refer to itself. An example of a system using auditory cue is the virtual assistants that have recently became popular. Home virtual assistants such as Amazon Alexa and Google home but also mobile virtual assistants such as Siri, S voice and ok Google are a few examples of virtual assistants that speech recognition and voice output of a TTS form. Some of these systems have names associated with them such as Alexa and Siri which give the illusion of an identity and further anthropomorphises the system. The mere existence of voice expresses anthropomorphism. Even for the systems with no allocated names, the mere fact that they have a human-like voice gives an illusion of persona or identity due to extra-linguistic data provided through the voice beyond the context of the message such as intonation, gender etc. 
When humans hear a voice, they can in most cases understand emotion based on the tone that is being used. Speech can reveal cues to the speaker's personality, beliefs, cognitive processes, social membership etc. [14]. Non-verbal communication and extra-linguistic information are also of importance and can be anthropomorphic. Developing ECAs that mimic human-like non-verbal behaviours reinforces the understanding that the inclusion of non-verbal behaviour enhances human-agent interaction. Images that are characterised as anthropomorphic can range from simple stick drawings to hyper-realistic 3D characters [15]. This includes video clips of humans [1]. Non-verbal behaviour includes, but is not limited to, lip-synching that is accurate with the ECA's speech output, gesturing, and facial animations such as eyebrow raising and changes of eye gaze. Face animations (raising of an eyebrow, smiling etc.) have been used successfully to communicate emotion and signal speech input from the user [16]. Through a series of experiments, Foster (2007) found that when speech is combined with appropriate hand gestures, the usability of human-ECA interaction is significantly enhanced.

| **Textual cues** | **Auditory cues** | **Visual cues** |
| --- | --- | --- |
| Example: Virtual companion chatbots that use pronouns or names, such as Replika.¹ | Example: Home virtual assistants that use pronouns or names, such as Amazon's Alexa. | Example: Talespin's CoPilot soft-skills training platform with the virtual reality firing training module (Barry).² |

Footnote 1: Found in: [https://replika.ai/](https://replika.ai/)

Footnote 2: Found in: [https://www.talespin.company/copilot](https://www.talespin.company/copilot)

**Table 1 - Forms of anthropomorphism with examples**

### The ECADM Model for categorizing ECAs

The ECA Design Model (ECADM), which organises ECAs' characteristics into three categories, is shown in Fig. 1. This model serves a dual function: 1) to inform design decisions for designers and 2) to act as a guide for categorising ECA research, which will allow for better comparisons and analyses. On the presentation level, ECAs can be depicted as either human or non-human characters, animated or static, photorealistic or more stylised, 2D or 3D; they can have a full body, only a head, a bust or a torso; and finally their physical properties can vary (hair colour, clothes, body type, accessories, age etc.) [1, 13, 14] (Miller, 2008; Clarebout and Heidig (née Domagk), 2012). Secondly, on the interaction level, decisions on the input and output modalities of the ECA must be taken. Multimodality is a basic feature of ECAs; this means that ECAs can employ one or more input and output modalities such as voice and text. Finally, the persona level of the ECA is constituted by features related to the character of the ECA as perceived by the user. Just as in real life, as well as with virtual assistants, voice plays a major role in forming opinions about someone's personality. The agent's voice, along with their role in the application and the personality they adopt, forms a cluster of personality pointers. These personality pointers are also informed by non-verbal and extra-linguistic information. These categories are general and can be broken down into specifics; for example, under the interaction level one may add the number of agents within the application.

Figure 1: Categories of ECA Design Model (ECADM).
## 3 Experimental Interface Design

A system called Moneyworld was developed to examine the impact of multiple agents and the illusion of humanness on the quality of the interaction for the user. Isbister and Doyle (2002) claim that an agent with physical appearance, sound and animation can cause a powerful visceral reaction in the user and evoke the "illusion of life". By enhancing realism in movement, creating natural-sounding speech and creating the right visual style for the application, the user's reaction to the agent can be amplified. Based on the assumption that human-like realism can evoke an illusion of life and subsequently an illusion of humanness, two versions of agent representation are put to the test, based on the spectrum of application interface design in relation to human likeness (Figure 2). In order to achieve high human-likeness, a series of design decisions were made by following the ECADM (Figure 3). The choices were based on the literature, which suggests that realism on all levels evokes the illusion of life. For the purposes of this research, two versions of a finance-related SG were compared: the high human-likeness version, where the agents were represented by a humanoid ECA, and a low human-likeness version, where the agents are represented by neutral text conversational agents. Two agents with distinct purposes (collaborator and instructor) were chosen to explore the dimension of the role of the agent.

Figure 2: Spectrum of application interface design in relation to human likeness.

A mobile serious game was chosen as the application area because mobile gaming is expected to represent more than 50% of the total games market in 2020 (Footnote 3), there is a significant trend towards mobile serious games (Adkins, 2020) and empirical evaluations in mobile SGs are limited (Jordine et al., 2016). Serious games are defined as games, and therefore interactive, with a clear goal, based on a set of rules and providing feedback (Wouters, 2013); whose primary purpose is not entertainment or enjoyment (Michael and Chen, 2005); yet they are fun to play and/or engaging, have a scoring system (feedback) and teach a skill, knowledge or attitude to the user that can then be used in the real world (goal) (Bergeron, 2006); and "a mental contest, played with a computer (interactive) in accordance with specific rules (rules) that uses entertainment to further government or corporate training, education, health, public policy, and strategic communication objectives" (Zyda, 2005).

Footnote 3: Reported by Newzoo: [https://newzoo.com/insights/articles/the-global-games-market-will-reach-108-9-billion-in-2017-with-mobile-taking-42/](https://newzoo.com/insights/articles/the-global-games-market-will-reach-108-9-billion-in-2017-with-mobile-taking-42/)

Figure 3: **ECA design decisions that result in high human-likeness**

Computer games are undisputedly popular in modern society. Statistics show that the games industry is the fastest-growing entertainment industry, with 2.2 billion people playing games around the world. This fast growth is attributed to the popularity of games, especially among younger people, making them a great medium for obtaining information and knowledge (Lenhart et al., 2008; Seng and Yatim, 2014; Korre, 2012). Mondly, a company that makes language-learning applications, claims that "The new generation of learning should be about gamified, immersive experiences that always make the users crave for more." (Rajnerowicz, 2020).
Moneyworld is a 3D interactive mobile serious game in which the user travels back in time in order to learn more about the old money system that was used in the UK till the early 1970s. In this application, two photorealistic agents equipped with speech recognition are used. The participant partakes in a shopping experience using voice and mouse as input methods. In the game introduction, a female unembodied voice welcomes the user to the time machine chamber and introduces the concept of the application (Figure 4). After the time travelling, the participant is transferred to a corner store in the 1960s where the main interaction takes place. The virtual shop designed in this research is based on a typical 1960s corner shop with the items displayed behind the counter. Figure 5 shows the shopkeeper in the corner shop. The interaction starts with a tutorial by the same unembodied voice, introducing the old money system to the participant. After the review, the voice demonstrates how to use the coins.

Figure 4: **Introduction to Moneyworld, Time machine chamber.**

After the introduction, the application starts with a small tutorial on the gameplay delivered by another agent, Alex (instructor). Alex provides background information to the user on the currency and assistance when needed. In the 1960s, the currency used in Britain was an old monetary system based on pounds and shillings, with denominations based on units of 12 (12 pence to a shilling). After the description, Alex asks the participant to review the coins via an understanding exercise through speech. Associated error-recovery dialogue was included for instances where the user was silent or answered with an incorrect response. Alex also tells the user which item to purchase in the shop. Figure 6 depicts Alex in the virtual portal within the shop.

Figure 5: Corner store layout with shopkeeper ECA.

Figure 6: Alex shown in the virtual portal.

After Alex's tutorial, she introduces the multimodality of the application, that of the coin submission tray. For the user to pay for the products in the shop, a virtual wallet is presented at the bottom of the screen with all the coins (see Figure 7). In the neutral text version of Moneyworld, both the instructor agent (Alex) and the collaborator agent (shopkeeper) were presented in the form of neutral text, as shown in Figure 8.

Figure 7: Coin submission.

Figure 8: Neutral text instructor.

In total, the user is given four items on their shopping list to 'buy' at the virtual shop and is given feedback after each item for correct payment made, efficiency of payment (payment made with the fewest number of coins), and efficiency of task (whether any additional help was required for each item on the shopping list).

## 4 An Evaluation of Spoken HECAs Within MSGs

This experiment investigates user attitudes to two versions of Moneyworld involving speech recognition and conversational agents. The objective of this experiment is to examine the extent to which the illusion of humanness evoked by a conversational agent affects the usability of the application and the users' attitudes towards agents with different roles.

**R1:** To what extent do HECAs affect the usability of a mobile serious game?

**R2:** To what extent do users perceive a difference in agent persona between ECA and neutral text presentation, as measured by the agent persona instrument (API)?

**R3:** Which factors relating to the HECA's persona attributes account for variability in usability, and to what extent?
### Experimental Design

A 2x2 factorial repeated-measures experimental design was adopted for this study, as the application had two different factors, each with two levels, as shown in Table 2. The columns of the table represent the two shopping lists used to avoid overexposure between designs, and the rows represent the level of humanness of the agents used (text - low humanness level, HECA - high humanness level). There was no hypothesis for the shopping lists.

| **2x2 design** | **Shopping list 1** | **Shopping list 2** |
| --- | --- | --- |
| HECA | V1 | V3 |
| Text | V2 | V4 |

**Table 2: 2x2 factorial design table**

A power calculation was conducted with G*Power to determine the required sample size. To conduct a two-tailed t-test able to detect a small effect (d = 0.3) (Cohen, 1988), with an alpha value of 0.05, a power of 0.8 and a repeated-measures design, 90 participants were required.

#### 4.1.1 Participants

A total of 90 participants, all aged under 40, were recruited for this experiment. The age limit was set based on the context of the game, since the old sterling coins used in the game were in circulation till 15 February 1971; therefore, it was highly unlikely for someone under 40 years old to have knowledge of the old money system. The participants were balanced for version and shopping-list order.

| | |
| --- | --- |
| **Title** | Usability Evaluation: Presence of Humanoid Animated Agents on Mobile Serious Game |
| **Design** | Repeated measures |
| **Null hypotheses** | There is no difference in usability ratings between software versions. There is no difference in API ratings between software versions. |
| **Dependent variables** | Usability Questionnaire responses (1-7 Likert scale); Agent Persona Instrument (1-5 Likert scale) |
| **Other data** | Exit interview answers |
| **Independent variable (experiment)** | Agent embodiment (2 levels) |
| **Other variables** | Presentation order; Shopping-list order |
| **Researcher differences** | Controlled by following a prepared procedure and script |
| **Location** | Social space in a university building |
| **Cohort** | N = 90; power.t.test (power=0.8, d=0.3, sig.level=0.05, type="paired") |
| **Remuneration** | £10 |
| **Duration** | 45-60 minutes |

**Table 3: Summary Table of Usability Evaluation: Presence of Humanoid Animated Agents in Mobile Serious Game.**

Data were collected from a cohort of 90 participants (47 males, 43 females) with an average age of 25.6 years. Most participants were international students and professionals (38 native language English, 7 Chinese, 13 Greek, 3 Russian-Ukrainian, 1 Bulgarian, 2 French, 2 German, 3 Hindi, 3 Italian, 1 Indonesian, 1 Japanese, 2 Lithuanian, 3 Romanian, 6 Spanish, 1 Malay, 1 Polish, 1 Telugu, 1 Palestinian Arabic; some were bilingual).

#### Materials

For this research, two validated instruments were used: a questionnaire to assess the usability of the application and two identical questionnaires (the API), one for each agent. The usability questionnaire used in this evaluation is a standardised and validated metric for assessing usability (Jack et al., 1993; Doolin, 2013).
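Returning to the sample-size calculation in the experimental design above: the a priori power analysis can also be reproduced programmatically. The snippet below is a minimal sketch assuming Python with `statsmodels` as a stand-in for G*Power (the tool actually used in the study), with the same inputs as in Table 3 (paired design, d = 0.3, α = 0.05, power = 0.8).

```python
# Minimal sketch of the a priori power analysis (stand-in for G*Power).
# Assumption: statsmodels' TTestPower treats the paired design as a
# one-sample t-test on the paired differences, matching power.t.test(type="paired").
from statsmodels.stats.power import TTestPower

n_required = TTestPower().solve_power(
    effect_size=0.3,          # small effect, Cohen's d = 0.3
    alpha=0.05,               # two-tailed significance level
    power=0.8,                # desired statistical power
    alternative="two-sided",
)
print(f"Required sample size: {n_required:.1f}")  # ~90 participants, as reported
```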
Previous research (Dutton et al., 1993; Jack et al., 1993; Love et al., 1992) has identified salient attributes of the perceived usability of interactive systems. The result of this research is the CCIR MINERVA usability questionnaire, which was chosen for this study; it has been developed and tested as a tool for assessing users' attitudes (McBreen, 2002; Gunson et al., 2011), and its validity was confirmed by experimental work (Jack et al., 1993). The usability statements are listed in Table 4.

| **Usability Questionnaire Statements** |
| --- |
| 1. I found Moneyworld confusing to use |
| 2. I had to concentrate hard to use Moneyworld |
| 3. I felt flustered when using Moneyworld |
| 4. I felt under stress when using Moneyworld |
| 5. I felt relaxed when using Moneyworld |
| 6. I felt nervous when using Moneyworld |
| 7. I found Moneyworld frustrating to use |
| 8. I felt embarrassed while using Moneyworld |
| 9. While I was using Moneyworld I always knew what I was expected to do |
| 10. I felt in control while using Moneyworld |
| 11. I would be happy to use Moneyworld again |
| 12. I felt Moneyworld needs a lot of improvement |
| 13. I enjoyed using Moneyworld |
| 14. I thought Moneyworld was fun |
| I thought Moneyworld was too complicated |
| I felt part of Moneyworld |
| I found the use of Moneyworld stimulating |
| Moneyworld was easy to use |
| I liked the voices in Moneyworld |
| I thought the voices in Moneyworld were very clear |

**Table 4. Usability attributes.**

The second questionnaire used in this research was also a validated metric, the agent persona instrument (API) (Baylor and Ryu, 2003), which assesses the agent's persona and is shown in Figure 9.

Figure 9: The API (Agent Persona Instrument) attributes (Baylor and Ryu, 2003).

The questionnaires were modified to fit the context of the application; therefore, irrelevant Likert items were removed, namely the item "The agent's movement was natural". Responses for the usability questionnaire were on a Likert-type scale, ranging from 1 = "Strongly agree", 2 = "Agree", 3 = "Slightly agree", 4 = "Neutral", 5 = "Slightly disagree", 6 = "Disagree", 7 = "Strongly disagree".
Responses for the API were on a Likert-type scale, ranging from 1 = "Strongly disagree", 2 = "Disagree", 3 = "Neutral", 4 = "Agree", 5 = "Strongly agree". An exit interview was designed in order to retrieve information on the following topics: * Participant's view of the use of spoken HECAs and text conversational agent in a mobile serious game. * The effective deployment of spoken HECAs and text conversational agents in the interface. ### Experimental Procedure The experiment took place in open space workstation within a university building communal space. The setup allowed for observation under circumstances where ambient noise and other people are present. This was in order to simulate the conditions in which people might use ECAs on mobile devices for real applications. The device used was a smartphone with the following specifications: Super AMOLED capacitive touchscreen, 16M colors, 5.1 inches, 71.5 cm2 (\(\sim\)70.7% screen-to-body ratio), Resolution of 1440 x 2560 pixels, 16:9 ratio (\(\sim\)577 ppi density), Android 5.0.2 (Lollipop), upgradable to Android 8.0 (Oreo) OS and Exynos 7420 Octa (14 nm) chipset. First, the participants were informed about the purpose of the experiment and then they started the tutorial. In the tutorial a female unembodied voice welcomed the participants and introduced the concept of the game. The user went through the teleporter and the time/space channel and arrived at the 1960s corner shop in order to play the game. In the corner store, the same voice introduced the old coins to the participant followed by a coin review dialogue. The same voice then asked the user to identify three coins from the set and to state the value of each of them in pence. After the review, the voice demonstrated how to use the coins in order to buy items. The tutorial was the same for both versions and was experienced once at the beginning of the session. A different voice than that of Alex, the assistant/instructor, was used for the tutorial in order to avoid overexposure of one style over the other. After each participant interacted with the tutorial, they were asked to answer some relevant questions to the tutorial. After finishing with the tutorial's questionnaire, the user played Version 1 of Money World, where they were asked to buy 4 items by Alex, the assistant/instructor, who appeared on the right-top corner window, followed by Version 2. The scene comprised the corner store; the shopkeeper/collaborator that the player interacted with in order to buy items as dictated by Alex; and on the left side there was an inventory of the items purchased and the rewards system. ### Analysis Method Research question one was answered by a paired t-test analysis on the Usability questionnaire data; research question two was answered by paired t-test analysis on the API questionnaire data. An overall score for each questionnaire was calculated as the mean of its items, and then paired t-tests scores and Cohen's d effect size were computed. Subsequent paired t-tests and effect size calculations were conducted on each questionnaire item, with Bonferroni Correction and Holm-Bonferroni Sequential Correction to correct for Type 1 errors. Research question 3 is answered by performing a multiple regression analysis with data from both the usability and the API questionnaires. Multiple linear regression analysis estimates the coefficients of a linear equation, involving multiple independent variables (IVs), that best predict the value of the dependent variable (DV). 
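To make the analysis pipeline described above concrete, the sketch below shows one way the paired comparisons, Holm-Bonferroni correction, Cohen's d and the hierarchical regression blocks could be computed. It assumes Python with `pandas`, `scipy` and `statsmodels`, and hypothetical column names (`usability_eca`, `usability_text`, per-item columns `eca_1` ... `eca_18` / `text_1` ... `text_18`, and nine API predictor columns); it is an illustration under those assumptions, not the scripts actually used in the study.

```python
# Illustrative sketch of the analysis steps: paired t-tests, Holm-Bonferroni
# correction, Cohen's d for paired samples, and hierarchical multiple regression.
# Data file and column names are assumptions made for this example only.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("moneyworld_responses.csv")  # hypothetical per-participant data

# Overall paired t-test between the two versions, plus Cohen's d for paired samples
t_stat, p_val = stats.ttest_rel(df["usability_eca"], df["usability_text"])
diff = df["usability_eca"] - df["usability_text"]
cohens_d = diff.mean() / diff.std(ddof=1)

# Item-level paired t-tests with Holm-Bonferroni correction for Type 1 errors
item_pvals = [stats.ttest_rel(df[f"eca_{i}"], df[f"text_{i}"]).pvalue
              for i in range(1, 19)]
reject, p_adjusted, _, _ = multipletests(item_pvals, alpha=0.05, method="holm")

# Hierarchical multiple linear regression: block 1 = "Human-like" predictors,
# block 2 adds the "Engaging" predictors (9 predictors in total)
human_like = ["human_like", "showed_emotion", "has_personality", "emotion_natural"]
engaging = ["entertaining", "friendly", "enthusiastic", "expressive", "motivating"]
y = df["usability_eca_mean"]
model_1 = sm.OLS(y, sm.add_constant(df[human_like])).fit()
model_2 = sm.OLS(y, sm.add_constant(df[human_like + engaging])).fit()
print(model_1.rsquared, model_2.rsquared, model_2.rsquared_adj)
```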
In this research, there was no prior knowledge to select some variables based on previous research as no prior research looked on the relationship of the agent's persona and usability. The predictors used in this research were informed by the nature and theoretical base of the experiment. From the literature (Tibshirani & Hastie, 2016), it is known that sparser statistical models perform better and tackle the problem of overfitting. Thus, a reduction of complexity was achieved by selecting the IVs from theory rather than using all 24 predictors. Another reason for not using all 24 items as IVs is for model interpretability; by removing irrelevant features a model is more easily interpreted. The data was first assessed for normality using visual representations, tests for skewness and kurtosis and z-scores. Since this research focuses mostly on the affective effect of the HECA using the API instrument, the variables selected for the model belong to the "Emotive interaction" latent variable; this variable is subdivided into the "Human-like" factor and the "Engaging" factor. According to Baylor (Baylor and Ryu, 2003) who developed the instrument: "The characteristics of the Engaging factor represent the social richness of the communication channels (Whitelock et al., 2000) and play an important role to provide 'personality' to the agent and enhance the learning experience", while "the Human-like factor of pedagogical agent persona is what makes it figuratively'real'. Thus, both the Human-like factor and Engaging factors shape the pedagogical agent's social presence and personality". That limits the number of predictors to 9 ("The agent was human-like", "The agent was entertaining", "The agent was friendly", "The agent has a personality", "The agent showed emotion", "The agent emotion was natural", "The agent was enthusiastic", "The agent was expressive" and "The agent was motivating"). An _a priori_ sample size calculation for multiple regression was performed. Based on the rule of thumb that 10 to 15 samples are needed per predictor, 90 samples for 9 predictors should suffice (Tabachnick and Fidell, 2001). In this research, the ordinary least squares (OLS) full model is used with 9 items as predictors and the usability mean value for the shopkeeper agent. The method used is the hierarchical multiple linear regression, since from theory the "Human-like" factor is more relevant (Model 1: 4 predictors) and is followed by the "Engaging" factor (5 predictors). Model two is a combination of the "Human-like" and "Engaging" attributes and includes the following variables: "The agent was human-like", "The agent was entertaining", "The agent was friendly", "The agent has a personality", "The agent showed emotion", "The agent emotion was natural", "The agent was enthusiastic", "The agent was expressive" and "The agent was motivating". One case was deemed to be an outlier. The outlier was not removed, instead the mean score for the ECA version was corrected with the next highest score plus one unit as suggested by Field (2013). **4.4.Research Question 1: Usability Questionnaire Results** An overall mean usability score was calculated from the 18 usability attributes scores for each of the two treatment groups. The overall mean scores for the questionnaire taken differed between the two versions. 
The ECA version received the highest overall mean score of 5.32 (which translates to "slightly agree" on overall usability), while the Text version received a score of 4.40 (which translates to "neutral" on overall usability). Table 5 and Figure 10 detail the descriptive statistics for the mean scores of the two versions.

| | Order of experience | Mean | Std. Deviation | N |
| --- | --- | --- | --- | --- |
| **ECA mean** | ECA first | 5.21 | .72 | 45 |
| | Text first | 5.43 | .80 | 45 |
| | Total | 5.32 | .76 | 90 |
| **Text mean** | ECA first | 4.20 | 1.06 | 45 |
| | Text first | 4.60 | .95 | 45 |
| | Total | 4.40 | 1.02 | 90 |

**Table 5. Descriptive statistics of usability questionnaire**

Figure 10: **Summary of usability questionnaire.**

\(H_{0}\): There is no statistically significant difference between the HECA mean and the Text mean.

\(H_{a}\): There is a statistically significant difference between the HECA mean and the Text mean.

The assumption of normality was examined using a one-sample Kolmogorov-Smirnov (KS) test and both distributions were normal. A dependent-samples _t_-test for paired means showed a statistically significant difference between the two mean scores (t = 9.45; df = 89; p = 0.000), and therefore the null hypothesis can be rejected. Based on the data from the paired-samples t-tests summarised in Table 6, the null hypothesis can be rejected for all the individual items in the questionnaire.

| Pair | Statement | Mean ECA version | Mean Text version | t | df | Sig. (2-tailed) | Std. Deviation |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | I found Moneyworld confusing to use. | 5.93 | 5.01 | 5.485 | 89 | .000 | 1.60 |
| 2 | I had to concentrate hard to use Moneyworld. | 5.18 | 4.28 | 5.135 | 89 | .000 | 1.66 |
| 3 | I felt flustered when using Moneyworld. | 5.44 | 4.42 | 6.169 | 89 | .000 | 1.57 |
| 4 | I felt under stress when using Moneyworld. | 5.83 | 4.76 | 6.618 | 89 | .000 | 1.55 |
| 5 | I thought Moneyworld was too complicated. | 6.16 | 5.84 | 3.209 | 89 | .002 | 0.92 |
| 6 | I felt nervous when using Moneyworld. | 5.49 | 4.92 | 3.567 | 89 | .001 | 1.51 |
| 7 | I found Moneyworld frustrating to use. | 5.29 | 3.77 | 7.913 | 89 | .000 | 1.83 |
| 8 | I felt embarrassed while using Moneyworld. | 5.32 | 4.70 | 3.433 | 89 | .001 | 1.72 |
| 9 | I felt Moneyworld needs a lot of improvement. | 4.17 | 2.99 | 7.747 | 89 | .000 | 1.44 |
| 10 | I felt in control while using Moneyworld. | 5.20 | 4.23 | 5.356 | 89 | .000 | 1.71 |
| 11 | I would be happy to use Moneyworld again. | 5.18 | 4.24 | 5.913 | 89 | .000 | 1.50 |
| 12 | I felt relaxed when using Moneyworld. | 5.04 | 4.32 | 4.481 | 89 | .000 | 1.53 |
| 13 | I enjoyed using Moneyworld. | 5.26 | 4.30 | 5.433 | 89 | .000 | 1.67 |
| 14 | I thought Moneyworld was fun. | 5.22 | 4.30 | 6.144 | 89 | .000 | 1.42 |
| 15 | I felt part of Moneyworld. | 4.64 | 3.51 | 7.060 | 89 | .000 | 1.52 |
| 16 | I found the use of Moneyworld stimulating. | 4.76 | 4.26 | 3.554 | 89 | .001 | 1.34 |
| 17 | Moneyworld was easy to use. | 5.81 | 4.98 | 4.891 | 89 | .000 | 1.62 |
| 18 | While I was using Moneyworld I always knew what I was expected to do. | 5.34 | 4.47 | 3.990 | 89 | .000 | 2.09 |

_*99.9972% Confidence Interval of the Difference, alpha = 0.0027_

**Table 6. Paired-samples t-test summary after Bonferroni correction.**

There was a significant difference in all individual usability items. The ECA version scored higher on all questions. This difference supports the claim that the illusion of humanness effect holds in participants' perceptions of the software's usability. The Text version scored below neutral on 3 attributes (frustration, needs a lot of improvement and immersion) and over "slightly agree" on only 2 (confusing to use and too complicated). The ECA version scored above average overall and was perceived to be usable. It scored between neutral and "slightly agree" on 3 attributes (needs improvement, stimulation and immersion), and over "agree" on all the rest except one, where it was scored as "strongly agree"; that translates to participants feeling that the version was not too complicated.

**4.5. Research question 2: Agent Persona Instrument Analysis**

In this experiment there were two agents: an instructor agent (Alex) that gives instructions on how the coins should be used and says which items should be purchased next, and a collaborator agent that interacts with the user during the transaction (the shopkeeper). Although there were two agents in each version, they were assessed and analysed separately, as they serve different purposes in the interaction and differ on many levels; therefore, the two agents cannot be aggregated. The overall mean scores for the collaborator agent questionnaire differed between the two versions. The ECA agent received the higher overall mean score of 3.67, which translates to between neutral and agree and indicates that participants reacted positively to the agent. The Text agent received a score of 2.81, which translates to between disagree and neutral regarding their reaction towards the agent. Table 7 details the descriptive statistics for the mean scores of the two versions. The overall mean scores for the instructor agent questionnaire also differed between the two versions. The ECA version received the higher overall mean score of 3.54, which translates to between neutral and agree and, thus, participants reacted positively to the agent.
The Text version received a score of 2.91 which translates to between disagree and neutral on their reaction towards the agent. Table 8 details the descriptive statistics for the mean scores of the two versions on individual items. The assumption of normality was examined using a one-sample Kolmogorov-Smirnov (KS) test and both were normally distributed for both agents. There was a statistically significant difference between the two mean scores for the collaborator agent (t=13.068; df=89; p.=0.000; d = 1.34); therefore, the null hypothesis was rejected. In addition, there is a statistically significant \begin{table} \begin{tabular}{|l|c|c|c|} \hline & & **Mean** & **Std.** \\ & & & **Deviation** \\ \hline Collaborator Mean ECA version & Total & 3.67 &.58 \\ \hline Collaborator Mean TEXT & Total & 2.81 &.69 \\ \hline Instructor agent ECA Mean & & 3.54 &.58 \\ \hline Instructor agent TEXT Mean & & 2.91 &.68 \\ \hline \end{tabular} \end{table} Table 7: Descriptive statistics for the API difference between the two mean scores for the instructor (t=8.428; df=89; p.=0.000; d= 1.34); therefore, it is assumed that there is a statistically significant difference between the ECA mean and Text mean for the instructor-agent persona questionnaire. The effect sizes are considered large by Cohen's heuristic [10]. The HECA version of the collaborator scored higher than the text version on all cases but one (The agent was instructor-like). As seen in Table 9, the Text agent scored below neutral in 11 attributes (made the instruction interesting, enthusiastic, made me think more deeply about the presentation, interesting, \begin{table} \begin{tabular}{|p{113.8pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline **Questionnaire statement** & **ECA (Mean =)** & **TEXT (Mean =)** & **t** & **df** & **p.** & **Std. Deviation** \\ \hline The agent kept my attention. - & 4.01 & 3.28 & 5.86 & 89 &.000 & 1.19 \\ \hline The agent made the instruction interesting. - & 3.79 & 2.52 & 12.11 & 89 &.000 & 0.99 \\ \hline The agent presented the material effectively. - & 4.09 & 3.64 & 3.69 & 89 &.000 & 1.14 \\ \hline The agent helped me to concentrate on the presentation. - & 3.73 & 3.12 & 5.01 & 89 &.000 & 1.16 \\ \hline The agent was knowledgeable. - & 3.68 & 3.21 & 4.35 & 89 &.000 & 1.02 \\ \hline The agent encouraged me to reflect what I was learning. - & 3.23 & 3.00 & 1.83 & 89 &.070 & 1.21 \\ \hline The agent was enthusiastic. - & 3.68 & 2.29 & 10.93 & 89 &.000 & 1.21 \\ \hline The agent led me to think more deeply about the presentation. - & 3.16 & 2.68 & 4.18 & 89 &.000 & 1.08 \\ \hline The agent focused me on the relevant information. - & 3.69 & 3.61 & 0.69 & 89 &.493 & 1.07 \\ \hline The agent improved my knowledge of the content. - & 3.50 & 3.26 & 1.87 & 89 &.065 & 1.24 \\ \hline The agent was interesting. - & 3.72 & 2.50 & 11.97 & 89 &.000 & 0.97 \\ \hline The agent was enjoyable. - & 3.78 & 2.50 & 11.32 & 89 &.000 & 1.07 \\ \hline The agent was instructor-like. - & 2.80 & 3.53 & -5.16 & 89 &.000 & 1.35 \\ \hline The agent was helpful. - & 3.86 & 3.53 & 2.86 & 89 &.005 & 1.07 \\ \hline The agent was useful. - & 3.82 & 3.59 & 1.94 & 89 &.056 & 1.14 \\ \hline The agent showed emotion. - & 3.76 & 1.81 & 17.88 & 89 &.000 & 1.03 \\ \hline The agent has a personality. - & 3.96 & 1.94 & 18.87 & 89 &.000 & 1.01 \\ \hline The agent’s emotion was natural. - & 3.29 & 2.53 & 4.87 & 89 &.000 & 1.47 \\ \hline The agent was human-like. 
- & 3.78 & 2.06 & 13.73 & 89 &.000 & 1.19 \\ \hline The agent was expressive. - & 3.81 & 2.12 & 12.63 & 89 &.000 & 1.27 \\ \hline The agent was entertaining. - & 3.77 & 2.23 & 12.51 & 89 &.000 & 1.16 \\ \hline The agent was intelligent. - & 3.34 & 2.86 & 5.07 & 89 &.000 & 0.92 \\ \hline The agent was motivating. - & 3.53 & 2.69 & 7.97 & 89 &.000 & 1.01 \\ \hline The agent was friendly. - & 4.34 & 3.06 & 11.87 & 89 &.000 & 1.03 \\ \hline \end{tabular} \end{table} Table 8: **Mean scores and results of paired t- tests on Individual Agent Persona Instrument for version - Collaborator agent.** enjoyable, natural emotion, human-like, expressive, entertaining, intelligent, motivating, friendly) and over agree in only 1 (kept my attention). ECA agent scored above neutral in all attributes apart from 1 (the agent is instruction like). It scored over agree in 1 attribute (kept my attention). The difference in the overall mean API scores of the two versions of the game could be attributed to all the items. Based on the literature review, this research focused more on the items of the Human-Like factor of the questionnaire (highlighted in orange) while 6 additional attributes (the ECA: made the instruction interesting, was not instructor-like, was expressive, was entertaining, was friendly and was human-like) were found to have the biggest difference. \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Questionnaire statement** & **ECA** & **TEXT** & **t** & **df** & **p.** & **Std.** \\ & **(Mean =)** & **(Mean =)** & & & & **Deviation** \\ \hline The agent kept my attention. - & 3.87 & 3.08 & 6.01 & 89 &.000 & 1.23 \\ \hline The agent made the instruction interesting. - & 3.48 & 2.63 & 6.40 & 89 &.000 & 1.25 \\ \hline The agent presented the material effectively. - & 4.17 & 3.59 & 5.01 & 89 &.000 & 1.08 \\ \hline The agent helped me to concentrate on the presentation. - & 3.84 & 3.24 & 4.83 & 89 &.000 & 1.18 \\ \hline The agent was knowledgeable. - & 3.96 & 3.44 & 5.70 & 89 &.000 & 0.85 \\ \hline The agent encouraged me to reflect what I was learning. - & 3.49 & 2.99 & 3.75 & 89 &.000 & 1.27 \\ \hline The agent was enthusiastic. - & 3.10 & 2.44 & 4.80 & 89 &.000 & 1.30 \\ \hline The agent led me to think more deeply about the presentation. - & 3.27 & 2.73 & 4.40 & 89 &.000 & 1.15 \\ \hline The agent focused me on the relevant information. & 4.09 & 3.70 & 4.06 & 89 &.000 & 0.91 \\ \hline The agent improved my knowledge of the content. & 3.83 & 3.34 & 3.47 & 89 &.001 & 1.33 \\ \hline The agent was interesting. - & 3.24 & 2.61 & 5.15 & 89 &.000 & 1.17 \\ \hline The agent was enjoyable. - & 3.29 & 2.62 & 6.02 & 89 &.000 & 1.05 \\ \hline The agent was instructor-like. - & 4.27 & 3.80 & 3.90 & 89 &.000 & 1.13 \\ \hline The agent was helpful. - & 4.12 & 3.67 & 4.67 & 89 &.000 & 0.93 \\ \hline The agent was useful. - & 4.00 & 3.79 & 2.07 & 89 &.041 & 0.97 \\ \hline The agent showed emotion. - & 2.92 & 2.01 & 7.95 & 89 &.000 & 1.09 \\ \hline The agent has a personality. - & 3.10 & 2.11 & 7.67 & 89 &.000 & 1.22 \\ \hline The agent's emotion was natural. - & 2.99 & 2.52 & 3.40 & 89 &.000 & 1.30 \\ \hline The agent was human-like. - & 3.20 & 2.22 & 7.30 & 89 &.000 & 1.27 \\ \hline The agent was expressive. - & 3.16 & 2.16 & 8.04 & 89 &.000 & 1.18 \\ \hline The agent was entertaining. 
- & 2.89 & 2.46 & 3.43 & 89 &.001 & 1.20 \\ \hline \end{tabular} **Table 9-Mean scores and results of paired t-tests on Individual Agent Persona Instrument for version - Instructor agent.** All API items were statistically significant. The HECA version of the collaborator scored higher than the text version on all cases. The Text version scored below neutral in 14 attributes (made instruction interesting, encourage to reflect, enthusiastic, think more deeply, interesting, enjoyable, emotional, has personality, natural emotion, human-like, expressive, entertaining, intelligent, motivating, friendly) and over agree in none. The ECA version scored overall above neutral apart from 3 attributes (emotion, natural emotional, entertaining). It scored above agree in 5 attributes (presented the material effectively, focus on the information, helpful, useful, emotive). **4.6.Research question 3: Hierarchical Multiple Regression Analysis** **4.6.1. Results for the shopkeeper- collaborator agent** The descriptive statistics for the predictors used in the model are presented in Table 10. The skewness and kurtosis for each variable were examined with indices for acceptable limits of \(\pm 2\) used one predictor variable was skewed. That is a mere indicator of non-normality though, since skewed data often occur due to lower or upper bounds on the data such as Likert data produce (NIST, 2017). Upon further investigation, all the predictors were normally distributed apart from item 24 (The agent was friendly), while the box plots of items 17 and 21 were not balanced but the Stem-Leaf plots, Q-Q plots and histograms indicated a normal distribution. Thus, the data were treated as normal and were analysed parametrically. \begin{tabular}{|l|c|c|c|} \hline & **Mean** & **Std. Deviation** \\ \cline{2-4} & & & \\ \hline The agent was friendly & 4.34 &.673 \\ \hline The agent was motivating & 3.53 &.837 \\ \hline The agent was entertaining & 3.77 & 1.028 \\ \hline The agent was expressive & 3.81 &.982 \\ \hline The agent was human-like & 3.78 &.957 \\ \hline The agent's emotion was natural & 3.29 & 1.073 \\ \hline \end{tabular} \begin{tabular}{|l|c|c|} \hline & **Mean** & **Std. Deviation** \\ \hline The agent has a personality & 3.96 &.873 \\ \hline The agent showed emotion & 3.76 &.916 \\ \hline The agent was enthusiastic & 3.68 &.934 \\ \hline \end{tabular} **Table 10. Descriptive statistics for agent persona predictors (collaborator agent)** In a summary, no multivariate outliers existed; the assumption of non-zero variance was met as the predictors vary in value; the assumptions of linearity, homoscedasticity and normality were met; the assumption for independent errors was deemed to be inconclusive; the assumption of multicollinearity has been met; the data were suitably correlated with the dependent variable in order to be examined with multiple linear regression. For model 1 ("Human-Like" predictors), the strongest and the only statistically significant (p. =0.008) predictor was "The agent was human-like" (\(\beta\) =.39). In model 2 (full model with all 9 predictors from both "Human-Like" and "Engaging" factors), two were the most statistically significant predictors, "The agent was human-like" (\(\beta\) =.4)(p. = 0.010), "The agent was entertaining" (\(\beta\) =.03)(p.=0.05) (see Table 11). 
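For readers who wish to reproduce this kind of blockwise analysis, the sketch below shows how a two-block hierarchical regression and the change in \(R^{2}\) between the blocks could be computed in Python with statsmodels. It is a minimal illustration only: the synthetic arrays merely stand in for the per-participant API item scores and the usability score, the column names are invented to mirror the "Human-Like" and "Engaging" factors, and none of the numbers it produces correspond to the study's results. Table 11 below reports the coefficients obtained in the actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data: 90 participants, nine API items (1-5 Likert) and a
# usability score; a real analysis would load the questionnaire responses instead.
rng = np.random.default_rng(42)
human_like = ["showed_emotion", "has_personality", "natural_emotion", "human_like"]
engaging = ["enthusiastic", "expressive", "entertaining", "motivating", "friendly"]
X = pd.DataFrame(rng.integers(1, 6, size=(90, 9)),
                 columns=human_like + engaging).astype(float)
usability = 0.4 * X["human_like"] + 0.2 * X["entertaining"] + rng.normal(0, 1, 90)

# Block 1: "Human-Like" predictors only; Block 2: full model with all nine items.
model1 = sm.OLS(usability, sm.add_constant(X[human_like])).fit()
model2 = sm.OLS(usability, sm.add_constant(X)).fit()

r2_change = model2.rsquared - model1.rsquared   # extra variance explained by block 2
f2 = model2.rsquared / (1 - model2.rsquared)    # Cohen's f^2 for the full model
print(model1.rsquared, model2.rsquared, r2_change, f2)
```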
\begin{tabular}{|l|l|l|l|} \hline & B & SE B & \(\beta\) \\ \hline Model 1 & & & \\ \hline **Constant** & 4.06 & 0.38 & \\ \hline _The agent showed_ & -0.13 & 0.12 & -.15 \\ _emotion_ & & & \\ \hline _The agent has a_ & 0.15 & 0.13 &.18 \\ _personality_ & & & \\ \hline _The agent's emotion was_ & -0.01 & 0.09 & -0.02 \\ _natural_ & & & \\ \hline _The agent was human-like_ & 0.31 & 0.11 &.39** \\ \hline Model 2 & & & \\ \hline **Constant** & 4.08 &.51 & \\ \hline **The agent showed** & -0.15 & 0.14 &.18 \\ **emotion** & & & \\ \hline **The agent has a** & 0.84 & 0.14 &.09 \\ **personality** & & & \\ \hline **The agent's emotion was** & -0.02 & 0.9 & -0.03 \\ **natural** & & & \\ \hline **The agent was human-like** & 0.3 & 0.1 & 0.4** \\ \hline **The agent was enthusiastic** & -0.06 & 0.1 & -0.07 \\ \hline \end{tabular} For multiple regression, the formula to calculate the effect size is: \[f^{2}=\frac{R^{2}}{1-R^{2}}\] **Equation 1 - Cohen's formula for calculating effect size in multiple regression (Selya et al., 2012).** In this case, with \(R^{2}=0.229\) for the full model, Cohen's formula gives an effect size \(f^{2}=0.229/(1-0.229)\approx 0.297\). This represents a moderate to large effect according to Cohen's guidelines (Cohen, 1988). For the first model, the 4 independent variables from the "Human-like" factor produced an \(R^{2}\) of.17 (\(F\) (4,85) = 4.28, \(p=.003\)), which means that the "Human-like" factors accounted for 17% of the variation in ECA Usability. However, for the final model with all 9 predictors, this value increased to 0.229 (\(F\) (9,80) = 2.64, \(p=.010\)), or 23% of the variation in ECA Usability. Therefore, the variables entered in block 2, i.e., the "Engaging" factor items, accounted for an extra 6% of the variance. The adjusted R\({}^{2}\) shows how well the model can be generalised. It was 0.13 for the first model and 0.142 for the second model, which implies that the model with all 9 predictors includes some unimportant variables that add noise to the model. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & **B** & **SE B** & **\(\boldsymbol{\beta}\)** \\ \hline **The agent was expressive** & -0.05 & 0.1 & -0.06 \\ \hline **The agent was entertaining** & 0.2 & 0.1 & 0.3** \\ \hline **The agent was motivating** & 0.13 & 0.12 & 0.14 \\ \hline **The agent was friendly** & -0.1 & 0.15 & -0.09 \\ \hline \end{tabular} \end{table} Table 11: Hierarchical Multiple Regression Analyses for the Shopkeeper Agent of the Embodied Conversational Agent Version. **4.6.2. Results for the Alex - instructor agent** The relevant assumptions of this analysis were tested prior to the multiple regression analysis. In summary, no multivariate outliers existed; the assumption of non-zero variance was met as the predictors vary in value; the assumptions of linearity, homoscedasticity and normality were met; the assumption of independent errors was met; the assumption of multicollinearity was met; the data were suitably correlated with the dependent variable in order to be examined with multiple linear regression. For model 1, the strongest predictor that was statistically significant was "The agent was human-like" (\(\beta=.47\)). For model 2, the strongest predictor was "The agent was entertaining" (see Table 13). \begin{table} \begin{tabular}{|l|c|c|} \hline & Mean & Std.
Deviation \\ \hline ECA\_MEAN & 5.3173 &.76617 \\ \hline The agent showed emotion & 2.92 & 1.008 \\ \hline The agent has a personality & 3.10 &.995 \\ \hline The agent’s emotion was natural & 2.99 & 1.022 \\ \hline The agent was human-like & 3.20 & 1.041 \\ \hline The agent was enthusiastic & 3.10 &.972 \\ \hline The agent was entertaining & 2.89 &.929 \\ \hline The agent was motivating & 3.44 &.751 \\ \hline The agent was friendly & 3.72 &.750 \\ \hline The agent was expressive & 3.16 & 1.005 \\ \hline \end{tabular} \end{table} Table 12: **Descriptive statistics for agent persona instrument predictors (Instructor agent)** In this case Cohen's formula yields an effect size \(f^{2}=0.4\). This represents a large effect according to Cohen's guidelines (Cohen, 1988). For the first model, the 4 independent variables from the "Human-like" factor produced a \(R^{2}\) of.20 (\(F\) (4,85) = 5.37, \(p=.001\)) which means that the "Human-like factors" accounted for 20% of the variation in ECA version Usability. However, for the final model and all the 9 predictors, this value increased to 0.29 (\(F\) (9,80) = 3.56, \(p=.00\)) or 29% of the variation in ECA version Usability. Therefore, whatever variable entered the model in block 2 and the "Engaging" factors accounted for an extra 9% of the variance. The adjusted R\({}^{2}\) was 0.16 for the first model and 0.21 for the second model which implies that not all the predictors contributed to the model significantly. **4.7.Qualitative analysis** After interacting with each version, participants were asked to comment on their experience with the application and then specifically on each version. All the answers for the open-ended questions were analysed using thematic analysis (Hayes, 2000). \begin{table} \begin{tabular}{|l|l|l|l|} \hline & **B** & **SE B** & **\(\boldsymbol{\beta}\)** \\ \hline **Model 2** & & & \\ \hline **Constant** & 4.19 &.42 & \\ \hline **The agent showed emotion** & 0.07 & 0.11 &.09 \\ \hline **The agent has a personality** & -0.23 & 0.12 & -.30 \\ \hline **The agent’s emotion was natural** & -0.01 & 0.1 & -0.11 \\ \hline **The agent was human-like** & 0.11 & 0.28 & 0.38** \\ \hline **The agent was enthusiastic** & 0.08 & 0.11 & 0.11 \\ \hline **The agent was expressive** & & & \\ \hline **The agent was entertaining** & -0.05 & 0.11 & -0.07 \\ \hline **The agent was motivating** & 0.30 & 0.11 & 0.36** \\ \hline **The agent was motivation** & 0.02 & 0.12 & 0.02 \\ \hline **The agent was friendly** & -0.07 & 0.12 & -0.07 \\ \hline \end{tabular} \end{table} Table 13: Hierarchical Multiple Regression Analyses for Instructor Agent of the Embodied Conversational Agent Version. #### 4.7.1 Explicit preference for software version After experiencing both versions, participants were asked which version of Moneyworld they preferred. They were asked to give their answer in terms of their first or second version experienced, and the answers were re-ordered for each version. Eighty-one participants (90%) stated that they preferred the ECA version, eight participants (8.9%) stated that they preferred the text version and one participant (1.1%) had no stated preference. Participants were also asked to give reasons for their answer. 
The majority of comments about the ECA version mentioned that characters were more fun [(20)]; they preferred interacting with a human/character [(16)]; it was more human-like and natural [(14)]; it was more interactive (27 participants); it was easier [(17)]; and the text version was boring and added cognitive load [(15)]. Some sample comments made by participants are: "The interaction with humans makes the game engaging."; "The ECAs were more engaging and fun. It was easier for me to understand the instructions."; "The shopkeeper made me feel relaxed. It was more interactive and enjoyable."; "I felt I was very familiar, and it was easy to deal with it. I was interacting with a human, so communication was easy." For those who preferred the Text version, participants commented that it was clearer [(2)], reading was faster than the ECA [(2)] or that they preferred the text version [(4)]. Example comments are: "I prefer to read because it is faster."; "Reading didn't take as long as the ECA version." #### 4.7.2 Agent version Before asked about the shopkeeper, participants were presented with a laminated picture of the shopkeeper as it appeared on the screen during the shopping task and asked "The interface that you interacted with in order to buy the items on the list looked like this. What did you think about it?". The comments were overall positive. Even though the question did not refer to the agent as "He" but rather asked what they thought about it, most participants commented on the human characteristics of the agent. Some participants [(23)] thought that the shop-keeper was human-like. Several participants [(16)] characterised the agent as friendly while others as funny or fun to interact with [(26)]. Some participants [(16)] commented that they liked him or liked interacting with him and five said that having a person to interact with made the experience better. Example comments made are: "I could imagine how he would be in real life. It was a realistic, human-like character."; "It was human-like. I liked interacting with someone and receiving positive feedback."; "Having human-like characters makes it more captivating and enjoyable."; "He added a personality. It was fun and interesting to interact with him. He gave funny comments." Seven participants made negative comments on the ECA which mainly had to do with the uncanny valley theory and the face animations but were accompanied by some positive comments like he was fun or friendly e.g. "He was interesting, funny and human-like. He was friendly, but he had a worrying expression." Participants were asked their opinion on Alex where they were also presented with a picture of the agent as it was presented in the game. Most comments on Alex were positive with 18 participants reporting that they liked her voice and they could focus better due to the voice; nineteen participants identified her role in the interaction as the agent that gave instructions and their perception was positive as they felt Alex was helpful; seven thought she was human-like, while eight stated that the addition of character was better as it made it more natural or easier to focus; thirteen commented that the interaction was more interesting and fun; and twelve that it was more clear. Example comments made are: "More interesting, less boring, human-like."; "Because of the voice I was able to perceive emotion. I think this is a better way to receive instructions."; "She was nice and friendly. 
She was encouraging and gave clear instructions."; "She was more instructive than the text." Fourteen comments were on the negative side. Most had to do with the lacking lip synching, that she came across as robotic, that her face was distracting or emotionless, and that she did not add much to the game. Some examples are: "Her lip synchronisation was not good and this made her funny. She came across as an emotionless robot."; "She was creepy and unnecessary. I do not think that she added anything (any value)."; "It was clear because you get the information. The agent is distracting. It may be boring, but it is clear." #### 4.7.3 Text version While focusing specifically on the text version, participants were asked their opinion on the agents. Similar to the ECA version, before being asked about each agent, participants were presented with laminated pictures of the agents as they were presented in the game. In the question "The interface you interacted with in order to buy the items on the list looked like this (show text-based shopkeeper). What did you think about it?" only a few comments were positive. Some participants (15) answered that it was clear, straightforward or direct although not human-like or emotional. A few (12) had a lukewarm reaction towards the text shopkeeper by saying that it was fine, good or ok but not engaging. Only six thought that it was easy, five liked it, two thought it was helpful and one said that it helped them focus. Example comments made were: "Good but poor compared to the ECA version which was an improvement."; "It was straightforward, clear but not emotional." However, one participant noted: "It was clear because you get the information. The agent (ECA) is distracting. It may be boring, but it is clear." Most comments regarding the text version of the shopkeeper were underwhelming and negative. Ten said it was boring and less entertaining. Other comments suggested that it was stressful because they had to concentrate hard to remember the prices (14); the text agent was less engaging (eight); it was frustrating to use (eight); and it was confusing (five). A few examples of comments are: "It was stressing for me to read it. It was more difficult to remember the prices."; "I found it easier to understand the task but less engaging, less entertaining and unrealistic."; "It was boring. I did not feel immersed. I was frustrated."; "I got nervous when the text went away because I had to remember. I was not immersed but I was more concentrated on the task. It was a very mechanical experience like an exam." Similar comments were made about the text version of Alex the instructor agent: "It was clear but not interesting."; "It was helpful, but it was not clear that I had to speak." #### 4.7.4 Agent preference After having been asked about their thoughts on each agent they faced during the game, participants were asked to explicitly state which agent they preferred in each role. Again, participants were presented with screenshots of all four agents to choose from. In the question "Which system did you prefer to interact with in the shop?", 76 participants (84.5%) preferred the ECA version, 13 the Text version (14.5%) and one (1%) had no preference. Participants justified selecting the ECA version of the shopkeeper by saying they found him more interactive and entertaining, that it made the interaction more natural and real, and that the addition of voice helped them concentrate and focus better. Some of the comments were: "Really liked him.
He was polite and funny."; "It felt more like a character, a human."; "It made it seem natural and interactive. I didn't have to focus as much." Those who preferred the text version of shopkeeper gave comments such as that it was straight forward, quicker and less distracting, such as "The shop keeper was fun, but he was distracting me from understanding and remembering" and "The ECA was slow". In the question "Which system did you prefer to be assisted from?", 67 preferred the ECA version (74.5%), 18 the text version (20%) and five (5.5%) had no preference. Participants who preferred the ECA version of Alex elaborated on their response by saying that the version with the character was more enjoyable and felt more interactive; it was easier to concentrate and understand the instruction because of the voice; it mimicked human to human interaction and added character; and that unlike the text, the agent made the application feel more like a game. The participants who preferred the text version of the instructional agent justified their choice with comments such as that having a character did not add to the interaction and it was distracting because the role of the agent was to give instructions. A few of the comments referred to the fact that her facial expressions (ECA Alex) were weird and it was distracting. Also, a few commented that reading instructions was easier or quicker and text was enough for instructions. #### 4.7.5 Use of agents Finally, participants were asked if they used agent/assistants on their phone and their opinion on speech interfaces and natural language interaction. Again, the answers were organised and analysed for recurring themes. The first question participants were asked was: "Do you use assistants/agents such as Siri/Cortana/Speaktoit on your smartphone in your everyday life?". The majority (48) stated that they do not use agents on their phone, 30 said that they use agent sometimes, nine answered that they use agent every day and three did not own a smartphone. Out of those who use agents, 22 use Siri, nine Ok Google, four Cortana, two Duolingo, one Google now and one S voice. When asked for what tasks they used agents, 14 answered for fun, 11 for web searching, six for checking the weather, five for calendar and reminders, three for calls, three for setting the alarm, two for texting, two for language learning, two for finding their contacts, two for navigation and two for basic functions. The next question was "What do you like about this kind of interface?" and "What do you dislike about this kind of interface?". In terms of what participants like, 26 participants responded that speech recognition systems are convenient for hands free situations, 12 said that it is faster than typing, ten answered that it is an easier type of interaction, seven said that it is a fun way to interact and five answered that it is a natural way to interact. Illustrative comments are: "The advantages are that it is hands-free. I find the keys on the phone to be tedious" and "Speaking is faster and more human-like." To the question "What do you dislike about this kind of interface?", 20 participants responded that speech recognition systems still have issues with picking up accents, 18 answered that using it in public would be embarrassing, 16 said that speech recognition systems need improvement as there are still many voice recognition issues that make the interface frustrating to use and 11 responded that they are used to do things manually. 
The main concerns for speech systems were privacy and that recognition is not optimal yet. A few of the comments were: "I would be embarrassed in public and I do not want to bother other people" and "Currently it needs improvement as due to accents it is not very reliable." **4.8.Summary** This chapter presents the findings of a large-scale evaluation of the effectiveness of spoken HECAs in a mobile serious game. Results show that perceived usability was statistically significantly higher for the version with the ECAs compared to the neutral text version, with a large effect size (Cohen's \(d=1.01\)). The ECA version scored 5.32 while the text version scored 4.40 on a 7-point Likert scale. When exploring the agents' persona as perceived by the user, the data showed that the difference between the ECA and the text version was statistically significant for both agents, with the ECA version scoring higher in both cases. The individual attributes that were the most significant for the shopkeeper/collaborator were: "The agent made the instruction interesting", "The agent was enthusiastic", "The agent showed emotion", "The agent has a personality", "The agent was human-like", "The agent was expressive", "The agent was entertaining" and "The agent was friendly". For the Alex/instructor agent the most significant attributes were: "The agent showed emotion", "The agent has a personality", "The agent was human-like" and "The agent was expressive". Upon further analysis, the multiple regression that was conducted in order to identify how much of the variability in usability can be explained by the API attributes showed that the agents' entertaining and human-like qualities contributed most to usability for both agents in the scenario. The qualitative analysis supports the quantitative results, with many participants referring to the ECAs as more fun to interact with, more human-like, more engaging, easier to use and making the transaction feel real. ## 5 Discussion **R1:** To what extent do HECAs affect the usability of a mobile serious game (MSG)? In this study, the use of a HECA resulted in significantly higher usability scores with a large effect size (Cohen's \(d=1.01\)). **R2:** To what extent do users perceive a difference in agent persona between ECA and neutral text presentation as measured by the agent persona instrument (API)? The API scores were higher when using the ECA version of the software, particularly on items relating to personality, expressiveness and whether the agent was human-like. The agents' entertaining and human-like qualities corresponded most to usability for both agents in the scenario. ## 5.1 Support for the illusion of humanness effect For both agents, the quantitative analysis revealed that the overall mean scores of the API questionnaire did differ between the two versions. The HECA agents received scores of between 3 and 4 (out of 5) on the Likert scale, which translates to between neutral and agree and indicates that participants reacted positively to the agent. The Text agents received scores of between 2 and 3, which translates to between disagree and neutral and indicates a more reserved attitude towards the agent. Further, Cohen's effect size value (d = 1.34) indicated high practical significance, suggesting that the inclusion of an HECA in the role of the collaborator or instructor has a meaningful impact on the API and how participants perceive the agent.
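As a concrete illustration of the statistics reported in this summary, the short sketch below shows how a paired-samples t-test, a one-sample KS normality check and Cohen's d could be computed with SciPy. The two arrays are random placeholders for the per-participant ECA and Text scores, not the study's data, and the d shown here is based on the standard deviation of the paired differences, one common convention for paired designs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
eca_scores = rng.normal(5.3, 0.8, size=90)    # placeholder per-participant means, ECA version
text_scores = rng.normal(4.4, 1.0, size=90)   # placeholder per-participant means, Text version

diff = eca_scores - text_scores
# One-sample KS check against a normal with the sample's own mean and SD.
ks_stat, ks_p = stats.kstest(diff, "norm", args=(diff.mean(), diff.std(ddof=1)))

t_stat, p_value = stats.ttest_rel(eca_scores, text_scores)   # paired t-test, df = n - 1 = 89
cohens_d = diff.mean() / diff.std(ddof=1)                     # Cohen's d for paired samples
print(f"KS p = {ks_p:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```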
In both agents, that of the collaborator which was the role of the shopkeeper and that of Alex the instructor in this scenario, the same two attributes out of nine were deemed significant for contributing to usability. The first attribute was "The agent was human-like" which is especially important since the underlying theme of the experiment was the illusion of humanness. The variable belongs to the "Human-like" factor which to quote Baylor "address the agent's behaviour and emotional expression in terms of its naturalness and personality." (Baylor & Ryu, 2003). The other factor belonged to the "Engaging" factor, also according to Baylor and Ryu "pertains to the motivational and entertaining features of the agent". Regardless the fact that their role in the interaction was different, for both agents the attributes that contributed more to usability were that they were perceived as human like and as entertaining. In the case of the shopkeeper the "The agent was friendly.", "The agent showed emotion", "The agent emotion was natural", "The agent was enthusiastic" and "The agent was expressive" variables, even though not significant, had a negative relationship with the DV which can be justified by the uncanny valley theory since the agents' animation and lip-synchroning weren't flawless thus producing an uncanny feeling, also some comments referred to the shopkeeper as 'overly friendly' and 'creepy'. Feelings like this would detract from the illusion of humanness. As further support for the illusion of humanness, the qualitative investigation also found comments relating to "human-like" and "entertaining" attributes of the ECA agents. Overall, 55 comments were made for either agents where they were described as human-like or human and 61 comments where they were described as fun and/or entertaining. The majority of the comments on the Shopkeeper that were positive had to do with the fact that the agent was humanlike (23), made the interaction feel real or referred to as a "real person" (13), he made the interaction fun or he was funny (26) and he was friendly (16). Similar comments were made about the instructor agent where she was described as friendly (18), human-like or like a real person (12) and fun or enjoyable (14). These comments attribute human characteristics or a human dimension to the agent. Further support for the illusion of humanness comes from, participants' use of "he/she" pronouns to refer to the agent when the agent was presented in the ECA form, and researchers' observations that the participants applied social rules and followed similar social cues as in human to human interaction as they waited for the agent to conclude the question before answering. Because of the agent's presence, when the system did not pick up their voice they sympathized with the agent as if he couldn't hear them correctly rather than thinking it was their fault. "I relaxed when the SK said that he did not hear me because it made me feel it was not my fault." And "He was entertaining. The comments made it like it was his fault. He was funny and human like." **5.2.Usability considerations for ECAs** The results from the study raise a series of usability issues which researchers and developers could consider when making the choice as to whether include an ECA, or when designing the features of an ECA in a similar context. **5.2.1. Concentration and ease of use** The data show that users reported that they had to concentrate harder when using the Text version compared to the ECA version. 
The empirical data are supported by the qualitative data with 14 participants suggesting that during the text version they had to concentrate hard to remember the prices and was more stressful. This can be connected to the fact that reading from a screen can increase the extraneous cognitive load, while interacting with an ECA did not require to concentrate as hard as there were auditory and visual cues. The explanation is supported by Wik's (2011) previous work who claimed that through task- based interactive exercises with sound, pictures, agents and games, a more robust memory trace is created. The empirical data also support claims by Doumanis (2013) and Van Mulken (1998) that ECAs can improve cognitive functions and that by using ECAs the user can spend their cognitive resources on the primary task. Also, the results contradict one of the main arguments against ECAs, i.e. ECAs can lead to cognitive overload and distract from the main task because participants have to spend cognitive resources in processing visual and auditory information (Walker et al., 1994). The reduced cognitive load compared to the text version contributes to the ECA version by appearing easier to use and demanding less concentration. #### 5.2.2 Frustration and Embarrassment Users reported feeling more frustrated while using the Text version of the game compared to the ECA version. Participants also commented that they felt the Text version was less responsive. While both versions were identical apart from the control factor, from the observations this can be explained by using the media equation theory. People responded to the questions at the appropriate time when they had visual and auditory cues from the ECA, while on the Text version people responded to the question as soon as they read it (speech input initiated when the question disappeared from the screen for the Text version and when the audio prompt for the question ended for the ECA version) thus making it look non-responsive. The qualitative data support the evidence with eight participants claiming that the text version was frustrating to use and five that it was confusing. An interesting finding was that although participants reported quite often that they would feel embarrassed using a speech recognition system in public, both versions were rated relatively high although they felt less embarrassed playing the game with the HECA. A possible justification might be the "illusion of humanness" since the unconscious reaction is like that of conversing with a human thus making it less embarrassing. ### 5.3.Fun and enjoyment Users rated the ECA version as more fun and enjoyable than the Text version. During the exit interview participants commented that the Text version felt outdated, while the ECA version felt more like a game and the graphics resembled more contemporary game. Also, many users commented that the Text version was more neutral, while the shopkeeper's comments and the more human-like interaction made the game more fun. Mulken et al. (1998), while empirically studying the persona effect, found that the presentation was perceived as less difficult and more entertaining even though the presence of an agent had no effect in comprehension. Even though the persona effect focusses more on the effect of agents on learning, the effect of ECAs on entertainment and ease of use is the same as in the empirical work presented here. 
Another pair of researchers (Koda and Maes,1996) supported that the presence of an ECA in a game application may result in increased entertainment, an assumption that can also be confirmed from the empirical data presented here. #### 5.3.1 Immersion Especially in game design, immersion is a significant element. Sweetser and Wyeth (2005) list immersion as an element of game flow which is the experience during the act of gaming. The empirical evidence shows that the HECA version scored significantly higher than the Text version in terms of immersion ("I felt part of Money world."). Also, the qualitative data confirmed that participants felt that the ECA version was more immersive and interacted like in a real transaction. This can be justified by the anthropomorphism of the system and the "illusion of humanness" which mimicked a real-life interaction. #### Knowing what to do Users reported feeling like they had a better understanding on what they were expected to do while using the ECA version of the game compared to the Text version ("When I was using Money world, I always knew what I was expected to do"). This can be partially explained by the theory of affordances (perception drives action). While playing the Text version of the game, most participants tried to tap on the items in the background rather than speaking. In the ECA version, due to the visual and auditory cues, they figured out that they had to respond verbally. Since speech interaction is an integral part of this study, the results support that visual and auditory cues evoke a verbal response. Shneiderman was one of the biggest critics of ECAs. He argued that humanising the system may induce false mental models (Shneiderman and Maes, 1997). An example is that anthropomorphic agents may lead the user to believe that the system is also human-like in terms of cognitive aspects. That can make the user have expectations from the system that it does not possess and may result in a negative experience (Doumanis, 2013) However, in the case of this research participants had the "illusion" that ECAs had human-like cognitive aspects, especially in the case of the shopkeeper, that resulted in a positive experience instead of a negative one. The results support the view of Cassell that: "Humans depend to a great extent on embodied behaviours to make sense and engage in face-to-face conversations. The same happens with machines: embodied agents help to leverage naturalness and users judge the system's understanding to be worse when it does not have a body (Cassell, 2001)." ### Does the role of the agent matter? Although the patterns are broadly similar for both agents, the users' preference data indicates that more participants preferred the ECA version of the shopkeeper agent to the text version than preferred the ECA instructor to text. While 84.5% of participants preferred the HECA version of shopkeeper, the corresponding percentage for ECA Alex was 74.5%. Some of the comments indicate that participants were prone to making comparisons between the two agents even though they recognized that the agents had different roles. According to participants, Alex's facial expressions were not as responsive as the shopkeeper's. Also, some identified that because this agent gave instructions only they were not bothered by having text. This is because they did not interact with this agent the same way they did with the shopkeeper thus having lower expectations which was further supported by participants' comments. 
Also, it was observed that when participants experienced the text version first, they preferred the text version of Alex. This was not the case for the shopkeeper agent. The facial animation along with the designated role of the agent as the instructor-with whom they did not interact directly-justifies the larger percentage of participants preferring the text version even though the majority preferred the HECA version. A couple of examples would be: "It was good for instructions, but I did not care much for it" and "It was less interacting, and it was more giving instructions. It was educational." ### 5.5.Limitations, Future work and Implications for Developers #### 5.5.1 Limitations Even though Moneyworld is a serious game, it was not developed as an educational software. The primary purpose of this evaluation was the usability of the application and how it is affected by the inclusion of HECAs and not learning effectiveness therefore it was not measured. This was a conscious decision as learning is a complex construct making it difficult to measure (Bellotti et al., 2013) while determining whether a serious game is successful at achieving the anticipated learning goals is a time consuming, complex, difficult and expensive process (Hays, 2005; Enfield et al., 2012 ). Chin et al. (2009) attribute part of this difficulty on the fact that video games are inherently open-ended which makes it difficult to collect data. Even though the advertisement for participants in the main experiment stated clearly that only people with proficient knowledge of English should participate, a few had difficulties in understanding the language in either verbal or textual form. As a result, a small number of participants had to be turned away. Relevant to international participants, a few had a strong accent and the speech recognition system could not easily pick up their voice because it was developed using an English vocal dictionary in Pocket Sphinx. A way to tackle this issue for future experiments would be instead of self- evaluation of English proficiency, prospective participants should complete a test. A few of the negative comments focused on the ECAs' facial animation. Animating a character by hand is a time consuming and tedious task that not always guarantees a good outcome. For that purpose, there is software that focuses on creating realistic facial and body animation. The main obstacle in the presented research is the financial limitations that did not allow using top tier facial and body animation software which usually costs a few thousand pounds also in equipment and training. That resulted in using software within our budget which created decent animations but there is surely more room for improvement in this area. At the time of the development of the SG, the ECAs used were built in order to approximate the highest human likeness possible with the means we had. However, there are more design decisions that can be made that can further human likeness most of which are reliant on technological advancements such as those in graphics, CPUs, GPUs, animation etc. Realistic looking characters are already being used in commercial games. It is very promising that we got these results despite the limitations we had, and it would be interesting to replicate this experiment with commercial grade realistic ECAs. #### 5.5.2 Future Work In the current study the usability questionnaire was developed to measure the participants' subjective impressions of efficiency and effectiveness (system performance). 
In the future, performance could be based on objective scores (effectiveness) and time (efficiency). Further evaluations could identify which aspect of the anthropomorphic interface of ECAs evokes the illusion of humanness most strongly and contributes most to usability. In order to examine that, further evaluations need to be carried out to specify which anthropomorphic elements are the ones evoking an illusion of humanness and affecting usability more (different levels of anthropomorphic agents). In addition, it is important to replicate this study in other contexts to discover whether the illusion of humanness effect would hold in other software applications. The medium on which Moneyworld was tested was a serious game, but the "illusion of humanness" is not specific to a certain topic or medium and could be used in other contexts. It is possible that users' perceptions of the usability of conversational agents vary depending on context - in hands-free, eyes-busy situations, users may prefer agents not to be embodied. The illusion of humanness is becoming less of an academic issue and more of a real-life issue due to the increasing use of virtual agents and smart screens (virtual agents with a screen) such as Amazon Echo and Google Home in our homes. One possible topic for future evaluation would be testing the addition of ECAs in home smart screens like Amazon Echo Show. Would it be worth adding, and for which purposes? A Greek company called MLS already has an ECA incorporated in its smart screen, called MAIC, but no data on its usability are available. Participants who took part in this research were mostly highly educated, technologically literate and between 18 and 40 years old. It would be worth exploring the illusion of humanness effect on older users or children and people of varying educational and cultural backgrounds, as their response to the system might differ. #### 5.5.3 Implications for developers The development of ECAs is a time-consuming process that developers might not be willing to invest in without evidence showing that it is worth the effort. In application development, assuring usability is an important part of the success of the interaction. In the study reported in this paper, increased usability related to the "illusion of humanness" effect, which in turn results from high human likeness. Due to the methodological approach followed and the attention to the effect sizes, the number of evaluation participants was large enough to allow for a safe generalisation to the population. However, the generalisability of the evaluation findings to the general adult population should be treated with care. When developing usable spoken multimodal systems, the appropriateness of speech interaction must be decided for each application anew based on the purpose and environment of the application (Dybkjaer et al., 2004). Weiss (2015) makes a similar claim that whether usability and quality are to be enhanced by using an ECA in a multimodal human-machine interface must be decided for each application anew. Since the platform for this evaluation was a mobile serious game, no generalisation can be made about the "illusion of humanness" in other applications with different purposes or contexts.
Nevertheless, the generalisation that can be made safely based on the evaluation findings is that contextually relevant spoken HECAs of high human likeness with collaborative and instructional roles can induce illusion of humanness which results in increased usability in mobile serious games. A suggestion to developers for improving usability in similar contexts would be to incorporate spoken HECAs with high human likeness by following the design decisions in Figure 3. Those decisions are not arbitrary as there is evidence from the literature on what results in high human likeness. Isbister and Doyle (2002) claim that an agent with physical appearance, sound and animation can cause a powerful visceral reaction on the user and evoke the "illusion of life". By enhancing realism in movement, creating natural sounding speech and creating the right visual style that fits the application, user's reaction to the agent can be amplified. Applying however the same ECA design principles by following the ECADM under different circumstances (different media, different game genres, more diverse population etc.) would help determine the extent of the generalisability of the effect. The ECADM and the spectrum of application interface design in relation to human likeness can be used to inform design decisions on the development of ECAs and the level of human likeness desired respectively. The ECAD model serves a dual function; apart from informing design decisions for designers it can act as a guide to categorise ECA research which will allow for better comparisons and analysis; in ECA research the characteristics of ECAs are not always reported or when they do they lack information that can be used for replication, analysis and comparison. ## 6 Conclusions The primary aim of this research was to examine the extent to which spoken humanoid embodied conversational agents (HECAs) affect the usability of a mobile serious game application. Mixed method analysis allowed for triangulation of findings. Following specific design decisions based on the ECADM model resulted in ECAs with high human likeness. High human likeness in turn resulted in the illusion of humanness effect. The findings suggest that ECAs with high human likeness evoked the illusion of humanness effect and improved the usability of the application. Results are consistent throughout analyses. The ECA version scored statistically significantly higher than the text version with a large effect size that shows that the results translate to a meaningful real-life difference. The regression analysis showed that the attributes "entertaining" and "human- like" contributed more to usability for both agents which supports the theory that the illusion of humanness has an impact on usability. All quantitative results are supported and further explained by the qualitative data where users used pronouns when referring to the ECAs and justified saying that they were human-like and the interaction was more natural and fun because of them. In conclusion, ECAs on mobile devices have potential advantages over current interaction paradigms in improving usability because they provide a more "human-like" way of communicating with a complex system. The implications of these findings are that developers should decide for each application anew if ECAs are fitting to the context and purpose of the application. 
However, developers should consider that, in this context, ECAs with high human likeness result in the illusion of humanness, which in turn improves the overall usability.
2309.09986
Reconstructing bifurcation diagrams of chaotic circuits with reservoir computing
Model-free reconstruction of the bifurcation diagrams of Chua's circuits by the technique of parameter-aware reservoir computing is investigated. We demonstrate that: (1) reservoir computer can be utilized as a noise filter to recover the system trajectory from noisy signals; (2) for a single Chua circuit, the machine trained by the noisy time series measured at several sampling states is capable of reconstructing the whole bifurcation diagram of the circuit with a high precision; (3) for two coupled chaotic Chua circuits of mismatched parameters, the machine trained by the noisy time series measured at several coupling strengths is able to anticipate the variation of the synchronization degree of the coupled circuits with respect to the coupling strength over a wide range. The studies verify the capability of the technique of parameter-aware reservoir computing in learning the dynamics of chaotic circuits from noisy signals, signifying the potential application of this technique in reconstructing the bifurcation diagram of real-world chaotic systems.
Haibo Luo, Yao Du, Huawei Fan, Xuan Wang, Jianzhong Guo, Xingang Wang
2023-09-12T07:52:32Z
http://arxiv.org/abs/2309.09986v1
# Reconstructing bifurcation diagrams of chaotic circuits ###### Abstract Model-free reconstruction of the bifurcation diagrams of Chua's circuits by the technique of parameter-aware reservoir computing is investigated. We demonstrate that: (1) reservoir computer can be utilized as a noise filter to recover the system trajectory from noisy signals; (2) for a single Chua circuit, the machine trained by the noisy time series measured at several sampling states is capable of reconstructing the whole bifurcation diagram of the circuit with a high precision; (3) for two coupled chaotic Chua circuits of mismatched parameters, the machine trained by the noisy time series measured at several coupling strengths is able to anticipate the variation of the synchronization degree of the coupled circuits with respect to the coupling strength over a wide range. The studies verify the capability of the technique of parameter-aware reservoir computing in learning the dynamics of chaotic circuits from noisy signals, signifying the potential application of this technique in reconstructing the bifurcation diagram of real-world chaotic systems. ## I Introduction In exploring chaotic systems, one of the central tasks is to characterize how the system dynamics is varying with the system parameters, namely finding the bifurcation diagram of the system dynamics [1; 2]. The study of the bifurcation diagram is not only of theoretical interest as it reveals the route from regular behaviors to chaos, but also of practical significance as it pinpoints the tipping points where a small change in the system parameters might result in a drastic change in the system dynamics [3; 4]. The latter is of particular concern to our modern society, as accumulating evidence indicates that many real-world complex systems are already in the vicinity of their tipping points, e.g., the global climate [5; 6], complex ecological systems [7; 8], and financial markets [9; 10]. When the exact equations governing the system dynamics are known, the bifurcation diagram can be constructed by the approach of model simulations. Yet in realistic situations the exact equations of the system dynamics are generally unknown, and what is available are only measured data. Different from model-based studies in which the signals are noise-free and the system parameters can be tuned arbitrarily according to the research request, signals measured from realistic systems are inevitably contaminated by noise. In addition, due to the cost of data acquisition and practical restrictions, it is infeasible to construct the bifurcation diagram of a realistic system by a fine scan of the system parameters over a wide range. These practical concerns make model-free reconstruction of the bifurcation diagram of realistic chaotic systems a challenging question of active research in the field of nonlinear science and complex systems [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. To reconstruct the bifurcation diagram of chaotic systems based on measured data, one approach is to rebuild the model first, including inferring the terms contained in the dynamical equations and estimating the system parameters, and then reconstructing the bifurcation diagram through the approach of model simulations [25; 26; 27]. 
The advantage of this model-rebuilding approach is that the equations governing the system dynamics can be obtained explicitly, while the drawbacks are that the data should be of high quality (with weak noise) and some prior knowledge of the system dynamics should be available, e.g., the form of the nonlinear terms in the equations. An alternative approach to reconstructing the bifurcation diagram is exploiting the machine learning techniques [15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. Owning to the superpower of regression analysis, machine learning techniques are able to infer from data not only the dynamics of the chaotic systems but also the system parameters, and therefore are capable of reconstructing the bifurcation diagrams. Compared to the model-rebuilding approach, the advantages of the machine-learning approach are that no prior knowledge of the system dynamics is required and the techniques can be applied to noisy signals in general, yet the disadvantages are that the system dynamics are unknown, i.e., the machines are working as "black boxes", and a large amount of data are normally required for training the machines. Reservoir computing (RC) [28; 29], a special technique based on recurrent neural networks in machine learning, has been exploited recently for predicting chaos and reconstructing the bifurcation diagram of chaotic systems [20; 21; 22; 23; 24; 20; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. From the point of view of dynamical systems, a reservoir computer can be regarded as a complex network of coupled nonlinear units, which, driven by the input signals, generates the outputs through a readout function [35]. Compared to other types of deep learning techniques such as convolutional neural networks (CNNs), RC contains only a single hidden layer, namely the reservoir. Except for the output matrix which is to be estimated from the data through a training process, the machine is fixed at the construction, including the input matrix, the reservoir network, and the updating rules. Though structurally simple, RC has shown its great potential in many data-oriented applications [35], e.g., speech recognization, channel equalization, robot control, and chaos prediction. In particular, it has been shown that a properly trained RC is able to predict accurately the state evolution of typical chaotic systems for about half a dozen Lyapunov times [29; 32], which is much longer than the prediction horizon of the traditional methods developed in nonlinear science. Besides predicting the short-term state evolution, RC is also able to replicate faithfully the long-term statistical properties of chaotic systems, e.g., the dimension of strange attractors and the Lyapunov exponents [31]. This ability, known as climate replication, has been exploited very recently to predict the critical transitions and bifurcation points in complex dynamical systems [20; 21; 22; 24]. In particular, by incorporating a parameter-control channel into the standard RC, it has been demonstrated that the machine trained by the time series of several sampling states of a chaotic system is able to infer the dynamical properties of the other states not included in the training set. This new technique, which is named parameter-aware RC (PARC) in Ref. [20], has been applied successfully to predict the critical transition of system collapses, infer the bifurcation diagram of chaotic systems [21; 24], and anticipate the critical coupling for synchronization in coupled oscillators [22]. 
Whereas the efficacy of the PARC technique has been well demonstrated in these applications, the studies are restricted to modeling systems of noise-free signals and exact parameters. As noise perturbations and parameter uncertainty are inevitable in realistic systems, a question of general interest therefore is whether the PARC technique can be applied to realistic chaotic systems. It is worth noting that the impact of noise on the performance of RC in predicting chaotic systems is twofold. On the one hand, noise-corrupted signals blur the system trajectories, making it difficult to infer accurately the equations of the system dynamics [36; 37; 38; 39]. A typical case of this kind is measurement noise, which is commonly regarded as destructive to machine learning. To cope with measurement noise, techniques such as low-pass filters are usually adapted to process the data before feeding them into the machine [36; 38]. On the other hand, noise might play a constructive role in machine learning in some circumstances [40; 41; 42; 43]. For measurement noise, studies have shown that in the training phase the role of noise is similar to that of Tikhonov regularization [40], and the performance of the machine reaches its maximum at the moderate noise [43]. For dynamical (intrinsic) noise, studies have shown that the introduction of a certain amount of noise is helpful for exploring the global information of the system dynamics, and therefore is beneficial for machine learning, e.g., extending the transient dynamics and inferring the "unseen" attractors [41; 42]. The nontrivial relationship between noise and machine learning makes the inference of chaotic dynamics from noisy signals not only a practical concern in applications, but also an effective approach for exploring the working mechanism of the machines. For that, growing attention has been paid in recent years to the prediction and inference of chaos based on noisy signals [36; 37; 38; 39; 41; 42; 43]. The studies, however, are mostly conducted for modeling systems with artificial noise, with the validity of the results in realistic system is yet to be checked. In our present work, employing the classic Chua circuits as examples, we attempt to reconstruct from measured data the bifurcation diagrams of the circuits by the PARC technique proposed recently in machine learning. Two specific scenarios are considered and investigated. In the first scenario, we collect the time series from a single circuit under several sampling parameters, and the mission is to reconstruct the whole bifurcation diagram in the parameter space. In the second scenario, we collect the time series of two coupled chaotic circuits under several coupling parameters, and the mission is to anticipate the variation of the synchronization degree is the coupled circuits with respect to the coupling parameter over a large range. We are going to demonstrate that, despite the presence of noise (measurement and dynamical noise) and parameter mismatch (between two coupled circuits), the PARC technique is capable of reconstructing the bifurcation diagrams with high precision in both scenarios. The rest of the paper is organized as follows. In the following section, we will describe the experimental setups and the way how the data are acquired. The technique of PARC will be introduced briefly in Sec. III. Our main results on the application of the PARC technique will be presented in Sec. 
IV, including the filtering effect of RC on the noisy signals, the reconstruction of the bifurcation diagram for a single circuit, and the inference of the synchronization relationship between two coupled chaotic circuits. Finally, concluding remarks will be given in Sec. V. ## II Experimental setups The Chua's circuit adopted in our studies is schematically shown in Fig. 1(a), which consists of two capacitors (\(C_{1}\) and \(C_{2}\)), two linear resistors (\(R\) and \(R_{1}\)), one inductor (\(L\)), and a nonlinear resistor (NR) [44; 45; 46; 47]. The equations of the system dynamics read \[\begin{cases}C_{1}\dfrac{dv_{C_{1}}}{dt}=\dfrac{1}{R}(v_{C_{2}}-v_{C_{1}})-g( v_{C_{1}}),\\ C_{2}\dfrac{dv_{C_{2}}}{dt}=\dfrac{1}{R}(v_{C_{1}}-v_{C_{2}})+i_{L},\\ L\dfrac{di_{L}}{dt}=-v_{C_{2}}-R_{1}i_{L},\end{cases} \tag{1}\] Figure 1: (a) Schematic of Chua’s circuit. NR denotes the nonlinear resistor. The linear resistor \(R\) plays the role of the bifurcation parameter, which is adjusted to generate different dynamics. (b) The piecewise-linear characteristic curve of the NR. with \(g(v_{C_{1}})=m_{0}v_{C_{1}}+0.5(m_{1}-m_{0})(|v_{C_{1}}+B_{p}|-|v_{C_{1}}-B_{p}|)\) the characteristic curve of the nonlinear resistor. The characteristic curve of the nonlinear resistor is schematically plotted in Fig. 1(b), in which the parameters are \(m_{0}=-0.41\,\text{ms}\) (mA/V) \(\pm 10\%\), \(m_{1}=-0.7\,\text{mS}\pm 10\%\), and \(B_{p}=1.7\,\text{v}\pm 5\%\). In our experiments, we fix the components \(R_{1}=10\,\Omega\pm 1\%\), \(C_{1}=10\,\text{nF}\pm 5\%\), \(C_{2}=100\,\text{nF}\pm 5\%\), \(L=20\,\text{mH}\pm 10\%\), while changing \(R\) over the range \((1.73\,\Omega,1.77\,\Omega)\) to generate different dynamics. The variables measured in the experiments are \(v_{C_{1}}\) (the voltage of capacitor \(C_{1}\)), \(v_{C_{2}}\) (the voltage of capacitor \(C_{2}\)), and \(v_{R_{1}}=i_{L}R_{1}\) (the voltage of resistor \(R_{1}\)), which are acquired by the sampling frequency \(f_{0}=50\,\text{kHz}\). For each value of \(R\), we first let the circuit operate for a transient period of \(1000\,\text{ms}\), and then record the system state, \((v_{C_{1}},v_{C_{2}},v_{R_{1}})\), for a period of \(100\,\text{ms}\). As such, each time series contains \(n=5000\) data points. Setting \(R=1.738\,\text{k}\Omega\) in the circuit, we plot in Figs. 2(a) and (b) the system trajectories projected onto the 2D phase spaces \((v_{C_{1}},v_{C_{2}})\) and \((v_{C_{2}},v_{R_{1}})\), respectively. We see that the trajectories are blurred by noise severely, rendering it difficult to figure out accurately the periodicity of the trajectories. (The trajectories seem to be period-3, but might be period-6 or weakly chaotic.) We also see from the figures that compared to the variables \(v_{C_{1}}\) and \(v_{C_{2}}\), the variable \(v_{R_{1}}\) is more corrupted by noise. For this reason, we choose the variable \(v_{C_{1}}\) to investigate experimentally the bifurcation diagram. Decreasing \(R\) from \(1.77\,\text{k}\Omega\) to \(1.73\,\text{k}\Omega\) by the decrement \(\Delta R=0.5\,\Omega\), we measure the time series of \(v_{C_{1}}\) for each value of \(R\) and, by recording the local minimums of \(v_{C_{1}}\), plot in Fig. 2(c) the bifurcation diagram of the circuit. We see that, while the figure shows roughly the route from limit cycle to chaos through the period-doubling bifurcations, the bifurcation details are not clearly shown. 
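To make the dynamics of Eq. (1) concrete, the sketch below integrates the circuit equations numerically with the nominal component values listed above (in SI units, with \(R=1.738\) k\(\Omega\) inside the scanned window) and extracts the local minima of \(v_{C_{1}}\) of the kind used to draw the bifurcation diagram. The initial condition and integration horizon are illustrative choices, not the experimental settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nominal component values (SI units); R = 1.738 kOhm is one of the settings studied above.
C1, C2, L = 10e-9, 100e-9, 20e-3        # farads, farads, henries
R, R1 = 1.738e3, 10.0                    # ohms
m0, m1, Bp = -0.41e-3, -0.7e-3, 1.7      # siemens, siemens, volts

def g(v):
    # Piecewise-linear characteristic of the nonlinear resistor, Fig. 1(b).
    return m0 * v + 0.5 * (m1 - m0) * (abs(v + Bp) - abs(v - Bp))

def chua(t, x):
    v1, v2, iL = x
    return [((v2 - v1) / R - g(v1)) / C1,
            ((v1 - v2) / R + iL) / C2,
            (-v2 - R1 * iL) / L]

sol = solve_ivp(chua, (0.0, 0.05), [0.1, 0.0, 0.0], max_step=2e-6)
v1 = sol.y[0][sol.t > 0.02]                                    # discard the transient
minima = v1[1:-1][(v1[1:-1] < v1[:-2]) & (v1[1:-1] < v1[2:])]  # local minima of v_C1
```

A sweep of \(R\) in such a simulation would give a noise-free counterpart of Fig. 2(c); the experimentally measured diagram itself, however, leaves several details unresolved.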
For instance, we cannot infer from the figure when the system dynamics presents the period-8 orbit or what happens in the window \(R\in[1735\,\Omega,1741\,\Omega]\). _The first objective of our present work is to reconstruct the bifurcation diagram of Chua's circuit with a high quality (precision), based on the noisy series acquired at several values of \(R\) in experiments._ The second experiment we conduct is the synchronization of two coupled chaotic Chua circuits. The diagram of the coupled circuits is schematically shown in Fig. 3(a), and a photo of the experimental setup is given in Fig. 3(b). The dynamics of the coupled circuits are governed by the equations \[\begin{cases}C_{3}\dfrac{dv_{C_{3}}}{dt}=\dfrac{1}{R_{2}}(v_{C_{4}}-v_{C_{3}})-g(v_{C_{3}})+\dfrac{1}{R_{6}}(v_{C_{5}}-v_{C_{3}}),\\ C_{4}\dfrac{dv_{C_{4}}}{dt}=\dfrac{1}{R_{2}}(v_{C_{3}}-v_{C_{4}})+i_{L_{1}},\\ L_{1}\dfrac{di_{L_{1}}}{dt}=-v_{C_{4}}-R_{4}i_{L_{1}},\\ C_{5}\dfrac{dv_{C_{5}}}{dt}=\dfrac{1}{R_{3}}(v_{C_{6}}-v_{C_{5}})-g(v_{C_{5}})+\dfrac{1}{R_{6}}(v_{C_{3}}-v_{C_{5}}),\\ C_{6}\dfrac{dv_{C_{6}}}{dt}=\dfrac{1}{R_{3}}(v_{C_{5}}-v_{C_{6}})+i_{L_{2}},\\ L_{2}\dfrac{di_{L_{2}}}{dt}=-v_{C_{6}}-R_{5}i_{L_{2}},\end{cases} \tag{2}\] with \(g(v_{C})\) the piecewise-linear function characterizing the nonlinear resistors. [The parameters of the nonlinear resistors are identical to those used in Fig. 1(b).] Here, to better demonstrate the synchronization phenomenon, we choose the circuit components \(R_{2,3}=1.6\,\text{k}\Omega\), \(C_{3,5}=10\,\text{nF}\pm 5\%\), \(C_{4,6}=100\,\text{nF}\pm 5\%\), \(L_{1,2}=26\,\text{mH}\pm 10\%\), and \(R_{4,5}=10\,\Omega\pm 10\%\). Note that due to the mismatched parameters (components), the two circuits are non-identical. Despite the mismatched parameters, both circuits present chaotic motions when isolated, as depicted in Fig. 3(c). The two circuits are coupled through the resistor \(R_{6}\), which can be adjusted between \(9\,\text{k}\Omega\) (strong coupling) and \(13\,\text{k}\Omega\) (weak coupling) with a high precision (\(\sim 0.1\,\Omega\)). As before, the currents of the inductors \(i_{L_{1}}\) and \(i_{L_{2}}\) are monitored, respectively, by the voltages \(v_{R_{4}}\) and \(v_{R_{5}}\), and data are acquired at the sampling frequency \(f_{0}=100\,\text{kHz}\) for a period of \(100\) ms in each experiment. Figure 2: Setting \(R=1.738\,\text{k}\Omega\) in Chua’s circuit, the system trajectories plotted on the planes \((v_{C_{1}},v_{C_{2}})\) (a) and \((v_{C_{2}},v_{R_{1}})\) (b). (c) By the data measured from experiments, the bifurcation diagram of Chua’s circuit plotted according to the local minimums of \(v_{C_{1}}\). Setting \(R_{6}=10.2\) k\(\Omega\), we plot in Fig. 3(d) the relationship between the voltages \(v_{C_{3}}\) (from circuit 1) and \(v_{C_{5}}\) (from circuit 2). We see that the data are distributed roughly along the diagonal line, indicating that the two circuits are oscillating in a weakly coherent fashion. The synchronization degree of the coupled circuits is evaluated by the time-averaged synchronization error \(\delta r=\left\langle\delta e(t)\right\rangle_{T}\), with \(\delta e=\sqrt{(v_{C_{3}}-v_{C_{5}})^{2}+(v_{C_{4}}-v_{C_{6}})^{2}+(v_{R_{4}}-v_{R_{5}})^{2}}\) the instantaneous synchronization error between the circuits and \(\langle\cdot\rangle_{T}\) the time-average function. For the results shown in Fig. 3(d), we have \(\delta r\approx 0.303\,V\). 
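The synchronization error defined above is simply the time average of the instantaneous Euclidean distance between the two circuits' measured states. A minimal Python sketch of that estimate follows; the arrays are toy stand-ins for the six measured voltage series and nothing here reflects the authors' acquisition or processing code.

```python
import numpy as np

def sync_error(v3, v4, vr4, v5, v6, vr5):
    """Time-averaged synchronization error delta_r between the two circuits."""
    de = np.sqrt((v3 - v5) ** 2 + (v4 - v6) ** 2 + (vr4 - vr5) ** 2)  # instantaneous error
    return de.mean()

# Toy stand-ins for the measured series: 100 ms at 100 kHz -> 10000 points each.
rng = np.random.default_rng(1)
n = 10_000
v3 = np.sin(np.linspace(0.0, 60.0 * np.pi, n))
v5 = v3 + 0.3 * rng.normal(size=n)            # weakly coherent partner circuit
v4, v6 = 0.5 * v3, 0.5 * v5
vr4, vr5 = 0.1 * v3, 0.1 * v5

print(f"delta_r = {sync_error(v3, v4, vr4, v5, v6, vr5):.3f} V")
```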
Here the question we are interested in is: _given that experiments are conducted at only several values of \(R_{6}\) and the time series of the sampling states are available, can we anticipate the synchronization degree of the coupled circuits for an arbitrary \(R_{6}\) and, furthermore, the variation of the synchronization degree with respect to \(R_{6}\) over a wide range?_ The second objective of our present work is to demonstrate that this question can be addressed by the technique of PARC in machine learning. ## III Parameter-aware Reservoir Computing The PARC technique exploited for reconstructing the bifurcation diagrams is generalized from the one proposed in Refs. [20; 21; 22; 23]. Like conventional RCs, the machine employed here is constructed from four modules: the \(I/R\) layer (input-to-reservoir), the parameter-control channel, the reservoir network, and the \(R/O\) layer (reservoir-to-output). The structure of the machine is schematically shown in Fig. 4(a). The \(I/R\) layer is characterized by the matrix \(\mathbf{W}_{in}\in\mathbb{R}^{D_{r}\times D_{in}}\), which couples the input vector \(\mathbf{u}_{\beta}(t)\in\mathbb{R}^{D_{in}}\) to the reservoir network. Here, \(\mathbf{u}_{\beta}(t)\) denotes the input vector acquired from the target system at time \(t\) under the specific bifurcation parameter \(\beta\). (For objective one, in which the task is to reconstruct the bifurcation diagram of a single circuit, we have \(\beta=R\); for objective two, in which the task is to anticipate the variation of the synchronization degree of coupled chaotic circuits, we have \(\beta=R_{6}\).) The elements of \(\mathbf{W}_{in}\) are randomly drawn from a uniform distribution within the range \([-\sigma,\sigma]\). The parameter-control channel is characterized by the vector \(\mathbf{s}=\beta\mathbf{W}_{b}\), with \(\beta\) being the control parameter and \(\mathbf{W}_{b}\in\mathbb{R}^{D_{r}}\) the bias vector. The control parameter \(\beta\) can be treated as an additional input channel marking the input vector \(\mathbf{u}(t)\). The elements of \(\mathbf{W}_{b}\) are also drawn randomly within the range \([-\sigma,\sigma]\). The reservoir network contains \(D_{r}\) nodes, with the initial states of the nodes being randomly chosen from the interval \([-1,1]\). The states of the nodes in the reservoir network, \(\mathbf{r}(t)\in\mathbb{R}^{D_{r}}\), are updated as \[\mathbf{r}(t+\Delta t)=(1-\alpha)\mathbf{r}(t)+\alpha\tanh[\mathbf{A}\mathbf{ r}(t)+\mathbf{W}_{in}\mathbf{u}_{\beta}(t)+\beta\mathbf{W}_{b}]. \tag{3}\] Here, \(\Delta t\) is the time step for updating the reservoir network, \(\alpha\in(0,1]\) is the leaking rate, and \(\mathbf{A}\in\mathbb{R}^{D_{r}\times D_{r}}\) is a weighted adjacency matrix representing the coupling relationship between nodes in the reservoir. The adjacency matrix \(\mathbf{A}\) is constructed as a sparse random Erdos-Renyi matrix: with probability \(p\), each element of the matrix is assigned a nonzero value drawn randomly from the interval \([-1,1]\). The matrix \(\mathbf{A}\) is rescaled to make its spectral radius equal to \(\lambda\). 
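A minimal sketch of how the fixed parts of such a machine can be constructed is given below; the hyperparameter values passed in the example are placeholders rather than the optimized values reported later, and the sparsity and spectral-radius rescaling follow the description above.

```python
import numpy as np

def build_reservoir(d_in, d_r, p, sigma, lam, seed=0):
    """Construct W_in, the bias vector W_b, and the adjacency matrix A.

    All three are fixed once constructed; only W_out is trained later.
    """
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-sigma, sigma, size=(d_r, d_in))          # I/R layer
    w_b = rng.uniform(-sigma, sigma, size=d_r)                   # parameter-control channel
    # Sparse random adjacency: each entry is nonzero with probability p, drawn from [-1, 1].
    a = rng.uniform(-1.0, 1.0, size=(d_r, d_r)) * (rng.random((d_r, d_r)) < p)
    a *= lam / np.max(np.abs(np.linalg.eigvals(a)))              # set spectral radius to lam
    return w_in, w_b, a

w_in, w_b, a = build_reservoir(d_in=3, d_r=500, p=0.15, sigma=0.3, lam=0.85)
```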
The output layer is characterized by the matrix \(\mathbf{W}_{out}\in\mathbb{R}^{D_{out}\times D_{r}}\), which generates the output vector, \(\mathbf{v}(t)\in\mathbb{R}^{D_{out}}\), according to the equation \[\mathbf{v}(t+\Delta t)=\mathbf{W}_{out}\mathbf{\tilde{r}}(t+\Delta t), \tag{4}\] with \(\mathbf{\tilde{r}}\in\mathbb{R}^{D_{r}}\) the new state vector transformed from the reservoir state (i.e., \(\tilde{r}_{i}=r_{i}\) for the odd nodes and \(\tilde{r}_{i}=r_{i}^{2}\) for the even nodes) [32], and \(\mathbf{W}_{out}\) the output matrix to be estimated by a training process. Except for \(\mathbf{W}_{out}\), all other parameters of the RC, e.g., \(\mathbf{W}_{in}\), \(\mathbf{A}\) and \(\mathbf{W}_{b}\), are fixed at construction. For the sake of simplicity, we set \(D_{out}=D_{in}\) in our studies [30; 31; 32]. The implementation of PARC consists of three phases: training, validating, and predicting. The mission of the training phase is to find a suitable output matrix \(\mathbf{W}_{out}\) so that the output vector \(\mathbf{v}(t+\Delta t)\) as calculated by Eq. (4) is as close as possible to the input vector \(\mathbf{u}(t+\Delta t)\) for \(t=(\tau+1)\Delta t,\dots,(\tau+\hat{L})\Delta t\), with \(T_{0}=\tau\Delta t\) the transient period (used for removing the impact of the initial conditions of the reservoir) and \(\hat{L}\) the length of the training series. Figure 3: (a) Schematic of two coupled Chua circuits. (b) The experimental setup. (c) The trajectories of isolated chaotic circuits on the 2D phase spaces \((v_{C_{3}},v_{C_{4}})\) and \((v_{C_{5}},v_{C_{6}})\). (d) Setting \(R_{6}=10.2\) k\(\Omega\) in the experiment, \(v_{C_{3}}\) versus \(v_{C_{5}}\) plotted according to the measured data. This is done by minimizing the cost function with respect to \(\mathbf{W}_{out}\)[30; 31; 32], which gives \[\mathbf{W}_{out}=\mathbf{U}\mathbf{V}^{T}(\mathbf{V}\mathbf{V}^{T}+\eta\mathbb{I})^{-1}. \tag{5}\] Here, \(\mathbf{V}\in\mathbb{R}^{D_{r}\times\hat{L}}\) is the state matrix whose \(k\)th column is \(\mathbf{\tilde{r}}[(\tau+k)\Delta t]\), \(\mathbf{U}\in\mathbb{R}^{D_{out}\times\hat{L}}\) is a matrix whose \(k\)th column is \(\mathbf{u}[(\tau+k)\Delta t]\), \(\mathbb{I}\) is the identity matrix, and \(\eta\) is the ridge regression parameter for avoiding overfitting. We note that in the training phase the input data consist of two different time series: (1) the input vector \(\mathbf{u}_{\beta}(t)\) representing the state of the target system and (2) the control parameter \(\beta(t)\) labeling the condition under which the input vector \(\mathbf{u}_{\beta}(t)\) is acquired. Specifically, the input vector \(\mathbf{u}_{\beta}(t)\) is composed of \(m\) segments of length \(\hat{n}\), where each segment is a time series obtained from the target system under a specific control parameter \(\beta\). As such, the training dataset is a concatenation of the sampling series, and \(\beta(t)\) is a step function of time. The structure of the training data is schematically shown in Fig. 4(b). The machine that performs well on the training data might not perform equally well on the testing data. Finding the optimal machine that performs well on both the training and testing data is the mission of the validating phase. 
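A minimal sketch of the training phase of Eqs. (3)-(5) follows: the reservoir is driven open-loop by the concatenated sampling series together with the step-function parameter channel, and \(\mathbf{W}_{out}\) is obtained by ridge regression. The data, dimensions, and hyperparameter values below are toy placeholders, not the measured circuit signals or the optimized settings.

```python
import numpy as np

def train_parc(u, beta, w_in, w_b, a, alpha, eta, tau):
    """Fit W_out by ridge regression (Eq. 5) after driving the reservoir open-loop (Eq. 3).

    u    : (L, D_in) concatenated input series from all sampling states
    beta : (L,) step-function control parameter labelling each input vector
    tau  : number of transient points discarded (a single global transient here,
           a simplification of the per-series transient used in the paper)
    """
    d_r, L = a.shape[0], u.shape[0]
    r = np.zeros(d_r)
    states = np.zeros((L, d_r))
    for t in range(L - 1):
        r = (1 - alpha) * r + alpha * np.tanh(a @ r + w_in @ u[t] + beta[t] * w_b)
        states[t + 1] = r
    rt = states.copy()
    rt[:, 1::2] **= 2                              # r_tilde: square the even-numbered nodes
    V = rt[tau + 1:].T                             # D_r x L_hat matrix of transformed states
    U = u[tau + 1:].T                              # matching targets u(t + dt)
    return U @ V.T @ np.linalg.inv(V @ V.T + eta * np.eye(d_r))   # Eq. (5)

# Toy usage with small sizes (the real runs use D_r ~ 500-1000 and measured series).
rng = np.random.default_rng(2)
d_r, d_in = 100, 3
w_in = rng.uniform(-0.3, 0.3, (d_r, d_in))
w_b = rng.uniform(-0.3, 0.3, d_r)
a = rng.uniform(-1.0, 1.0, (d_r, d_r)) * (rng.random((d_r, d_r)) < 0.15)
a *= 0.85 / np.max(np.abs(np.linalg.eigvals(a)))
u = rng.normal(size=(3 * 500, d_in))               # three concatenated sampling segments
beta = np.repeat([1.735, 1.745, 1.755], 500)       # step-function parameter channel
w_out = train_parc(u, beta, w_in, w_b, a, alpha=0.5, eta=1e-5, tau=50)
print(w_out.shape)                                 # (D_out, D_r) = (3, 100)
```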
The set of hyperparameters to be optimized in the machine include \(D_{r}\) (the size of the reservoir network), \(p\) (the density of the adjacency matrix \(\mathbf{A}\)), \(\sigma\) (the range defining the input matrix and the bias vector), \(\lambda\) (the spectral radius of the adjacency matrix \(\mathbf{A}\)), \(\eta\) (the regression coefficient), and \(\alpha\) (the leaking rate). In our studies, the optimal hyperparameters are obtained by scanning each hyperparameter over a certain range in the parameter space using conventional optimization algorithms such as the Bayesian and surrogate optimization algorithms [20]. After finding the optimal machine, we then utilize it to reconstruct the bifurcation diagrams, namely the predicting phase. Shown in Fig. 4(c) is the flowchart of the machine in the predicting phase. In making the predictions, we replace \(\mathbf{u}_{\beta}(t)\) with \(\mathbf{v}(t)\) (so that the machine is working in the closed-loop configuration), while setting the control parameter \(\beta\) to a specific value of interest. As such, in the predicting phase the machine is still driven by the externally added parameter \(\beta\). The output vector \(\mathbf{v}(t)\) then gives the predictions, based on which the climate of the system dynamics associated with \(\beta\) can be replicated. (Still, before making the predictions, a short transient is discarded to avoid the impact of the initial conditions of the reservoir.) Finally, by tuning \(\beta\) in the parameter space, we can reconstruct the whole bifurcation diagram according to the machine predictions. ## IV Results We first utilize the PARC technique to reconstruct the bifurcation diagram of a single circuit. We begin by choosing the set of sampling states from which the data are acquired from experiments. Previous studies have shown that the performance of PARC is influenced by both the number and the locations of the sampling states [20; 22; 23]. In general, the more the sampling states, the better the machine predictions. Additionally, to replicate the dynamics of a new state that is not included in the sampling set, it is better to choose the sampling states evenly over the parameter space. For demonstration purpose, here we choose \(m=3\) sampling states over the bifurcation range plotted in Fig. 5(c), \(R=1.735\,\mathrm{k}\Omega\), \(1.745\,\mathrm{k}\Omega\), and \(1.755\,\mathrm{k}\Omega\). For each of the sampling states, we record the system evolution for \(T=100\,\mathrm{ms}\), from which we obtain a time series of \(n=5000\) data points. Following the standard strategies in machine learning, we separate the time series into two segments of equal length, with the first half being used as training data and the second half as validating data. The size (length) of the whole training dataset therefore is \(\hat{N}=m\times n/2=7500\), so is the validating dataset. (To make the predictions more relevant to the experimental results, here we use the raw data as the input, i.e., the data are not processed.) We next train the machine and find the optimal set of hyperparameters. In training the machine, the transient series used to remove the impact of the initial conditions of the reservoir contains \(\tau=200\) data points (which applies to each of the sampling series in the training data). As such, the total number of data points used for estimating the output matrix \(\mathbf{W}_{out}\) is \(\hat{L}=m\times\hat{n}=m\times(n/2-\tau)=6900\). 
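In the predicting phase the trained machine runs in the closed-loop configuration of Fig. 4(c): the output is fed back as the next input while the control parameter \(\beta\) is set externally to the value of interest. A minimal sketch of that loop follows, assuming a machine trained as in the earlier sketch; the commented call is indicative only and reuses the placeholder names from above.

```python
import numpy as np

def predict_closed_loop(beta, w_out, w_in, w_b, a, alpha, n_transient, n_out, u0):
    """Run the trained machine autonomously at control parameter beta.

    The output v(t) replaces the external input; a short transient is discarded
    to remove the influence of the reservoir's initial conditions.
    """
    r = np.zeros(a.shape[0])
    v = np.asarray(u0, dtype=float)
    outputs = []
    for t in range(n_transient + n_out):
        r = (1 - alpha) * r + alpha * np.tanh(a @ r + w_in @ v + beta * w_b)
        rt = r.copy()
        rt[1::2] **= 2                      # same even-node squaring as in training
        v = w_out @ rt                      # Eq. (4)
        if t >= n_transient:
            outputs.append(v.copy())
    return np.array(outputs)

# Indicative call, reusing the matrices from the training sketch above:
# traj = predict_closed_loop(beta=1.745, w_out=w_out, w_in=w_in, w_b=w_b, a=a,
#                            alpha=0.5, n_transient=1000, n_out=10_000, u0=np.zeros(3))
```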
To find the optimal set of hyperparameters, we search the hyperparameters over the ranges \(D_{r}\in(200,1000)\), \(p\in(0,0.2)\), \(\sigma\in(0,1)\), \(\lambda\in(0.5,1)\), \(\eta\in(1\times 10^{-8},1\times 10^{-2})\), and \(\alpha\in(0,1]\) by the Bayesian optimization algorithm. Each set of hyperparameters defines a machine, whose performance is evaluated on the validating data according to the prediction error \(\left\langle\left|\mathbf{u}(t)-\mathbf{v}(t)\right|\right\rangle_{T}\). As before, in evaluating the machine performance on the validating data, a transient series of \(\tau=200\) points is used to remove the impact of the initial conditions of the reservoir. Figure 4: Schematic of the PARC technique. (a) The open-loop configuration of the machine in the training phase. (b) Schematic of the training data. (c) The closed-loop configuration of the machine in the predicting phase. For this application, the optimal hyperparameters are \((D_{r},p,\sigma,\lambda,\eta,\alpha)=(502,0.15,0.32,0.85,1.2\times 10^{-5},0.54)\), which define the optimal machine to be used for prediction purposes. Before employing the trained machine to reconstruct the bifurcation diagram, we first check the capability of the machine in predicting the dynamics of a new state not included in the sampling set. The new state we choose is \(R=1.738\) k\(\Omega\). [The trajectories of this state plotted according to experimental data are shown in Figs. 2(a) and (b).] Setting the control parameter as \(\beta=1.738\) k\(\Omega\), we now operate the machine in the closed-loop configuration [see Fig. 4(c)]. After a transient period of \(\tau=1000\) iterations, the machine begins to output the predictions. The trajectories predicted by the machine are plotted in Figs. 5(a) and (b). Compared to the smeared trajectories plotted in Figs. 2(a) and (b), we see in Figs. 5(a) and (b) that the trajectories clearly show the period-6 orbit. We therefore see that the machine is able not only to infer the dynamics of a new state, but also to recover from noise-contaminated signals the true trajectories (i.e., the climate of the system dynamics). We proceed to reconstruct the bifurcation diagram of the circuit by the PARC technique. This is done by increasing the control parameter from \(\beta=1.73\) k\(\Omega\) to \(1.77\) k\(\Omega\) gradually, while for each value of \(R\) we collect from the machine output a sequence of \(10000\) data points. Shown in Fig. 5(c) is the bifurcation diagram plotted according to the machine predictions. Compared with the experimentally obtained results [see Fig. 2(c)], we see that the bifurcation diagram predicted by the machine is of high quality and precision. Specifically, we can infer from the reconstructed bifurcation diagram not only the transition points of the high-order periodic orbits, but also the periodic windows embedded in the chaotic regions. We continue to anticipate the synchronization degree of two coupled chaotic Chua circuits by the PARC technique. As before, to generate the training and validating datasets, we acquire from experiments the time series of \(m=3\) sampling states, \(R_{6}=9.4\) k\(\Omega\), \(10.2\) k\(\Omega\) [the state shown in Fig. 3(d)], and \(11\) k\(\Omega\). Each series contains \(n=10000\) data points, with the first half being used as training data and the second half being used as validating data. The transient period of the training phase contains \(\tau=500\) data points, and the same transient period is applied in the validating phase. 
As before, the machine hyperparameters are optimized by the Bayesian optimization algorithm. In this application, the optimal hyperparameters are \((D_{r},p,\sigma,\lambda,\eta,\alpha)=(983,4.8\times 10^{-3},0.88,0.39,2.9\times 10^{-3},0.73)\). We check first the capability of the trained machine in replicating the synchronization dynamics of the sampling states. Setting the control parameter as \(\beta=10.2\) k\(\Omega\), we operate the machine in the closed-loop configuration [see Fig. 4(c)], and estimate from the machine outputs the synchronization error, \(\delta r\), between the circuits. The results show that \(\delta r\approx 0.34\,V\), which is in good agreement with the experimental result (\(\delta r\approx 0.30\,V\)). Shown in Fig. 6(a) is the relationship between \(v_{C_{3}}\) and \(v_{C_{5}}\) for the machine-predicted data (red dots), which is also consistent with the one plotted according to the experimental data (black dots). We check next the capability of the machine in anticipating the synchronization climate of a new state not included in the sampling set. To demonstrate, we set \(\beta=12\) k\(\Omega\) and, based on the machine predictions, plot in Fig. 6(b) the relationship between \(v_{C_{3}}\) and \(v_{C_{5}}\). Compared to the results for \(\beta=10.2\) k\(\Omega\), we see that the synchronization degree between the circuits is clearly decreased for \(\beta=12\) k\(\Omega\). Specifically, for \(\beta=12\) k\(\Omega\), the synchronization error estimated from the machine predictions is \(\delta r\approx 0.65\,V\). This estimation is also in good agreement with the experimental result (\(\delta r\approx 0.64\,V\)), as depicted in Fig. 6(b). Figure 5: Reconstructing the bifurcation diagram of Chua’s circuit by the PARC technique. (a,b) The trajectories predicted by the machine for the parameter \(R=1.738\) k\(\Omega\), which is not included in the sampling set. (c) The bifurcation diagram predicted by the PARC technique. Red dashed lines denote the sampling states from which data are measured from experiments. We finally utilize the machine to anticipate the variation of the synchronization error, \(\delta r\), with respect to the coupling coefficient, \(R_{6}\), over a wide range in the parameter space. In doing this, we increase \(\beta\) from \(9\,\mathrm{k}\Omega\) to \(13\,\mathrm{k}\Omega\) by the increment \(\Delta\beta=0.2\,\mathrm{k}\Omega\), and for each \(\beta\) calculate from the machine outputs the value of \(\delta r\). The results are plotted in Fig. 6(c) (red circles), which shows that with the increase of \(\beta\), the value of \(\delta r\) increases monotonically. To validate the predictions, we tune \(R_{6}\) in the experiment over the same range, and for each \(R_{6}\) calculate from the measured data the synchronization error. The experimental results are also plotted in Fig. 6(c) (black squares). We see that the predicted and experimental results are consistent within the range \(R_{6}\in(9\,\mathrm{k}\Omega,12\,\mathrm{k}\Omega)\), but diverge slightly when \(R_{6}>12\,\mathrm{k}\Omega\). The difference between the predicted and experimental results at large \(R_{6}\) is attributed to the large distance between the sampling and testing states, which has also been observed in previous studies [22, 20, 23]. 
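Both reconstructions above reduce to the same loop: sweep the external parameter of the trained machine and summarize its autonomous output, with the local minima of the predicted \(v_{C_{1}}\) giving the bifurcation diagram and \(\delta r\) giving the synchronization curve. A minimal sketch of that sweep is given below, assuming a closed-loop prediction routine like the one sketched earlier; all names in the commented call are illustrative.

```python
import numpy as np

def sweep_parameter(run_machine, betas, summarize):
    """Sweep the control parameter of a trained PARC machine.

    run_machine(beta) -> autonomous (closed-loop) trajectory predicted at beta
    summarize(traj)   -> quantity plotted against beta, e.g. the local minima of
                         the predicted v_C1 (bifurcation diagram) or delta_r.
    """
    return {float(b): summarize(run_machine(b)) for b in betas}

# Indicative use for Fig. 6(c): delta_r on a 0.2 kOhm grid of coupling values,
# with summarize computing the time-averaged synchronization error from the
# predicted components (assumed ordered as v_C3, v_C4, v_R4, v_C5, v_C6, v_R5).
# curve = sweep_parameter(lambda b: predict_closed_loop(b, ...),
#                         betas=np.arange(9.0e3, 13.0e3 + 1.0, 0.2e3),
#                         summarize=delta_r_of_trajectory)
```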
## V Concluding remarks In reconstructing the bifurcation diagram of chaotic systems based on measured data, two of the major difficulties encountered in practice are: (1) the signals are contaminated by noise and (2) the signals are acquired at only a few sampling states. The former makes the reconstructed bifurcation diagram coarse and unclear; the latter renders the reconstructed bifurcation diagram fragmented and incomplete. In our present work, using the experimental data of chaotic Chua circuits, we have shown that both difficulties can be well addressed by the technique of PARC proposed recently in machine learning. Two scenarios have been considered and investigated: reconstructing the bifurcation diagram of a single circuit and anticipating the synchronization transition of two coupled chaotic circuits. In the first scenario, we have demonstrated that, from the noisy signals acquired at several sampling states, the trained machine is able to reconstruct the whole bifurcation diagram with high precision. The success of the machine in reconstructing the bifurcation diagram is attributed to the noise-filtering effect of the reservoir and the property of transfer learning. Specifically, fed with noisy signals from which the system dynamics cannot be inferred directly, the reservoir is able to output a smooth and clear trajectory. And, guided by the parameter-control channel, the knowledge that the machine learned from the time series of the sampling states can be transferred to infer the dynamics of a new state not included in the sampling set. In the second scenario, we have demonstrated that, trained on the noisy signals collected at a handful of coupling parameters, the machine is able to anticipate the variation of the synchronization degree of the coupled circuits with respect to the coupling parameter over a wide range. Whereas the capability of PARC for inferring the dynamics climate of chaotic systems has been well demonstrated in the literature, the previous studies are all based on modeling systems with noise-free signals [20, 21, 22, 23, 24]. Our studies show that this technique can also be applied to noisy signals generated from realistic systems. Though our studies demonstrate preliminarily the capability of the PARC technique for reconstructing the bifurcation diagram of realistic chaotic systems, many questions remain to be addressed. First, for convenience and simplicity, we have adopted Chua's circuits as examples to demonstrate the performance of the PARC technique. The applicability of this technique to other real-world chaotic systems is yet to be checked. Second, recent studies show that noise might play a constructive role in the machine learning of chaotic systems [41, 42, 43]. In particular, a stochastic-resonance-like phenomenon has been observed in predicting chaos, where it is shown that the prediction performance can be improved by introducing a certain amount of noise [43]. It will be interesting to check whether a similar phenomenon can be observed in the experiments of Chua's circuits. Figure 6: Reconstructing the synchronization transition of two coupled chaotic Chua circuits by the PARC technique. The relationship between \(v_{C_{3}}\) and \(v_{C_{5}}\) for (a) \(R_{6}=10.2\,\mathrm{k}\Omega\) and (b) \(R_{6}=12\,\mathrm{k}\Omega\). Black dots are results acquired from experiments. Red dots are results predicted by the machine. (c) The variation of the synchronization error between the coupled circuits, \(\delta r\), with respect to the coupling coefficient, \(R_{6}\). Black squares are results obtained from experiments. Red circles are results predicted by the machine. Blue dashed lines denote the sampling states from which data are measured from experiments. Third, our studies focus only on low-dimensional chaotic systems (a single Chua's circuit and two coupled chaotic Chua circuits). It remains unclear whether the same PARC technique can be applied to high-dimensional chaotic systems, e.g., spatially extended chaotic systems and large-size complex networks of coupled oscillators. In applying the technique to high-dimensional chaotic systems, one difficulty concerns the very large size of the required reservoir network. One possible approach to addressing this difficulty could be adopting the scheme of parallel RC [32], which, however, might require a significant modification of the machine structure. Finally, an important feature of many real-world chaotic systems is that their asymptotic dynamics are dependent on the initial conditions, namely the property of multistability [48]. The application of the PARC technique to reconstruct the bifurcation diagrams of multistable chaotic systems, probably by incorporating some additional modules into the current machine, is another interesting topic warranting further studies. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China (NNSFC) under Grant Nos. 12275165 and 12105165. XGW was also supported by the Fundamental Research Funds for the Central Universities under Grant No. GK202202003.
2309.13706
Simple Power-Law Model for generating correlated particles
A search for the critical point of the strongly interacting matter by studying power-law fluctuations within the framework of intermittency is ongoing. In particular, experimental data on proton and pion production in heavy-ion collisions are analyzed in transverse momentum space. In this regard, a simple model with power-law multi-particle correlations is introduced. The model can be used to study the sensitivity for detecting power-law correlated particles in the presence of various detector effects.
Tobiasz Czopowicz
2023-09-24T17:37:57Z
http://arxiv.org/abs/2309.13706v1
# Simple Power-law model for generating correlated particles ###### Abstract A search for the critical point of the strongly interacting matter by studying power-law fluctuations within the framework of intermittency is ongoing. In particular, experimental data on proton and pion production in heavy-ion collisions are analyzed in transverse momentum space. In this regard, a simple model with power-law multi-particle correlations is introduced. The model can be used to study the sensitivity for detecting power-law correlated particles in the presence of various detector effects. ## I Motivation One of the goals of high-energy heavy-ion physics is to locate the critical point (CP) in the phase diagram of the strongly interacting matter. Theoretical studies suggest a smooth crossover transition at small baryochemical potential \(\mu_{B}\) and high temperature \(T\)[1]. At lower \(T\) and larger \(\mu_{B}\), a first-order phase transition is expected [2]. The CP is a hypothetical end point of the first-order phase transition that has properties of the second-order phase transition. In the vicinity of the CP, fluctuations of the order parameter become self-similar [3], belonging to the 3D-Ising universality class. This can be detected by studying particles' fluctuations in the transverse momentum, \(p_{T}\), space within the framework of intermittency analysis by use of Scaled Factorial Moments (SFM). A search for such power-law fluctuations was proposed in Refs. [4; 5; 6; 7] and experimental data on proton and pion multiplicity fluctuations have been analyzed in transverse momentum space [8; 9; 10]. To study the sensitivity for detecting power-law correlated particles in the presence of various detector effects, a simple, fast model that can generate particles with properties expected for the CP was developed and is presented below. ## II Power-law model The Power-Law Model generates events containing the momenta of a given number of particles with a given power-law correlation of their transverse momentum differences and/or a given number of uncorrelated particles, while maintaining a given shape of the single-particle inclusive \(p_{T}\) distribution and a given multiplicity distribution. The results, events with lists of particles and their momentum components, are stored in a text file and can be used for calculating SFM or can undergo further processing (e.g., momentum smearing to mimic the detector's momentum resolution). The model is written in ANSI C with no external dependencies. It uses the SFC64 [11] random number generator. ### Power-law correlation The model allows for generating groups of correlated particles (pairs, triplets, quadruplets, etc.). The correlations are introduced using the average pair transverse momentum difference \(S\) over the particles in a group. For a given number of particles in a group \(g\) that form \(g_{\mathrm{p}}=g(g-1)/2\) pairs, \(S\) is defined as \[S=\frac{1}{g_{\mathrm{p}}}\sum_{i=1}^{g}\sum_{j=i+1}^{g}|\Delta\overrightarrow{p _{T}}|_{i,j}. \tag{1}\] For a correlated pair, when \(g=2\) and \(g_{\mathrm{p}}=1\), \(S\) is equal to the difference of the two particles' transverse momenta, \(S=|\Delta\overrightarrow{p_{T}}|\). The correlations are introduced by generating \(S\) according to the power-law distribution: \[\rho_{S}(S)=S^{-\phi}\,, \tag{2}\] with a given exponent \(0\leq\phi<1\). Due to the scaling property of the power law, \(S\) for sub-groups of the \(g\) particles follows the same distribution (e.g., a 
quadruplet of particles correlated with \(\phi\) consists of 4 triplets correlated with \(\phi\) and 6 pairs also correlated with \(\phi\)). ### Event multiplicity The number of events, \(N_{\text{events}}\), is one of the input (command-line) parameters. Processing stops after reaching the requested value. The number of particles in each event is drawn either from a standard distribution (e.g., Poisson with a given expected value) or from a custom distribution supplied in a text file. The event multiplicity can also be set to a constant value. ### Particles' transverse momentum components In order to generate the transverse momentum components of each particle, the following parameters are used: 1. desired ratio of the total number of correlated particles to all particles \(r\) (default: 0.5), 2. power-law exponent \(\phi\) (default: 0.8), 3. minimum and maximum value of the average pair momentum difference \(S\) of correlated particles (default: \(S_{\text{min}}=0.0\), \(S_{\text{max}}=1.2\) GeV/\(c\)), 4. number of correlated particles in a group \(g\) (default: 2), 5. single-particle transverse momentum distribution \(\rho_{p_{T}}(p_{T})\) (up to \(p_{T\text{max}}=1.5\) GeV/\(c\)) in a text file (default: \(\rho_{p_{T}}(p_{T})=p_{T}\exp(-6p_{T})\)). Uncorrelated particles are generated as long as generating a correlated group would still not exceed the ratio \(r\). Then, a correlated group is generated. #### ii.2.1 Uncorrelated particles Generating an uncorrelated particle's transverse momentum components takes the following steps: 1. draw \(p_{T}\) from the supplied transverse momentum distribution \(\rho_{p_{T}}\), 2. draw the azimuthal angle \(\varphi\) from a uniform distribution \([0,2\pi)\), 3. calculate the components as \(p_{x}=p_{T}\cos(\varphi)\) and \(p_{y}=p_{T}\sin(\varphi)\). #### ii.2.2 Correlated particles Before generating the first correlated group, the total number of correlated particles is estimated as \[n_{\text{c}}=N_{\text{events}}\cdot\langle N\rangle\cdot r\,,\] where \(\langle N\rangle\) is the mean value of the requested multiplicity distribution. The total number of correlated groups to be generated is \(n_{\text{g}}=n_{\text{c}}/g\). Then, an array of \(n_{\text{g}}\) values of \(S\) following the power-law distribution \(\rho_{S}\) from Eq. 2 is generated and sorted in ascending order. Each value is calculated using the inverse transform sampling method (with additional constraints on the minimum and maximum values of \(S\)) as \[S_{i}=\left[(S_{\text{min}})^{1-\phi}+\left((S_{\text{max}})^{1-\phi}-(S_{ \text{min}})^{1-\phi}\right)R_{i}\right]^{\frac{1}{1-\phi}}\,,\] where \(i=1\ldots n_{\text{g}}\) and \(R_{i}\) are random numbers from a uniform distribution \([0,1)\). Also, a histogram \(H\) of the correlated particles' transverse momenta is created by randomly drawing \(n_{\text{c}}\) values of \(p_{T}\) from \(\rho_{p_{T}}\). In the \(p_{x}\)-\(p_{y}\) plane, the correlated particles in a group are evenly positioned on a circle with diameter \(S\), centered at their average \(p_{T}\). For each group, the next value from the array of \(S\) values is used. In combination with the maximum value of \(p_{T}\), 1.5 GeV/\(c\), it determines the available range of values of the single-particle \(p_{T}\). Next, a 1D probability distribution of the available \(p_{T}\) values from histogram \(H\) is constructed. It is then used to draw the average \(p_{T}\) of the particles in the group. 
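A minimal sketch of the inverse-transform sampling of \(S\) described above is given below (the placement of the particles on the circle of diameter \(S\) follows in the next step). This is an illustrative Python reimplementation, not the ANSI C code of the model; the defaults follow the parameter list above.

```python
import numpy as np

def sample_S(n_groups, phi=0.8, s_min=0.0, s_max=1.2, seed=0):
    """Draw n_groups values of S from rho_S(S) ~ S^(-phi) on [s_min, s_max] (GeV/c),
    using the inverse-transform formula above, and return them sorted ascending."""
    rng = np.random.default_rng(seed)
    r = rng.random(n_groups)                          # uniform random numbers in [0, 1)
    a, b = s_min ** (1.0 - phi), s_max ** (1.0 - phi)
    return np.sort((a + (b - a) * r) ** (1.0 / (1.0 - phi)))

s_values = sample_S(n_groups=10_000)
print(s_values[:3], s_values[-1])                     # smallest separations first, capped at s_max
```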
Having the center (average \(p_{T}\)) and the diameter (\(S\)) of the circle in the \(p_{x}\)-\(p_{y}\) plane, the particles are placed evenly on it, starting at a random position. Then, the components of their transverse momenta are calculated and stored. As a last step, histogram \(H\) is updated by removing the obtained values of \(p_{T}\). ### Particles' longitudinal momentum components Both correlated and uncorrelated particles' longitudinal momentum components \(p_{z}\) are calculated from a center-of-mass rapidity distribution drawn independently of \(p_{T}\). The following parameters are used: 1. minimum and maximum value of the center-of-mass rapidity (default: \(y_{\text{min}}^{\text{CMS}}=-0.75\), \(y_{\text{max}}^{\text{CMS}}=0.75\)), 2. mass of the particles \(m\) (default: 0.938 GeV), 3. rapidity of the center-of-mass in the laboratory frame (default: \(y_{\text{cms}}^{\text{LAB}}=2.88\)). The center-of-mass rapidity distribution is assumed to be uniform in the given range, and one value \(y^{\text{CMS}}\) is chosen at random. Using the given particle mass and the generated \(p_{T}\), the transverse mass is calculated as \[m_{T}=\sqrt{(p_{T})^{2}+m^{2}}\,.\] Knowing the rapidity of the center-of-mass in the laboratory frame and the transverse mass allows one to calculate \(p_{z}\): \[p_{z}=m_{T}\cdot\sinh(y^{\text{CMS}}+y_{\text{cms}}^{\text{LAB}})\,.\] ### Model performance The key feature of the model is introducing a power-law correlation of particles while preserving a given single-particle transverse momentum distribution. To test it, 10000 events with different settings have been generated and the relevant distributions are shown in Fig. 1. For \(\phi=0.8\) and \(\phi=0.5\) the model generates power-law distributions close to the ones requested (_bottom_). Also, for the two requested transverse momentum distributions, the generated data follow them closely (_top_). Generating these data sets took approximately 200 ms each. Figure 1: Transverse momentum (_top_) and \(S\) (_bottom_) distributions for 10000 events generated with different model parameters. Transverse momentum distributions are plotted separately for uncorrelated (_blue_) and correlated (_red_) particles. The \(S\) distribution is fitted with \(f(S)=A\cdot S^{-\phi}\) and presented in log-log scale to reveal the power-law shape. ### Scaled Factorial Moments for the model data The main purpose of the model is to study SFM, which is used within intermittency analysis as a tool for locating the CP. Therefore it must generate particles with properties expected for the CP. One of these properties, the power-law correlation, is explicitly built into the model. It results in a power-law dependence of the SFM of order \(q\) on the number of \(p_{x}\)-\(p_{y}\) cells \(M^{2}\): \[F_{q}(M)\propto(M^{2})^{\phi_{q}}\,.\] Another feature of the SFM in the vicinity of the CP is a linear relation between the exponents (intermittency indices) \(\phi_{q}\). To test it with the model, 10000 events, each containing \(N=6\) correlated particles (\(r=1\), \(g=6\), \(\phi=0.8\)), have been generated. Then, SFM up to order \(q=6\) were calculated and fitted with a power law. The results are shown in the left panel of Fig. 2. The obtained exponents \(\phi_{2},\ldots,\phi_{6}\) are presented in the right panel. Clearly, they exhibit the expected linearity. ## III Summary and outlook This work is motivated by experimental searches for the critical point of the strongly interacting matter in heavy-ion collisions. 
A model introducing the power-law correlations predicted in the vicinity of the CP was presented. The expected scaling behavior of \(F_{q}(M)\) in \(M^{2}\), as well as a linear relation among the obtained intermittency indices, is observed. Introducing correlations between particles does not affect the transverse momentum and multiplicity distributions. The model can be used to study the impact of detector effects (e.g., acceptance, efficiency, resolution) on the behavior of the scaled factorial moments. ###### Acknowledgements. The author would like to express gratitude to Marek Gazdzicki for the motivation, help and critical comments. This work was supported by the Polish National Science Centre grant 2018/30/A/ST2/00226.
2309.12418
Twenty-five years of random asset exchange modeling
The last twenty-five years have seen the development of a significant literature within the subfield of econophysics which attempts to model economic inequality as an emergent property of stochastic interactions among ensembles of agents. In this article, the literature surrounding this approach to the study of wealth and income distributions, henceforth the "random asset exchange" literature following the terminology of Sinha (2003), is thoroughly reviewed for the first time. The foundational papers of Dragulescu and Yakovenko (2000), Chakraborti and Chakrabarti (2000), and Bouchaud and Mezard (2000) are discussed in detail, and principal canonical models within the random asset exchange literature are established. The most common variations upon these canonical models are enumerated, and significant papers within each kind of modification are introduced. The successes of such models, as well as the limitations of their underlying assumptions, are discussed, and it is argued that the literature should move in the direction of more explicit representations of economic structure and processes to acquire greater explanatory power.
Max Greenberg, H. Oliver Gao
2023-09-21T18:42:32Z
http://arxiv.org/abs/2309.12418v1
# Twenty-five years of random asset exchange modeling ###### Abstract The last twenty-five years have seen the development of a significant literature within the subfield of econophysics which attempts to model economic inequality as an emergent property of stochastic interactions among ensembles of agents. In this article, the literature surrounding this approach to the study of wealth and income distributions, henceforth the "random asset exchange" literature following the terminology of Sinha (2003), is thoroughly reviewed for the first time. The foundational papers of Dragulescu and Yakovenko (2000), Chakraborti and Chakrabarti (2000), and Bouchaud and Mezard (2000) are discussed in detail, and principal canonical models within the random asset exchange literature are established. The most common variations upon these canonical models are enumerated, and significant papers within each kind of modification are introduced. The successes of such models, as well as the limitations of their underlying assumptions, are discussed, and it is argued that the literature should move in the direction of more explicit representations of economic structure and processes to acquire greater explanatory power. ###### Contents * I Introduction * I.1 The universality of economic inequality * I.2 Measuring inequality: the Gini coefficient * I.3 Pareto, Gibrat, and the econophysicists * II The Taxonomy of Random Asset Exchange Models * I.1 Kinetic wealth exchange * I.2 Theft, fraud, and yard sales * I.3 Bouchaud-Mezard models * I.3 Other formulations * III Notable Trends in the Literature * III.1 Non-conservation of wealth * III.2 Networks and preferential attachment * III.3 Goods and rationality * III.4 Strategic behavior * III.5 Class division * III.6 Taxation and redistribution * III.7 Miscellaneous * IV Discussion ## I Introduction Over the last fifteen years, the question of economic inequality has become the epicenter of one of the most intense political debates in the United States. Awareness of the growing gap between rich and poor has been growing since the 2008 American bank bailouts and the 2010 _Citizens United v. FEC_ Supreme Court decision, but the inequality question was decisively pushed to the forefront of American politics in September 2011 with the beginning of the Occupy Wall Street protest movement, which introduced the dichotomy of "the 1%" vs. "the 99%" to public consciousness. Though the Occupy movement did not immediately produce anything by way of practical politics, it nonetheless laid the foundation for U.S. Senator Bernie Sanders' two campaigns for president, in which he used the language of Occupy to reframe economic inequality as the result of policy choices which could be rectified through a social-democratic "political revolution." But the idea that economic inequality is a problem which needs to be addressed by way of policy is by no means an uncontroversial one. The majority of American adults who electorally favor the Republican Party do not believe that the current level of economic inequality in the United States is excessive (Horowitz _et al._, 2020). The legacy of "Reaganomics"--the economic policy pursued by the Federal government of the United States under the tenure of former President Ronald Reagan, which was characterized by cuts to tax rates and other concessions to proponents of "supply-side" economic theory--remains contentious. 
And within the Republican delegation to the United States House of Representatives, a proposal to eliminate the current bracketed income tax system and to replace it with a much higher nationwide flat sales tax, in order to dramatically lessen the tax burden on the wealthy, is gaining some traction (Dore, 2023). The controversy surrounding economic inequality is just as longstanding and just as intense within the realm of academic economics. On one hand, economists tend to be more skeptical than other social scientists of government intervention in the economy, as a great deal of emphasis is placed on the fact that, within the discipline's canonical models, free markets have no trouble arriving at socially optimal allocations of resources all on their own. On the other hand, French economist Thomas Piketty's 2013 magnum opus _Capital in the 21st Century_, which proposes the imposition of a global, progressive tax on wealth in order to rein in what he views as the excessive amount of economic inequality in the world today, has become greatly influential in the popular-academic debates concerning the issue (Piketty, 2014). Needless to say, the intense political disputes around the question of inequality show no sign of abating anytime soon. ### The universality of economic inequality What is indisputable, however, is that in nearly every single developed market economy, the degree of stratification between the rich and everyone else is not only staggering, but is also increasing. In the United States, the share of household wealth owned by the top 1% of the population by net worth grew from 29.9% in 1989 to 35.5% in 2013; meanwhile, the share of wealth owned by the bottom 50% of the population shrank from 3.0% to 1.1% during the same period (Killewald _et al._, 2017). In Germany, individuals at the 90th percentile of net assets own 13 times as much wealth as the median individual and over a quarter of individuals have liabilities equal to or greater than their assets, resulting in a negative net worth (Grabka and Westermeier, 2014). While there are many countries where the degree of wealth inequality is not this extreme--the United States has one of the most unequal distributions of wealth in the world--the overall structure is strikingly similar in almost every country (Davies _et al._, 2009). In every market economy for which data exists, many possess very little wealth and a few possess much. Great inequality also governs the distribution of incomes within market economies. As reported by Horowitz _et al._ for the Pew Research Center, the share of aggregate income possessed by high-income households, defined as households with incomes greater than twice the national median, has grown from 29% in 1970 to 48% today. In the same period, the share of aggregate income possessed by low-income households, defined as households with incomes less than two-thirds the national median, fell from 10% to 9%. In a similar vein, the incomes of those who are already in the top 5% of the population in terms of earnings have consistently grown faster than those of all other earners over the past 40 years. 
The Gini coefficient of a distribution \(f(x)\) is defined by reference to the Lorenz curve, itself defined as the function: \[L(F(x))=\frac{1}{\mu}\int_{0}^{x}s\cdot p(s)ds \tag{1}\] where \(F(x)=\int_{0}^{x}f(s)ds\) is the cumulative density distribution of \(f(x)\) and \(\mu=\int_{0}^{\infty}s\cdot f(s)ds\) is the mean of \(f(x)\)(Gastwirth, 1971). Intuitively, this integral represents the share of some asset--say, income--held by the bottom 100x% of the population, normalized by the mean of the distribution. The Gini coefficient is then given by twice the difference between the area under Lorenz curve of a perfectly egalitarian distribution--a straight line with a slope of 1--and the Lorenz curve of the distribution in question (Dorfman, 1979). Thus, the canonical formula used to calculate the Gini coefficient is: \[G=1-2\int_{0}^{1}L(x)dx \tag{2}\] Thus, the Gini coefficient can take on any value between 0--perfect equality--and 1--perfect inequality. To extend this statistic to describe dispersion within a finite population \(\{x_{i}\}_{i=1}^{N}\), however, it is more convenient to leverage the alternative (but equivalent) definition of the Gini coefficient, given by half the relative mean absolute difference of a distribution: \[G=\frac{1}{2}\frac{\left(\sum_{i=1}^{N}\sum_{j=1}^{N}|x_{i}-x_{j}|\right)/N^{ 2}}{\mu} \tag{3}\] If the population \(\{x_{i}\}_{i=1}^{N}\) is sorted such that \(x_{j}>x_{k}\) if and only if \(j>k\), the Gini coefficient may be calculated using the computationally much faster formula: \[G=\frac{N+1}{N}-\frac{2}{\mu N^{2}}\sum_{i=1}^{N}(N-i+1)x_{i} \tag{4}\] as demonstrated by Allison (1978). Note that the Gini coefficient over a discrete population does not perfectly correspond to its continuous counterpart, however, as the former has an upper bound of \(1-1/N\)(Allison, 1979). The Gini coefficent is not a perfect measure of inequality. It has been criticized on the basis that distributions with very different levels of concentration in the right tail can produce identical indices; there is therefore significant information lost when using it to represent an entire distribution with a single scalar value (Osberg, 2017). Nonetheless, the Gini coefficient serves as a useful and widely-used benchmark for summarizing the degree of dispersion present in a given wealth or income distribution. Underscoring the universality of steep economic inequality in both wealth and income distributions, Table 1 displays the Gini coefficients for the wealth and income distributions of ten countries. We observe that wealth distributions are almost always "more unequal" than income distributions: Gini coefficients for wealth distributions tend to range between 0.5 and 0.8, while Gini coefficients for income distributions tend to range from 0.25 to 0.45. Furthermore, there is no obvious correlation between the Gini coefficients for wealth distributions and for income distributions: some countries, such as China, have coefficients relatively close in value, while other countries, such as France, have Gini coefficients for wealth over twice as high as the corresponding value for income. ### Pareto, Gibrat, and the econophysicists Regardless of whether one personally believes governments should play a role in redistributing wealth from the rich to the poor, the universality of the phenomenon of extreme inequality should raise eyebrows. Different countries have dramatically different approaches to welfare programs, taxation, and all other sorts of policy. 
Yet the distributions of wealth and income which emerge in these countries are remarkably similar in form. It follows that there must be some shared set of characteristics that account for this common structure of wealth distribution. This line of questioning points us to an often overlooked and still poorly understood aspect of economic inequality: its origin. The nature and origin of the distribution of wealth and income in market economies has been an open problem in economics for more than a century. In 1897, the Italian civil engineer-turned-economist Vilfredo Pareto attempted to provide an answer after noticing a striking pattern in data for land-ownership rates in Italy. Specifically, Pareto posited that income in every society was distributed according to a decreasing power law; namely: \[p(x)\propto x^{-1-\alpha} \tag{5}\] where \(p(x)\) represents the probability density function of income and \(\alpha\) represents the "Pareto index," with smaller values producing fatter tails and thus representing more unequal distributions. This observation has come to be known as the "weak Pareto law," with its strong counterpart including the additional claim that the Pareto index possesses a value in the range \(1.5\pm 0.5\)(Pareto, 1897). But not long thereafter it became apparent that this law did not actually well characterize the entire income distribution. Instead, when low- and middle \begin{table} \begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{**Gini Coefficients of Wealth and Income for Ten Countries**} \\ \hline _Country_ & _Gini Coefficient of Wealth_ & _Gini Coefficient of Income_ \\ \hline United States & 0.801 & 0.401 \\ France & 0.730 & 0.311 \\ United Kingdom & 0.697 & 0.396 \\ India & 0.669 & 0.344* \\ Germany & 0.667 & 0.289 \\ Netherlands & 0.650 & 0.298* \\ Australia & 0.622 & 0.331* \\ Italy & 0.609 & 0.353 \\ Spain & 0.570 & 0.343 \\ China & 0.550 & 0.420* \\ \hline \end{tabular} \end{table} Table 1: Gini coefficients of wealth and income inequality for ten countries, all major world economies, based on data from the year 2000. Data for Gini coefficients of wealth are taken from Davies _et al._ (2009), while data for Gini coefficients of income are taken from the FRED database hosted by the Federal Reserve Bank of St. Louis. Gini coefficients of income for India, the Netherlands, and Australia are from 2004. Gini coefficient of income for China is from 2002. income strata were taken into account, the data seemed to be much better fit by a right-skewed lognormal distribution: \[p(x)=\frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln(x)-\mu)^{2}}{2\sigma^{2}}\right) \tag{6}\] This fact was first noticed by Robert Gibrat (1931). It is now well established that, in fact, both Pareto and Gibrat were correct: a lognormal-like distribution tends to characterize the bulk of incomes, while the Pareto distribution tends to characterize the highest 2-3% of incomes (Montroll and Shlesinger, 1982). Since these discoveries, mainstream economic theory has, broadly speaking, shield away from further attempts to impose a universal form to these distributions or to explain the processes responsible for their emergence. There are both normative and methodological reasons for this gap in the economics literature. The normative aversion, as voiced by Piketty, manifests as skepticism that a universal law governing the right tails of income distributions exists at all (Piketty, 2014). 
The methodological aversion, on the other hand, stems from the fact that most macroeconomic models make use of single, representative agents, which are ill-suited for describing heterogeneity within a population. Meanwhile, more sophisticated tools capable of addressing such questions, such as the Heterogeneous Agent New Keynesian (HANK) class of models, are still relatively new to the scene (Achdou _et al._, 2022). Nonetheless, this aversion has meant that there has remained comparatively little in the way of literature concerning one of the most crucial questions in economics today. This gap drew the attention of physicists interested in applying methods developed for the study of the natural sciences to questions in the social sciences in the late 1990s. The aim of these "econophysicists" was to capture the characteristic features of empirical wealth and income distributions, as made known by extensive statistical analyses. There is now substantial evidence that the bulk of the income distribution in all capitalist countries follows an exponential distribution (Tao _et al._, 2019). The right tail of the income distribution follows the aforementioned Pareto law and the left tail follows Gibrat's law. The exponential bulk and the log-normal left tail are sometimes unified in the form of the closely related Gamma distribution: \[p(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x} \tag{7}\] where \(\alpha\) is called the "shape parameter" and \(\beta\) the "rate parameter." However, there remains insufficient data to conclude whether the Gamma or log-normal distribution provides the better empirical fit (Chakrabarti _et al._, 2013). Wealth distributions are unfortunately much less well understood due to a dearth of publicly available data. Rough estimates of wealth distributions in pre-capitalist societies, such as in the New Kingdom of Egypt and medieval Hungary, provide some evidence that such societies exhibited power-law distributions of wealth, but these results are far from conclusive (Abul-Magd, 2002; Hegyi _et al._, 2007). Dragulescu and Yakovenko (2001b) used inheritance tax data to study the wealth distribution in the modern United Kingdom, which was found to have a similar structure to the UK's income distribution. Further supporting this conclusion, Sinha (2006), among others, found evidence that the very wealthiest stratum of society, as measured by published "rich lists," follows a power law distribution as well. These features appear to emerge even in artificial economies, with Fuchs _et al._ (2014) having observed an exponential bulk and power-law tail even in the wealth distribution across players of a massively multiplayer online game with inbuilt systems of production and trade. Thus, early exchange models in the econophysics literature sought to generate distributions exhibiting both the exponential bulk and power-law tail observed in data by means of symmetric binary interactions. The first paper in this lineage was Ispolatov _et al._ (1998), and shortly thereafter two papers which would ultimately become the cornerstones of the random asset exchange modeling literature--Dragulescu and Yakovenko (2000) and Bouchaud and Mezard (2000)--emerged. As it turned out, however, the econophysicists were not the first to approach the question of inequality in this way. 
The sociologist John Angle had actually published a series of papers containing a model extremely similar to Ispolatov _et al._'s more than a decade earlier, though the literature had no knowledge of this fact until it was pointed out by Lux (2005) (Angle, 1986, 1992, 1993). Likewise, it was noticed by Patriarca _et al._ (2005) that Dragulescu and Yakovenko's model was anticipated by a series of papers by Eleonora Bennati, which had been published in ill-known Italian economics journals in the 1980s and which had not been translated into English (Bennati, 1988, 1993). Nonetheless, in the twenty-five years since Ispolatov _et al._'s initial paper, a sizeable literature on this subject has emerged, with countless variations of the aforementioned models having been proposed and investigated. The literature has also become much more diverse in that time: though this subject was initially solely the domain of a subset of physicists interested in exploring economic questions, they have since been joined by researchers with backgrounds in mathematics, economics, systems science, and more. This article provides, for the first time, a comprehensive and thoroughgoing review of this literature, which, following the terminology of Sinha (2003), will be referred to as the "random asset exchange" literature. While many excellent partial reviews do already exist (see Chatterjee and Chakrabarti (2007), Yakovenko and Rosser (2009), Patriarca et al. (2010), and Patriarca and Chakraborti (2013), just to name a few), all either have become dated or have focused on only a delimited part of the literature. This review is the first, to our knowledge, that not only discusses all significant econophysical models of income inequality, but fully enumerates all the most common variations upon the literature's canonical models as well. ## II The taxonomy of random asset exchange models Most random asset exchange models tend to fall into one of two classes. The first of these is conventionally called the "kinetic wealth exchange" (KWE) class of model, which was popularized by Dragulescu and Yakovenko (2000). Named such because of their similarity to thermodynamic models from the kinetic theory of gases, KWE models are typically--though not always--characterized by the following properties: 1. Pairwise exchange between agents is the primary system state transition function; 2. Total money present in the system is conserved; and 3. Total money present between all pairs of agents engaged in exchange is conserved. These features are analogous to the role of particle collisions, conservation of energy, and conservation of momentum in the kinetic theory of gases, respectively. The second prominent class of model, inspired by models of directed polymers rather than ideal gases, is the Bouchaud-Mezard (BM) type model, first introduced by Bouchaud and Mezard (2000). In contrast to KWE-style models, BM-style models tend to be characterized instead by fixed wealth flow rates between all "adjacent" pairs of agents, as defined by an implicit adjacency network, being the main mechanism of system evolution. Furthermore, each agent's wealth is subject to endogenous stochastic variation, leading to systemic non-conservation of wealth. While other formulations exist, models belonging to one of these two classes represent the great majority of the literature. In this section, we review the most significant variations of both classes--as well as a handful of minor but nonetheless significant alternative model classes. 
### Kinetic wealth exchange

While their work was anticipated by Angle (1986), Bennati (1988, 1993), and Ispolatov _et al._ (1998), it is Dragulescu and Yakovenko (2000) who are credited with first formalizing and thoroughly studying the KWE model. In their initial formulation, a system of \(N\gg 0\) agents with \(M\gg N\) units of wealth between them is posited. Agents then engage in random pairwise exchanges, with a winner and loser being randomly selected in each pair and a transfer of wealth occurring, following some exchange rule: \[\begin{bmatrix}w_{i}\\ w_{j}\end{bmatrix}\rightarrow\begin{bmatrix}w_{i}+\Delta w\\ w_{j}-\Delta w\end{bmatrix} \tag{8}\] with \(\Delta w>0\) if agent \(i\) is the winner of the exchange, and \(\Delta w<0\) if instead \(j\) is victorious. Since KWE models almost always feature exclusively linear, pairwise exchanges, it is often convenient to represent the model's exchange rule as a \(2\times 2\) matrix \(M\), such that: \[\begin{bmatrix}w_{i}(t+1)\\ w_{j}(t+1)\end{bmatrix}=M\begin{bmatrix}w_{i}(t)\\ w_{j}(t)\end{bmatrix} \tag{9}\] Dragulescu and Yakovenko showed that, so long as \(\Delta w\) is chosen such that the exchange process is time-reversal symmetric, the distribution of money among agents converges to the entropy-maximizing exponential distribution: \[p(w)=\frac{1}{T}\exp\left(-\frac{w}{T}\right) \tag{10}\] where \(T=\langle w\rangle\) represents the average wealth held by agents--analogous to temperature in the equivalent thermodynamic system (Dragulescu and Yakovenko, 2000). This result proves to be extremely robust, not varying with one's choice of time-reversal symmetric exchange rule or underlying adjacency network (Lanchier, 2017). The differences between Dragulescu and Yakovenko's model and those of Angle, Ispolatov _et al._, and Bennati are subtle. In both Angle's initial model (the "one-parameter inequality process," or OPIP) and Ispolatov _et al._'s "multiplicative-random" exchange model, \(\Delta w=\varepsilon w_{loser}\), such that exchanges are of the form: \[\begin{bmatrix}w_{i}\\ w_{j}\end{bmatrix}\rightarrow\begin{bmatrix}w_{i}+\varepsilon w_{j}\\ (1-\varepsilon)w_{j}\end{bmatrix} \tag{11}\] if agent \(i\) wins the exchange. The sole difference between these two formulations is that Angle (1986) draws \(\varepsilon\) from a uniform distribution before each exchange, whereas Ispolatov _et al._ (1998) define \(\varepsilon\) as a fixed, global parameter. In contrast to the exchange rules investigated by Dragulescu and Yakovenko (2000), both of these models break time symmetry and produce identical distributions which are very well-approximated by, but not exactly given by, Gamma distributions (Angle, 1993). In both Ispolatov _et al._'s additive-random exchange model and Bennati's model, on the other hand, agents exchange constant, quantized amounts of wealth, equivalent under rescaling to \(\Delta w=1\). In Ispolatov _et al._ (1998), agents with \(0\) wealth are removed from the system entirely, causing all the wealth in the system to eventually be accumulated by a single agent (a phenomenon termed "condensation"). In Bennati (1988), however, agents with \(0\) wealth are permitted to win, but not to lose, exchanges, identical to the provision in the constant exchange rule discussed by Dragulescu and Yakovenko. For that reason, the KWE model with time-reversal-symmetric exchange rule is sometimes referred to as the Bennati-Dragulescu-Yakovenko (BDY) model of wealth exchange (Yakovenko and Rosser, 2009).
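As a concrete illustration of these dynamics, the sketch below simulates a KWE process under the "random reshuffle" rule, in which the total money of each interacting pair is randomly re-split between the two agents; this is one of the exchange rules studied by Dragulescu and Yakovenko (2000) that converges to the exponential equilibrium of Eq. (10). The code is an illustrative reconstruction, not taken from any of the reviewed papers, and all parameter values are arbitrary.

```python
# Minimal Monte Carlo sketch of a BDY-type kinetic wealth exchange (illustrative only).
# Exchange rule: the pair's total money is randomly re-split, conserving pairwise wealth.
import numpy as np

rng = np.random.default_rng(0)
N, mean_wealth, exchanges = 1000, 100.0, 200_000

w = np.full(N, mean_wealth)              # every agent starts at the average wealth T
for _ in range(exchanges):
    i, j = rng.choice(N, size=2, replace=False)
    eps = rng.random()                   # uniform on [0, 1]
    total = w[i] + w[j]
    w[i], w[j] = eps * total, (1.0 - eps) * total

# For the exponential equilibrium p(w) = (1/T) exp(-w/T) of Eq. (10),
# the fraction of agents below the mean should approach 1 - 1/e, roughly 0.63.
T = w.mean()
print(f"T = {T:.1f}, fraction of agents below T: {(w < T).mean():.2f}")
```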
An extension of Dragulescu and Yakovenko's initial model proposed contemporaneously with its initial publication was investigated by Chakraborti and Chakrabarti (2000), which introduced a "saving propensity" parameter \(\lambda\). Called the CC model (or, more rarely, the "saved wealth" model), its system dynamics are characterized by the fact that, for \(\lambda\in[0,1)\), every agent engages in multiplicative exchange with only a fraction \(1-\lambda\) of their total wealth. The exchange rule in such models can thus be defined by: \[M=\begin{bmatrix}\lambda+\varepsilon(1-\lambda)&\varepsilon(1-\lambda)\\ (1-\varepsilon)(1-\lambda)&\lambda+(1-\varepsilon)(1-\lambda)\end{bmatrix} \tag{12}\] where \(\varepsilon\) is drawn from a uniform distribution on \([0,1]\) at every exchange. Curiously, this slight modification dramatically changes the equilibrium distribution of money among agents within the system as the mode of the distribution (the "most likely agent wealth") becomes non-zero, approaching \(T=\frac{M}{N}\) (an egalitarian distribution) as \(\lambda\) approaches 1. Gupta (2006) observes that this departure from the entropy-maximizing distribution is a consequence of the fact that the introduction of the saving propensity parameter \(\lambda\) results in the system transition matrix becoming non-singular Gupta (2006). Patriarca _et al._ (2004a, b) demonstrate that the resultant distribution is extremely well fit by a scaled Gamma distribution: \[p(w)=\frac{1}{\Gamma(n)}\left(n\cdot\frac{w}{T}\right)^{n-1}\exp\left(-n\cdot \frac{w}{T}\right) \tag{13}\] where \(n=1+3\lambda/(1-\lambda)\). The fit is not exact, however, as the distributions differ in their fourth moments Lallouache _et al._ (2010). The CC model is extremely influential in the random asset exchange literature, and it itself has two major variations which must be mentioned. The first, introduced by Chatterjee _et al._ (2003), defines the saving propensity parameter to be heterogenously distributed throughout the population; instead of having identical saving propensities, each agent \(i\) has their own individual saving propensity \(\lambda_{i}\in[0,1)\) drawn from the uniform distribution during model initialization. The exchange rule of this model, called the CCM model, is thus: \[M=\begin{bmatrix}\lambda_{i}+\varepsilon(1-\lambda_{i})&\varepsilon(1-\lambda_ {j})\\ (1-\varepsilon)(1-\lambda_{i})&\lambda_{j}+(1-\varepsilon)(1-\lambda_{j}) \end{bmatrix} \tag{14}\] The steady-state distribution then exhibits a Gamma-like bulk, as in the CC model, as well as a right tail well fit by a power law with Pareto parameter \(\alpha=1\); this power law is robust for any distribution of saving propensity of the form \(\rho(\lambda)\approx|\lambda_{0}-\lambda|^{\alpha}\), or for uniform distributions within a restricted range \(\lambda_{i}\in[a,b]\subset[0,1)\) Chatterjee _et al._ (2004). One well-known (and, arguably, unrealistic) aspect of the CCM model is that average agent wealth is highly correlated with their saving parameter, such that the agents who save nearly all of their money in every transaction invariably become the wealthiest. This remains the case even if a significant bias in favor of poorer agents is introduced, because thrifty agents in the CCM model always stand to gain much more than they lose from every transaction Nener and Laguna (2021). 
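To make the exchange rule of Eq. (14) and the wealth-savings correlation concrete, the following sketch simulates the CCM model with quenched, uniformly distributed saving propensities and then compares the average wealth of the most and least thrifty deciles. It is an illustrative reconstruction with arbitrary parameter values, not the original authors' code.

```python
# Minimal sketch of the CCM exchange step, Eq. (14), with quenched saving propensities.
import numpy as np

rng = np.random.default_rng(1)
N, exchanges = 1000, 500_000
w = np.full(N, 100.0)
lam = rng.random(N)                      # individual saving propensities lambda_i in [0, 1)

for _ in range(exchanges):
    i, j = rng.choice(N, size=2, replace=False)
    eps = rng.random()
    pot = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]   # wealth actually at stake
    w[i] = lam[i] * w[i] + eps * pot
    w[j] = lam[j] * w[j] + (1 - eps) * pot            # pairwise conservation holds

# Average wealth rises sharply with lambda_i: the thriftiest agents end up richest.
order = np.argsort(lam)
bottom, top = order[: N // 10], order[-(N // 10):]
print("mean wealth, least thrifty decile:", round(w[bottom].mean(), 1))
print("mean wealth, most thrifty decile: ", round(w[top].mean(), 1))
```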
This aspect of the model also explains the surprising appearance of the Pareto tail, which is actually somewhat illusory: the right tail of the equilibrium distribution of the CCM model is constituted by the overlapping exponential tails of the exponential distributions corresponding to the subpopulations with the highest saving parameters Patriarca _et al._ (2005). Another significant drawback of the CCM model is that, while the right tails of the steady state distributions change from approximately Pareto with index 1 to exponential as the distribution of \(\lambda_{i}\) narrows, the empirical value of \(\alpha\approx 1.5\) is never reached Repetowicz _et al._ (2005). However, there are a number of ways to modify the CCM model and recover such a regime: Repetowicz _et al._ (2006), for instance, note that introducing modified wealth parameters with memory--\(\hat{w}_{i}(t)=w_{i}(t)+\gamma w_{i}(q)\), with \(\gamma\in(0,1)\) and \(q<t\)--before each transaction and applying the CC exchange rule thereto does permit Pareto tails with indices \(\alpha>1\) to be obtained. Likewise, Bisi (2017) demonstrates that replacing the saving propensity parameter with a bounded, global function of an agent's wealth \(\gamma(w_{i})\) also permits superunitary Pareto indices. The second variation, introduced by Cordier _et al._ (2005) and often called the CPT model, investigates the CC model with an additional stochastic growth term: \[M=\begin{bmatrix}(1-\lambda)+\eta_{i}&\lambda\\ \lambda&(1-\lambda)+\eta_{j}\end{bmatrix} \tag{15}\] Where \(\eta_{i}\) and \(\eta_{j}\) are independent and identically distributed variables with mean 0 and variance \(\sigma^{2}\). As in the BDY, CC, and CCM models, debts are not permitted, so a transaction only takes place so long as neither agent is reduced to a negative level of wealth. Because \(\eta_{i}\) and \(\eta_{j}\) are uncorrelated, total wealth is now only preserved in the mean. The CPT model has an inverse-Gamma equilibrium, with shape parameter \(\alpha=1+\frac{2\lambda}{\sigma^{2}}\) and scale parameter \(\beta=\alpha-1\): \[p(w)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\cdot w^{-1-\alpha}\cdot\exp\left(- \frac{\beta}{w}\right) \tag{16}\] In this case, the shape parameter \(\alpha\) may be interpreted as the "Pareto index" of the approximately power-law right tail. The CPT model, like the CCM model, is quite flexible and has been studied in a variety of other contexts. During and Toscani (2008) employ a CPT model with quenched saving propensities to study international transactions, representing countries as subpopulations with different saving propensities. Bisi and Spiga (2010) consider a variation on the CPT wherein the amount of wealth an agent receives from his trading partner is also subject to stochastic fluctuations. More recently, Zhou _et al._ (2021) investigate the effect of introducing a non-Maxwellian (i.e. wealth-varying) collision kernel in the CPT model. #### ii.2.1 Theft, fraud, and yard sales Following the terminology of Hayes (2002), binary exchange models in which the transfer amount is proportional to the wealth of the loser are commonly referred to as "theft and fraud" (TF) models, while those in which the transfer amount is proportional to the wealth of the poorer agent are referred to as "yard sale" (YS) models. 
That is, the YS model posits an exchanged quantity \(\Delta w\) of the form: \[\Delta w\propto\min\{w_{i},w_{j}\} \tag{17}\] The advantage of the YS model is that, from a strategic perspective, agents are not disincentivized from engaging in trade, as the expected value of an exchange is always 0. This is in contrast to the TF model, which is so-named precisely because the expected value of an exchange is always negative for the richer agent. If agents were allowed to choose whether or not to engage in a given exchange, a TF economy would immediately freeze as soon as a wealth differential appeared. The principal drawback of the YS model, however, is that it is now well-known that the unmodified YS model always exhibits condensation, though a non-degenerate equilibrium can be recovered if the probability of winning a given exchange is biased in favor of the less wealthy agent, if a mechanism for redistributing wealth from richer agents to poorer ones is introduced, or if extremal dynamics are coupled to the system (Bagatella-Flores _et al._, 2015; Boghosian _et al._, 2015; Cardoso _et al._, 2021; Sinha, 2003). Moukarzel _et al._ (2007) demonstrated that, in the case of the YS model where the proportion of the poorer agent's wealth at stake in each transaction is a fixed constant \(f\), a sufficient bias of the winning probability \(p\) towards the poorer agent was alone enough to avoid condensation.

Figure 1: Stationary distributions produced by the BDY (left), CC (middle), and CCM (right) models, with best-fit gamma curves shown. All simulations were performed with the parameterization \(N=5000\) and \(\langle w\rangle=100\) over \(10^{5}\) iterations.

In particular, the critical probability \(p^{*}\) above which the system does not condense was found to be: \[p^{*}=\frac{\log\left(\frac{1}{1-f}\right)}{\log\left(\frac{1+f}{1-f}\right)} \tag{18}\] Based on this result, Bustos-Guajardo and Moukarzel (2012) studied an extension of the YS model on an adjacency network, such that exchanges may only take place between adjacent agents. They found that the value of the critical probability remains the same regardless of the choice of network. In fact, most system dynamics in the stable phase of the system are independent of the choice of network. However, certain dynamical aspects of the system (such as the time required for the system to fully condense) do differ from the fully-connected case in the unstable (i.e. condensing) region. This is not entirely surprising, seeing as the number of agents to whom the wealth will condense is directly determined by the underlying network; instead of one agent accumulating all the money, the distribution condenses to a set of "locally rich agents," sometimes termed the "oligarchy." Redistribution in the YS model was examined by Boghosian (2014a), who introduced a mechanism by which, at each time step, \(\chi\) percent of each agent's wealth is confiscated and subsequently redistributed uniformly among the population. Introducing this mechanism not only prevented condensation, but also produced a gamma-like steady state distribution with a Pareto-like tail. The dynamics of this mechanism were studied in more detail by Boghosian _et al._ (2017) and Devitt-Lee _et al._ (2018), in which it was combined with a bias in exchanges in favor of the wealthy, called the "wealth-attained advantage."
In this variation, termed the "extended yard sale" (EYS) model, the wealthier agent wins a given exchange with probability \(p=rT(w_{i}-w_{j})\), where \(T\) is the average wealth of the system and \(w_{i}\) is the wealth of the richer agent. The wealth-attained advantage formally acts as a net tax on the non-oligarchy, while the redistribution acts as a net tax on the oligarchy; once the poor-to-rich flux of the redistributive mechanism was eclipsed by the rich-to-poor flux of the wealthy agents' advantage, the system moves from a subcritical to a supercritical state and the inequality of the resulting wealth distribution, as measured by the Gini coefficient, begins increasing rapidly. This acts as a second-order phase transition within the system. One additional variation of the EYS model, the "affine wealth" (AW) model, was introduced by Li _et al._ (2019). The AW model permits negative wealth by defining a debt limit \(\Delta\), adding \(\Delta\) to the wealths of both agents before each exchange, and subtracting \(\Delta\) once the exchange is complete. The AW model provides a remarkably good fit to the U.S. wealth distribution, as reported by the U.S. Survey of Consumer Finances. It is also worth mentioning the variation on the YS model first formulated by Iglesias _et al._ (2004). This model, sometimes referred to as the IGAV model, sees agents with wealths \(w_{i}\) and \(w_{j}\) and saving parameters \(\lambda_{i}\) and \(\lambda_{j}\) exchange quantities \(\Delta w_{ij}=\min\left\{(1-\lambda_{i})w_{i},(1-\lambda_{j})w_{j}\right\}\), where the bias in favor of the poorer agent is defined as per Scafetta _et al._ (2002): \[p=\frac{1}{2}+f\cdot\frac{w_{1}-w_{2}}{w_{1}+w_{2}} \tag{19}\] The asymmetry flux index \(f\in[0,1/2]\) essentially defines the degree of systemic social protection offered to the poor. A number of similar models which modify Iglesias _et al._'s exchange rule were studied by Caon _et al._ (2007). Recently, Nener and Laguna (2021a) showed that, in the IGAV model, the richest agents are not necessarily the thrifiest. Instead, the saving propensity \(\lambda_{i}^{*}\) that maximizes equilibrium average wealth lies in the interval \((0,1)\), increasing with \(f\). Heinsalu and Patriarca (2014) introduce a variation of the BDY model meant to more explicitly model the dynamics of barter economies. While Dragulescu and Yakovenko examined, among others, the TF exchange rule: \[M=\begin{bmatrix}\varepsilon&\varepsilon\\ 1-\varepsilon&1-\varepsilon\end{bmatrix} \tag{20}\] where \(\varepsilon\) is a uniform random variable with mean \(0.5\), Heinsalu and Patriarca consider the rule: \[M=\begin{bmatrix}1-\varepsilon_{i}&\varepsilon_{j}\\ \varepsilon_{i}&1-\varepsilon_{j}\end{bmatrix} \tag{21}\] where \(\varepsilon_{i}\) and \(\varepsilon_{j}\) are i.i.d. uniform random variables with mean \(0.5\). This modification, called the "immediate exchange" (IE) model, changes the system from a pure TF one, where wealth flows unidirectionally and, on average, from richer to poorer agents, to one in which wealth flows bidirectionally. In the general IE model, transactions have some probability \(\mu\) of occurring unidirectionally in the manner of Angle (1986). In the pure IE model with \(\mu=0\) has a steady state distribution \(p(w)\) which is an exact Gamma distribution with a shape parameter of \(2\), meaning \(\lim_{x\to 0}p(x)=0\), \(p(x)\) has a non-zero mode, and the right tail is well-approximated by a Pareto distribution with \(\alpha=1\)(Katriel, 2014). 
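Before turning to the Bouchaud-Mezard class, the sketch below illustrates the poor-biased yard-sale exchange discussed above: a fixed fraction \(f\) of the poorer agent's wealth is at stake in each transaction and the poorer agent wins with probability \(p\), with the critical probability of Eq. (18) computed directly. This is an illustrative sketch with arbitrary parameter values, not code from the cited papers.

```python
# Minimal sketch of a yard-sale exchange with a poor-biased win probability,
# in the spirit of Moukarzel et al. (2007); Eq. (18) gives the critical probability p*.
import numpy as np

rng = np.random.default_rng(2)
N, f, exchanges = 1000, 0.3, 500_000
p_crit = np.log(1 / (1 - f)) / np.log((1 + f) / (1 - f))   # Eq. (18)
p = p_crit + 0.05                        # bias the poorer agent slightly above critical
w = np.full(N, 100.0)

for _ in range(exchanges):
    i, j = rng.choice(N, size=2, replace=False)
    stake = f * min(w[i], w[j])          # "yard sale": a fraction f of the poorer agent's wealth
    poor, rich = (i, j) if w[i] <= w[j] else (j, i)
    if rng.random() < p:                 # poorer agent wins with probability p
        w[poor] += stake; w[rich] -= stake
    else:
        w[poor] -= stake; w[rich] += stake

gini = np.abs(np.subtract.outer(w, w)).mean() / (2 * w.mean())
print(f"p* = {p_crit:.3f}; Gini coefficient at p = p* + 0.05: {gini:.2f}")
```

For \(p\) below \(p^{*}\), the same simulation instead drifts toward condensation, with the Gini coefficient approaching 1 as a single agent accumulates nearly all of the wealth.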
Figure 2: Stationary distribution produced by the IGAV model with quenched savings propensities and a bias function per Eq. (12). Simulation was performed with the parameterization \(N=5000\) and \(\langle w\rangle=100\) over \(10^{6}\) iterations.

### Bouchaud-Mezard models

Unlike the KWE model, the BM model does not make use of agents pairing up and engaging in binary transactions with a winner and a loser; rather, the rates of exchange between agents are defined by a fixed adjacency matrix \(\mathbf{J}\), each entry of which \(J_{ij}\) represents the "cash flow rate" from agent \(j\) to agent \(i\). In Bouchaud and Mezard's original paper, each agent in the population of size \(N\) has two sources of income--stochastic returns from investments and sales of a product to other agents--and one source of expenses--purchases of products from other agents. Thus, the income of agent \(i\) is given by: \[\frac{\mathrm{d}w_{i}}{\mathrm{d}t}=\eta_{i}(t)w_{i}(t)+\sum_{j\neq i}J_{ij}w_{j}(t)-\sum_{j\neq i}J_{ji}w_{i}(t) \tag{22}\] where \(\eta_{i}\) is a Gaussian random variable with variance \(2\sigma^{2}\) (Bouchaud and Mezard, 2000). Notably, the BM model has no restriction on total wealth being conserved. The simplest case, in which all rates of exchange are equalized such that \(J_{ij}=\frac{J}{N}\), lends itself well to a mean-field approximation, which produces an inverse-Gamma equilibrium distribution with shape parameter \(\alpha=1+\frac{J}{\sigma^{2}}\) and scale parameter \(\beta=\alpha-1\)--strikingly similar in form to the equilibrium distribution of the CPT model. However, the mean-field approximation is time-limited; for any finite number of agents, the BM model on a complete graph will eventually exhibit wealth condensation and the probability that a given agent will have wealth less than any finite fraction of total wealth grows to 1 (Medo, 2009). Further investigation into this class of model demonstrated that the resulting distribution is also sensitive to the nature of the underlying network defining the non-zero entries of the transaction matrix \(\mathbf{J}\). Souma _et al._ (2001) demonstrated through simulation that defining \(\mathbf{J}\) on a small-world network--where each agent neighbors only 0.1% of the population--leads to distributions which are best fit by a combination of log-normal and power-law distributions. Garlaschelli and Loffredo (2004, 2008) likewise showed that it is possible to retrieve a realistic mixed log-normal-power law distribution by simulating the model on a simple heterogeneous network with a small number of "hub" agents, and that the BM model on a homogeneous network is able to reproduce either a log-normal or a power law distribution--but not both--depending on the average number of adjacencies per agent. Ma _et al._ (2013) simulated the BM model on a partially connected network and found the generalized inverse Gamma (GIGa) distribution provided the best fit to the steady state. Though the original BM model is a continuous-time model, a number of authors have studied similar models in discrete time as well. Di Matteo _et al._ (2003), for example, considers the variation: \[\begin{split}\Delta w_{i}(t)&=A_{i}(t)+B_{i}(t)w_{i}(t)\\ &+\sum_{j\neq i}Q_{j\to i}(t)w_{j}(t)-\sum_{j\neq i}Q_{i\to j}(t)w_{i}(t)\end{split} \tag{23}\] For the purposes of their analysis, additive noise \(A_{i}(t)\) is assumed to be Gaussian with mean zero, and multiplicative noise \(B_{i}(t)=0\).
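Before continuing with this restricted discrete-time system, it may be helpful to show how the continuous-time dynamics of Eq. (22) can be simulated directly. The sketch below applies a simple Euler-Maruyama discretization to the mean-field case \(J_{ij}=J/N\); the step size, horizon, and parameter values are arbitrary illustrative choices rather than values taken from the literature.

```python
# Illustrative Euler-Maruyama sketch of the mean-field BM dynamics of Eq. (22):
# dw_i = eta_i(t) w_i dt + J (wbar - w_i) dt, with Var(eta_i) = 2*sigma^2.
import numpy as np

rng = np.random.default_rng(3)
N, J, sigma2, dt, t_end = 1000, 0.05, 0.04, 0.01, 200.0
w = np.full(N, 1.0)

for _ in range(int(t_end / dt)):
    wbar = w.mean()
    shock = rng.normal(0.0, np.sqrt(2 * sigma2 * dt), size=N)  # multiplicative noise increment
    w += J * (wbar - w) * dt + w * shock
    w = np.maximum(w, 1e-12)             # numerical guard against negative wealth

# Mean-field prediction: inverse-Gamma steady state with shape alpha = 1 + J/sigma^2
# (here alpha = 2.25), i.e. a power-law-like tail in normalized wealth x = w/<w>.
x = w / w.mean()
print("fraction of agents with x > 5:", (x > 5).mean())
```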
In Di Matteo _et al._'s restricted formulation, each agent \(i\) is additionally assumed to split a fixed share \(q_{0}\) of their wealth evenly with all of their neighbors \(j\in\mathcal{I}_{i}\), where \(|\mathcal{I}_{i}|=z_{i}\). Thus, \(Q_{i\to j}(t)=\frac{q_{0}}{z_{i}}\) if \(j\in\mathcal{I}_{i}\) and 0 otherwise. Their restricted system dynamics thus become: \[w_{i}(t+1)-w_{i}(t)=A_{i}(t)-q_{0}w_{i}(t)+\sum_{j\in\mathcal{I}_{i}}\frac{q_{0}}{z_{j}}w_{j}(t) \tag{24}\] Simulating this time series produces results dependent on the choice of adjacency network, the most notable being that scale-free networks produce power-law distributions. In this case, the equilibrium wealth level of a given node is nearly perfectly correlated with the number of neighbors it has in the specified network, as shown in Figure 4c. Scafetta _et al._ (2004) propose another discrete-time variation of Bouchaud and Mezard's model, motivated by a dissatisfaction with the formulation of wealth transfer via exchange as it appears in their original paper, which sees a constant wealth flux from rich to poor. This is not necessarily realistic, as wealth should only be transferred in exchange if an agent buys an asset for a price different than its value; such a model cannot explain wealth inequality under the assumption of perfect pricing. Thus, Scafetta _et al._ propose a model in which the wealth of agent \(i\) is given by: \[w_{i}(t+1)=w_{i}(t)+r_{i}\xi(t)w_{i}(t)+\sum_{j\neq i}w_{i\to j}(t) \tag{25}\] where \(r_{i}=V\Pi_{i}>0\) is the "individual investment index," given as the product of the global investment index and the proportion of wealth actually invested by agent \(i\), \(\xi(t)\) is a Gaussian random variable representing return on investment, and \(w_{i\to j}(t)\) represents the flow of wealth from agent \(j\) to agent \(i\) in period \(t\), which is assumed to be Gaussian with mean \(\mu=fh\frac{w_{i}-w_{j}}{w_{i}+w_{j}}\min\{w_{i},w_{j}\}\) and standard deviation \(\sigma=h\min\{w_{i},w_{j}\}\). Varying \(f\), \(h\), and \(r\) then allowed the authors to tune the strengths of different system dynamics. If \(h>0\) and \(f=r=0\) (the symmetric trade-only model), wealth condensation occurs. If \(f,h>0\) and \(r=0\) (the asymmetric trade-only model), a Gamma-like distribution is observed. Finally, if \(f,h,r>0\) (the asymmetric trade-investment model), a Gamma-like distribution with a power-law tail is observed. Various other modifications to the BM model have been studied as well. Huang (2004) extends the BM model to negative wealth levels, and Torregrossa and Toscani (2017) prove analytically that a unique steady state with support on the entire real number line exists. Johnston _et al._ (2005) imposes the additional restriction of conservation of wealth, finding that wealth condensation still occurs for high values of \(\mu\). Finally, Ichinomiya (2012a, b) relaxes Bouchaud and Mezard's mean-field assumption to adiabatic and independent assumptions, drawn from quantum mechanics. The power law-like tail is reproduced and condensation is seen to take place at a higher \(J\) than the mean-field case would indicate, though the Pareto index obtained is smaller than those empirically observed (Ichinomiya, 2013).

Figure 3: Stationary distribution produced by the discrete-time BM model on a Barabási–Albert scale-free network, as described by Di Matteo _et al._ (2003). Simulation was performed with the parameterization \(N=5000\), \(\langle w\rangle=100\), \(q_{0}=0.1\), and \(E\left[A_{i}(t)^{2}\right]=1\) over \(10^{5}\) iterations.
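A minimal reconstruction of the kind of simulation reported in Figure 3 is sketched below: the discrete-time dynamics of Eq. (24) are iterated on a Barabási–Albert scale-free graph (generated here with networkx), and the correlation between each agent's degree and its long-run wealth is reported. This is an illustrative reconstruction under stated assumptions, with smaller parameter values than the figure, and is not the original authors' code.

```python
# Illustrative sketch of the discrete-time networked dynamics of Eq. (24)
# on a Barabasi-Albert scale-free graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
N, q0, steps = 1000, 0.1, 5_000
G = nx.barabasi_albert_graph(N, 2, seed=4)
A = nx.to_numpy_array(G)                 # symmetric adjacency matrix
z = A.sum(axis=1)                        # number of neighbours z_i of each agent
w = np.full(N, 100.0)

for _ in range(steps):
    a = rng.normal(0.0, 1.0, size=N)     # additive noise A_i(t) with E[A_i(t)^2] = 1
    inflow = A @ (q0 * w / z)            # each neighbour j sends q0 * w_j / z_j to agent i
    w = w + a - q0 * w + inflow

print("correlation between degree and wealth:", round(float(np.corrcoef(z, w)[0, 1]), 3))
```

Consistent with the observation above, on scale-free graphs an agent's long-run wealth in this sketch tracks its degree closely.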
### Other formulations

While the two principal model classes of the random asset exchange literature are the KWE model and the BM model, a variety of less influential formulations also exist. A simple model which has nonetheless been significant in the economics literature is the multiplicative stochastic process (MSP), which was studied by the economists Robert Gibrat and D.G. Champernowne (Garlaschelli and Loffredo, 2008). It would however be a stretch to say that the MSP class of models is truly a type of random asset _exchange_ model, as it is primarily characterized by the lack of exchange or any other sort of interaction between agents. Such models essentially represent agents' levels of wealth in terms of independent random walks, but nonetheless are able to capture some essential characteristics of observed distributions. The simplest model in this vein is the pure MSP \(w(t+1)=\lambda(t)w(t)\), where \(\lambda(t)\) is a Gaussian random variable. It is straightforward to show that the distribution of wealth among an ensemble of agents whose wealth evolution is governed by a pure MSP will follow a log-normal distribution, though variations which include additive noise (as in the Kesten process from the biological sciences) and "minimum wage"-style boundary constraints can also reproduce power law tails (Souma, 2002). For examples of such models, see Biham _et al._ (1998), Huang and Solomon (2001), Souma and Nirei (2005), and Basu and Mohanty (2008). Another class of model which was studied in the early days of the RAE literature in particular is the Generalized Lotka-Volterra (GLV) model, which also has its origins in the biological sciences. The original Lotka-Volterra process, studied by Biham _et al._ (1998), is given by: \[w_{i}(t+1)=\lambda(t)w_{i}(t)+a\bar{w}(t)-bw_{i}(t)\bar{w}(t) \tag{26}\] where \(\lambda\) is a time-dependent random variable and \(\bar{w}(t)\) is the average wealth in the system. The inclusion of the \(\bar{w}(t)\) terms represents a form of indirect interaction between agents: much like in the mean-field approximation of the BM model, instead of including specific interaction terms \(b_{ij}w_{i}(t)w_{j}(t)\), all interactions are assumed to be symmetrical: \(b_{ij}=b/N\). The generalized form of this model was introduced by Solomon and Richmond (2001, 2002), and follows: \[\begin{split}\Delta w_{i}(t)&=\left(\varepsilon_{i}(t)\sigma_{i}+c_{i}(w_{1},w_{2},\ldots,w_{N},t)\right)w_{i}(t)\\ &+a_{i}\sum_{j}b_{j}w_{j}(t)\end{split} \tag{27}\] where \(\varepsilon_{i}\) is a stochastic variable such that \(E[\varepsilon_{i}]=0\) and \(E[\varepsilon_{i}^{2}]=1\), \(c_{i}\) represents endogenous and exogenous dynamics in returns, and \(a_{i}\) and \(b_{i}\) represent arbitrary redistributions of wealth among agents. The restrictions on \(\varepsilon_{i}\) can be made without loss of generality thanks to the \(c_{i}\) term. Under certain assumptions, this model also produced mixed exponential-Pareto distributions. However, this model ultimately faded in popularity due to the difficulty it has accurately representing the left tail of income distributions, as well as the lack of economic justification for some of its terms (Repetowicz _et al._, 2005).
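Both the MSP and the GLV dynamics above are straightforward to simulate. As the simplest illustration, the sketch below evolves an ensemble of non-interacting agents under a pure MSP. For convenience, the multiplicative shocks are drawn from a log-normal distribution (an assumption made here so that wealth stays strictly positive), in which case log-wealth performs an ordinary random walk and the cross-sectional distribution is log-normal at every time step.

```python
# Illustrative sketch of a pure multiplicative stochastic process, w(t+1) = lambda(t) * w(t),
# for an ensemble of non-interacting agents.
import numpy as np

rng = np.random.default_rng(5)
N, steps, shock_sigma = 10_000, 500, 0.05
w = np.full(N, 100.0)

for _ in range(steps):
    lam = rng.lognormal(mean=0.0, sigma=shock_sigma, size=N)   # positive multiplicative shocks
    w *= lam

# Log-wealth is Gaussian, with a standard deviation growing like sqrt(t).
print("std of log-wealth:", round(float(np.log(w).std()), 2),
      " (theory:", round(shock_sigma * np.sqrt(steps), 2), ")")
```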
## III Notable trends in the literature

While the papers discussed above serve as the foundation for the random asset exchange literature, the flexibility of the underlying modeling framework has allowed a vast number of featural variations upon these canonical models to proliferate. In this section, we provide an overview of the most significant of these trends and summarize a number of key papers in each category.

### Non-conservation of wealth

One of the primary criticisms leveled against the original KWE models is that the assumption of total conservation of wealth, made by analogy with the conservation of energy in ideal gas models, is highly unrealistic. In real economies, wealth is constantly being created and destroyed--not just by means of production and consumption, but even by the constant issuing and repaying of loans. Thus, a number of modifications to the conservative KWE model have attempted to represent this fact. Most such models can be classified into one of two types: models which, like the CPT model, conserve wealth in the mean, and models which tie the global wealth level to a fixed influx rate. Bisi _et al._ (2009) and Bassetti and Toscani (2010) both consider models of the first type. The latter considers the non-conservative exchange rule: \[M=\begin{bmatrix}\varepsilon_{i}&\varepsilon_{i}\\ \varepsilon_{j}&\varepsilon_{j}\end{bmatrix} \tag{28}\] where \(\varepsilon_{i}\) and \(\varepsilon_{j}\) are i.i.d. and \(E[\varepsilon_{i}+\varepsilon_{j}]=1\). Bassetti _et al._ (2014) considers a class of similar lotteries and demonstrates they tend to produce inverse-Gamma steady states. Slanina (2004) was the first to consider a non-conservative model of the second type, in which a constant inflow of wealth from outside the system of interacting agents is permitted. As in other formulations, the model sees pairs of agents \(i\) and \(j\) chosen at random to engage in a transfer of wealth, defined by the dynamics: \[M=\begin{bmatrix}1-\lambda+\epsilon&\lambda\\ \lambda&1-\lambda+\epsilon\end{bmatrix} \tag{29}\] where \(\lambda\in[0,1]\) represents the fraction of wealth exchanged between two interacting agents and \(\epsilon>0\) represents the rate at which exogenous wealth flows into the system. Slanina's model produces a Gamma-like equilibrium distribution with a Pareto tail with an index \(\alpha\sim 1+\frac{2\lambda}{\epsilon^{2}}\). Coelho _et al._ (2008) extended this model, redefining \(\lambda(w_{i})\) as a piecewise function taking on two different values depending on which side of a pre-specified wealth threshold \(m\tilde{w}(t)\) an agent's wealth \(w_{i}(t)\) fell: \[M=\begin{bmatrix}1-\lambda(w_{i}(t))+\epsilon&\lambda(w_{j}(t))\\ \lambda(w_{i}(t))&1-\lambda(w_{j}(t))+\epsilon\end{bmatrix} \tag{30}\] This modification reproduced a double power-law regime, a phenomenon observed when comparing the right tail of income from tax data to estimates for the capital gains of a country's very wealthiest individuals. A number of non-conservative models have dynamics which attempt to more directly model the process of money creation through borrowing. For example, Chen _et al._ (2013) consider a random exchange model in which agents who would otherwise reach zero wealth are permitted to borrow money from a central bank, which in turn can issue loans with no interest up to a certain global debt limit. This process of money creation (issue of loans) and annihilation (paying back of loans) leads to a system in which the money supply grows logarithmically.
Schmitt _et al._ (2014) introduces a similar system of money creation and analyzes the non-local effect that issuing credit has on the rest of the system; though the recipient of the loan clearly benefits, the effects of the increase in the money supply quickly propagate and all agents suffer the resultant inflationary effects. Recently, Liu _et al._ (2021) and Klein _et al._ (2021) introduced a generalization of the unaltered YS model which permits growth in the money supply over time, which they call the "Growth, Exchange, and Distribution" (GED) model. Each time-step, total wealth \(W(t)\) is increased by a factor of \(1+\mu\), and the wealth influx \(\mu W(t)\) is distributed among agents such that agent \(i\) receives \(w_{i}^{\mathrm{t}}/(\sum_{j}w_{j}^{\mathrm{t}})\). For subunitary values of \(\lambda\), poorer agents disproportionately benefit from the growth in the money supply and a quasi-stationary distribution exists; otherwise, the system exhibits wealth condensation as in the unaltered YS model. The system dynamics at play here are quite similar to the model of Vallejos _et al._ (2018), in which growth surplus is apportioned according to a more indirect "wealth power" parameter. ### Networks and preferential attachment It is a notable and well-established result that the significance of the introduction of adjacency networks into random asset exchange models depends heavily on the specific model formulation. While, for instance, the specific nature of the network has a decisive effect on the steady state wealth distribution in BM-style models, the opposite tends to be true for KWE-style models. Networks of exchange are an important aspect of real economic systems, and as such there has been a significant effort to study the effect they have on various types of RAE models. Interestingly, models characterized by unidirectional exchange exhibit greater sensitivity to network structure than bidirectional exchange models. Chatterjee (2009), for example, introduces a toy model in which agents exchange fixed fractions of their wealth on a directed network characterized by a disorder parameter \(p\). Higher values of \(p\) produced networks where more agents had similar incoming and outgoing connections; the distributions obtained therefrom were more Gamma-like, as opposed to the Boltzmann-like distributions obtained from lower values of \(p\). Martinez-Martinez and Lopez-Ruiz (2013) study a unidirectional model with random exchange fractions, meant to represent payments on a non-complete graph. This "directed random exchange" (DRE) model thus has the exchange rule: \[M=\begin{bmatrix}\varepsilon&0\\ 1-\varepsilon&1\end{bmatrix} \tag{31}\] As with Chatterjee (2009), the choice of adjacency network affects the equilibrium distribution of the DRE model. For the fully-connected case, the equilibrium distribution \(p(x)\) is again exactly Gamma, with shape parameter \(\frac{1}{2}\)(Katriel, 2015). Notably, this implies that \(p\) possesses a singularity at \(0\), explaining why Martinez-Martinez and Lopez-Ruiz observed a condensation-like phenomenon even on fully-connected networks. Sanchez _et al._ (2007) investigate a model in which agents populate a one-dimensional lattice. Each agent's wealth grows in a deterministic fashion as a product of a linear "natural growth" term and an exponential "control" term, which retards growth as the difference between an agent's wealth and the average wealth of its neighbors increases. 
While this system produces a pure power law distribution, it was later demonstrated that, for different values of the system's endogenous parameters or a rearrangement of agents' neighborhoods, either a Boltzmann-Gibbs or a Pareto distribution could be obtained (Gonzalez-Estevez _et al._, 2009, 2008). A handful of models have included the additional possibility of agents exchanging connections or positions on a lattice as well as units of wealth. Gusman _et al._ (2005) define an IGAV model on a random network in which the winner of an exchange is rewarded with additional connections on the network, producing a power law regime. Aydiner _et al._ (2019) examine a CCM-style bidirectional exchange model on a one-dimensional lattice, with the twist that some fraction of agents exchange lattice position each iteration of the simulation. Fernandes and Tempere (2020) likewise consider a variation of the CC model in which agents on a two-dimensional lattice randomly switch positions on the lattice such that the average wealth difference between neighboring nodes is reduced. This ultimately results in perfect wealth segregation and uniformly higher inequality. The dynamics of wealth exchange coupled with extremal dynamics was thoroughly studied in the "conservative exchange market" (CEM) model (Iglesias _et al._, 2003; Pianegonda and Iglesias, 2004; Pianegonda _et al._, 2003). Said model populates a lattice with agents who possess wealth levels in the range \([0,1]\), and each time step sees the poorest agent's wealth randomly re-randomized at the expense or benefit of its two closest neighbors. The selec tion rule in this model induces self-organizing behavior such that almost all agents end up with wealth levels above a "poverty line," which proved to be higher in the restricted lattice case than in the fully-connected case. This model has been extended by a number of follow-up papers over the years. Iglesias _et al._ (2010) used this model to compare two different redistribution schemes, and Ghosh _et al._ (2011) considered its mean-field approximation. Chakraborty _et al._ (2012) and Braunstein _et al._ (2013) studied the same dynamics on various other networks. A concept closely related to adjacency is that of preferential attachment, which defines the likelihood of two agents interacting as a function of endogenous variables. The variable chosen is usually wealth, representing the fact that, in real economies, both the rich and the poor tend to interact more often with people of similar socioeconomic status to themselves. Because of its non-discrete nature, preferential attachment can allow for somewhat more dynamic interactions than adjacency networks can, permitting agents who become wealthy to access the networks of the rich and not totally disallowing chance rich-poor interactions. In fact, adjacency networks can be viewed as a special case of preferential attachment. Laguna _et al._ (2005) study the effect of this phenomenon on the IGAV model by imposing the restriction that a given agent is only permitted to interact with another agent if the difference between their two wealth levels is less than a given threshold value \(u\). Large values of \(u\) more or less replicate the IGAV model and small values freeze the system entirely, as one would expect. Intermediate values, however, produced a self-organizing separation within the distribution of wealth, with a gap separating rich agents from poor ones spontaneously arising. 
This bimodal distribution persisted even for high values of the poor-bias parameter \(f\). Chakraborty and Manna (2010) introduce a model with simple preferential attachment behavior, such that richer agents engage in exchange more frequently. That is, the probability that agent \(i\) is selected as the first trader is proportional \(w_{i}^{\alpha}\), and the probability that \(j\) is selected as the second trader is proportional \(w_{j}^{\beta}\). The limit as either exponent goes to infinity yields purely extremal mechanics, while \(\alpha=\beta=0\) is the CCM model. Goswami and Sen (2014) defines a more complicated attachment function, wherein the probability of a given pair of agents \((i,j)\) interacting depends on \(i\)'s total wealth, the difference in wealth between \(i\) and \(j\), and the number of past interactions between agents \(i\) and \(j\). The strength of each factor is modulated by a corresponding exponent, and, when applied to the classical BDY model, the choice of modulation has a significant effect on the Pareto index of the steady-state distribution. ### Goods and rationality Despite their reductive nature, many of the simplifying assumptions discussed above are not uncommon to find in the economics literature as well. Many neoclassical models of simple "exchange economies" study the distribution of endowed and conserved assets absent wealth creation, and comparatively few consider the effect of exchange networks or other kinds of barriers to freely-associating exchange between agents (an important source of imperfect competition and thus market inefficiency). Rather, the main distinction between RAE-style models and those found in the mainstream economics literature lies in the fact that the models preferred in the former typically study ensembles of agents exchanging money directly in a stochastic fashion, while the latter typically study ensembles of rational (i.e. utility-maximizing) agents exchanging goods, with money exchange being an implicit consequence of goods exchange. A number of attempts have been made to partially bridge this difference between these two literatures by introducing goods and rationality into RAE models. Chakraborti _et al._ (2001) study a model with both a fixed commodity supply \(Q\) and money supply \(M\) distributed among a population of agents. These agents first seek to ensure their level of goods \(q_{i}\) exceeds some subsistence level \(q_{0}\), and afterwards seek to maximize their money holdings \(m_{i}\); thus, agents with \(q_{i}>q_{0}\) find agents with \(q_{j}<q_{0}\) to sell their excess goods to at a fixed price of 1. Not surprisingly, the steady-state distribution of this system is found to be sensitive to the global quantities \(Q\) and \(N\); if the commodity supply is limited (\(Q/N<q_{0}\)), some fraction of agents will necessarily fall below subsistence level, while if the money supply is limited, agents lack the ability to redistribute the commodity supply in an efficient manner. A similar model with stochastic price fluctuations is considered by Chatterjee and Chakrabarti (2006), in which wealth is taken to be the sum of money and commodity holdings. In both models, the money distribution exhibits a Pareto tail with index 1 while the commodity distribution is exponential so long as neither \(Q\) nor \(M\) is restricted. 
Silver _et al._ (2002) considers a model with a more sophisticated utility function, in which agents possess stochastically time-varying Cobb-Douglas utility functions of the form: \[u_{i,t}(a_{i,t},w_{i,t})=(a_{i,t})^{f_{i,t}}(w_{i,t}-a_{i,t})^{1-f_{i,t}} \tag{32}\] where \(a_{i,t}\) represents agent \(i\)'s holdings of the money commodity at time \(t\), \(w_{i,t}-a_{i,t}\) represents agent \(i\)'s holdings of non-money commodities at time \(t\), and \(f_{i,t}\) is a random variable independently and identically distributed across both indices. In an approach highly reminiscent of the derivation of the equilibrium of the canonical Arrow-Debreu model in economics, each agent chooses to re-allocate their wealth between money and non-money commodities in such a way that maximizes \(u_{i,t}\) subject to supply constraints. Simulations of this system produce a wealth distribution well-fit by a Gamma distribution with a shape parameter of 1 and a rate parameter of \(1/\alpha\), where \(\alpha\) represents the global supply of the money commodity.

Figure 4: Stationary distribution produced by the CEM model, with the best log-linear fit of the right tail of the distribution shown. Simulation was performed with \(N=5000\) over \(10^{7}\) iterations.

However, not every exchange model with goods is paired with rational agents. Ausloos and Pekalski (2007) consider a model with money, goods, and completely stochastic agent behavior. Each time step, one agent decides via coin toss whether to purchase a nonzero number of goods. If so, he randomly selects a fraction of his money to spend and taps another agent to sell to him. If this second agent has enough goods to sell and has a desire to sell (again decided via coin toss), the exchange takes place. This model produces a distribution of wealth which interpolates between two power laws as time progresses, while the distribution of goods follows a static power-law. In general, those agents that are rich in terms of money are poor in terms of goods, and vice versa. Another interesting line of research has concerned itself with defining traditional macroeconomic ensembles which produce equivalent results to RAE models. For example, Chakrabarti and Chakrabarti (2009) demonstrates that the dynamics of the CCM model can be replicated in a neoclassical framework with rational agents producing differentiated goods and trading in order to maximize time-varying Cobb-Douglas utility functions for goods and money. In this case, the stochastic nature of exchange in the CCM model is represented by random variations in agents' utility functions in the analog model. Tao (2015) derives the entropy-maximizing exponential distribution as the statistical equilibrium of an Arrow-Debreu market system populated by agents with such time-varying utility functions. More recently, Quevedo and Quimbay (2020) have extended this formulation to permit agents to save a portion \(s\) of goods possessed, naturally leading to an equivalent non-conservative RAE model.

### Strategic behavior

Another approach to modeling "smarter" agent behavior attempts to integrate game-theoretic or machine learning dynamics into RAE models. The exact nature of this integration can take various forms, including bilateral agreement, strategic heterogeneity, and behavioral evolution, just to name a few.
In Heinsalu and Patriarca's original paper introducing the immediate exchange model, the authors consider the effect of introducing an acceptance criterion--a probabilistic factor defining the odds that a given agent will agree to engage in a given transaction as a function of the difference in wealth between the agent and his partner, with both agents needing to agree to a transaction for it to take place (Heinsalu and Patriarca, 2014). In both the BDY and IE models, the choice of any symmetrical acceptance criterion (whether linear, exponential, etc.) only impacts the time of relaxation to equilibrium, but not the shape of the equilibrium itself. Asymmetrical decision criteria cause the equilibrium distribution to lose its universal form and to depend instead on the rule chosen. For the CC model, however, introducing even a symmetric criterion causes the equilibrium to lose its Gamma-like shape. Sun _et al._ (2008) investigate a KWE model in which each agent can follow one of four strategies, chosen at random before the simulation begins. The exchange rule between two agents depends on their strategy and the strategy of their partner: two of the strategies are passive and tend towards equalizing the wealth of the two agents, while the other two are aggressive and tend towards classical theft-and-fraud exchange. As with Heinsalu and Patriarca (2014), the introduction of heterogeneous trading strategies leads to a steady-state distribution which depends heavily on the model parameters, specifically those defining the rate of success of the aggressive strategies against the passive strategies. Heterogeneity in strategies is often studied alongside dynamics for updating agents' strategies, representing a rudimentary form of learning. Hu _et al._ (2006, 2007, 2008), for example, consider a model in which agents begin as either cooperators or defectors and play a series of prisoner's dilemma or snowdrift-style games with their neighbors. After each game, an agent identifies the strategy of its richest neighbor and adopts it with some probability defined by his most recent payout, leading on average to more successful strategies propagating throughout the network. In a similar vein, da Silva and de Figueiredo (2014) investigate an adaptive variation of the CCM model in which each agent \(i\) has a fixed probability \(\gamma_{i}\) of being able to update their savings parameter according to a pre-defined rule each time step. Nener and Laguna (2021) study a variation on a poor-biased YS model with non-zero saving propensity, in which a fraction of agents are subjected to a genetic evolutionary algorithm after each Monte Carlo simulation step to update their exchange parameters, which approach the optimal values determined by Nener and Laguna (2021). A BM-style model coupled with game-theoretic dynamics is extensively analytically studied by Degond _et al._ (2014).

### Class division

As mentioned above, one of the key features of empirical income distributions which RAE models attempt to capture is the bifurcation of the overall distribution into distinct exponential and Pareto ("thermal" and "superthermal," following the terminology of Silva and Yakovenko (2004)) components. While some models attempt to replicate this two-regime behavior while preserving homogeneity of system dynamics (e.g.
by distributing a behavioral parameter throughout the population or imposing a specific network structure), a number of authors have instead sought an explanation by means of a bifurcation in system dynamics for agents with large wealth. It is very natural to identify the exponential bulk of the income distribution with labor income and the power law tail with capital gains, seeing as Pareto's original observations came from data for property incomes. In this way, asymmetric system dynamics represent the fact that, in real economies, the rich do indeed have access to economic mechanisms not available to the majority of the population (Montroll and Badger, 1974). Simple models which have this class division "baked in" are easily able to replicate two-regime structures of income. Yarlagadda and Das (2005) and Das and Yarlagadda (2005), for instance, introduce a model in which trading dynamics differ for agents with wealths on either side of a fixed wealth threshold. Poorer agents engage in bilateral exchange exactly as in the model of Chakraborti and Chakrabarti (2000), while richer agents engage in exchange--with a different saving parameter--against the system-totality, representing forms of leverage only available to the wealthy. Quevedo and Quimbay (2020) also study a trading model in which a fixed fraction of the population acts as "producers," who employ the remainder of the population as "workers." Producers trade wealth and pay their associated workers a portion of the exchanged quantity, creating two differently-shaped Gamma distributions for producer and worker income which, when combined, create a clear two-regime distribution. Lim and Min (2020) consider the case in which the CCM model is partitioned into two classes by a wealth percentile threshold and a "solidarity effect" among agents below said threshold is introduced. If two agents belong to the same class, then exchange proceeds according to the familiar CCM system dynamics. But if the agents belong to different classes then the lower-class agent gathers "partners" equal to some fraction of the size of the class, and wins a fraction of the upper-class agent's wealth with a probability equal to the percent wealth his coalition possesses in the exchange. This solidarity factor turns out to be crucial for the generation of a realistic wealth distribution, as without it the middle income stratum collapses and one obtains a bimodal distribution, as with Laguna _et al._ (2005). Imposing a fixed boundary differentiating the upper class from the lower is not necessarily the best approach here, however, as analysis has shown that the "superthermal" component of the income distribution is highly volatile, fluctuating in size with the stochastic movements of financial markets (Silva and Yakovenko, 2004). A number of models consequently attempt to capture this out-of-equilibrium aspect of the distribution's right tail by setting class boundaries dynamically. Russo (2014) investigates a model without exchange in which a new wealth percentile threshold defining the size of the upper class is chosen from the uniform distribution at each time step. Agents above that threshold then see their wealth augmented by a multiplicative stochastic process, while agents below it have their wealth augmented by an additive stochastic process. A different approach is forwarded by Smerlak (2016), who constructs a Markov process defining transition probabilities between a finite number of stratified classes. 
Agents in higher classes derive proportionally greater amounts of income from a multiplicative process subject to shocks, and consequently exhibit much greater fluctuations in wealth compared to the majority of agents, who persist at low levels of wealth indefinitely. Finally, we wish to highlight here the unique and striking "social architecture" (SA) model of Wright (2005), which sees agents spontaneously self-organize into three distinct classes. Wright defines an ensemble with three types of agents--employers, employees, and the unemployed--and in each iteration, an agent \(i\) is randomly chosen to be "active." The activities agent \(i\) engages in depend on its status: if \(i\) is an employer, it pays as many of its employees as it can afford; if \(i\) is an employee, it receives a wage and spends it on consumption goods produced by an employer; and if agent \(i\) is unemployed, a random agent is chosen to hire \(i\), assuming their level of wealth is sufficient to pay \(i\)'s wages. Although the initial conditions of the simulation posited complete equality of agents (all agents began with equal wealth and no employer or employees), the population quickly restructured itself into a three-class regime with a distribution of wealth characterized by an exponential bulk and a Pareto tail. The exact nature of this distribution becomes clear when disaggregated by class: the wealth of "employee" agents was completely governed by an exponential distribution, while that of "employer" agents was well-fit by a power law. This result is consonant with the argument forwarded by Montroll and Shlesinger (1982) and in contradistinction to explanations of the two-regime distribution which rely on endogenous differences between agents. Unfortunately, Wright's model has seen few direct extensions, though a similar self-organizing model was studied in Lavicka _et al._ (2010).

Figure 5: Stationary distributions of wealth, income, and class size produced by the SA model of Wright (2005). Simulation was performed with the parameterization \(N=5000\), \(\langle w\rangle=100\), \([w_{a},w_{b}]=[10,90]\), over \(10^{2}\) "year rule" iterations (\(6\cdot 10^{6}\) time-steps).

### Taxation and redistribution

A good deal of attention has also been dedicated to the potential usefulness of RAE models in the analysis of the efficiency of redistributive mechanisms. Early studies such as Guala (2008) and Toscani (2009) considered the effect of a simple "income tax," in which a fixed fraction is withdrawn from each exchange by an external body and uniformly redistributed, on mean-conservative KWE models, which was found to not alter the exponential nature of the steady state distribution. Diniz and Mendes (2012) extend this result to multiple different taxation rules on a CC model, representing both income taxes (taxes on transaction amounts) and wealth taxes (taxes on wealth level). Bouleau and Chorro (2017) contrast the effect of income and wealth taxes on YS-like models, demonstrating analytically that income taxes alone are not sufficient to prevent condensation. Similarly, Burda _et al._ (2019) investigate the dynamics of a BM-style model with the parameterization \(J<0\), which would normally cause the system to condense, paired with a redistributive mechanism. A sufficiently strong mechanism succeeded in preventing condensation and recovering a heavy-tailed wealth distribution, with a multimodal critical phase also being observed.
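The qualitative effect of such redistributive mechanisms is easy to demonstrate. The sketch below couples an unbiased yard-sale exchange, which would otherwise condense, with a flat wealth tax \(\chi\) collected and redistributed uniformly after every sweep, broadly in the spirit of the flat tax-and-redistribute mechanism of Boghosian (2014a) discussed earlier. The tax rate, stake fraction, and other parameter values are illustrative assumptions only, not taken from the cited papers.

```python
# Illustrative sketch: unbiased yard-sale exchange plus a flat wealth tax chi
# that is collected and uniformly redistributed after each sweep of N exchanges.
import numpy as np

rng = np.random.default_rng(6)
N, f, chi, sweeps = 1000, 0.1, 0.01, 500
w = np.full(N, 100.0)

for _ in range(sweeps):
    for _ in range(N):                   # one sweep = N random pairwise exchanges
        i, j = rng.choice(N, size=2, replace=False)
        stake = f * min(w[i], w[j])
        if rng.random() < 0.5:
            w[i] += stake; w[j] -= stake
        else:
            w[i] -= stake; w[j] += stake
    w = (1 - chi) * w + chi * w.mean()   # flat tax chi on wealth, redistributed uniformly

gini = np.abs(np.subtract.outer(w, w)).mean() / (2 * w.mean())
print(f"Gini coefficient with chi = {chi}: {gini:.2f}  (chi = 0 drifts toward condensation)")
```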
A number of non-standard redistribution rules in a YS model were examined by Lima _et al._ (2022). Recently, however, interest within the literature has grown around the problem of identifying optimal tax rates in such models, often borrowing techniques from control theory to do so. Bouchaud (2015) extends the BM model to permit a wealth tax capable of reallocating wealth between the private and public sectors, with different growth rate parameters. By maximizing expected economic growth, an optimal tax rate in the interval \((0,1)\) is obtained for growth rate differences within an intermediate range. During _et al._ (2018) develop a finite-horizon model predictive control mechanism for the CPT model to derive the feasible tax regime which minimizes a cost function representing some metric of inequality, and consider various objective functions and redistribution schemes. Zhou and Lai (2022) investigate a novel model of individual wealth growth and formulate both an additive and a multiplicative control mechanism to modulate the excessive growth of the right tail of the wealth distribution of the ensemble. Lastly, Wang _et al._ (2022) pairs a CPT model with an evolutionary description of agents' decision-making competence--which feeds back into their saving propensities--and a model predictive control mechanism to reduce inequality.

### Miscellaneous

Though the above list enumerates the most widely-studied modifications to RAE models, it should by no means be considered exhaustive. The flexibility of the random asset exchange framework makes it easy to introduce new system dynamics and isolate the effects of a given modification. Just to name a few examples, Pareschi and Toscani (2014) investigate the effect of variable agent knowledge on the CPT model, obtaining the intriguing result that the most knowledgeable agents tend not to be the richest ones; Trigaux (2005) examines the effect of introducing altruistic behavior to a subpopulation and finds a very strong equalizing effect when combined with redistribution; Coelho _et al._ (2005) and Patricio and Araujo (2021) model the propagation of wealth on a generational network to study the stratifying effect of inheritance; and Dimarco _et al._ (2020) use a class-based framework to characterize the effect of pandemics on wealth inequality. The RAE literature has also given rise to a number of wholly new analytical techniques. Ballante _et al._ (2020) demonstrate that fitting the distribution of saving propensities to real-time economic data in a generalized CCM model via statistical sampling may be useful as a leading indicator of economic stressors which have the potential to increase inequality. Luquini _et al._ (2020) establish a formal equivalence between KWE models and population-based random search algorithms in computer science, and speculate that said formulation could ultimately be used as a benchmark model in cybernetics. Finally, dos Santos _et al._ (2022) propose a computational technique by which the crossover point between the exponential and Pareto regimes can be identified within data sets of real income distributions, aiding in the empirical study of economic inequality.

## IV Discussion

From the ambition and breadth of recent publications such as those mentioned above, it is clear that random asset exchange modeling is being increasingly recognized as a highly versatile tool which has the potential to find wide application even beyond its original use as a descriptive econophysical model.
It is also clear that, in seeking to explain the characteristic features of wealth and income distributions, such models have highlighted the existence of a number of more fundamental economic phenomena underlying those features, such as the inherently diffusive nature of exchange economies and the emergence of apparent power laws from overlapping exponential functions. Furthermore, it has also become apparent in the course of this investigation that the random asset exchange modeling literature has a number of bigger-picture implications. Namely, all of the models discussed indicate that a large proportion of observed economic inequality is the result of luck and the inherently diffusive (entropy-increasing) nature of exchange itself. While some authors have taken this to mean that the "natural," entropy-maximizing level of inequality is by definition fair, such a conclusion is far too strong and veers into the territory of naturalistic fallacies.

| _Model_ | _Theft & Fraud_ | _Yard Sale_ | _Bouchaud-Mezard_ | _Other_ |
|---|---|---|---|---|
| Canonical | Angle (1986); Bennati (1988); Ispolatov _et al._ (1998); Drăgulescu & Yakovenko (2000); Chakraborti & Chakrabarti (2000); Chatterjee _et al._ (2003) | Hayes (2002); Iglesias _et al._ (2004); Caon _et al._ (2007); Moukarzel _et al._ (2007) | Bouchaud & Mezard (2000) | Biham _et al._ (1998); Solomon & Richmond (2001, 2002) |
| Non-cons. | Slanina (2004); Cordier _et al._ (2005); Coelho _et al._ (2008); Bisi _et al._ (2009); Bassetti & Toscani; Chen _et al._ (2013); Schmitt _et al._ (2014) | Liu _et al._ (2021); Klein _et al._ (2021) | | Heinsalu & Patriarca (2014) |
| Networks | Chatterjee (2009); Martinez-Martinez & Lopez-Ruiz (2013); Aydiner _et al._ (2019); Fernandes & Tempere (2020) | Gusman _et al._ (2005); Laguna _et al._ (2005); Guajardo & Moukarzel (2012) | Souma _et al._ (2001); Di Matteo _et al._ (2003); Scafetta _et al._ (2004); Garlaschelli & Loffredo (2004); Ma _et al._ (2013) | Pianegonda _et al._ (2003); Sánchez _et al._ (2007) |
| Goods | Chakraborti _et al._ (2001); Chatterjee & Chakrabarti (2006) | | | Ausloos & Pekalski (2007) |
| Rationality | Chakrabarti & Chakrabarti (2009); Tao (2015); Quevedo & Quimbay (2020) | | | Silver _et al._ (2002) |
| Strategies | Sun _et al._ (2008); da Silva & de Figueiredo (2014) | Neñer & Laguna (2021b) | Degond _et al._ (2014) | Hu _et al._ (2006) |
| Class div. | Yarlagadda & Das (2005); Lim & Min (2020) | | | Wright (2005); Lavička _et al._ (2010); Russo (2014); Smerlak (2016) |
| Redist. | Guala (2008); Toscani (2009); Diniz & Mendes (2012); Düring _et al._ (2018); Lima _et al._ (2022); Wang _et al._ (2022); Li _et al._ (2019) | Boghosian (2014a); Boghosian _et al._ (2017); Bouleau & Chorro (2017) | Bouchaud (2015); Burda _et al._ (2019) | Zhou & Lai (2022) |

Table 2: Notable papers in the random asset exchange literature, disaggregated by formulation type and prominent features.
Instead, the conclusion one ought to draw from this cardinal result of the random asset exchange literature depends on one's own (subjective) beliefs concerning the "ideal" level of inequality--however that is determined--as compared to the level currently prevailing. For proponents of relatively unrestrained capitalism, who have argued that inequality plays an important role in the economy by encouraging people to work harder in the hopes of achieving better economic outcomes, the implications of said result are quite positive: statistics seem to naturally guarantee such inequality without the help of market-distorting conditions such as the formation of monopolies or the institutionalization of economic thievery! On the other hand, for those policy makers who aim to reduce the degree of inequality in modern, developed economies, the corresponding implication may be somewhat more dismal. For them, the main implication of these models is that altering government policies to make market economies operate more "fairly" by, for example, introducing progressive taxation can only do so much. At the end of the day, large-scale regimes of wealth redistribution, such as wealth taxes, may be necessary in order to reduce inequality below the level that is endogenous to exchange-based systems. However, econophysical models are not without their own problems. Most are still incapable of replicating all of the characteristic features of wealth and income distributions. For wealth distributions, as has been discussed, these include non-negligible segments of the population with non-positive wealth and possibly a power law right tail; for income distributions, these include an exponential or log-normal bulk, and an at least apparent power law tail with exponent between -2 and -3. More importantly, while all of the models discussed above serve as excellent demonstrations of the role random chance plays in generating the inequalities observed in market economies, the literature has not yet been able to provide an adequate _explanation_ for the emergence of these distributional features which it posits to be universal. That is to say, it has not yet been able to identify and describe the concrete system dynamics, common to all market economies, which generate the characteristic features of inequality. Models which impose specific distributions on an endogenous parameter throughout the population (thriftiness, size of social network, etc.) clearly have the capability of producing nearly any desired distribution, but such results have far less explanatory power seeing as they merely defer the question. If one observes a given distribution of wealth because there exists an underlying distribution of a certain behavioral parameter, why is this parameter distributed the way it is throughout the population? One ultimately returns to Pareto's unsatisfying explanation for his own law--that economic inequality is purely the result of intrinsic differences between individuals--and finds oneself no closer to actually understanding the crux of the issue. More promise is shown by models which offer, as Reddy (2020) calls it, a "processual" account of inequality. Such models introduce the elements of production and class relationships into the mix as fundamental processes of economic systems.
This approach reflects concrete asymmetries in the economy, reduces the degree of reductionism present within the models, and permits the identification of different sections of wealth and income distributions with different social positions. Unfortunately, only a handful of studies in this direction have been performed, with Wright (2005) and Lavička _et al._ (2010) remaining the two most notable examples. Another significant problem pertains to the relationship between the distribution of wealth and the distribution of income, the nature of which the literature has not consistently grasped. Wealth and income are two linked but quite distinct quantities. Wealth can take a wide variety of forms--money, consumption goods, real estate, debts, and even information or skills can all be considered forms of wealth. Income, on the other hand, typically refers to the amount of "wealth," however it is enumerated, received by an individual in a given time period, prior to expenses. This quantity is of course added to one's existing wealth, but the straightforward relation of income as the time-derivative of wealth only holds under the simplifying assumption that wealth is not subject to any endogenous changes: that is, no articles of wealth are consumed, fluctuate in value, are traded for articles of differing value, etc. For the most part, random asset exchange models are concerned with the distribution of an undifferentiated, non-consumable, exchangeable asset--usually a stand-in for money--throughout an ensemble of agents. Thus, the distributions of said asset throughout the population are best interpreted as wealth distributions, and it is inapt to compare them to empirical distributions of income. In fact, Xu _et al._ (2010) note that reconstructing time-series of agents' income within canonical KWE models actually produces income distributions which are Gaussian, as opposed to exponential, directly contrary to the available data. Once again, a greater focus on developing models in a more processual vein, which explicitly link income to salaries and wages paid out by firms to employees, could be immensely useful in clearing up this confusion. Further progress towards an econophysical explanation for inequality is sorely needed. Per Horowitz _et al._ (2020) once again, 61% of American adults believe that there exists too much inequality in the United States today. Of that number, 81% believe that this problem will require either major policy interventions or a complete restructuring of the economy to address. There exists a clear political will, at least in the U.S., to reduce the degree of inequality that has been allowed to develop over the past few decades. But despite that fact, the same Pew survey demonstrated that there exists no consensus on what the major contributors to economic inequality in the U.S. even are. While a number of potential explanatory factors are commonly cited--such as industrial outsourcing, the country's tax structure, and intrinsic differences between individuals--no single one is viewed by a majority of the population as a decisive factor. Needless to say, determining what policies would be required to create a fairer society and economy necessitates a clearer understanding of the principal processes responsible for generating inequality in the first place, and much work remains to be done before such a satisfactory understanding is reached.
2303.00053
Degenerate Topological Edge States in Multimer Chains
We propose and experimentally realize a class of quasi-one-dimensional topological lattices whose unit cells are constructed by coupled multiple identical resonators, with uniform hopping and inversion symmetry. In the presence of path-induced effective zero hopping within the unit cells, the systems are characterized by complete multimerization with degenerate $-1$ energy edge states for open boundary condition. Su-Schrieffer-Heeger subspaces with fully dimerized limits corresponding to pairs of nontrivial flat bands are derived from the Hilbert spaces. In particular, topological bound states in the continuum (BICs) are inherently present in even multimer chains, manifested by embedding the topological bound states into a continuous band assured by bulk-boundary correspondence. Moreover, we experimentally demonstrate the degenerate topological edge states and topological BICs in inductor-capacitor circuits.
Jun Li, Yaping Yang, C. M. Hu
2023-02-28T19:50:02Z
http://arxiv.org/abs/2303.00053v2
# Degenerate Topological Edge States in Multimer Chains

###### Abstract

We propose and experimentally realize a class of quasi-one-dimensional topological lattices whose unit cells are constructed by coupling multiple identical resonators, with uniform hopping and inversion symmetry. In the presence of coupling-path-induced effective zero hopping within the unit cells, the systems are characterized by complete multimerization with degenerate \(-1\) energy edge states for open boundary conditions. Su-Schrieffer-Heeger subspaces with fully dimerized limits corresponding to pairs of nontrivial flat bands are derived from the Hilbert spaces. In particular, topological bound states in the continuum (BICs) are inherently present in even multimer chains, manifested by embedding the topological bound states into a continuous band assured by bulk-boundary correspondence. Moreover, we experimentally demonstrate the degenerate topological edge states and topological BICs in inductor-capacitor circuits.

Topological phases of matter transcend the paradigm of Ginzburg-Landau theory in condensed matter physics, involving no symmetry breaking but instead deriving from geometry, and have attracted extensive investigation in various fields over the past few decades [1; 2; 3; 4; 5; 6; 7; 8; 9]. Topological phases are defined by the global wavefunctions of the dispersion bands that pervade the entire system rather than local orbitals, so that they are particularly robust to local perturbations such as defects and impurities. In essence, a band structure is a sufficient condition for the existence of topological phases. Since the first discovery of topological phases in quantum electronic systems [1; 2], novel and exotic topological properties have been developed in diverse platforms with their own unique advantages such as optics [10; 11], acoustics [12; 13], mechanics [14] and electric circuits [15; 16; 17] in classical regimes and ultra-cold atoms [18; 19], trapped ions [20; 21] and Fock-state lattices [22; 23] in quantum regimes. One-dimensional (1D) topological phases bring some new insights because of their manipulability and experimental accessibility. The Su-Schrieffer-Heeger (SSH) model of polyacetylene [24; 25], the starting point for 1D topological models based on the tight-binding approximation, is a dimerized chain with two different alternating hopping amplitudes between nearest-neighboring lattice sites. Recently, in the context of SSH chains, a variety of extended configurations with new physics and phenomena have been proposed, especially in non-Hermitian topology [9; 26]. On the one hand, non-uniform inter-site interactions, like periodically modulated hopping, nonreciprocal hopping, environment-induced coupling and multisite coupling, have been introduced to produce a plethora of distinct topological phenomena including but not limited to the non-Hermitian skin effect [27; 28; 29; 30; 31], non-Hermitian real spectra [32], dissipative and Floquet topological phase transitions [33; 34; 35] and trimer topological phases [36; 37; 38; 39]. On the other hand, with respect to on-site potentials, the introduction of on-site gain and loss not only provides an entry point for combining non-Hermiticity with topological phases, widening the topological family [40; 41; 42], but can also drive topologically trivial systems and induce topological phase transitions solely by deliberate design [43; 44; 45; 46; 47].
In this letter, we present a quasi-one-dimensional (quasi-1D) tight-binding configuration without any staggered hopping or on-site potentials. We consider unit cells of multiple identical resonators with uniform coupling between every two sites and the same strength as the inter-cell coupling, i.e., only one kind of coupling strength and one kind of resonator in the whole chain. The system then forms complete multimers due to the zero effective intracell hopping induced by special coupling paths. Conceivably, considering the bulk-boundary correspondence (BBC), degenerate topological edge states with full localization exist in finite systems [48]. Interestingly, as the number of resonators in the unit cell increases, pairs of non-trivial flat bands appear at two fixed frequencies, corresponding to fully dimerized subspaces derived from the Hilbert spaces. Moreover, for even multimer chains, topological bound states in the continuum (BICs) [49] naturally form, as the bandgap of the nontrivial flat bands is exactly covered by a trivial band. We experimentally implement the idea by using AC circuits consisting of uniform capacitors and inductors. We start by considering a tight-binding system consisting of \(n\) (\(n\geq 3\)) identical resonators coupled to each other with the same hopping amplitude \(\kappa\), as shown in Fig. 1 (a). Here, we consider a Hermitian system in which the intrinsic and coupling losses of all the resonators are ignored and \(\kappa\) is sufficiently small compared to the resonator frequency \(\omega_{0}\). The system can be represented by the Hamiltonian \[H_{n}=\left(\begin{array}{cccc}0&\kappa&\kappa&\cdots\\ \kappa&0&\kappa&\ddots\\ \kappa&\kappa&0&\ddots\\ \vdots&\ddots&\ddots&\ddots\end{array}\right)_{n\times n}\quad, \tag{1}\] characterized by the property that the diagonal elements of the matrix are zero and all the others are \(\kappa\). Interestingly, there are always degenerate states with a fixed frequency independent of \(n\) in the system. Specifically, with reference to \(\omega_{0}\), one of its eigenvalues is \(\lambda_{n}=(n-1)\kappa\) with the normalized eigenvector \(\left|\psi_{n}\right\rangle=(1/\sqrt{n},1/\sqrt{n},\cdots,1/\sqrt{n})^{\prime}\), while the others are \(\lambda_{i}=-\kappa\) where \(i=1,2,\cdots,n-1\), with corresponding eigenvectors \(\left|\psi_{i}\right\rangle=(0,\cdots,1/\sqrt{2},\cdots,0,-1/\sqrt{2})^{\prime}\), whose \(i\)th element is \(1/\sqrt{2}\) and whose last element is \(-1/\sqrt{2}\). In terms of the splitting of eigenvalues, the local effective coupling between the degenerate modes can be seen as zero to some extent. With this supposition, as shown in Figs. 1 (b-d), we design a class of quasi-1D lattices with the above coupled multiple resonators as their unit cell. The unit cells are coupled to their nearest neighbors through \(\left[n/2\right]\) independent coupling channels with the same hopping amplitudes \(\kappa\). Considering the zero effective intracell hopping, we can expect that our chains are topologically nontrivial with complete multimerization.
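The degenerate spectrum quoted for Eq. (1) is straightforward to confirm numerically. The following short Python sketch is an editorial illustration rather than part of the original work; the value of \(\kappa\) is arbitrary. It diagonalizes \(H_{n}\) for a few values of \(n\) and prints the eigenvalues, which consist of a single level at \((n-1)\kappa\) and an \((n-1)\)-fold degenerate level at \(-\kappa\).

```python
import numpy as np

def unit_cell_hamiltonian(n, kappa=1.0):
    # Eq. (1): n identical resonators, uniform all-to-all hopping kappa,
    # energies measured relative to the resonator frequency omega_0.
    return kappa * (np.ones((n, n)) - np.eye(n))

for n in (3, 4, 5):
    evals = np.linalg.eigvalsh(unit_cell_hamiltonian(n))
    # H_n = kappa * (J - I); the all-ones matrix J has spectrum {n, 0, ..., 0},
    # so H_n has one eigenvalue (n - 1) * kappa and an (n-1)-fold level at -kappa.
    print(n, np.round(np.sort(evals), 6))
```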
In bulk momentum space, the Bloch Hamiltonian of the chain can be written as \[H_{n}(k)=\left(\begin{array}{cccc}0&\kappa&\cdots&\kappa+\kappa e^{-ika}\\ \kappa&0&\iddots&\kappa\\ \vdots&\iddots&\ddots&\vdots\\ \kappa+\kappa e^{ika}&\kappa&\cdots&0\end{array}\right)_{n\times n} \tag{2}\] where \(a\) is the lattice constant between the units and \(k\) is the Bloch wave number. The Bloch Hamiltonian shows that the Bloch term appears only in the anti-diagonal elements, while the diagonal elements remain zero. Obviously, the Hamiltonian displays inversion (\(\mathcal{I}\)) symmetry, i.e., \(\mathcal{I}H_{n}(k)\mathcal{I}^{-1}=H_{n}(-k)\). For clarity, the system is classified into two patterns in the following analysis, according to whether \(n\) is odd or even. We find the analytical solutions of its energy spectra expressed as \[\omega_{n,o}(k)=\left(\begin{array}{c}-2\\ \vdots\\ n/3-1-(A_{+}+i\sqrt{3}A_{-})/2\\ n/3-1-(A_{+}-i\sqrt{3}A_{-})/2\\ 0\\ \vdots\\ n/3-1+A_{+}\end{array}\right)\kappa \tag{3}\] when \(n\) is odd, where \(A_{\pm}=C/B\pm B\), \(B=\left[\sqrt{D^{2}-C^{3}}+D\right]^{1/3}\), \(C=(n-1)\cos ka/3+(n^{2}+3)/9\) and \(D=(n^{2}-n)\cos ka/6+(n/3)^{3}+(n-3)/6\). Surprisingly, there are \(n-3\) flat bands, equally divided between \(\omega_{n}=0\) and \(\omega_{n}=-2\kappa\). Only two bandgaps, with widths \(G_{1}=\sqrt{(n-2)^{2}+1}-\sqrt{(n-1)^{2}-1+1}\) and \(G_{2}=\sqrt{(n-2)^{2}+1}+n-2\), exist in the energy spectra. For the case where \(n\) is even, the eigenfrequency is given by \[\omega_{n,e}(k)=\left(\begin{array}{c}-2\\ \vdots\\ n/2-1-\sqrt{n^{2}/4+1+n\cos ka}\\ 0\\ \vdots\\ n/2-1+\sqrt{n^{2}/4+1+n\cos ka}\end{array}\right)\kappa, \tag{4}\] characterized by having \(n/2-1\) flat bands at \(\omega_{n}=0\) and \(\omega_{n}=-2\kappa\), respectively. The \((n/2)\)th band and the top (\(n\)th) band are symmetric with respect to \((n/2-1)\kappa\). It is noteworthy that as the parameter \(n\) varies, there consistently exists one bandgap with a width of \(G=n-2\), due to the lower non-flat band precisely overlapping the bandgap of the flat bands. In both cases, the band structure of the bulk Hamiltonian is not symmetric around zero, indicating that chiral symmetry is broken in our chain. In detail, except for the top band that exceeds zero, the other bands are always distributed between \(-2\kappa\) and zero.

Figure 1: Theoretical tight-binding hopping model. (a) Schematic of \(n\) (\(n=3,4,5\cdots\)) resonators coupled with each other with uniform hopping amplitude \(\kappa\). (b-d) Bulk model with \(n\) sites per unit cell, with uniform hoppings; unit cells framed in yellow dashed boxes.

Figures 2 (a-c) show the band structures in the first Brillouin zone for different \(n\). Here, in the presence of inversion symmetry, we introduce the Zak phase, defined as \(Z_{j}=-i\int_{-\pi/a}^{\pi/a}\langle\psi_{k,j}|\partial_{k}\psi_{k,j}\rangle\,dk\), to characterize the topology of our 1D multimer system, where \(j\) specifies the occupied band index with corresponding Bloch wave functions \(|\psi_{k,j}\rangle\) [50]. We can obtain nonzero quantized Zak phases of bands for various \(n\), indicating the topological nontriviality of our chains. Particularly, the top band possesses a Zak phase of zero while the flat bands always have Zak phases of \(-\pi\).
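The flat bands quoted above can be probed numerically. The sketch below is an editorial illustration, not code from the paper; it assumes, as implied by Eq. (2) and the statement above, that the inter-cell Bloch term \(\kappa e^{\mp ika}\) enters only the anti-diagonal elements of \(H_{n}(k)\), i.e., that site \(j\) couples to site \(n+1-j\) of the neighboring cell. It scans \(k\) across the Brillouin zone and reports which bands are dispersionless.

```python
import numpy as np

def bloch_hamiltonian(n, k, a=1.0, kappa=1.0):
    # Uniform hopping kappa between every pair of sites in the unit cell (Eq. (2)).
    h = kappa * (np.ones((n, n), dtype=complex) - np.eye(n))
    # Assumption: the inter-cell channels add kappa*exp(-i*k*a) on the anti-diagonal
    # elements (j, n+1-j), as stated for Eq. (2) in the text.
    for j in range(n):
        l = n - 1 - j                      # 0-based anti-diagonal partner of site j
        if j < l:
            h[j, l] += kappa * np.exp(-1j * k * a)
            h[l, j] += kappa * np.exp(+1j * k * a)
    return h

for n in (4, 5):
    ks = np.linspace(-np.pi, np.pi, 201)
    bands = np.array([np.sort(np.linalg.eigvalsh(bloch_hamiltonian(n, k))) for k in ks])
    flat = [float(np.round(bands[0, b], 6)) for b in range(n) if np.ptp(bands[:, b]) < 1e-8]
    print(f"n = {n}: flat bands at {flat} (in units of kappa)")  # expect 0 and -2
```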
We can block-diagonalize \(H_{n}(k)\) by the unitary transformation \(\mathcal{U}_{n}^{-1}H_{n}(k)\mathcal{U}_{n}=H_{n}^{BD}(k)\) to show the separation between flat and nonflat bands clearly. For the simplest case with \(n=4\), the unitary matrix and the block-diagonal Hamiltonian are \[\mathcal{U}_{4}=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}1&0&1&0\\ 1&0&-1&0\\ 0&1&0&-1\\ 0&1&0&1\end{array}\right), \tag{5}\] \[H_{4}^{BD}(k)=\left(\begin{array}{cccc}1&2+e^{-ika}&0&0\\ 2+e^{ika}&1&0&0\\ 0&0&-1&e^{-ika}\\ 0&0&e^{ika}&-1\end{array}\right)\kappa, \tag{6}\] respectively. As expected, both \(2\times 2\) blocks have the same form as the SSH Hamiltonian, where the upper one is topologically trivial, corresponding to the blue bands, and the lower block is topologically nontrivial with complete dimerization, corresponding to the flat bands. More generally, the bulk Hamiltonians for larger \(n\) can be divided into a topologically trivial dimerized subspace and \(n/2-1\) nontrivial fully dimerized subspaces by the unitary transformation when \(n\) is even. Moreover, the lower trivial band for even chains always spans the bandgap between the flat bands by meeting the upper and lower flat bands at the boundary and center of the first Brillouin zone, respectively. Similarly, for odd \(n\), we can get the block-diagonal Hamiltonian composed of a \(3\times 3\) block and \((n-3)/2\) identical nontrivial \(2\times 2\) blocks (see Supplemental Material for details [51]), where the \(3\times 3\) subspace owns two nontrivial lower bands in contact with the upper and lower flat bands independently. Considering the BBC, under the open boundary condition, we show the normalized eigenvalue spectra \(\omega_{i}/\kappa\) of finite multimer chains with 60 resonators in Figs. 2 (d-f) and the corresponding wave functions \(|\psi_{i}\rangle\) in Figs. 2 (g-i) for \(n=3\), 4 and 5. Mathematically, the wavefunction solution is not unique for the finite-size chains with \(n>3\), owing to the fact that the rank of the Hamiltonian matrix is smaller than the matrix dimension. The wavefunction distributions depicted in Figs. 2 (h) and 2 (i) exemplify potential numerical solutions for \(n=4\) and \(n=5\), respectively. Correspondingly, there are pairs of degenerate edge states, marked by the colored dots at the exact detuning \(-\kappa\), with the same number as the nontrivial bands. The topological edge modes of the odd multimer chains sit in the lower bandgap \(G_{1}\). Remarkably, when \(n\) is even, the topological edge states are clearly embedded in the continuous spectrum of the lower nontrivial band and are the so-called topological BICs [49]. Because of the complete multimerization, the wave functions of the exact \(-1\) energy edge modes are absolutely localized at the two boundary cells without any distribution in the bulk, while the bulk wave functions are diffused throughout the whole chains.

Figure 2: Topological edge states of multimer chains. (a-c) The normalized band structures with quantized Zak phases and (d-f) sorted eigenvalues of topological finite chains (composed of 60 resonators) with (g-i) corresponding representative wave functions for (a, d, g) \(n=3\), (b, e, h) \(n=4\) and (c, f, i) \(n=5\), respectively. Zak phases are \(-\pi\) for the bands labeled in red and 0 for those in blue in (a-c). The edge and bulk states in (d-f), with the corresponding intensity distributions in (g-i), are represented by color and gray, respectively. Particularly, the wavefunction distributions in panels (h) and (i) represent individual instances of potential numerical solutions for \(n=4\) and \(n=5\), respectively.

We employ periodic inductor-capacitor (LC) circuits featuring flexible hopping channels to experimentally observe the tight-binding modes. Here, the lattice nodes are capacitively coupled to ground and inductively coupled to each other. The multimer chains can be represented by the admittance matrix \(J(\omega)\) (also termed circuit Laplacian) [17; 42].
The voltage response \(\mathbf{V}(\omega)\) of the nodes to an input current \(\mathbf{I}(\omega)\) at frequency \(\omega\) follows Kirchhoff's law: \(\mathbf{I}(\omega)=J(\omega)\mathbf{V}(\omega)\), where the vectors are \(\mathbf{I}(\omega)=\left[I_{1},I_{2},\cdots,I_{s}\right]^{\prime}\) and \(\mathbf{V}(\omega)=\left[V_{1},V_{2},\cdots,V_{s}\right]^{\prime}\) for an \(s\)-node circuit. For our uniform hopping chains, using identical capacitors \(C\) and inductors \(L\), we have the circuit Laplacian \[J(\omega)=\frac{1}{i\omega}\left[\left(\frac{n}{L}-\omega^{2}C\right)\mathbb{ I}+H\right] \tag{7}\] with \[H=\left(\begin{array}{cccc}0&-1/L&-1/L&\cdots\\ -1/L&0&-1/L&\cdots\\ -1/L&-1/L&0&\cdots\\ \vdots&\vdots&\vdots&\ddots\\ \end{array}\right)_{s\times s}, \tag{8}\] where \(\mathbb{I}\) is the \(s\times s\) unit matrix and \(H\) accurately represents our theoretical model with the hopping amplitude \(-1/L\). We construct periodic LC circuits with 24 nodes for trimer and tetramer configurations, as shown in Fig. 3 (a) and Fig. 4 (a), respectively. By solving the eigenvalues \(E_{i}\) of \(H\) numerically, we obtain the general admittance eigenspectrum dispersion as \(f_{i}=\sqrt{(n/L+E_{i})/C}/(2\pi)\), with degenerate edge states labeled in red in Figs. 3 (b) and 4 (b). Note that the inverted nonlinear spectra are due to the negative, frequency-dependent hopping amplitude of the inductive coupling. In the experimental implementation, we choose the circuit components \(C=1\) nF with \(\pm 1\%\) tolerance and \(L=1.1\) \(\mu\)H with \(\pm 5\%\) deviation. Details of the sample fabrication and impedance measurements are provided in the Supplemental Material [51]. Measured impedances of nodes 1, 13, and 17 to ground (\(|Z_{1}|\), \(|Z_{13}|\) and \(|Z_{17}|\)) versus the input frequency are shown in Fig. 3 (c) and Fig. 4 (c) for the trimer and tetramer chains, respectively. The peak frequencies of the impedances are in good agreement with the calculated eigenvalues, despite some slight frequency shift of the measured impedance peaks due to component tolerances. In Fig. 3 (c), the highest impedance peak of the edge node, near 9.57 MHz and inside the band gap (about 9.25 - 10.1 MHz), together with the impedance valleys of the bulk nodes, denotes the topological modes unambiguously. More intuitively, we measure the impedance distribution of the degenerate topological edge modes at 9.57 MHz, which shows strong localization at both ends, as shown in Fig. 3 (d). In contrast, in Fig. 4 (c), the impedance peak of the edge node is accompanied by impedance peaks of the bulk nodes near the edge-state frequency of 10.52 MHz, representing the existence of a topological bound state in a nontopological continuum; the bound edge states are shown in Fig. 4 (d).
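The 9.57 MHz edge peak can be cross-checked against the admittance formula above. The following sketch is an editorial illustration, not code from the paper: it builds a 24-node trimer circuit Hamiltonian with hopping \(-1/L\) as in Eq. (8), under the assumption that the three nodes of each cell are mutually coupled and that the single inter-cell channel links the last node of each cell to the first node of the next, and then evaluates \(f_{i}=\sqrt{(n/L+E_{i})/C}/(2\pi)\) with the quoted component values.

```python
import numpy as np

C, L = 1e-9, 1.1e-6          # 1 nF, 1.1 uH (component values quoted in the text)
n_cell, n_cells = 3, 8       # trimer unit cells, 24 nodes in total
s = n_cell * n_cells
t = -1.0 / L                 # inductive hopping amplitude of Eq. (8)

H = np.zeros((s, s))
for c in range(n_cells):
    base = c * n_cell
    for i in range(n_cell):              # all-to-all coupling inside each cell
        for j in range(i + 1, n_cell):
            H[base + i, base + j] = H[base + j, base + i] = t
    if c + 1 < n_cells:                  # assumed single inter-cell channel:
        H[base + n_cell - 1, base + n_cell] = t   # last node of cell c ...
        H[base + n_cell, base + n_cell - 1] = t   # ... to first node of cell c + 1

E = np.linalg.eigvalsh(H)
f = np.sqrt((n_cell / L + E) / C) / (2 * np.pi)   # admittance eigenfrequencies f_i
edge = f[np.isclose(E, -t)]                       # edge level at -kappa = +1/L
print(np.round(edge / 1e6, 2), "MHz")             # close to the measured 9.57 MHz peak
```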
Figure 3: Observation of topological edge states in the trimer chain. (a) Circuit diagram of the finite experimental trimer chain; unit cells consist of three capacitors \(C\) with identical inductors \(L\) between every two capacitors, framed in grey dashed boxes. (b) Calculated admittance eigenspectrum of the LC circuit for \(C=1\) nF and \(L=1.1\) \(\mu\)H. (c) Measured impedances between the nodes (\(|Z_{1}|\), \(|Z_{13}|\) and \(|Z_{17}|\)) and ground vs the circuit frequency. (d) Location distribution of impedance at the frequency \(f=9.57\) MHz.

Figure 4: Observation of topological BICs in the tetramer chain. (a) Circuit diagram blueprint with unit cells framed in grey dashed boxes. (b) Calculated admittance eigenvalues of the tetramer LC circuit for \(C=1\) nF and \(L=1.1\) \(\mu\)H. (c) Frequency scan of measured impedances for representative edge (\(|Z_{1}|\)) and bulk (\(|Z_{13}|\) and \(|Z_{17}|\)) nodes. (d) Impedance distribution of the topological edge mode at the frequency \(f=10.52\) MHz.

In summary, we theoretically and experimentally demonstrated degenerate topological edge states in a class of topological multimer chains consisting of identical resonators with uniform hopping. By designing deliberate coupling paths, the systems exhibit full multimerization, with fully dimerized SSH subspaces corresponding to flat bands that can be separated from their Hilbert spaces. We also show that topological BICs form naturally in even chains by embedding the degenerate topological bound states into a continuous band. Our scheme is experimentally accessible and can also be implemented in coupled waveguide arrays [39; 52], optical and acoustic coupled cavity arrays [49; 53], cold-atom lattices [54] and three-dimensional circuit quantum electrodynamics [55]. Our work sheds new light on the construction of topological phases. This work is supported by the National Key Research and Development Program of China (2021YFA1400600, 2021YFA1400602), the NSERC Discovery Grants and NSERC Discovery Accelerator Supplements (C.-M. H.), the National Natural Science Foundation of China (12274326) and the China Scholarship Council (202106260079).
2309.17212
Is the Mg-related GaN blue luminescence deep-level an MgO surface state?
Mg is currently the only p-type dopant in technological use in GaN. Its incorporation into the GaN lattice is difficult. It requires a thermal treatment that only partially activates the Mg. To achieve moderate p-type doping requires high doses of Mg that mostly remain inactive. High p-type doping is thus typically achieved at the cost of certain lattice distortion and the creation of defects. Using low-temperature surface photovoltage spectroscopy, we obtain a wide spectrum of optical transitions within the bandgap of Mg-doped GaN. The results reveal an optical transition from the valence band into a deep trap around 0.49 eV above the valence band, along with what appears to be a complementary transition from the same trap into the conduction band observed at 2.84 eV (coinciding with the energy of the famous Mg-related GaN blue luminescence). The similar shape of the spectra, their complementary energies within the GaN gap and their opposite nature (hole vs. electron trap) appear to be more than a coincidence, suggesting that this is an Mg-related surface state. The density of charge we calculate for this surface state is about 2x10^12 cm^-2. We suggest that these small amounts of surface-segregated Mg partially oxidize during the growth and further oxidize during the consecutive Mg-activation heat treatment. This minute quantity of oxidized surface Mg should be about enough to form an Mg-related surface state. Etching the GaN with H3PO4 is shown to affect the photovoltage at the blue-luminescence-related energy. Finally, we show that pure MgO powder produces the same blue luminescence even at the absolute absence of GaN.
Or Haim Chaulker, Yury Turkulets, Ilan Shalish
2023-09-29T13:10:21Z
http://arxiv.org/abs/2309.17212v2
# Is the Mg-related GaN blue luminescence deep-level an MgO surface state?

###### Abstract

Mg is currently the only p-type dopant in technological use in GaN. Its incorporation into the GaN lattice is difficult. It requires a thermal treatment that only partially activates the Mg. To achieve moderate p-type doping requires high doses of Mg that mostly remain inactive. High p-type doping is thus typically achieved at the cost of certain lattice distortion and the creation of defects. Using low-temperature surface photovoltage spectroscopy, we obtain a wide spectrum of optical transitions within the bandgap of GaN:Mg. The results reveal an optical transition from the valence band into a deep trap around 0.49 eV above the valence band, along with what appears to be a complementary transition from the same trap into the conduction band observed at 2.84 eV (coinciding with the energy of the famous Mg-related GaN "blue luminescence"). The similar shape of the spectra, their complementary energies within the GaN gap and their opposite nature (hole vs. electron trap) appear to be more than a coincidence, suggesting that this is an Mg-related _surface state_. The density of charge we calculate for this surface state is \(\sim\)2\(\cdot\)10\({}^{12}\) cm\({}^{-2}\). We suggest that these small amounts of surface-segregated Mg partially oxidize during the growth and further oxidize during the consecutive Mg-activation heat treatment. This minute quantity of oxidized surface Mg should be about enough to form an Mg-related surface state. Etching the GaN with H\({}_{3}\)PO\({}_{4}\) is shown to affect the photovoltage at the "blue luminescence"-related energy. Finally, we show that pure MgO powder produces the same blue luminescence even at the absolute absence of GaN.

[https://doi.org/10.48550/arXiv.2309.17212](https://doi.org/10.48550/arXiv.2309.17212)

## I Introduction

As Si technology is reaching insurmountable limitations to its further development, GaN has been establishing its foothold as the next microelectronic technological material.[1] The major technological leap that started the present era of GaN was its p-type doping by Mg.[2, 3] At present, Mg is the only p-type GaN dopant in technological use. Over the three decades that have elapsed, it has been established, both experimentally and theoretically, that Mg incorporation into GaN induces two relatively shallow levels close to the valence band and an additional parasitic deep level.[4] The deep level is most commonly observed in photoluminescence as a wide emission peak more or less over the energy range between 2.6 and 3.0 eV, and has been dubbed the blue luminescence (BL).[5] While the shallow levels have been thoroughly studied both experimentally and theoretically and their explanation fairly established,[6, 7] the origin of the deep level has remained unclear. Both experimental and theoretical studies of the BL deep level seem to corroborate a model suggested by Kaufmann _et al_. of a recombination that takes place _in the bulk_ between a deep donor and the Mg\({}_{\rm Ga}\) acceptor.[8, 9, 10] The clear linkage of the blue luminescence to the introduction of the Mg dopant has led all the previous studies to look for a transition involving a _bulk defect_. In this paper, we report new findings on this BL-related deep level that suggest a radically different scenario than the one presently accepted. One crucial question that has not been answered experimentally is whether the observed optical transition takes place in the bulk or on the surface of the crystal.
As a matter of fact, this question has never even been asked, as it was quite obvious that if the transition is associated with the presence of a dopant evenly distributed in the bulk, then the transition naturally takes place in the bulk. Unsurprisingly, the ensuing ab-initio work looked for a candidate bulk scenario to explain the transition and indeed found one. Apparently, the possibility of a surface state has never been imagined. Is there a good reason to consider an Mg dopant-related surface state? And if so, is it possible to answer this question experimentally? Why should we consider a surface state? Mg is not readily incorporated into GaN. It requires a thermal treatment to help it occupy the right place in the lattice, and even then, only 1% of it is actually activated at best.[11] Mg has also been found to segregate at extended defects.[12, 13] The crystal surface is by far the most extended defect there is. Should we not expect at least a minor, limited-extent segregation at the growth surface of the crystal? In fact, there seems to be a strong driving force to cause Mg to float to the growth surface. Mg oxide is a fairly stable oxide having an enthalpy of formation of -601.7 kJ/mol. The difference from the enthalpy of formation of Mg nitride (+288.7 kJ/mol) must drive the Mg atom to the surface to react with oxygen.[14] The availability of oxygen depends on the vacuum level during the growth. Indeed, molecular beam epitaxy (MBE), which is typically carried out under ultra-high vacuum, produces GaN:Mg that shows no BL emission, while methods such as MOCVD/MOVPE do produce it.[4, 15] These latter methods are typically carried out under rough vacuum in the millitorr range, which somewhat reduces the oxidation probability but does not eliminate it altogether. One would then expect that not all the Mg that segregates to the
2301.13656
A Survey and Benchmark of Automatic Surface Reconstruction from Point Clouds
We present a comprehensive survey and benchmark of both traditional and learning-based methods for surface reconstruction from point clouds. This task is particularly challenging for real-world acquisitions due to factors like noise, outliers, non-uniform sampling, and missing data. Traditional approaches often simplify the problem by imposing handcrafted priors on either the input point clouds or the resulting surface, a process that can necessitate tedious hyperparameter tuning. Conversely, deep learning models have the capability to directly learn the properties of input point clouds and desired surfaces from data. We study the influence of these handcrafted and learned priors on the precision and robustness of surface reconstruction techniques. We evaluate various time-tested and contemporary methods in a standardized manner. When both trained and evaluated on point clouds with identical characteristics, the learning-based models consistently produce superior surfaces compared to their traditional counterparts, even in scenarios involving novel shape categories. However, traditional methods demonstrate greater resilience to the diverse array of point cloud anomalies commonly found in real-world 3D acquisitions. For the benefit of the research community, we make our code and datasets available, inviting further enhancements to learning-based surface reconstruction. This can be accessed at https://github.com/raphaelsulzer/dsr-benchmark .
Raphael Sulzer, Renaud Marlet, Bruno Vallet, Loic Landrieu
2023-01-31T14:18:19Z
http://arxiv.org/abs/2301.13656v3
# A Survey and Benchmark of Automatic Surface Reconstruction from Point Clouds

###### Abstract

We survey and benchmark traditional and novel learning-based algorithms that address the problem of surface reconstruction from point clouds. Surface reconstruction from point clouds is particularly challenging when applied to real-world acquisitions, due to noise, outliers, non-uniform sampling and missing data. Traditionally, different handcrafted priors of the input points or output surface have been proposed to make the problem more tractable. However, hyperparameter tuning for adjusting priors to different acquisition defects can be a tedious task. To this end, the deep learning community has recently addressed the surface reconstruction problem. In contrast to traditional approaches, deep surface reconstruction methods can learn priors directly from a training set of point clouds and corresponding true surfaces. In our survey, we detail how different handcrafted and learned priors affect the robustness of methods to defect-laden input and their capability to generate geometrically and topologically accurate reconstructions. In our benchmark, we evaluate the reconstructions of several traditional and learning-based methods on the same grounds. We show that learning-based methods can generalize to unseen shape categories, but their training and test sets must share the same point cloud characteristics. We also provide the code and data to compete in our benchmark and to further stimulate the development of learning-based surface reconstruction: [https://github.com/raphaelsulzer/dsr-benchmark](https://github.com/raphaelsulzer/dsr-benchmark).

surface reconstruction, point clouds, deep learning, mesh generation, survey, benchmark

## 1 Introduction

Modern three-dimensional (3D) acquisition technology, such as range scanning or multi-view stereo (MVS), has brought the ability to record the world in the form of 3D point clouds. However, point clouds are usually not sufficient to model complex physical processes such as fluid dynamics. Instead, a variety of applications in science and engineering require a representation of objects or scenes in the form of a continuous surface. Therefore, surface reconstruction from point clouds is a key step between acquisition and analysis of surface models and is a long-standing problem in digital geometry processing. In this paper, we survey and benchmark several traditional and learning-based methods that address the problem of surface reconstruction from point clouds. If no prior information about the sought surface is known, surface reconstruction from point clouds is an ill-posed problem, as there are an infinite number of surfaces with different geometry and topology that can pass through, or near, the point samples. Furthermore, acquisition defects in the point cloud, such as non-uniform sampling, noise, outliers or missing data, complicate the reconstruction of a geometrically and topologically accurate surface [1]. See Figure 1 for an illustration. Traditionally, surface reconstruction methods made the problem more tractable by using handcrafted priors, imposed on the input, such as point density, level of noise or outliers, and on the output, such as smoothness, topological properties or the shape category. In contrast, recent methods introduced by the deep learning community can learn point cloud defects or shape patterns directly from training data and therefore promise to reconstruct more accurate surfaces without the need for manual parameter tuning.
However, so far deep surface reconstruction (DSR) methods have mostly been applied on datasets with a small number of different object categories. Such datasets are not representative of real-world applications, where algorithms have to reconstruct surfaces containing a large variety of shapes unseen during training. Furthermore, DSR methods are often applied on uniformly sampled point clouds. Likewise, such point clouds are not representative of real-world acquisitions, as they do not model non-uniformity or missing data stemming _e.g._ from occlusions, or transparent and low-texture areas. The ability to reconstruct shapes, either from unseen shape classes or from point clouds with unseen defects, is rarely studied in a systematic manner for DSR methods. To this end, we propose several experiments to benchmark algorithms for surface reconstruction from point clouds. We make use of a variety of publicly available shape datasets with object surfaces of different complexities. The objects are represented by a true surface \(\mathcal{S}\), which is a boundary-free 2-manifold, _i.e._ each point on the surface has a neighborhood that is homeomorphic to an open subset of the Euclidean plane. We synthetically scan the objects to produce point clouds with realistic characteristics. Having access to the true surfaces allows us to measure the geometric and topological reconstruction quality of the benchmarked methods. We also verify our findings on real-world point clouds. We compare novel learning-based algorithms to traditional test-of-time methods to specifically study the influence of learned priors incorporated into the surface reconstruction process. We thereby pay special attention to the generalization capability of methods to unseen domains. Our main contributions are as follows: * We review methods for surface reconstruction from point clouds from over three decades up to recent learning-based methods. We contrast popular test-of-time methods with novel DSR methods. * We benchmark traditional and learning-based methods on the same grounds across several experiments, using openly available shape datasets and point clouds generated with synthetic scanning.

## 2 Related work

### _Surveys_

There exist only a few works that survey the broad field of surface reconstruction from point clouds [1, 2, 3, 4], most of them predating the advent of learning-based surface reconstruction [1, 2, 4]. Surface reconstruction methods are often grouped into interpolating or approximating methods [5]. Interpolating methods "connect" all points of the input point cloud, or a subset thereof, usually by linearly interpolating between pairs of points. Approximating methods often define one or several smooth functions approximating the point cloud globally or locally. See Figure 2 for an illustration. Berger _et al_. [1] and Cazals & Giesen [2] provide detailed reviews for approximating and interpolating surface reconstruction methods, respectively. To the best of our knowledge, only one survey includes learning-based methods [3]. However, this survey predates important developments for learning-based methods, such as the incorporation of local information [6, 7, 8, 9, 10, 11, 12]. In this work, we review both interpolating and approximating methods and focus on novel ideas in learning-based surface reconstruction. While many reconstruction methods can be distinguished by the prior assumptions they impose [1], we argue that a variety of successful methods combine different priors. This makes grouping by priors difficult.
We thus organize methods into two groups: surface-based and volume-based approaches. This breakdown closely relates to the two main classes of mathematical representations of a surface: parametric and implicit.

### _Benchmarks_

To date, benchmarks for surface reconstruction from point clouds are rare. Many methods use custom datasets to evaluate their approach, usually generated by uniformly sampling point clouds from ground truth shapes of existing shape collections [6, 7, 8, 11, 12, 13]. However, the characteristics of the sampled point clouds often differ across publications, which hampers the ability to fairly compare the results of different works. Furthermore, the point clouds often lack common defects of real acquisitions, such as missing data or outliers. One notable exception is the benchmark of Berger _et al_. [14]. The authors develop a synthetic range scanning procedure to produce scans with realistic artifacts, such as noise, non-uniformity and misaligned scans, and create point clouds from shapes with non-trivial topology and details of various feature sizes. While providing interesting results, the benchmark predates learning-based surface reconstruction and only considers traditional approximating methods. In the benchmarks proposed in this paper, we reuse their synthetic range scanning procedure and their five test shapes, as they provide realistic and challenging input for both learning-based and traditional algorithms. We also implement our own synthetic scanning procedure for MVS-like point clouds. We use the synthetic scanning to scan existing large shape datasets to create training datasets with true surfaces and point clouds with realistic characteristics. A problem related to surface reconstruction is the generation of point clouds from 2D information such as overlapping images. There exist a variety of benchmarks using data captured in a laboratory environment [15, 16] or in the wild [17, 18, 19]. These benchmarks often use low-quality image acquisitions as reconstruction input. Simultaneously, a higher-quality acquisition, _e.g_. from LiDAR scans, serves as reference. One problem with this approach is that, even for high-quality acquisition techniques, it is difficult to produce complete ground truth point clouds. This issue is sometimes addressed by restricting the ground truth domain to specific evaluation areas, in which reliable information is available either from recorded points or sightlines between points and sensors [16, 18]. However, in contrast to true surfaces, reference point clouds do not allow the calculation of topological metrics, such as the number of components, or differential metrics, such as surface normals. Furthermore, most learning-based methods require closed reference surfaces instead of reference point clouds for training.

## 3 Surface definition, representations, properties and reconstruction

In this section, we first provide a definition of a surface and its mathematical and digital representations. We then discuss important surface properties. Finally, we establish the connection between mathematical surface representations and the grouping of surface reconstruction algorithms used in our survey.

### _Definition_

A surface can be defined as an orientable, continuous 2-manifold in \(\mathbb{R}^{3}\), with or without boundaries [5, 20, 21]. These properties are important for surface visualisation and processing, and we will discuss them further down.
Mathematically, there are two main classes of surface representations: _parametric_ and _implicit_.

### _Representations_

_Parametric surfaces_ are defined by a function \(\mathbf{f}:\Omega\mapsto\mathcal{S}\) that maps a parameter domain \(\Omega\subset\mathbb{R}^{2}\) to the surface \(\mathcal{S}=\mathbf{f}(\Omega)\subset\mathbb{R}^{3}\). However, for complex surfaces it is not feasible to find a single function that can parameterise \(\mathcal{S}\). Therefore, the parameter domain \(\Omega\) is usually split into subregions for which an individual function is defined [5]. The most common way is to segment \(\Omega\) into triangles, which are planar by definition. A set of triangles approximating \(\mathcal{S}\) can be efficiently stored and processed as a triangle surface mesh \(\mathcal{M}=(\mathcal{V},\mathcal{E},\mathcal{F})\), with triangle facets \(\mathcal{F}\), edges \(\mathcal{E}\) and vertices \(\mathcal{V}\). _Implicit surfaces_ are defined by the level-set \(c\) of a scalar valued function \(F:\mathbb{R}^{3}\mapsto\mathbb{R}\): \[\mathcal{S}_{c}=\{\mathbf{x}\in\mathbb{R}^{3}\mid F(\mathbf{x})=c\}. \tag{1}\] The most common choice of the implicit function \(F\) is either a signed distance or an occupancy function. A signed distance function (SDF) gives the distance from a 3D point \(\mathbf{x}\) in space to the surface, with points in the interior assigned a negative value and points on the exterior assigned a positive value. An indicator or occupancy function (OF) usually has a value of 1 inside the surface and \(0\) outside. The \(c\)-level-set of \(F\) then yields the surface \(\mathcal{S}\), where \(c=0\) in the case of a signed distance function and \(c=0.5\) in the case of an occupancy function. Similar to the parametric case, the implicit function domain is often split into sub-regions, such as voxels, octree-nodes or tetrahedra, and constant functions are defined in each sub-region.

Figure 1: **Difficulties in surface reconstruction from point clouds:** In each plot, we show the real surface, point samples, and possible reconstructions. The correct topology and geometry of the real surface are not known from the point samples (a,b). The point samples may also include acquisition defects such as noise (c). The goal of any surface reconstruction algorithm is finding a good approximation of the real surface, in terms of its geometry and topology. Learning-based surface reconstruction can learn shape patterns or sampling errors such as the one exemplified here, and use the learned knowledge during reconstruction for a better approximation.

Figure 2: **Approximating and interpolating surfaces from point clouds:** A surface generated from point samples can either interpolate (top row) or approximate (bottom row) the samples. Theoretically, there exist an infinite number of surfaces with different geometry and topology that can pass through, or near, the samples. We show eight different surfaces reconstructed from the same point cloud in (a) - (h). The point cloud can be seen as a sampling of a part of a real surface. All reconstructed surfaces are watertight, as they are either closed and boundary-free, or their only boundary is the intersection with the domain boundary. The surface in (d) is non-manifold in the center vertex. All other surfaces are manifold. Except for (h), all surfaces are comprised of only one component. In contrast to the point cloud depicted here, in our benchmark, we mainly consider point clouds sampled from closed surfaces.

### _Properties_

The reconstructed surface \(\mathcal{S}^{r}\) should be close in terms of geometry and topology to the real surface \(\mathcal{S}\) from which the point cloud \(\mathcal{P}\) is sampled. To facilitate subsequent geometric operations on \(\mathcal{S}^{r}\), such as sampling or deforming the surface, a mesh reconstruction \(\mathcal{M}\) is also desirable. \(\mathcal{S}^{r}\) and \(\mathcal{M}\), respectively, should have the following properties (see Figure 2 for illustrations):

* **Watertight:** A geometric surface is closed if it is boundary-free. A mesh \(\mathcal{M}\) is closed--or boundary-free--if no edge is incident to exactly one facet. However, a reconstructed surface of a real scene necessarily has a border defined _e.g._ by the limit of the scan coverage. One may still reconstruct a closed surface by intersecting it with the boundary of the domain in which \(\mathbf{f}\) or \(F\) is defined: _e.g._ the convex hull or bounding box of \(\mathcal{P}\).
However, this procedure may not be desirable, as it can hinder simple geometric analysis such as the calculation of surface area. Instead, we define a surface as watertight if it is boundary-free, _except_ for a possible intersection with the domain boundary.
* **Manifold:** We consider real and geometric surfaces to be 2-manifolds, _i.e._ each point on the surface has a neighborhood that is homeomorphic to an open subset of the Euclidean plane. A mesh \(\mathcal{M}\) is manifold if it is edge- and vertex-manifold, and intersection-free.
  * _Edge-manifold:_ For each edge \(\mathcal{E}\), the set of facets \(\mathcal{F}\) sharing this edge forms a topological (half-)disk. This means that no edge can be incident to more than two facets.
  * _Vertex-manifold:_ For each vertex \(\mathcal{V}\), the set of facets sharing this vertex forms a topological (half-)disk. This means that facets with a common vertex form an open or closed fan, _i.e._ there are no dangling facets.
* **Intersection-free:** \(\mathcal{M}\) is intersection-free if all pairs of facets not sharing an edge or vertex do not intersect.
* **Orientable:** \(\mathcal{M}\) is orientable if one can define a consistent continuous orientation of each facet. This means that the order of the vertices of all facets is either clockwise or counter-clockwise, and a common edge of two adjacent facets has opposite orders on the two sides.

The watertight property is useful for simulations such as fluid dynamics. Manifoldness and orientability are often required for mesh storing and processing, in particular because they are a prerequisite for the widely-used half-edge data structure [22, 23]. Furthermore, intersection-free and orientable surfaces lead to a well-defined notion of inside and outside, which is important for mesh visualization and a variety of geometric operations.
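The necessary condition stated above for edge-manifoldness (no edge incident to more than two facets) is easy to test on an indexed triangle mesh. The following Python sketch is an editorial illustration, not code from the surveyed works; the example meshes are hypothetical.

```python
from collections import Counter

def edge_manifold(faces):
    """faces: iterable of vertex-index triples of a triangle mesh.
    A mesh can be edge-manifold only if no undirected edge is shared by more than two facets."""
    edges = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[frozenset((u, v))] += 1
    bad = [tuple(e) for e, count in edges.items() if count > 2]
    return len(bad) == 0, bad

# A tetrahedron is closed and edge-manifold (every edge borders exactly two facets).
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(edge_manifold(tet))                 # (True, [])
# Attaching a fifth facet to an existing edge makes edge (1, 2) non-manifold.
print(edge_manifold(tet + [(1, 2, 4)]))   # (False, [(1, 2)])
```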
### _Reconstruction_

Surface reconstruction from point clouds is the process of constructing a continuous surface of which discrete point samples have been acquired. In our survey, we group methods for surface reconstruction from point clouds into two groups: surface- and volume-based. _Surface-based_ reconstruction methods consist in finding (a set of) parameterised surfaces \(\mathcal{S}^{r}\) that approximate the point cloud \(\mathcal{P}\), either in the form of triangles or larger two-dimensional (2D) patches, or by deforming parameterised enclosing envelopes such as meshed spheres. The main challenge for surface-based methods using a single function \(\mathbf{f}\) is that the topology of \(\Omega\) has to be equivalent to the topology of \(\mathcal{S}\), which is usually unknown. The main challenge for surface-based methods with individual functions for sub-regions of \(\mathcal{S}\), on the other hand, is to guarantee a consistent transition between each region. Hence, these methods often struggle to produce an intersection-free, manifold and watertight surface. _Volume-based_ methods, on the other hand, segment a subset of \(\mathbb{R}^{3}\) into interior (inside) and exterior (outside) subspaces. The surface is implicitly defined as the interface between the two subspaces. Most, but not all, algorithms in this class formulate the problem as finding an implicit function. Surfaces from volume-based methods are guaranteed to be watertight and intersection-free, but not necessarily manifold [2]. While surface-based methods can directly yield a mesh, _e.g._ by triangulating \(\Omega\), volume-based methods usually require an additional processing step. If the implicit field is discretized with tetrahedra, one can simply use a process which is sometimes called triangle-from-tetrahedra (TFT). TFT builds a triangle mesh from all triangles that are adjacent to one inside- and one outside-tetrahedron. Another option is the algorithm of Boissonnat and Oudot [24], which iteratively samples \(F\) along lines from inside to outside to find points that lie on \(\mathcal{S}\) and builds a triangle mesh from these points. One of the most popular methods for mesh extraction from an implicit field is Marching Cubes [25], which (i) discretizes the implicit function into voxels, (ii) constructs triangles inside each voxel that have at least one inside and one outside vertex, and (iii) extracts a triangulation as the union of all triangles. Recently, mesh extraction has also been addressed by the deep learning community. Neural meshing [26] specifically addresses the case where an implicit function is represented by a neural network, and aims to extract meshes with fewer triangles compared to Marching Cubes from such a function. In both surface- and volume-based groups, there are methods that come with theoretical guarantees about the topology and geometry of the reconstruction in the absence of noise and when the point sampling is dense enough [2]. However, in this paper, we are mostly interested in the robustness of methods to defect-laden input point clouds from 3D scanning.
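The implicit pipeline sketched above (signed distance field, then level-set extraction) can be illustrated in a few lines. The snippet below is an editorial example, assuming scikit-image is available; it samples the SDF of a sphere on a voxel grid and extracts the \(c=0\) level set with Marching Cubes [25]. The grid resolution and sphere radius are arbitrary choices for illustration.

```python
import numpy as np
from skimage import measure  # requires scikit-image

# Sample a signed distance function F on a regular voxel grid:
# negative inside the surface, positive outside (here a sphere of radius 0.35).
res = 64
ax = np.linspace(-0.5, 0.5, res)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.35

# Marching Cubes extracts a triangle mesh approximating the 0-level set of F.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0,
                                                  spacing=(ax[1] - ax[0],) * 3)
print(verts.shape, faces.shape)  # vertices and triangle facets of the extracted mesh
```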
Triangulating all triplets of \(\mathcal{P}\) that have this property leads to the 3D Delaunay tetrahedralisation (3DT) of \(\mathcal{P}\). The Ball Pivoting algorithm [27] is a greedy approach to find local triplets of points that form a triangle which is part of the surface. The first step is to (i) define a ball with constant radius, related to the density of \(\mathcal{P}\), and to (ii) select a seed triplet of points. The ball must touch all three points and have no other point in its interior. The points then form the first surface triangle. Then, (iii) the ball pivots around an edge of the triangle until it touches a new point, forming a new surface triangle. Once all possible edges have been processed, (iv) the algorithm starts with a new seed triangle until all points of \(\mathcal{P}\) have been considered. The algorithm has later been refined to be more robust to non-uniform sampling [39, 40]. The Ball Pivoting algorithm and its related variations are often called advancing-front techniques. Their main drawback is that they are not robust to point cloud defects such as noise or point clouds with large missing parts.

Selection-based: Similar to advancing-front techniques, the idea to iteratively build the triangulation from initial candidate triangles has also been explored in learning-based methods [30, 31]. PointTriNet [31] (i) starts with an initial set of seed triangles from a \(k\)-nearest neighbor graph of \(\mathcal{P}\). Then, (ii) a first network takes in neighboring points and triangles of each seed triangle, and estimates its probability to be part of the surface. (iii) Triangles with high probability are selected to be part of the final surface and (iv) a second network proposes new candidate triangles constructed from two points of already selected surface triangles and neighboring points. The proposed new candidates are, again, processed by the first network and the algorithm continues for \(n\) user-defined iterations. The loss function is based on the Chamfer distance between input points and the reconstructed surface, which allows the method to be trained without the need for ground truth meshes. IER-meshing [30] also (i) starts with a large set of seed triangles from a \(k\)-nearest neighbor graph. It then defines a so-called intrinsic-extrinsic ratio (IER), as the quotient of geodesic and Euclidean distance between points of a triangle. (ii) This ratio is estimated by a multilayer perceptron (MLP) from learned point features per triangle and supervised with IERs from a ground truth mesh. (iii) Only triangles with an IER close to \(1\) (_i.e._ Euclidean distance \(\approx\) geodesic distance) are considered to be part of the surface and (iv) selected based on handcrafted heuristics. Both aforementioned methods have been shown to be robust against small amounts of noise in the input point cloud. However, their reconstructed surfaces are neither manifold nor watertight.

Tangent plane and other projection methods: Another class of surface-based interpolating approaches are tangent plane methods. This class includes the algorithm of Boissonnat [41], which is, according to Cazals and Giesen [2], probably the first algorithm to address the surface reconstruction problem. The basic idea is to (i) find a tangent plane for each sample point, (ii) project the point's local neighborhood on the tangent plane, (iii) construct 2D Delaunay triangulations of the projected points and (iv) merge the local reconstructions.
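To make this concrete, the following is a minimal sketch of such a local projection-and-triangulation step using NumPy and SciPy. The neighborhood size and the simplified merging rule (keeping only triangles incident to the query point) are illustrative assumptions, not the exact procedure of [41].

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def tangent_plane_triangles(points, k=12):
    """For each point, project its k nearest neighbors onto a PCA tangent
    plane, triangulate the 2D projection, and collect local triangles."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    triangles = set()
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)                # local neighborhood
        nbrs = points[idx] - points[idx].mean(axis=0)
        # PCA: the two dominant directions span the local tangent plane
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        uv = nbrs @ vt[:2].T                       # 2D projection
        try:
            tri2d = Delaunay(uv)
        except Exception:                          # degenerate neighborhoods
            continue
        for simplex in tri2d.simplices:
            tri = tuple(sorted(idx[j] for j in simplex))
            if i in tri:                           # keep triangles incident to p
                triangles.add(tri)
    return np.array(sorted(triangles))
```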
A shortcoming of such an approach is that tangent planes are difficult to use in areas with high curvature or thin structures [11]. To this end, the idea of using local 2D Delaunay triangulations of projected points has been refined in a recent learning-based approach [11]. Instead of tangent planes, DSE-meshing [11] uses _logarithmic maps_, local surface parametrizations around a point \(p\), based on geodesics emanating from \(p\). This method (i) classifies geodesic neighbors of each point in \(\mathcal{P}\) from a set of \(k\)-nearest neighbors. Then, (ii) an MLP approximates a logarithmic map parametrization to gain a 2D embedding of the geodesic neighbors. Lastly, (iii) neighboring logarithmic maps are mutually aligned and triangulated. This step allows the method to reconstruct surfaces with fewer non-manifold edges, compared to methods that process triangles independently. However, the surface is still not watertight and the method has not been tested for reconstruction from noisy point clouds. #### 4.1.2 Patch-fitting Patch-fitting methods are related to tangent plane approaches. Instead of interpolating the initial point set, a new triangulation patch is formed. AtlasNet [29] is based on this idea and was one of the first learning-based surface reconstruction methods. Small 2D triangulated patches are transformed to fit \(\mathcal{P}\) based on transformations predicted by an MLP. Similar to interpolating approaches, this method cannot guarantee to fill all gaps between patches, which results in a non-watertight and potentially self-intersecting surface. #### 4.1.3 Surface deformation One of the only classes of surface-based approaches that can guarantee a watertight surface are deformation-based methods. Sharf _et al._[28] introduced a method that (i) iteratively expands an intial mesh contained within the input point cloud along the face normal directions, and (ii) moves the mesh vertices to fit the input point cloud using moving \begin{table} \begin{tabular}{l l l l l l} \multicolumn{6}{l}{**Method**} \\ \hline \multicolumn{6}{l}{_Surface-based_} \\ \hline BPA & [27] & & & local & triangle mesh \\ Sharf _et al._ & [28] & & & both & triangle mesh \\ AtlasNet & [29] & ✓ & & local & triangle mesh \\ IER & [30] & ✓ & & both & triangle mesh \\ PointTriNet & [31] & ✓ & & local & triangle mesh \\ DSE & [11] & ✓ & & local & triangle mesh \\ **P2M** & [32] & & & & both & triangle mesh \\ \hline \multicolumn{6}{l}{_Volume-based_} \\ \hline \multicolumn{6}{l}{_SPSR_} & [33] & ✓ & & & both & implicit field \\ Labatut _et al._[34] & & & ✓ & & global & triangle mesh \\ ONet & [35] & ✓ & & & global & implicit field \\ DeepSDF & [13] & ✓ & & & global & implicit field \\ IM-Net & [36] & ✓ & & & global & implicit field \\ **ConvONet** & [6] & ✓ & & & both & implicit field \\ **ICR** & [37] & (�) & (�) & & global & implicit field \\ **LIG** & [8] & ✓ & ✓ & & local & implicit field \\ **DGNN** & [10] & ✓ & ✓ & & both & triangle mesh \\ **SAP** & [38] & ✓ & & & both & implicit field \\ P2S & [9] & ✓ & & & both & implicit field \\ **SAP** & [38] & (�) & & & both & implicit field \\ **POCO** & [12] & ✓ & (�) & & local & implicit field \\ \hline \end{tabular} \end{table} Table I: **Overview of surface- and volume-based surface reconstruction methods:** We show an overview of surface- and volume-based surface reconstruction methods, both non-learning and learning-based, together with their input requirements (_normals_, _sensor pose_) and output type (_triangle mesh_ or _implicit field_). 
Attributes denoted in brackets are optional. Methods with a _local_ receptive field divide the point cloud into smaller sub-regions and define individual functions or surface patches for each sub-region. Methods with a _global_ receptive field consider the entire point cloud at once. Methods denoted with _both_ combine local and global receptive fields. We test methods in **bold** in our benchmark. least squares. The method is shown to be robust against missing data, but requires careful parameter tuning to be robust against noise or outliers. Point2Mesh (P2M) [32] is also based on the aforementioned idea, but avoids the need for tuning parameters by hand. The method takes as input a convex hull or a low resolution Poisson reconstruction [33] of \(\mathcal{P}\), and shrink-wraps this initial surface to best fit the point cloud. The process is guided by multiple local convolutional neural networks (CNNs) that share weights. The idea is that the weight sharing between the CNNs acts as a prior that identifies symmetric features in the shape while being able to ignore unsystematic, random defects in the point cloud. One problem with this approach is that the topology of the initial surface stays constant during reconstruction. If the correct topology of the surface is not known, it cannot be recovered. For example, if the sought surface has holes, they cannot be reconstructed from a convex hull initialisation. This poses a limitation for reconstructing arbitrary objects in the wild. ### _Volume-based reconstruction_ #### 4.2.1 Interpolating approaches Volume-based interpolating approaches commonly start by constructing a \(3\)DT of \(\mathcal{P}\). In \(\mathbb{R}^{3}\) a Delaunay triangulation (or tetrahedralization) subdivides the convex hull of \(\mathcal{P}\) with tetrahedra. The \(3\)DT is created in such a way that no point of \(\mathcal{P}\) is contained in the circumspheres of any tetrahedra. For well distributed point clouds it can be constructed in \(O(n\log n)\)[42]. The Delaunay triangulation does not directly generate the surface, as it connects points in any direction. However, if the sampling \(\mathcal{P}\) of \(\mathcal{S}\) is dense enough a subcomplex of the \(3\)DT is guaranteed to include a surface \(\mathcal{S}^{r}\) closely approximating the geometry and topology of \(\mathcal{S}\)[2]. One of the simplest ways to recover this subcomplex from a \(3\)DT is to (i) prune all tetrahedra with circumspheres larger than a user specified constant radius \(\alpha\) and then (ii) keeping only the boundary triangles. This leads to a so-called \(\alpha\)-shape [43]. Similar to the Ball Pivoting algorithm the radius of the ball (here \(\alpha\)) depends on the point density. For error free and dense samplings, alpha-shapes and some other interpolation methods [2, 41, 44] provide provable guarantees that the reconstructed surface is topologically correct [2]. Another way to recover a surface from a \(3\)DT is inside-outside labelling [10, 10, 34, 45, 46, 47, 48, 49, 50, 51, 52, 53]. Here, all tetrahedra of a \(3\)DT of \(\mathcal{P}\) are (i) labelled as either _inside_ or _outside_ with respect to \(\mathcal{S}^{r}\), and (ii) the surface is defined as the interface between tetrahedra with different labels. This guarantees to produce intersecting-free and watertight surfaces. The inside-outside labelling is usually implemented through a global energy minimized with graph-cuts. 
Inside-outside potentials are computed using visibility information and spatial regularization is achieved through surface smoothness or low area priors in the energy. This approach has been shown to be robust against most kinds of acquisition defects of moderate levels [50, 51, 34] and is capable of reconstructing (very) large scale scenes [49]. Delaunay-Graph Neural Network (DGNN) [10] is a learning-based method that replaces the handcrafted potentials in the aforementioned energy with a graph neural network (GNN). The GNN takes local geometric attributes and visibility information as input and operates locally on small subgraphs of the \(3\)DT. The locality makes the method scale to large scenes. The method of Luo _et al._[54] proceeds similarly, but without the use of visibility information and a global energy formulation. Instead, the GNN processes the \(3\)DT of entire objects at once, which can hamper scalability. #### 4.2.2 Implicit functions Arguably the largest class of surface reconstruction algorithms represent the surface with an implicit function (cf. Equation 1). One of the first methods that used implicit functions for surface reconstruction was presented in Hoppe _et al._[20]. Hoppe _et al._ (i) calculate tangent planes at each input point of \(\mathcal{P}\), using principal component analysis (PCA) of the local neighborhood. They then (ii) approximate an SDF by mapping an arbitrary point \(x\in\mathbb{R}^{3}\) to its signed distance to the closest tangent plane. (iii) The surface is defined as the \(0\)-level-set of the SDF. The local tangent plane estimation makes the process sensitive to low density sampling and noise, and computationally expensive. Poisson surface reconstruction.: The most popular approach for surface reconstruction based on implicit functions is Poisson Surface Reconstruction (PSR) [55]. The idea is that the Laplacian of an indicator function \(\chi\), whose \(c\)-level-set approximates the unknown surface \(\mathcal{S}\), should equate the divergence of a vector field \(\vec{N}\) associated with \(\mathcal{P}\): \[\Delta\chi=\nabla\cdot\vec{N}. \tag{2}\] The vector field \(\vec{N}\) is defined by the oriented normals of \(\mathcal{P}\). To define \(\chi\) the algorithm (i) builds an octree on \(\mathcal{P}\) and (ii) sets up a system of hierarchical functions, locally supported in each octree node, and (iii) globally solved by using a sparse linear system, which makes the method time and memory efficient. Dirichlet conditions can be imposed on the bounding box of the surface with \(\chi=0\) to ensure that the surface is closed. The approach is known to inherently produce smooth surfaces, but also over-smooth the surface in parts. The later introduced Screened Poisson Surface Reconstruction (SPSR) [33] can reconstruct much sharper surfaces by constraining Equation 2 to pass through \(\mathcal{P}\). Additionally, it introduces the choice of Neumann boundary conditions which allows the surface to intersect the boundary of the domain in which \(F\) is defined. This is useful for open scene reconstruction. Recently the method has been revisited again, to impose Dirichlet constraints on a tight envelope around \(\mathcal{P}\), enabling better reconstructions in areas of missing data [56]. Poisson surface reconstruction produces watertight meshes and has shown to be robust against almost all kinds of acquisition defects of moderate levels. 
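As a concrete illustration of this family of methods, screened Poisson reconstruction is available in common libraries. A minimal sketch using Open3D follows; the file path, normal-estimation radii and octree depth are illustrative assumptions, not the settings used in our benchmark.

```python
import open3d as o3d

# Load a point cloud; "scan.ply" is a placeholder path.
pcd = o3d.io.read_point_cloud("scan.ply")

# (Screened) Poisson reconstruction needs consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Octree depth controls the resolution of the reconstruction.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)
o3d.io.write_triangle_mesh("poisson_mesh.ply", mesh)
```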
However, all Poisson-based approaches require well oriented normals as input, which can pose a significant limitation in practice. Neural implicit functions: The most common approach to surface reconstruction with deep networks is to model \(F\) in Equation 1 with a neural network. This was first done in the pioneering works of Mescheder _et al._[35], Park _et al._[13], and Chen & Zhang [36]. In the case of Occupancy Networks (ONet) [35], \(F\) is modelled with a simple fully connected network (FCN) architecture. The network takes as input a point cloud \(\mathcal{P}\) and one or several test points \(\mathbf{x}\) and outputs the occupancy of the test points in relation to the surface from which \(\mathcal{P}\) was sampled. The conditioning on the input point cloud slightly changes the formulation of Equation 1 to: \[\mathcal{S}=\left\{\mathbf{x}\in\mathbb{R}^{3}\mid F_{\theta}(\mathbf{x}, \mathcal{P})=c\right\}\,. \tag{3}\] To estimate the network weights \(\theta\), the network is trained with batches \(\mathcal{B}\) of \(K\) objects using a simple binary cross entropy (BCE) loss: \[\mathcal{L}_{\mathcal{B}}\left(\theta\right)=\frac{1}{\left|\mathcal{B}\right| }\sum_{i=1}^{\left|\mathcal{B}\right|}\sum_{j=1}^{K}\text{BCE}\left(F_{\theta }\left(\mathbf{x}_{ij},\mathcal{P}_{i}\right),o_{ij}\right)\, \tag{4}\] where \(o_{ij}\) is the ground truth occupancy of test point \(\mathbf{x}_{ij}\). To compute the ground truth occupancy \(o_{ij}\), the training objects have to be available in the form of watertight surfaces. A common approach is to use large shape collections, such as ShapeNet [57] for training. Similar ideas have been introduced in IM-Net [36] and DeepSDF [13] to model an occupancy or signed distance function with a neural network. Instead of an encoder-decoder architecture as in ONet, the authors of DeepSDF [13] introduce an auto-decoder which is trained to find a shape code \(z\) that best explains an objects shape. This slightly changes Equation 3 and Equation 4, where the point cloud input \(\mathcal{P}\) is replaced by a shape code \(z\) in the form of a 256-dimensional vector. The DeepSDF architecture then allows to reconstruct a complete signed distance field (and thus the shape), given a shape code \(z\). However, to find the shape code for a specific shape during inference, at least a few ground truth signed distance values are necessary. This can be a significant limitation in practice. A common downside of the first DSR networks based on neural implicit fields is their simple fully connected network architecture. This architecture does not allow the incorporation of local point cloud information [6] and often leads to oversmoothing or inaccuracies of the inferred surface. To this end, occupancy networks have later been refined by prepending \(2\)D or \(3\)D U-Nets [58, 59] before the fully connected occupancy network, to better incorporate local information. The idea is to (i) extract point features from local neighborhoods and (ii) aggregate these features in \(2\)D or \(3\)D grid cells. The U-Nets are then used to (iii) integrate local and global information using multiple down- and upsamplings. (iv) Finally, the fully connected ONet is used to compute test point occupancies. The approach is called Convolutional Occupancy Networks (ConvONet) [6]. Just as for the fully connected architectures, the network can be trained with test points \(\mathbf{x}\) with known occupancy values \(o\). 
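A minimal sketch of this occupancy training objective (Equation 4) in PyTorch is given below. The occupancy model is a placeholder for any encoder-decoder such as ONet or ConvONet, and the batch layout is an assumption for illustration only.

```python
import torch
import torch.nn.functional as F

def occupancy_loss(model, pointclouds, query_points, gt_occupancy):
    """Binary cross-entropy on query-point occupancies (cf. Equation 4).
    model(query_points, pointclouds) is assumed to return logits of shape
    (batch, n_query); gt_occupancy holds 0/1 labels of the same shape."""
    logits = model(query_points, pointclouds)
    return F.binary_cross_entropy_with_logits(logits, gt_occupancy.float())

# One training step (model, optimizer and batch are assumed to exist):
# loss = occupancy_loss(model, batch["pcd"], batch["query"], batch["occ"])
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```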
In the same work, the authors also introduce an overlapping sliding-window approach in which a single trained ConvONet can be used to reconstruct entire indoor scenes. However, this approach requires to carefully scale the scene, such that the sliding window captures parts of the scene with comparable surface features during training and inference. Furthermore, for large-scale scenes, a sliding-window approach can be very time-consuming. Local Implicit Grids (LIG) and DeepLS [7] also split input point clouds into overlapping subregions, and treat each subregion separately. The methods infer local shape codes \(z\) for parts of objects or scenes. These local shape codes have the additional benefit that they can represent parts from several different object classes. For example, a flat part-surface may belong to a table top or to a TV screen. This makes the methods less prone to overfit on specific shape categories used during training. However, the methods are largely based on IM-Net and DeepSDF. This means they also require a sort of ground truth test point during inference to optimize for the shape codes. Additionally, similar to the sliding window method of ConvONet, the region size (_i.e._ part size) has to be tuned. Using the same encoder architecture as ConvONet, Shape As Points (SAP) [38] introduces the combination of neural implicit fields with a differentiable Poisson solver. The method estimates (i) oriented normals as well as \(k\) point offsets for each input point, to correct and densify the point cloud \(\mathcal{P}\). (ii) The resulting point cloud of size \(k|\mathcal{P}|\) is fed to a differentiable Poisson solver [33] that computes an indicator grid, _i.e._\(\hat{\chi}\) evaluated on all nodes of a regular voxel grid. (iii) This indicator grid is supervised with a ground truth indicator grid \(\chi\). The ground truth indicator grid is created prior to training, from a Poisson reconstruction of a dense and error free point cloud, sampled from a ground truth mesh. A simple mean square error (MSE) loss is used for training the network: \[\mathcal{L}=\left|\hat{\chi}-\chi\right|^{2} \tag{5}\] The entire pipeline is differentiable which allows to update point offsets, oriented normals and the network parameters during training (with batches of shapes). During inference, the computed indicator grid can simply be converted to a mesh using marching cubes. In contrast to the original Poisson Surface Reconstruction, SAP allows to incorporate learned priors and does not need \(\mathcal{P}\) to be equipped with oriented normals. In general, all of the methods based on voxel grids in this paragraph require the size of the initial voxels to be constant during training, because the resolution of the convolution layers depends on the voxel grid. This poses problems for training on point clouds with different densities. A dense voxel grid can be memory intensive and long to train, while a coarse voxel grid can oversmooth the input and lead to loss of information. Another way to combine local and global information, that avoids the use of grids was introduced in Points2Surf (P2S). P2S uses both a local test point neighborhood sampling, and a global point cloud sampling which are both processed using MLPs and combined to predicted a signed distance for the test point. 
The \(k\)-nearest neighbor sampling makes this method less sensitive to point density, at the cost of increased computational complexity, since the local neighborhood sampling has to be performed for each test point during inference.

Point Convolution for Surface Reconstruction (POCO) only relies on local neighborhoods and computes a latent vector per point using a point convolution backbone. The occupancy of a test point \(x\) is then predicted using attention-based weighting of neighboring latent vectors. This approach allows the parameters of the learned implicit function to focus on the region close to the surface. However, it also requires neighborhood sampling during inference. Similar to most other DSR methods, POCO is trained on object point clouds with a fixed number of points for easy mini-batching. However, to make the method more robust to point clouds with higher density during inference, the authors use a procedure called test-time augmentation. During inference, the latent vectors of each input point \(p\) are computed several times, from different local subsamples, and then averaged.

Another approach to use neural implicit surface representations is to "train" (or optimize) the weights of a deep neural network per shape [37, 38]. The idea is to leverage inherent symmetries of deep neural networks to act as priors in the reconstruction process, similar to the surface-deformation-based Point2Mesh discussed above. To this end, Gropp _et al._ [37] designed a simple fully connected network representing a signed distance function. To encourage the reconstruction of a smooth \(0\)-level-set, given an input point cloud \(\mathcal{P}\), they design a loss function that (i) should vanish on \(\mathcal{P}\) and (ii) whose gradients \(\nabla F\) should have unit \(2\)-norm and be similar to the normals of \(\mathcal{P}\). The method is called Implicit Geometric Regularisation (IGR). SAP also has an optimization-based variant where (i) the indicator grid, computed with the differentiable Poisson solver from the input point cloud \(\mathcal{P}\), is used to compute a mesh. (ii) The mesh is then sampled, which allows calculating a Chamfer loss between the sampled and input point cloud and, again, updating the network weights, point offsets and oriented normals. (iii) This process is repeated until a user-defined stopping criterion is met. The optimization-based variants of SAP and IGR can be trained per shape, without the need for ground truth meshes for supervision. However, in this optimization-based setting, they cannot learn and incorporate shape priors from a training set.

An upside of all DSR methods based on neural implicit representations is that they can store an implicit function, potentially conditioned on a point cloud, in the weights of a neural network. Especially DSR architectures that are entirely grid-less can devote their degrees of freedom directly to representing the surface. This can be more flexible compared to voxel, octree, or tetrahedral representations. Being a relatively new development, the full potential of neural network-based surface representations has probably yet to be explored.

## 5 Benchmark setup

In this section, we describe our setup of a series of experiments for benchmarking several surface reconstruction algorithms discussed in the previous section. We first describe how we generate realistic point clouds by using synthetic range and MVS scanning procedures.
We then describe the datasets we used and several experiments to evaluate the performance of reconstruction methods. Finally, we provide an overview of the competing methods. Synthetic scanning for point cloud generation: In an ideal setting, we would evaluate methods on real point cloud acquisitions together with their true surfaces. However, generating true surfaces of real objects requires error free and dense input point clouds or substantial manual intervention. Therefore, such a dataset is difficult to produce. MVS benchmarks [15, 16, 17, 18, 19] commonly use image acquisitions for the reconstruction input and a highly complete and precise acquisition, _e.g._ from multiple stationary Light Detection and Ranging (LiDAR) scans as reference. We make use of such datasets for evaluation. Using such a dataset for training surface reconstruction networks requires reconstructing a watertight surface from the high-quality acquisition. However, even with high-quality acquisitions, parts of the object or scene may be missing due to occlusions, for example. These issues ultimately lead to inconsistencies in the ground truth and make this source of data unreliable to train DSR networks. Additionally, existing datasets of point cloud acquisitions and reliable ground truth surface information only consist of a handful of objects or scenes. Instead, training and evaluation of learning-based surface reconstruction is often done on point clouds sampled from synthetic surfaces stemming from large shape collections. However, such point clouds are not representative for real-world acquisitions, as they do not model non-uniformity or missing data stemming _e.g._ from occlusions, or transparent and low texture areas. To this end, we resort to synthetic scanning to produce point clouds from synthetic surfaces in our benchmark. In contrast to directly sampling the surfaces, synthetic scanning can produce point clouds with realistic defects, such as anisotropy and missing data from (self-)occlusion, see Figure 3. At the same time, the synthetic surfaces provide reliable information for training and evaluation. Synthetic range scanning: We use the range scanning procedure from the surface reconstruction benchmark of Berger _et al._[14]. To this end, we modified their provided code to export the camera positions of the scanning process along with the point cloud. We also add outliers to the produced point clouds by uniformly sampling the bounding box of the object. The scanning procedure produces uniform, evenly spaced point clouds. We choose five different scanner settings to scan each test shape: (i) a low resolution setting replicates point clouds obtained from long range scanning and (ii) a high resolution setting produces point clouds with close to no defects. Three further settings produce high resolution point clouds with challenging defects such as (iii) noise, (iv) outliers or (v) noise and outlier defects combined. See the supplementary material for details. Because Berger _et al._'s provided code pipeline is too time and memory extensive, we cannot generate a dataset sufficiently large for training DSR methods. Thus, we only use this dataset for testing. We refer the reader to the original benchmark paper [14] for further details about the scanning pipeline. Synthetic MVS: To mimic MVS acquisitions, we synthetically scan objects by placing virtual sensors on two bounding spheres around an object and shooting rays to the circumsphere of the object. 
Sensor positions (ray origins) and ray target points are uniformly sampled on the surface of the spheres. A 3D point is then given as the intersection of the ray and the object's surface. Our goal is not to mimic an MVS pipeline but rather to produce point clouds with similar characteristics. We depict our scanning procedure in Figure 4. We produce two different scans with our approach: (i) sparse point clouds with \(3,000\) points per object and Gaussian noise on the point position with zero mean and standard deviation \(0.005\), as in [6], and (ii) dense point clouds with \(10,000\) points per object, of which 10% are outliers, and Gaussian noise on the point position with zero mean and standard deviation \(0.005\). For both versions we scan from \(10\) different sensor positions.

Figure 3: **Synthetic and real point clouds:** Surface reconstruction methods are often tested on uniform surface samplings (d). Instead, we test methods on synthetic MVS (e) and synthetic range scans (f). In contrast to uniform surface sampling, synthetic scanning can produce realistic point cloud defects, such as missing data from occlusion, often present in real scans (b, c).

Figure 4: **Synthetic scanning procedure:** We randomly place sensors on bounding spheres with multiple radii around the object (a). To produce MVS-like point clouds, we consider rays aiming at uniformly sampled points on the circumsphere of the object (b). This produces non-uniform point clouds with missing data similar to real MVS point clouds. For synthetic range scanning, we use Berger _et al._'s [14] pipeline, which considers ray targets arranged on a uniform grid aiming at the object (c). This produces uniform point clouds with missing data similar to real range scanning point clouds.

### _Datasets_

We consider a variety of datasets to evaluate the versatility and precision of different reconstruction methods. We use closed surfaces from ShapeNet, ModelNet and Berger _et al._, as they are widely available. ShapeNet and ModelNet are sufficiently big to train surface reconstruction networks. Most learning-based methods require reliable inside/outside querying of the models for training. To this end, we make the models watertight using ManifoldPlus [60]. Note that we also use the train sets to tune the parameters of learning-free methods. The watertight surfaces of the test sets allow for a reliable quantitative evaluation of the reconstructions. For qualitative evaluation, we also test on real scans [15, 16, 19], which further allows us to evaluate the reconstruction of open surfaces. All surfaces are scaled to be contained inside the unit cube. In the following we give additional details for each dataset used in our benchmark. See the supplementary material for example shapes.

ShapeNet: As is common practice in related studies, we use Choy _et al._'s [61] 13-class subset of ShapeNet as well as its train/val/test split. We generate point clouds with \(3,000\) and \(10,000\) points using our synthetic MVS-like scanning.

ModelNet10: We use ModelNet10 shapes as a second object shape dataset. Its shapes are less complex than ShapeNet's, with more flat surfaces and fewer details. Additionally, the number of training shapes is smaller (\(4k\) vs \(30k\) objects). We use the full train set and the test sets for the \(6\) out of \(10\) classes which are not represented in ShapeNet (see supplementary material for details). We generate point clouds with \(3,000\) points with our synthetic MVS-like scanning.
Berger _et al._: We select five shapes from the benchmark of Berger _et al._. These shapes include challenging characteristics such as details of various sizes or a non-trivial topology, which makes them more difficult to reconstruct than ModelNet shapes. We generate point clouds between \(3,000\) and \(10,000\) points using our synthetic MVS and range scanning procedures. Real MVS and range scans: We select a range scan from Tanks and Temples [19], and two MVS point clouds from DTU [16] and from Middlebury [15]. We subsample these point clouds to \(50,000\) points. ### _Experimental Setup_ We show a summary of our experimental setup on Table II. In the following, we provide details for each experiment. In-distribution (E1): First, we train and evaluate methods on ShapeNet using all 13 categories and sparse point clouds with \(3,000\) points and Gaussian noise with zero mean and standard deviation \(0.005\). With this experiment, we evaluate the capacity of learning methods to complete missing data of sparse point clouds and eliminate noise. Out-of-distribution (unseen point cloud characteristics) (E2): We evaluate the models trained in E1 on test shapes scanned with a different setting than the train shapes. We use dense point clouds with \(10,000\) points of which 10% are outliers. We add the same noise as in E1. Here, we investigate whether learning methods are able to generalize to different point cloud characteristics. Out-of-distribution (unseen shape categories, less complex) (E3): We evaluate the models trained in E1 on shapes from unseen categories but with the same point cloud characteristics. We use six categories of ModelNet which are not present in the ShapeNet training set. In this experiment, we investigate whether learning methods generalize to unseen categories. Out-of-distribution (unseen shape categories, similar complexity) (E4): This experiment is similar to E3, but the test set is comprised of five shapes from Berger _et al._ which do not correspond to ShapeNet's categories, but have similar complexity. Out-of-distribution (unseen shape categories, more complex (E5): This experiment is similar to E3 and E4, but we retrain all methods on the simpler shapes from ModelNet10. Here, we assess whether learning methods can generalize from simple shapes to more complex ones, a difficult out-of-distribution setting. Optimization (E6): We evaluate several recently developed optimization-based methods, and two traditional test-of-time optimization-based methods. We use the Berger _et al._ dataset for this experiment. Out-of-category vs. optimization (E7): We compare learning- and optimization-based methods on the same dataset. For this we run optimization-based methods on MVS scans of the Berger _et al._ shapes and compare the results to experiment E4. Out-of-distribution vs. optimization (E8): Finally, we compare learning- and optimization-based methods on real MVS and range scanning point clouds. For learning-based methods we use the models from E1. ### _Surface reconstruction methods_ We briefly describe the optimization- and learning-based methods that we will benchmark below. For a more complete description of these methods and their related concepts we refer the reader to our survey in Section 4. Note that while some of the optimization-based methods are based on deep networks, and we call them DSR methods, they do not learn shape priors from a training set. 
Instead, the networks are "trained" (or optimized) for each new point cloud to reconstruct a surface and rely on novel regularization techniques to increase their robustness to noise, outliers and missing data. Conversely, while some traditional methods are not based on a deep network architecture, we tune their (hyper)parameters on the training set by using a grid search over different parameter combinations. When we need to extract a surface from an implicit field, we use marching cubes [62] with a resolution of \(128^{3}\).

\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Experiment** & **Training set** & **Test set** \\
\hline
E1 & ShapeNet, synthetic MVS scans (3k points, noise) & ShapeNet, synthetic MVS scans (3k points, noise) \\
E2 & models from E1 & ShapeNet, synthetic MVS scans (10k points, noise, 10\% outliers) \\
E3 & models from E1 & ModelNet10 (unseen classes), synthetic MVS scans (3k points, noise) \\
E4 & models from E1 & Berger _et al._, synthetic MVS scans (3k points, noise) \\
E5 & ModelNet10, synthetic MVS scans (3k points, noise) & ShapeNet, synthetic MVS scans (3k points, noise) \\
E6 & per-shape optimization (no training set) & Berger _et al._, synthetic range scans \\
E7 & models from E1 / per-shape optimization & Berger _et al._, synthetic MVS scans \\
E8 & models from E1 / per-shape optimization & real MVS and range scans \\
\hline \hline
\end{tabular}
\end{table} TABLE II: **Overview of our experimental setup:** training and test data used in each experiment (E1 to E8).
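For reference, the marching-cubes extraction step mentioned above can be sketched as follows; the implicit function passed in is a placeholder, and the grid bounds and resolution are illustrative assumptions.

```python
import numpy as np
from skimage import measure

def extract_mesh(implicit_fn, resolution=128, bound=0.55, level=0.0):
    """Evaluate an implicit function on a regular grid and extract its
    level set with marching cubes. `implicit_fn` maps (N, 3) points to
    (N,) signed values (negative inside); it is a placeholder here."""
    axis = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    values = implicit_fn(grid.reshape(-1, 3)).reshape(
        resolution, resolution, resolution)
    verts, faces, normals, _ = measure.marching_cubes(values, level=level)
    # map voxel coordinates back to world coordinates
    verts = verts * (2 * bound / (resolution - 1)) - bound
    return verts, faces, normals
```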
#### 5.3.1 Optimization-based methods

IGR [37]: Implicit Geometric Regularisation (IGR) is a DSR method, operating directly on the point cloud using a simple fully connected network architecture that estimates an indicator function from point positions and normals. We optimize the network weights for \(100,000\) iterations for each scan/shape.

LIG [8]: Local Implicit Grids (LIG) trains an autoencoder to encode crops of a signed distance function gained from ground truth shapes. For inference, only the decoder part of the autoencoder is retained. Then, crops of the input point cloud with oriented normals are augmented with \(10\) new points along each normal, representing ground truth signed distance information. An initial latent vector is then decoded to produce an SDF and iteratively optimized so that the augmented point cloud crop best matches the SDF. A post-processing removes falsely-enclosed volumes. As code for training is unavailable, we only use the optimization part, with a pretrained model on ShapeNet (without noise). We use the sensor position to orient jet-estimated normals [63].

P2M [32]: Point2Mesh (P2M) is an optimization-based method which iteratively moves vertices of an initial mesh to fit a point cloud.

SAP [38]: Shape As Points (SAP) has a supervised learning- and an optimization-based variant.
In the learning variant, the method estimates the oriented normals as well as \(k\) point offsets for each input point, to adjust and densify the point cloud. The resulting point cloud of size \(k\mid\mathcal{P}\mid\) is then used by a differentiable Poisson solver [33] to compute an indicator grid, which is supervised with a ground truth indicator grid computed prior to training. The entire pipeline is differentiable which allows for updating point offsets, oriented normals and the network parameters. SPSR [33]: Screened Poisson Surface Reconstruction (SPSR) is a classic non learning-based method which approximates the surface as a level-set of an implicit function estimated from point positions and normal information. We use the sensor position to orient jet-estimated normals [63]. We chose an octree of depth 10 and Dirichlet boundary condition. We also use the provided surface trimming tool for post-processing, but could not find parameters that consistently improve the reconstructed surface. Labatut _et al._[34]: Labatut _et al._ is a graph-cut-based method for range scans that makes use of visibility information. Because there is no official implementation of the algorithm, we reimplemented it ourselves. To compare with optimization-based methods, we use the parametrization suggested by the authors: point weights \(\alpha_{vis}=32\) and \(\sigma=0.01\); regularization strength \(\lambda=5\). #### 5.3.2 Learning-based methods ConvONet [6]: Convolutional Occupancy Networks (ConvONet) is a DSR method that first extracts point features and averages them on cells of three 2D grids, or one 3D grid (variant). 2D or 3D grid convolutions then create features capturing the local geometry. Last, the occupancy of a query-point is estimated with a fully connected network from interpolated features stored on each node of the 2D or 3D grid. SAP [38]: In the optimization variant, the method starts as the learning-based variant described above. Then, the estimated indicator grid is used to compute a mesh and points are sampled on the mesh to calculate a Chamfer loss between the mesh and input point cloud. DGNN [10]: This method uses a graph neural network to estimate the occupancy of Delaunay cells in a point cloud tetrahedralization from cell geometry and visibility features. A graph-cut-based optimization then reinforces global consistency. POCO [12]: Point Convolution for Surface Reconstruction (POCO) extracts point features using point cloud convolution [64], then estimates the occupancy of a query point with a learning-based interpolation from nearest neighbors. SPSR [33]: See method description above. For the learning-based experiments, we perform a grid search over octree depth \(d=\{6,8,10,12\}\) and boundary conditions \(b=\{\text{dirichlet, neumann, free}\}\). We use the parametrization with the best mean volumetric IoU for reconstructions of the training set. Labatut _et al._[34]: See method description above. For the learning-based experiments, we perform a grid search over regularization strength \(\lambda=\{1.5,2.5,5,10\}\), and point weights \(\alpha=\{16,32,48\}\) and \(\sigma=\{0.001,0.01,0.1,1\}\). We use the parametrization with the best mean volumetric IoU for reconstructions of the training set. ### _Evaluation metrics_ We want the reconstructed surface \(\mathcal{S}^{r}\) to be as close as possible to the real (or ground truth) surface \(\mathcal{S}\) in terms of geometry and topology. To measure this "closeness" we use several metrics. 
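The two point-sampled geometric metrics defined formally in the next subsection, the Chamfer distance and the normal consistency, can be approximated directly from sampled points and normals. A minimal sketch with SciPy follows; the mesh sampling itself is assumed to be done beforehand.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_normal_consistency(p_g, n_g, p_r, n_r):
    """Approximate CD and NC from points (p_g, p_r) and unit normals
    (n_g, n_r) sampled on the ground truth and reconstructed meshes."""
    tree_r, tree_g = cKDTree(p_r), cKDTree(p_g)
    d_gr, i_gr = tree_r.query(p_g)   # ground truth -> reconstruction
    d_rg, i_rg = tree_g.query(p_r)   # reconstruction -> ground truth
    cd = 0.5 * d_gr.mean() + 0.5 * d_rg.mean()
    nc = 0.5 * (n_g * n_r[i_gr]).sum(axis=1).mean() \
       + 0.5 * (n_r * n_g[i_rg]).sum(axis=1).mean()
    return cd, nc
```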
#### 5.4.1 Geometric metrics

We evaluate the geometric quality of reconstructions with the volumetric intersection over union (IoU), symmetric Chamfer distance (CD) and normal consistency (NC).

Volumetric IoU: In the following, let \(\mathcal{S}^{g}\) and \(\mathcal{S}^{r}\) be the set of all points that are inside or on the ground truth and reconstructed surface, respectively. The volumetric IoU is defined as: \[\text{IoU}\left(\mathcal{S}^{g},\mathcal{S}^{r}\right)=\frac{\left|\mathcal{S}^{g}\cap\mathcal{S}^{r}\right|}{\left|\mathcal{S}^{g}\cup\mathcal{S}^{r}\right|}\,.\] We approximate the volumetric IoU by randomly sampling \(100,000\) points in the union of the bounding boxes of \(\mathcal{S}^{g}\) and \(\mathcal{S}^{r}\).

Chamfer distance: To compute the Chamfer distance and normal consistency, we sample a set of points \(\mathcal{P}^{g}\) and \(\mathcal{P}^{r}\) on the facets of the ground truth mesh and the reconstructed mesh, respectively, with \(\left|\mathcal{P}^{g}\right|=\left|\mathcal{P}^{r}\right|=100,000\). We approximate the symmetric Chamfer distance between \(\mathcal{S}^{g}\) and \(\mathcal{S}^{r}\) as follows: \[\text{CD}(\mathcal{S}^{g},\mathcal{S}^{r})=\frac{1}{2\left|\mathcal{P}^{g}\right|}\sum_{x\in\mathcal{P}^{g}}\min_{y\in\mathcal{P}^{r}}\left\|x-y\right\|_{2}+\frac{1}{2\left|\mathcal{P}^{r}\right|}\sum_{y\in\mathcal{P}^{r}}\min_{x\in\mathcal{P}^{g}}\left\|y-x\right\|_{2}\,.\]

Normal consistency: Let \(n(x)\) be the unit normal of a point \(x\). We set this normal to be the normal of the facet from which \(x\) was sampled. Let \(\langle\cdot,\cdot\rangle\) denote the Euclidean scalar product in \(\mathbb{R}^{3}\). Normal consistency is defined as: \[\text{NC}(\mathcal{S}^{g},\mathcal{S}^{r})=\frac{1}{2|\mathcal{P}^{g}|}\sum_{x\in\mathcal{P}^{g}}\left\langle n(x),n\left(\operatorname*{argmin}_{y\in\mathcal{P}^{r}}||x-y||_{2}\right)\right\rangle+\frac{1}{2|\mathcal{P}^{r}|}\sum_{y\in\mathcal{P}^{r}}\left\langle n(y),n\left(\operatorname*{argmin}_{x\in\mathcal{P}^{g}}||y-x||_{2}\right)\right\rangle\,.\]

#### 5.4.2 Topological metrics

We evaluate the topological quality of reconstructions through the number of components, the number of non-manifold edges and the number of boundary edges.

Number of components: If not stated otherwise, the ground truth surfaces of our datasets have exactly one component. In consequence, the reconstructed surfaces should also have one component.

Number of boundary edges: The surfaces of all ground truth objects in our datasets are closed. We verify this by measuring the number of boundary edges of the reconstructed meshed surface, which should be zero. Note that if boundary edges only appear on the intersection of the reconstruction with its bounding box, we still classify the reconstruction as watertight, according to the definition in Section 3.3.

Number of non-manifold edges: The surfaces of all ground truth objects in our datasets are 2-manifolds. We verify this by measuring the number of non-manifold edges of the reconstructed meshed surface, which should be zero.

#### 5.4.3 Runtimes

To evaluate the scalability of methods, we measure the average time it takes to reconstruct a surface of ShapeNet from 3,000 points.

## 6 Experiments

### _Learning-based surface reconstruction from synthetic MVS point clouds (E1 - E5)_

We examine the precision and versatility of novel supervised-learning methods and two traditional methods for which training sets were used for tuning parameters.
All evaluated methods perform well when reconstructing shapes from known categories and known point cloud characteristics (E1). The learning-based methods show a significantly superior performance of at least 5% over SPSR and Labatut _et al._ (see Table III). The methods based on neural implicit fields (POCO, SAP and ConvONet) produce visually and quantitatively the best reconstructions (see Figure 5, first column). DGNN does not perform as well as most other learning methods in this experiment. The sparse point clouds used in this experiment do not contain point samples on all details. However, due to the interpolating nature of DGNN, surface details cannot be reconstructed without input points.

In E2, the domain shift results in worse performance, both quantitatively and qualitatively, for all methods except SPSR. SPSR shows robustness against outliers and benefits from the higher point density. Most learning methods do not produce satisfying results (see Figure 5, second column). The reconstruction of SAP is too smooth and lacks details, but does not show as severe defects as the reconstructions of other learning-based methods. Labatut _et al._ suffers from the low regularization weight tuned for the outlier-free point clouds and could benefit from higher regularization to remove erroneous floating components caused by outliers.

When reconstructing out-of-category ModelNet shapes (E3), the neural implicit field methods exhibit visually the best reconstructions. SAP and POCO produce quantitatively the best reconstructions (see Table III). The interpolating method DGNN performs better than ConvONet.

In E4, we reconstruct shapes from Berger _et al._ which have similar complexity to the shapes from ShapeNet used for training. The only learning methods able to leverage information from the common point cloud characteristics to improve the test results are DGNN and POCO.

In E5, most methods overfit the simpler ModelNet shapes when retrained and used to reconstruct the more complex ShapeNet shapes. Even SPSR slightly suffers from tuning parameters on ModelNet. The best reconstructions on ModelNet are achieved with an octree depth of \(d=8\) (instead of \(d=10\) on ShapeNet), leading to worse results on ShapeNet: \(77.1\) vIoU in E1 vs. \(74.6\) vIoU in E5. The parameter tuning of Labatut _et al._ stays unchanged. DGNN is the only method that does not overfit on ModelNet and yields the best results, both quantitatively and qualitatively. In fact, it performs as well as when trained on ShapeNet directly.

ConvONet is only able to outperform traditional methods when the training and test sets share the same point cloud characteristics _and_ shape categories. SAP produces much better reconstructions and is the learning-based method with the highest robustness against outliers. It is also the only method explicitly predicting normals. As a result, SAP reconstructs surfaces with the highest mean normal consistency over all experiments. The local learning and global regularisation approach of DGNN produces competitive results in all experiments, except for the outlier setting of E2. DGNN is the learning-based method producing surfaces with the highest mean IoU over all experiments. The local attention-based learning mechanism of POCO leads to the best results when the task does not involve reconstruction from unseen domains. It provides the most faithful reconstructions in the three experiments in which point cloud characteristics are identical in train and test set (E1, E3, E4).
However, POCO is heavily affected by outliers (E2), which can be explained by its purely local approach. POCO also tends to overfit on simple training shapes (E5). The reconstructions of POCO, as well as the ones of SAP, contain boundary edges only in areas where the reconstructions intersect the bounding box, _i.e._ they are still watertight. SPSR proves robust to various defects and shape characteristics, providing fair results, with the highest mean IoU and the lowest mean Chamfer distance across the board. However, its reconstructions are the least compact, _i.e._ they have the highest number of components. Labatut _et al._'s parametrization proves slightly less robust, as the method is affected by outliers. Its mean IoU is higher than that of any learning method, and its reconstructions are the most compact surfaces with an average number of components of 2.7. However, it is also the only method that produces a significant amount of non-manifold edges.

Figure 5: **Learning-based reconstructions (E1 to E5):** In each column we show learning-based reconstructions of experiments E1 to E5. DGNN [10], SAP [38] and SPSR [33] provide visually the best results without exhibiting dominant defects.

### _Optimization-based surface reconstruction from synthetic range scanning point clouds (E6)_

This experiment evaluates the precision and versatility of non-learning methods. The benchmarked approaches consist of neural-network-based methods that optimize a function to fit an input point cloud and rely on novel regularization techniques to increase their robustness to noise, outliers and missing data. Furthermore, we benchmark the two traditional methods SPSR and Labatut _et al._ with standard parameter settings. We reconstruct surfaces of Berger _et al._ from synthetic range scanning point clouds with various defects. We show numerical results in Table IV and visualisations in the supplementary material. Almost all reconstructions provided by the two traditional methods are much more truthful than those of the DSR methods, with a mean volumetric IoU almost \(10\) points higher across all point cloud defects. Visually, IGR does not provide a good result on the exemplary shape, especially on thin surface parts. Quantitatively, the method provides the best reconstruction among the neural-network-based methods in the absence of outliers, and even the best overall reconstruction for the noisy high resolution scans. LIG does not provide good reconstructions for any of the settings. This can be explained by its pretrained model on defect-free uniform high density point clouds. Furthermore, its post-processing makes the reconstructions non-watertight. P2M provides geometrically fair reconstructions and the topologically best reconstructions, with a low number of components and watertight and manifold surfaces for all reconstructions. SAP provides fair reconstructions in the absence of outliers. None of the neural-network-based methods is robust against outliers. As in the learning-based experiments, SPSR generates high quality reconstructions for all input defects, and achieves the best mean normal consistency. Labatut _et al._ achieves the best mean IoU and mean Chamfer distance while providing the reconstructions with the lowest number of components. However, the reconstructions of Labatut _et al._ are the only ones with a significant number of non-manifold edges.
### _Learning- and optimization-based surface reconstruction from synthetic MVS point clouds (E7)_ To directly compare learning- and optimization-based reconstructions on the same dataset, we also reconstruct the Berger _et al._ shapes from synthetic MVS scans (cf. E4) with the optimization-based methods. Thus, for learning-based methods, we use the models trained on synthetic MVS scans from ShapeNet (cf. E4) and we optimize non-learning \begin{table} \begin{tabular}{l c c c c c c|c c c c c c} \hline \hline & \multicolumn{6}{c|}{**Volumetric IoU (\%)** [\(\uparrow\)]} & \multicolumn{6}{c}{**Normal consistency (\%)** [\(\uparrow\)]} \\ **Method** & E1 & E2 & E3 & E4 & E5 & Mean & E1 & E2 & E3 & E4 & E5 & Mean \\ \hline **ConvONet2D**[6] & 85 & 47.3 & 79.3 & 65.1 & 68.3 & 69 & 92.7 & 76.4 & 90 & 78 & 87.8 & 85 \\ **ConvONet3D**[6] & 84.8 & 15.1 & 83.6 & 76.4 & 51 & 62.2 & 93 & 71.8 & 93.1 & 87.2 & 82.5 & 85.5 \\ **SAP**[38] & 88.7 & 59.8 & 89.2 & 78.3 & 54.9 & 74.2 & 93.5 & **86.7** & 94.1 & 89 & 87.1 & **90.1** \\ **DGNN**[10] & 84.5 & 38.1 & 87 & 82.9 & **84.4** & 75.4 & 85.4 & 68.8 & 88.5 & 85.2 & 85.5 & 82.7 \\ **PCOCO**[12] & **89.5** & 8.7 & **90.6** & **83.9** & 40.9 & 62.7 & **93.6** & 75.6 & **94.2** & **89.5** & 82.9 & 87.1 \\ **SPSR**[33] & 77.1 & **80.7** & 80.7 & 77.6 & 74.6 & **78.1** & 87.7 & 83.2 & 89.1 & 86.3 & **88** & 86.9 \\ **Labatut _et al._[34] & 80.3 & 60.4 & 83.9 & 79.4 & 80.3 & 76.9 & 81 & 73 & 84.6 & 80.8 & 81 & 80.1 \\ \hline \hline \multicolumn{10}{c}{**Chamfer distance (per-point ave. \%)** [\(\downarrow\)]} & \multicolumn{6}{c}{**Number of components** [\(\downarrow\)]} \\ **Method** & E1 & E2 & E3 & E4 & E5 & Mean & E1 & E2 & E3 & E4 & E5 & Mean \\ \hline **ConvONet2D**[6] & 0.553 & 7.51 & 0.997 & 1.43 & 0.979 & 2.29 & 1.6 & 34.8 & 2.55 & 3.6 & 3.2 & 9.16 \\ **ConvONet3D**[6] & 0.546 & 10.9 & 0.76 & 0.887 & 2.44 & 3.1 & 1.37 & 13.6 & 1.6 & 2.6 & 1.5 & 4.13 \\ **SAP**[38] & 0.437 & 2.09 & 0.547 & 0.734 & 0.924 & 0.946 & 2.71 & 86 & 3.45 & 5.6 & 10.5 & 21.7 \\ **DGNN**[10] & 0.549 & 2.54 & 0.635 & 0.586 & **0.55** & 0.973 & 1.31 & 16.1 & 1.13 & 1 & 1.31 & 4.16 \\ **POCO**[12] & **0.416** & 10.5 & **0.516** & **0.579** & 1.32 & 2.67 & 2.32 & 178 & 2.82 & 2 & 16.3 & 40.2 \\ **SPSR**[33] & 0.801 & **0.659** & 0.873 & 0.786 & 0.886 & **0.801** & 9.26 & 185 & 11.1 & 8 & 3.24 & 43.3 \\ **Labatut _et al._[34] & 0.665 & 6.97 & 0.747 & 0.671 & 0.665 & 1.94 & **1.22** & **9.02** & **1.05** & **1** & **1.22** & **2.7** \\ \hline \hline \multicolumn{10}{c}{**Number of boundary edges** [\(\downarrow\)]} & \multicolumn{6}{c}{**Number of non-manifold edges** [\(\downarrow\)]} \\ **Method** & E1 & E2 & E3 & E4 & E5 & Mean & E1 & E2 & E3 & E4 & E5 & Mean \\ \hline **ConvONet2D**[6] & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** \\ **ConvONet3D**[6] & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** \\ **SAP**[38] & **0** & 0.00923 & **0** & **0** & **8.44 & 1.69 & **0** & **0** & **0** & **0** & **0** & **0** \\ **DGNN**[10] & **0** & **0** & **0** & **0** & **0** & **0** & 1.35 & 2.24 & 0.646 & 0.4 & 1.69 & 1.26 \\ **POCO**[12] & **0** & 121 & **0** & **0** & 41.7 & 32.5 & **0** & 0.00154 & **0** & **0** & 0.000308 \\ **SPSR**[33] & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** \\ **Labatut _et al._[34] & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** \\ \hline \hline \end{tabular} \end{table} TABLE III: **Numerical results 
for learning-based experiments (E1 to E5):** We show the numerical results of the learning experiments E1 to E5. SPSR [33] is the only method that produces surfaces with a high volumetric intersection over union and a low Chamfer distance in each experiment. Therefore, its surfaces have the highest mean volumetric IoU and the lowest mean CD. However, SPSR also produces the least compact surfaces on average (_i.e._ surfaces with the highest number of components). Labatut _et al._[34] produces the most compact surfaces. DGNN [10] has the highest mean volumetric IoU of the tested learning methods. SAP [38] has the lowest mean CD of the tested learning methods and the highest normal consistency. ConvONet and SPSR are the only methods that produce surfaces without boundary and non-manifold edges. methods per shape using standard settings. We show the numerical results in Table V and visualisations in the supplementary material. The learning-based methods DGNN and POCO benefit from the training on point clouds with the same characteristics as in the test set and reconstruct more truthful surfaces than the optimization-based methods. Similar to E6, Labatut _et al._ produces the best results among the optimization-based methods. ### _Learning- and optimization-based surface reconstruction from real point clouds (E8)_ Finally, we reconstruct surfaces from real MVS and range scanning point clouds. Again, for learning-based methods, we use the models trained on synthetic MVS scans from ShapeNet (cf. E4) and we optimize non-learning methods per point cloud. We show the reconstructions in Figure 6. The MVS point cloud from Middlebury (Figure 6a) is contaminated with a large amount of varying noise. SAP is the only learning method which reconstructs a smooth surface without missing details (Figure 6d). However, it suffers from small amounts of topological noise in the form of holes. The optimization-based method P2M provides a visually good reconstruction with few defects (Figure 6i). In Figures 6m and 6y, optimization-based methods handle the additional domain shift to an open scene better compared to learning-based methods. The two traditional methods SPSR and Labatut _et al._ provide the visually best results on average. This experiment also shows that our findings on synthetic point clouds coincide with those on real-world point clouds, validating our experimental setup. ### _Runtimes_ On Table VI, we report detailed runtimes for the methods tested in the learning-based experiments. SAP is the fastest of all reconstruction methods. DGNN also shows fast runtimes, while POCO is slow, due to its extensive use of neighborhood sampling. We also compare runtimes of P2S. We were not able to include this method in experiments E1 to E5 due to its long runtime for training and inference. ### _Summary and analysis_ In the right circumstances, learning-based methods can produce highly detailed surfaces while remaining robust to noise and missing data. However, this requires training on large sets (30k shapes in our experiments) of sufficiently complex surfaces and associated point clouds. Even if learning methods can generalize to unseen shape categories to some extent, the training and test sets must share the same point cloud characteristics. This suggests that these methods mainly learn priors related to the acquisition characteristics of the input point clouds, and less on the shapes themselves. 
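The comparisons above rest on the accuracy metrics reported in the tables (volumetric IoU, normal consistency and the per-point Chamfer distance). Below is a minimal sketch of how the two point-based metrics can be computed between samples of a reconstructed and a ground-truth surface; the sampling density, the (un-normalized) Chamfer distance and the function names are illustrative choices and not the exact protocol of the benchmark.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_normal_consistency(pts_a, nrm_a, pts_b, nrm_b):
    """pts_*: (N, 3) surface samples, nrm_*: (N, 3) unit normals at the samples.
    Returns the symmetric per-point average Chamfer distance and the normal
    consistency (mean absolute cosine between matched normals)."""
    tree_a, tree_b = cKDTree(pts_a), cKDTree(pts_b)
    d_ab, idx_ab = tree_b.query(pts_a)   # nearest ground-truth sample for each point
    d_ba, idx_ba = tree_a.query(pts_b)
    chamfer = 0.5 * (d_ab.mean() + d_ba.mean())
    nc_ab = np.abs(np.sum(nrm_a * nrm_b[idx_ab], axis=1)).mean()
    nc_ba = np.abs(np.sum(nrm_b * nrm_a[idx_ba], axis=1)).mean()
    return chamfer, 0.5 * (nc_ab + nc_ba)
```

Volumetric IoU additionally requires an inside/outside test (e.g. on a voxel grid or on uniformly sampled points), while the component, boundary-edge and non-manifold-edge counts are purely combinatorial mesh statistics.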
However, learning-based methods do not produce satisfying results when the training shapes are too simple, or when the point clouds include unknown defects, such as outliers (seeTable VII). Mixing traditional and learning-based methods, as in SAP or DGNN, results in higher robustness to domain shifts and leads to short reconstruction times. Except for IGR, novel optimization-based methods are not robust to acquisition defects and they rarely provide better results compared to the two traditional methods SPSR and Labatut _et al._. \begin{table} \begin{tabular}{l c c c c c c c|c c c c c c} \hline \hline & \multicolumn{6}{c|}{**Volumetric IoU (\%)** [\(\uparrow\)]} & \multicolumn{6}{c}{**Normal consistency (\%) [\(\uparrow\)]} \\ **Method** & LR & HR & HRN & HRO & HRNO & Mean & LR & HR & HRN & HRO & HRNO & Mean \\ \hline **IGR** & [37] & 80.8 & 92.5 & **83.6** & 63.7 & 62.7 & 76.7 & 88 & **96.3** & 83.9 & 77.8 & 71.5 & 83.5 \\ **LIG** & [8] & 46.9 & 50.3 & 63.9 & 66 & 63.8 & 58.2 & **88.7** & 92.2 & **89** & 77 & 75.2 & 84.4 \\ **P2M** & [32] & 75.2 & 83.3 & 75.5 & 71.3 & 67.8 & 74.6 & 86.3 & 92.2 & 88.1 & 84.5 & 82.1 & 86.6 \\ **SAP** & [38] & 75.6 & 89.1 & 72.4 & 55.3 & 34.9 & 65.4 & 83.4 & 94.8 & 61.6 & 74.5 & 55.3 & 73.9 \\ **SPSR** & [33] & 77.7 & 90.2 & 82.8 & 90.3 & **82.1** & 84.6 & 88.1 & 96 & 88.1 & **96.2** & **85.8** & **90.9** \\ **Labatut _et al._** & [34] & **81.3** & **93.4** & 80.1 & **93.4** & 79.1 & **85.5** & 87.6 & 96 & 66.3 & 94.9 & 66.5 & 82.3 \\ \hline \multicolumn{12}{c}{**Chamfer distance** (per-point ave. \%) [\(\downarrow\)]} & \multicolumn{6}{c}{**Number of components** [\(\downarrow\)]} \\ **Method** & LR & HR & HRN & HRO & HRNO & Mean & LR & HR & HRN & HRO & HRNO & Mean \\ \hline **IGR** & [37] & 0.674 & 0.322 & **0.554** & 7.96 & 7.72 & 3.45 & 6.8 & **1.2** & 35.2 & 44 & 97.4 & 36.9 \\ **LIG** & [8] & 0.745 & 0.581 & 0.781 & 7.89 & 7.8 & 3.56 & **1** & **1** & **1** & 1.6 & **1** & 1.12 \\ **P2M** & [32] & 0.817 & 0.473 & 0.729 & 1.53 & 2.13 & 1.13 & **1.2** & **1** & **1.2** & 1.4 & 1.6 & 1.28 \\ **SAP** & [38] & 0.852 & 0.32 & 0.701 & 3.99 & 3.93 & 1.96 & 73.2 & 85.6 & 937 & 1.8e+03 & 1.96e+03 & 971 \\ **SPSR** & [33] & 0.794 & 0.369 & 0.572 & 0.362 & **0.607** & 0.541 & **1.2** & 1.6 & 3.6 & 3.8 & 20.2 & 6.08 \\ **Labatut _et al._** & [34] & **0.635** & **0.314** & 0.608 & **0.339** & 0.641 & **0.507** & **1** & **1** & **1.2** & **1** & **1.08** \\ \hline \multicolumn{12}{c}{**Number of boundary edges** [\(\downarrow\)]} & \multicolumn{6}{c}{**Number of non-manifold edges** [\(\downarrow\)]} \\ **Method** & LR & HR & HRN & HRO & HRNO & Mean & LR & HR & HRN & HRO & HRNO & Mean \\ \hline **IGR** & [37] & **0** & **0** & **0** & **0** & **0** & **0** & 0.8 & 0.8 & 5.2 & 4.2 & 2.2 \\ **LIG** & [8] & 69 & 42.8 & 17.2 & **0** & **0** & 25.8 & **0** & **0** & **0** & **0** & **0** \\ **P2M** & [32] & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** \\ **SAP** & [38] & **0** & **0** & **0** & **0** & **449** & 89.8 & **0** & **0** & **0** & **0** & **0** \\ **SPSR** & [33] & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **0** \\ **Labatut _et al._** & [34] & **0** & **0** & **0** & **0** & **0** & **0** & **0** & **1** & **5.8 & 24.4 & 3.8 & 22 & 11.4 \\ \hline \hline \end{tabular} \end{table} TABLE IV: **Numerical results for optimization-based reconstructions (E6): Optimization-based reconstruction of the Berger _et al._ shapes from synthetic range scans. 
LR is a low resolution scan, HR a high resolution scan, HRN a high resolution scan with noise, HRO a high resolution scan with outliers, and HRNO a high resolution scan with noise and outliers. The methods are optimized per shape and per scan using standard settings as mentioned in the corresponding publications.** Figure 6: **Learning- and optimization-based reconstructions (E8):** We show reconstructions of _Temple Ring_ from Middlebury ((b) to (l)), _Truck_ from Tanks And Temples ((n) to (x)) and _scan1_ from the DTU dataset ((z) to (aj)). The learning methods (top rows) were trained on synthetic MVS scans from ShapeNet. Optimization-based methods (bottom rows) are optimized per shape using standard settings. The two traditional methods SPSR [33] and Labatut _et al._[34] provide visually the best results. Their reconstructions are only affected by the heavy noise of the _Temple Ring_ MVS point cloud. ## 7 Conclusion Surface reconstruction from point clouds is a well-studied subject in the field of digital geometry processing. However, constant developments in acquisition techniques and novel ideas for surface reconstruction and analysis bring forward new challenges. In this paper, we survey the field of surface reconstruction from point clouds and benchmark several related methods. We revisit traditional test-of-time approaches for surface reconstruction and detail how they inspired novel approaches. We evaluate traditional and novel optimization- and learning-based methods on various tasks and datasets. We show that novel optimization-based methods are not as robust against defects as traditional methods. For in-distribution point clouds with characteristics similar to those of the training set, learning methods provide more accurate reconstructions than traditional approaches. However, real-world scenes often include a multitude of different and highly complex objects, and their acquisitions may contain a variety of defects. Most learning methods require shapes of similar complexity in training and test sets, and they are not robust to out-of-distribution acquisition defects. These limitations of learning-based methods hinder the reconstruction of point clouds in the wild. Generating or finding adequate training data that includes a large variety of complex shapes scanned with realistic defects is a difficult task. Future work in learning-based surface reconstruction should focus on training on point clouds with realistic acquisition defects, _e.g._ from common sensors and acquisition settings, or on increasing the methods' robustness to unseen defects. ## Acknowledgments This work was partially funded by the ANR-17-CE23-0003 BIOM grant.
2309.05616
Orthogonality relations for conical functions of imaginary order
Orthogonality relations for conical or Mehler functions of imaginary order are derived and expressed in terms of the Dirac delta function. This work extends recently derived orthogonality relations of associated Legendre functions.
Job Feldbrugge, Nynke M. D. Niezink
2023-09-11T17:05:00Z
http://arxiv.org/abs/2309.05616v1
# Orthogonality relations for conical functions of imaginary order ###### Abstract Orthogonality relations for conical or Mehler functions of imaginary order are derived and expressed in terms of the Dirac delta function. This work extends recently derived orthogonality relations of associated Legendre functions. associated Legendre functions; orthogonal functions; Dirac delta distribution ## 1 Introduction The associated Legendre (or Ferrers) functions of the first kind \(P_{\lambda}^{\mu}(x)\) and second kind \(Q_{\lambda}^{\mu}(x)\), with the complex degree \(\lambda\) and complex order \(\mu\), are generalizations of the Legendre polynomial \(P_{l}(x)\), where the degree \(\lambda=l\) is an integer and the order vanishes, and the associated Legendre polynomial \(P_{l}^{m}(x)\), where the degree \(\lambda=l\) is an integer and the order \(\mu=m\) is an integer with \(|m|\leq l\). The Legendre polynomial and function are widely used in applied mathematics and theoretical physics. The polynomial, for example, appears in the study of the Newtonian potential [1], spherical harmonics, and the energy eigenstates of the hydrogen model in quantum mechanics as in, _e.g._, the textbook [2]. Its generalization to complex degrees and orders plays a pivotal role in many applications including the (generalized) Mehler-Fock transform [3]. The Legendre function \(P_{\lambda}^{\mu}\) with imaginary order \(\mu\) arises in the study of modified Poschl-Teller potentials in quantum mechanics [4], backscattering of radiation in plasmas [5], topological black holes [6], two-particle Hamiltonians [7], and Yang-Mills matrix models [8]. The associated Legendre polynomial satisfies the orthogonality condition \[\int_{-1}^{1}\frac{P_{l}^{m}(x)P_{l}^{n}(x)}{1-x^{2}}dx=\frac{(l+m)!}{m(l-m)!} \delta_{m,n}\,, \tag{1}\] for \(m,n\neq 0\). Kalmykov and Shvets [5] found a similar relation for the associated Legendre function of the first kind with imaginary order, \[\int_{-1}^{1}\frac{P_{1}^{iq}(x)P_{1}^{iq^{\prime}}(x)}{1-x^{2}}\mathrm{d}x= \frac{2\sinh(\pi q)}{q}\delta(q-q^{\prime})\,, \tag{2}\] with real \(q,q^{\prime}\). Hutasiot _et al._[6] generalized this relation to integer degree, \[\int_{-1}^{1}\frac{P_{l}^{iq}(x)P_{l}^{iq^{\prime}}(x)}{1-x^{2}}\mathrm{d}x= \frac{2\sinh(\pi q)}{q}\delta(q-q^{\prime})\,, \tag{3}\] for \(l=0,1,2,\ldots\). Finally, by an elegant algebraic derivation, Bielski [9] demonstrated that these identities are special cases of more general orthogonality relations. In particular, Bielski evaluated the integrals \[\int_{-1}^{1}\frac{P_{\lambda}^{iq}(x)P_{\lambda}^{iq^{\prime}}(x)}{1-x^{2}} \mathrm{d}x\,,\quad\int_{-1}^{1}\frac{P_{\lambda}^{iq}(x)Q_{\lambda}^{iq^{ \prime}}(x)}{1-x^{2}}\mathrm{d}x\,,\quad\int_{-1}^{1}\frac{Q_{\lambda}^{iq}(x )Q_{\lambda}^{iq^{\prime}}(x)}{1-x^{2}}\mathrm{d}x\,, \tag{4}\] for general complex degree \(\lambda\), in terms of the Dirac delta functions \(\delta(q-q^{\prime})\) and \(\delta(q+q^{\prime})\). 
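Relation (1) is easy to check numerically for small integer degrees and orders. The following sketch (using SciPy's Ferrers functions; the chosen values of \(l\), \(m\) and \(n\) are arbitrary) is only meant as a sanity check of the normalization, not as part of the derivation.

```python
import numpy as np
from scipy.special import lpmv, factorial
from scipy.integrate import quad

l, m, n = 4, 2, 3

def integrand(x, mu, nu):
    # P_l^mu(x) P_l^nu(x) / (1 - x^2), cf. relation (1)
    return lpmv(mu, l, x) * lpmv(nu, l, x) / (1.0 - x**2)

same, _ = quad(integrand, -1, 1, args=(m, m))
cross, _ = quad(integrand, -1, 1, args=(m, n))

expected = factorial(l + m) / (m * factorial(l - m))  # (l+m)!/(m (l-m)!)
print(same, expected)   # these agree
print(cross)            # ~0 for m != n
```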
In this paper, we evaluate the related orthogonality relations for the conical functions, \[I_{1}^{q,q^{\prime}} =\int_{-1}^{1}\frac{P_{\lambda}^{iq}(x)P_{\lambda}^{iq^{\prime}} (x)^{*}}{1-x^{2}}\mathrm{d}x\,, J_{1}^{q,q^{\prime}} =\int_{-1}^{1}\frac{P_{\lambda}^{iq}(x)P_{\lambda}^{iq^{\prime}} (-x)^{*}}{1-x^{2}}\mathrm{d}x\,, \tag{5}\] \[I_{2}^{q,q^{\prime}} =\int_{-1}^{1}\frac{P_{\lambda}^{iq}(x)Q_{\lambda}^{iq^{\prime}} (x)^{*}}{1-x^{2}}\mathrm{d}x\,, J_{2}^{q,q^{\prime}} =\int_{-1}^{1}\frac{P_{\lambda}^{iq}(x)Q_{\lambda}^{iq^{\prime}} (-x)^{*}}{1-x^{2}}\mathrm{d}x\,,\] (6) \[I_{3}^{q,q^{\prime}} =\int_{-1}^{1}\frac{Q_{\lambda}^{iq}(x)Q_{\lambda}^{iq^{\prime}} (x)^{*}}{1-x^{2}}\mathrm{d}x\,, J_{3}^{q,q^{\prime}} =\int_{-1}^{1}\frac{Q_{\lambda}^{iq}(x)Q_{\lambda}^{iq^{\prime}} (-x)^{*}}{1-x^{2}}\mathrm{d}x\,, \tag{7}\] when \(\lambda=-\frac{1}{2}+i\nu\) for real \(\nu\), in terms of the Dirac delta functions \(\delta(q-q^{\prime})\) and \(\delta(q+q^{\prime})\), following the derivation described in Bielski [9]. We evaluate the \(I_{i}^{q,q^{\prime}}\) integrals from Bielski's result (4), and use these to derive the \(J_{i}^{q,q^{\prime}}\) integrals. The Legendre function with degree \(\mathrm{Re}[\lambda]=-\frac{1}{2}\) is known as the conical or Mehler function, first analyzed in the study of conics in electrostatics [10]. The conical function plays a prominent role in the Mehler-Fock transformation and, with imaginary degree, forms the energy eigenstates of the modified Poschl-Teller or Rosen-Morse barrier in quantum mechanics [11]. For a detailed study of the conical function with imaginary order, see [12]. These orthogonality relations are of special interest when normalizing the continuum spectrum of the modified Poschl-Teller model in quantum mechanics. The relations also apply to many other physical models (see, _e.g._, [8]), as the inner product in quantum mechanics includes a complex conjugation, _i.e._, \[\langle\psi_{1}|\psi_{2}\rangle=\int\psi_{1}(x)^{*}\psi_{2}(x)\mathrm{d}x\,, \tag{8}\] using Dirac's bra-ket notation. ## 2 Relevant properties of the associated Legendre function To aid the evaluation of the integrals (5)-(7), we briefly summarize several useful properties of associated Legendre functions. The associated Legendre functions \(P^{\mu}_{\lambda}(x)\) and \(Q^{\mu}_{\lambda}(x)\) are solutions of the general Legendre equation \[\frac{\mathrm{d}}{\mathrm{d}x}\left[(1-x^{2})\frac{\mathrm{d}w(x)}{\mathrm{d}x }\right]+\left[\lambda(\lambda+1)-\frac{\mu^{2}}{1-x^{2}}\right]w(x)=0\,, \tag{9}\] and are related by the equation \[Q^{\mu}_{\lambda}(x)=\frac{\pi}{2\sin(\pi\mu)}\left(P^{\mu}_{\lambda}(x)\cos( \pi\mu)-\frac{\Gamma(\lambda+\mu+1)}{\Gamma(\lambda-\mu+1)}P^{-\mu}_{\lambda} (x)\right)\,, \tag{10}\] where \(\Gamma\) denotes the gamma function [13, 14, 15, 16, 17] (_e.g._, equation (14.9.2) in [17]). The Legendre function of the first kind is often expressed as \[P^{\mu}_{\lambda}(x)=\frac{1}{\Gamma(1-\mu)}\left(\frac{1+x}{1-x}\right)^{\mu /2}\,{}_{2}F_{1}\left(-\lambda,\lambda+1;1-\mu,\frac{1-x}{2}\right)\,,\quad-1 <x<1, \tag{11}\] in terms of the hypergeometric function \[{}_{2}F_{1}(a,b;c,x)=\frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\sum_{n=0}^{\infty} \frac{\Gamma(a+n)\Gamma(b+n)}{\Gamma(c+n)}\frac{x^{n}}{n!}\,, \tag{12}\] (_e.g._, equation (14.3.1) in [17]). 
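For numerical work it is convenient to evaluate the conical functions directly from the hypergeometric representation (11). A minimal sketch with mpmath is shown below (arbitrary-precision arithmetic; the chosen values of \(\nu\), \(q\) and \(x\) are arbitrary). It also checks the invariance (14) under conjugation of the degree and the conjugation property \(P^{iq}_{\lambda}(x)^{*}=P^{-iq}_{\lambda}(x)\) used in Section 3; mpmath's built-in Legendre routines could be used instead, but here we follow eq. (11) verbatim.

```python
import mpmath as mp

mp.mp.dps = 30                      # working precision (decimal digits)

def P(lam, mu, x):
    """Ferrers function of the first kind via eq. (11)."""
    return ((1 + x) / (1 - x)) ** (mu / 2) / mp.gamma(1 - mu) \
        * mp.hyp2f1(-lam, lam + 1, 1 - mu, (1 - x) / 2)

nu, q, x = mp.mpf("0.8"), mp.mpf("1.5"), mp.mpf("0.3")
lam = mp.mpf("-0.5") + 1j * nu      # conical degree

p1 = P(lam, 1j * q, x)
p2 = P(mp.conj(lam), 1j * q, x)     # eq. (14): conjugated degree, same value
p3 = P(lam, -1j * q, x)             # sign-flipped (imaginary) order
print(p1, p2)                       # agree to working precision
print(mp.conj(p1), p3)              # agree to working precision
```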
Moreover, the Legendre function satisfies the reflection condition [13, 14, 15, 16, 17] \[P^{\mu}_{\lambda}(-x)=P^{\mu}_{\lambda}(x)\cos(\pi(\lambda+\mu))-\frac{2}{\pi}Q^{\mu}_{\lambda}(x)\sin(\pi(\lambda+\mu))\,, \tag{13}\] (_e.g._, equation (14.9.10) in [17]). For the conical functions, with \(\lambda=-\frac{1}{2}+i\nu\), conjugation of the degree leaves the Legendre functions invariant, \[P^{\mu}_{\lambda}(x)=P^{\mu}_{\lambda^{*}}(x)\,,\quad Q^{\mu}_{\lambda}(x)=Q^{\mu}_{\lambda^{*}}(x)\,, \tag{14}\] as the Legendre equation is identical, for \(\lambda(\lambda+1)=\lambda^{*}(\lambda^{*}+1)=-\frac{1}{4}-\nu^{2}\). ## 3 Orthogonality relations of conical functions The Legendre functions \(P^{iq}_{\lambda}(x)\) and \(P^{iq^{\prime}}_{\lambda}(x)\) satisfy the differential equations \[\frac{\mathrm{d}}{\mathrm{d}x}\left[(1-x^{2})\frac{\mathrm{d}P^{iq}_{\lambda}(x)}{\mathrm{d}x}\right]+\left[\lambda(\lambda+1)+\frac{q^{2}}{1-x^{2}}\right]P^{iq}_{\lambda}(x)=0\,, \tag{15}\] \[\frac{\mathrm{d}}{\mathrm{d}x}\left[(1-x^{2})\frac{\mathrm{d}P^{iq^{\prime}}_{\lambda}(x)}{\mathrm{d}x}\right]+\left[\lambda(\lambda+1)+\frac{q^{\prime 2}}{1-x^{2}}\right]P^{iq^{\prime}}_{\lambda}(x)=0\,. \tag{16}\] Multiplying the first equation by \(P^{iq^{\prime}}_{\lambda}(x)\) and the second equation by \(P^{iq}_{\lambda}(x)\), subtracting the resulting two equations, and integrating the resulting identity from \(a\) to \(b\) where \(-1<a<b<1\), we obtain upon integration by parts \[\int_{a}^{b}\frac{P^{iq}_{\lambda}(x)P^{iq^{\prime}}_{\lambda}(x)}{1-x^{2}}\mathrm{d}x=\frac{\left[P^{iq}_{\lambda}(x)(1-x^{2})\frac{\mathrm{d}P^{iq^{\prime}}_{\lambda}(x)}{\mathrm{d}x}-P^{iq^{\prime}}_{\lambda}(x)(1-x^{2})\frac{\mathrm{d}P^{iq}_{\lambda}(x)}{\mathrm{d}x}\right]^{b}_{a}}{q^{2}-q^{\prime 2}}\,. \tag{17}\] Letting \(a\to-1\) from above and \(b\to 1\) from below, we write the integral of interest as the difference between two limits \[\int_{-1}^{1}\frac{P^{iq}_{\lambda}(x)P^{iq^{\prime}}_{\lambda}(x)}{1-x^{2}}\mathrm{d}x =\lim_{b\to 1^{-}}\frac{P^{iq}_{\lambda}(b)(1-b^{2})\frac{\mathrm{d}P^{iq^{\prime}}_{\lambda}(b)}{\mathrm{d}b}-P^{iq^{\prime}}_{\lambda}(b)(1-b^{2})\frac{\mathrm{d}P^{iq}_{\lambda}(b)}{\mathrm{d}b}}{q^{2}-q^{\prime 2}}\] \[-\lim_{a\to-1^{+}}\frac{P^{iq}_{\lambda}(a)(1-a^{2})\frac{\mathrm{d}P^{iq^{\prime}}_{\lambda}(a)}{\mathrm{d}a}-P^{iq^{\prime}}_{\lambda}(a)(1-a^{2})\frac{\mathrm{d}P^{iq}_{\lambda}(a)}{\mathrm{d}a}}{q^{2}-q^{\prime 2}}\,. \tag{18}\] Bielski [9] evaluates these limits and shows that \[\int_{-1}^{1}\frac{P^{iq}_{\lambda}(x)P^{iq^{\prime}}_{\lambda}(x)}{1-x^{2}}\mathrm{d}x =-\frac{2\Gamma(iq)\Gamma(-iq)\sin(\pi\lambda)}{\Gamma(1+\lambda-iq)\Gamma(-\lambda-iq)}\delta(q-q^{\prime})\] \[+\left[\frac{\pi}{\Gamma(1-iq)\Gamma(1+iq)}+\frac{\sin^{2}(\pi\lambda)\Gamma(iq)\Gamma(-iq)}{\pi}\right.\] \[\left.+\frac{\pi\Gamma(iq)\Gamma(-iq)}{\Gamma(1+\lambda-iq)\Gamma(-\lambda-iq)\Gamma(1+\lambda+iq)\Gamma(-\lambda+iq)}\right]\delta(q+q^{\prime})\,. \tag{19}\] Using the fact that for conical functions \[P^{iq}_{\lambda}(x)^{*}=P^{-iq}_{\lambda^{*}}(x)=P^{-iq}_{\lambda}(x)\,, \tag{20}\] we find \[I^{q,q^{\prime}}_{1} =\frac{\cosh(2\pi q)+\cosh(2\pi\nu)}{q\sinh(\pi q)}\delta(q-q^{\prime})\] \[\quad+\frac{2\pi\cosh(\pi\nu)}{q\sinh(\pi q)\Gamma(\frac{1}{2}-i\nu-iq)\Gamma(\frac{1}{2}+i\nu-iq)}\delta(q+q^{\prime})\,.
\tag{21}\] The integrals \(I^{q,q^{\prime}}_{2}\) and \(I^{q,q^{\prime}}_{3}\) follow directly from relation (10), \[I^{q,q^{\prime}}_{2} =\frac{i\pi}{2\sinh(\pi q^{\prime})}\left[\cosh(\pi q^{\prime})I^{ q,q^{\prime}}_{1}-\frac{\Gamma(\frac{1}{2}-i\nu-iq^{\prime})}{\Gamma(\frac{1}{2}-i \nu+iq^{\prime})}I^{q,-q^{\prime}}_{1}\right] \tag{22}\] \[=\frac{i\pi(\sinh(2\pi q)+\sinh(2\pi\nu))}{2q\sinh(\pi q)}\delta( q-q^{\prime})\] \[\quad+\frac{i\pi^{2}\sinh(\pi\nu)}{q\sinh(\pi q)\Gamma(\frac{1}{2} -i\nu-iq)\Gamma(\frac{1}{2}+i\nu-iq)}\delta(q+q^{\prime})\,, \tag{23}\] \[I_{3}^{q,q^{\prime}} =\frac{i\pi}{2\sinh(\pi q)}\left[\frac{\Gamma(\frac{1}{2}+i\nu+iq)}{ \Gamma(\frac{1}{2}+i\nu-iq)}I_{2}^{-q,q^{\prime}}-\cosh(\pi q)I_{2}^{q,q^{ \prime}}\right] \tag{24}\] \[=\frac{\pi^{2}(\cosh(2\pi q)+\cosh(2\pi\nu))}{4q\sinh(\pi q)} \delta(q-q^{\prime})\] \[\quad+\frac{\pi^{3}\cosh(\pi\nu)}{2q\sinh(\pi q)\Gamma(\frac{1}{ 2}-i\nu-iq)\Gamma(\frac{1}{2}+i\nu-iq)}\delta(q+q^{\prime})\,. \tag{25}\] Using the reflection equation (13), we obtain the orthogonality relations for \(J_{1}^{q,q^{\prime}}\), \[J_{1}^{q,q^{\prime}} =-i\sinh(\pi(\nu+q^{\prime}))I_{1}^{q,q^{\prime}}+\frac{2}{\pi} \cosh(\pi(\nu+q^{\prime}))I_{2}^{q,q^{\prime}} \tag{26}\] \[=\frac{2\pi i}{q\Gamma(\frac{1}{2}-i\nu-iq)\Gamma(\frac{1}{2}+i \nu-iq)}\delta(q+q^{\prime})\,. \tag{27}\] Again applying equation (10), we obtain \[J_{2}^{q,q^{\prime}} =\frac{i\pi}{2\sinh(\pi q^{\prime})}\left[\cosh(\pi q^{\prime})J_ {1}^{q,q^{\prime}}-\frac{\Gamma(\frac{1}{2}-i\nu-iq^{\prime})}{\Gamma(\frac{ 1}{2}-i\nu+iq^{\prime})}J_{1}^{q,-q^{\prime}}\right] \tag{28}\] \[=\frac{\cosh(\pi(q-\nu))}{q\sinh(\pi q)}\delta(q-q^{\prime})\] \[\quad+\frac{\pi^{2}}{q\tanh(\pi q)\Gamma(\frac{1}{2}-i\nu-iq) \Gamma(\frac{1}{2}+i\nu-iq)}\delta(q+q^{\prime})\,, \tag{29}\] and \[J_{3}^{q,q^{\prime}} =\frac{i\pi}{2\sinh(\pi q)}\left[\frac{\Gamma(\frac{1}{2}+i\nu+iq )}{\Gamma(\frac{1}{2}+i\nu-iq)}J_{2}^{-q,q^{\prime}}-\cosh(\pi q)J_{2}^{q,q^{ \prime}}\right] \tag{30}\] \[=-\frac{i\pi^{3}}{2q\Gamma(\frac{1}{2}-i\nu-iq)\Gamma(\frac{1}{2 }+i\nu-iq)}\delta(q+q^{\prime})\,. \tag{31}\] Note that Bielski's formulas do not directly imply the \(J_{i}^{q,q^{\prime}}\) integrals. These expressions satisfy the following conjugation conditions which we can obtain from their definitions, \[\left(I_{1}^{q,q^{\prime}}\right)^{*} =I_{1}^{q^{\prime},q}\,, \left(I_{3}^{q,q^{\prime}}\right)^{*} =I_{3}^{q^{\prime},q}\,, \tag{32}\] \[\left(I_{2}^{q,q^{\prime}}\right)^{*} =I_{2}^{-q,-q^{\prime}}\,, \left(J_{2}^{q,q^{\prime}}\right)^{*} =J_{2}^{-q,-q^{\prime}}\,,\] (33) \[\left(J_{1}^{q,q^{\prime}}\right)^{*} =J_{1}^{q^{\prime},q}\,, \left(J_{3}^{q,q^{\prime}}\right)^{*} =J_{3}^{q^{\prime},q}\,. \tag{34}\] From the derived equations, we find that \(\left(I_{2}^{q,q^{\prime}}\right)^{*}=-I_{2}^{q^{\prime},q}\) and \(\left(J_{2}^{q,q^{\prime}}\right)^{*}=J_{2}^{q^{\prime},q}\). ## Acknowledgements The work of JF is supported by the STFC Consolidated Grant 'Particle Physics at the Higgs Centre,' and, respectively, by a Higgs Fellowship and the Higgs Chair of Theoretical Physics at the University of Edinburgh. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.
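The finite-interval identity (17), which underlies all of the limits taken above, can be verified numerically without reference to the distributional results. The sketch below does so with mpmath for one arbitrary choice of \(\nu\), \(q\), \(q'\), \(a\) and \(b\); the Ferrers function is again evaluated from its hypergeometric representation (11), and the derivatives are taken numerically.

```python
import mpmath as mp

mp.mp.dps = 30
nu, q, qp = mp.mpf("0.7"), mp.mpf("1.3"), mp.mpf("0.4")
lam = mp.mpf("-0.5") + 1j * nu

def P(mu, x):
    # Ferrers function of the first kind via eq. (11)
    return ((1 + x) / (1 - x)) ** (mu / 2) / mp.gamma(1 - mu) \
        * mp.hyp2f1(-lam, lam + 1, 1 - mu, (1 - x) / 2)

a, b = mp.mpf("-0.9"), mp.mpf("0.8")
lhs = mp.quad(lambda x: P(1j * q, x) * P(1j * qp, x) / (1 - x**2), [a, b])

def boundary(x):
    # bracketed boundary term of eq. (17), evaluated at x
    dPq = mp.diff(lambda t: P(1j * q, t), x)
    dPqp = mp.diff(lambda t: P(1j * qp, t), x)
    return P(1j * q, x) * (1 - x**2) * dPqp - P(1j * qp, x) * (1 - x**2) * dPq

rhs = (boundary(b) - boundary(a)) / (q**2 - qp**2)
print(lhs, rhs)   # the two complex values agree to the working precision
```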
2309.03267
Plasmoid identification and statistics in two-dimensional Harris sheet and GRMHD simulations
Magnetic reconnection is a ubiquitous phenomenon for magnetized plasmas and leads to the rapid reconfiguration of magnetic field lines. During reconnection events, plasma is heated and accelerated until the magnetic field lines enclose and capture the plasma within a circular configuration. These plasmoids could therefore observationally manifest themselves as hot spots that are associated with flaring behavior in supermassive black hole systems, such as Sagittarius A$^\ast$. We have developed a novel algorithm for identifying plasmoid structures, which incorporates watershed and custom closed contouring steps. From the identified plasmoids, we determine the plasma characteristics and energetics in magnetohydrodynamical simulations. The algorithm's performance is showcased for a high-resolution suite of axisymmetric ideal and resistive magnetohydrodynamical simulations of turbulent accretion discs surrounding a supermassive black hole. For validation purposes, we also evaluate several Harris current sheets that are well-investigated in the literature. Interestingly, we recover the characteristic power-law distribution of plasmoid sizes for both the black hole and Harris sheet simulations. This indicates that while the dynamics are vastly different, with different dominant plasma instabilities, the plasmoid creation behavior is similar. Plasmoid occurrence rates for resistive general relativistic magnetohydrodynamical simulations are significantly higher than for the ideal counterpart. Moreover, the largest identified plasmoids are consistent with sizes typically assumed for semi-analytical interpretation of observations. We recover a positive correlation between the plasmoid formation rate and a decrease in black-hole-horizon-penetrating magnetic flux. The developed algorithm has enabled an extensive quantitative analysis of plasmoid formation in black hole accretion simulations.
Jesse Vos, Hector Olivares, Benoit Cerutti, Monika Moscibrodzka
2023-09-06T18:00:01Z
http://arxiv.org/abs/2309.03267v2
# Plasmoid identification and statistics in two-dimensional Harris sheet and GRMHD simulations ###### Abstract Magnetic reconnection is a ubiquitous phenomenon for magnetized plasma and leads to the rapid reconfiguration of magnetic field lines. During reconnection events, plasma is heated and accelerated until the magnetic field lines enclose and capture the plasma within a circular configuration. These plasmoids could therefore observationally manifest themselves as hot spots that are associated with flaring behaviour in supermassive black hole systems, such as Sagittarius A*. We have developed a novel algorithm for identifying plasmoid structures, which incorporates watershed and custom closed contouring steps. From the identified plasmoids, we determine the plasma characteristics and energetics in magnetohydrodynamical simulations. The algorithm's performance is showcased for a high-resolution suite of axisymmetric ideal and resistive magnetohydrodynamical simulations of turbulent accretion discs surrounding a supermassive black hole. For validation purposes, we also evaluate several Harris current sheets that are well-investigated in the literature. Interestingly, we recover the characteristic power-law distribution of plasmoid sizes for both the black hole and Harris sheet simulations. This indicates that while the dynamics are vastly different, with different dominant plasma instabilities, the plasmoid creation behaviour is similar. Plasmoid occurrence rates for resistive general relativistic magnetohydrodynamical simulations are significantly higher than for their ideal counterpart. Moreover, the largest identified plasmoids are consistent with sizes typically assumed for semi-analytical interpretation of observations. We recover a positive correlation between the plasmoid formation rate and a decrease in black-hole-horizon-penetrating magnetic flux. These results demonstrate the efficacy of the newly developed algorithm which has enabled an extensive quantitative analysis of plasmoid formation for black hole accretion simulations. keywords: accretion, accretion discs - black hole physics - magnetic reconnection - MHD - methods: numerical ## 1 Introduction Flaring events, at the X-ray and infrared wavelengths, are known to occur on a daily basis for the supermassive black hole (SMBH) at the center of the Milky Way, Sagittarius A\({}^{*}\) (hereafter Sgr A\({}^{*}\), Baganoff et al., 2001; Genzel et al., 2003; Eckart et al., 2004; Witzel et al., 2021). The SMBH has an estimated mass of \(M\approx 4\times 10^{6}\ M_{\odot}\) and lies at a distance of \(D\approx 8\) kpc as was established by long-term monitoring programs of the source and dynamics of orbiting stars (Ghez et al., 2008; Gillessen et al., 2009, 2017; Gravity Collaboration et al., 2018, 2019; Do et al., 2019). At sub-mm / mm wavelengths, Sgr A* is known to be a stochastically (\(O\) (10% over hours) variable source that is associated with the (stereotypical) _quiescent_ accretion state. While flares at NIR/X-ray wavelengths correspond to significant increases in flux, _"flaring"_ events at mm-wavelengths are typically hard to disentangle from the background variability (EHTC et al., 2022; Wiglgus et al., 2022). Recently, it was shown that mm-wavelength light curves observed with the Atacama Large Millimeter/submillimeter Array suggest orbital motion of a hotspot quickly after an X-ray flare (Wielgus et al., 2022). Previously, this was also established in the NIR band (Gravity Collaboration et al., 2018). 
The physical mechanism that causes these flares is currently not well-understood, but a number of working theories associate them with strongly magnetized anisotropies in the accretion flow (Broderick and Loeb, 2005, 2006; Gravity Collaboration et al., 2020; Dexter et al., 2020; Porth et al., 2021; Vos et al., 2022, 2023; Ripperda et al., 2022). One such scenario that may explain these flares and the formation of hot spots is the formation of plasmoids as part of a magnetic reconnection event (e.g., Ripperda et al., 2020; Ripperda et al., 2022; El Mellah et al., 2023). This is a phenomenon that occurs in a vast number of astrophysical sources, including pulsar wind nebulae, magnetars, black hole and neutron star magnetospheres, or relativistic jets of active galactic nuclei (Kagan et al., 2015). Magnetic reconnection (Uzdensky, 2022, for a review) can broadly be thought of as a rapid reconfiguration of the magnetic field geometry at the interface of opposite polarity magnetic fields that results in the formation of a magnetic island with a typical circular magnetic field morphology. After the closing of the magnetic field lines, plasma is trapped within the magnetic field structure, creating what is known as a plasmoid. The reconfiguration is often accompanied by particle acceleration to high (non-thermal) energies (Werner et al., 2017) - effectively converting electromagnetic energy into particle kinetic energy (thermal and non-thermal). A theoretical description for the large-scale dynamics of magnetic reconnection in idealized configurations was established by Sweet (1958) and Parker (1957). This picture is, however, too simplistic for our purposes as it does not deal with plasmoid formation. To model the plasmoid-unstable regime, one has to adopt a numerical approach via particle-in-cell (PIC) or magnetohydrodynamical (MHD) simulations. Both methods will be outlined in detail in the following paragraphs. Fully kinetic PIC methods generally assume a collisionless description that consists of ion-electron, electron-positron (pair), or ion-pair plasma (Kagan et al., 2015, for a review). These methods are considered first-principles as they naturally impose both a spatial ("skin depth" \(c/\omega_{p}\)) and temporal (\(\omega_{p}^{-1}\)) scale via the plasma oscillation frequency \(\omega_{p}=\sqrt{4\pi nq^{2}/(w_{m}m)}\), where \(n\), \(q\), \(m\), \(w_{m}\) are the particle number density, charge, mass, and enthalpy, respectively. While MHD methods only describe the plasma's bulk motion and characteristics, PIC methods track the velocities, trajectories, and energies of individual particles. Collisionless plasma studies have been conducted to investigate various physical scenarios; isolated (Harris) current sheets in 2D (Cerutti et al., 2012; Nalewajko et al., 2015; Kagan et al., 2016; Sironi et al., 2016; Petropoulou & Sironi, 2018) and 3D (Sironi & Spitkovsky, 2014; Cerutti et al., 2014; Guo et al., 2016; Werner & Uzdensky, 2017), configurations investigating magnetic turbulence (Comisso & Sironi, 2019; Borgogno et al., 2022; Bacchini et al., 2022), and (general-relativistic) accretion simulations describing plasma within the magnetosphere of compact objects (for black holes; Parfrey et al., 2019; Crinquand et al., 2020, 2021; El Mellah et al., 2022, 2023, or neutron stars; Chen & Beloborodov, 2014; Philippov & Spitkovsky, 2018; Guepin et al., 2020; Cerutti & Giacinti, 2021).
Although PIC methods are instrumental in, e.g., understanding the origin of non-thermal emission, they remain confined to microscopic plasma scales, which makes interpretation at astrophysically large scales difficult. General relativistic magnetohydrodynamical (GRMHD) methods have been extensively and successfully used to describe the macroscopic picture of accretion onto SMBHs (for M87*; EHTC et al., 2019, 2021; EHTC et al., 2021, 2022, 2021, for Sgr A*; EHTC et al., 2022, 2022). Almost exclusively, one assumes an ideal GRMHD description that is an inadequate framework for capturing magnetic reconnection and the formation of plasmoids (Ripperda et al., 2020), as the plasmoid-instability is triggered due to numerical limits rather than consistently resolving the underlying current sheet. Resistive GRMHD does give a scale to the current sheet and makes it resolvable (Ripperda et al., 2019, and reference therein) by means of imposing a constant resistivity (\(\eta\)) in the simulations. While the physical resistivity is likely spatially and temporally variable, a uniform scalar resistivity already helps to consistently capture the dynamics associated with magnetic reconnection in the accretion flow. Even though not physically or numerically well-constrained, we point out that magnetic reconnection and plasmoid formation does occur in ideal GRMHD, where numerical limits effectively impose the minimally achievable resistivity. In this work, we investigate plasmoid formation from fast relativistic reconnection for plasmoid-forming astrophysical plasma in both ideal and resistive GRMHD. To be able to assess the plasmoid formation dynamics, we need to address another, equally important aspect which is that plasmoid structures are difficult to isolate from their surroundings. Therefore, we have developed a novel analysis algorithm for detecting them. It deviates significantly from plasmoid-finding methods employed previously for GRMHD simulations (Nathanail et al., 2020) and is more akin to the methods employed in PIC studies by Sironi et al. (2016); Hakobyan et al. (2019, 2021). However, as in a fluid MHD description one does not have the luxury of individual particle trajectories, we apply our analysis fully in post-processing which gives it more flexibility. Using our methodology, we investigate the differences in occurrence rate, morphology, size, and typical plasma parameters of plasmoids in both ideal and resistive GRMHD for a newly created suite of 2.5D simulations with exquisite resolution. To showcase the validity and high fidelity of the algorithm we also apply it to a set of Harris current sheet simulations that are equally well-resolved. The paper is structured as follows. An in-depth description of the methods we use to simulate and identify these features are outlined in Section 2. The results and their interpretation are presented in Section 3. The discussion and conclusion can be found in Sections 4 and 5. ## 2 Methods In the following sections, we will describe the algorithm that identifies the plasmoids and outline the two setups we investigate. ### Relativistic MHD primer: ideal and resistive The plasma flows of both the Harris sheet and BH accretion disc are simulated within the framework of the Black Hole Accretion Code (BHAC, Porth et al., 2017; Olivares et al., 2019), which solves the (resistive; Ripperda et al., 2019) MHD equations in stationary spacetimes. 
These equations are defined as; \[\nabla_{\mu}(\rho u^{\mu})=0, \tag{1}\] \[\nabla_{\mu}T^{\mu\nu}=0,\] (2) \[\nabla_{\mu}{}^{\star}F^{\mu\nu}=0, \tag{3}\] where \(\nabla_{\mu}\) denotes the covariant derivative, \(\rho\) the rest-mass density, \(u^{\mu}\) the fluid four-velocity, \(T^{\mu\nu}\) the energy-momentum tensor (containing both ideal fluid and electromagnetic fields), and *\(F^{\mu\nu}\) the (Hodge) dual of the Faraday tensor. BHAC is a versatile code that sets the speed of light \(c\) to unity and utilises Lorentz-Heaviside units, which effectively incorporates the \(\sqrt{4\pi}\) factors into the electromagnetic quantities. In this work, we utilize both ideal and resistive MHD. The main difference between both these approaches is the way they handle the evolution of the electric field, which is denoted by \[\mathbf{E}=-\mathbf{v}\times\mathbf{B}+\eta\mathbf{J}. \tag{4}\] Note that the resistivity is denoted by \(\eta=1/\sigma_{c}\) where \(\sigma_{c}\) is the conductivity. While in resistive MHD the electric field (\(\mathbf{E}\)) includes an explicit calculation of the resistive Ohm's law to get an expression for \(\mathbf{J}\), in ideal MHD it is inferred directly from the magnetic field (via \(\mathbf{E}=-\mathbf{v}\times\mathbf{B}\), also known as the "frozen-in condition"). Effectively, one assumes the plasma to be perfectly conducting (\(\sigma_{c}\rightarrow\infty=\eta\to 0\)) in the ideal MHD limit, which is a useful and macroscopically valid approximation in large parts of the accretion disc domain but not when it comes to the formation of plasmoids and other non-ideal effects. More specifically, the resistivity \(\eta\) is not exactly zero in the ideal case (except for infinite resolution), but rather determined numerically by the underlying resolution (or cell size \(\Delta x\)) which implies that \(\eta_{\rm{ide}}\propto\Delta x^{k}\) with \(k\approx 2\) depending on the accuracy of the fluid evolution scheme (Ripperda et al., 2022). The physical interpretation of the resistivity \(\eta\) is that it acts as a proxy for kinetic effects within the plasma. We investigate plasmoid formation from fast relativistic plasmoid-dominated reconnection. Whether the plasma becomes plasmoid unstable is determined by the Lundquist number \(S=L^{\prime}v_{a}/\eta\), with typical length of the current sheet \(L^{\prime}\) and the Alfven velocity \(v_{a}\) (see section 2.3 for definition). In order to trigger the fast reconnection and tearing- or plasmoid-unstable regime, the Lundquist number needs to satisfy \(S>S_{\rm crit}\) where \(S_{\rm crit}\sim 10^{4}\)(Loureiro et al., 2007; Bhattacharjee et al., 2009; Uzdensky et al., 2010). Note that the Lundquist number is largely determined by the underlying resistivity (\(\eta=5\cdot 10^{-5}\)) which is set as a constant and uniform quantity in our resistive simulations. Then, if we estimate probable values of \(L^{\prime}\approx 1\) and \(v_{a}\approx c=1\), we find \(S=2\times 10^{4}\) which lies above the threshold. At first glance, for the ideal simulations, one might think that as \(\eta_{\rm side}\) is very small it reaches a sufficiently high Lundquist number. Even though this is the case, the resulting current sheet will always be under-resolved (as it is determined by the underlying resolution) and typically has a width comparable to a singular grid cell (Ripperda et al., 2020). 
This indicates that the tearing-instability is not triggered in the same way as for the resistive simulation and will likely result in differences in plasmoid formation statistics. ### Plasmoid identification The starting point of our plasmoid identification routine lies in finding a quantity that lays bare the intrinsically circular magnetic field geometry. A natural choice for this identification quantity would then fall to the magnetic flux function; \[\Psi_{\rm B}\stackrel{{\rm def}}{{=}}\int\sqrt{-g}B^{r}\ d\theta, \tag{5}\] \[\stackrel{{\rm def}}{{=}}\int B^{x}\ dy-\int B^{y}\ dx \tag{6}\] where \(\sqrt{-g}\) is the metric determinant. Note that \(\sqrt{-g}\,B^{r}\) corresponds to the magnetic field in the Eulerian frame and that the magnetic flux function \(\Psi_{\rm B}\) corresponds to \(A_{\phi}\), except for a minus sign discrepancy (Sironi et al., 2016). The magnetic flux function \(\Psi_{\rm B}\) is a good choice as its isocontours will follow the in-plane magnetic field lines (i.e., \({\bf B}\cdot\nabla\Psi_{\rm B}=0\)). More specifically, as plasmoids are characterized by their circular magnetic field configuration, the plasmoid center will correspond to the local maxima or minima in \(\Psi_{\rm B}\) ("O-points"). Due to our methodology, the base \(\Psi_{\rm B}\) structure is not the ideal starting-point of the pipeline. We, therefore, work with the following quantity; \[\widetilde{\Psi}_{\rm B}=\bar{\Psi}_{\rm B}-\Psi_{\rm B}, \tag{7}\] where the scalar \(\bar{\Psi}_{\rm B}\) denotes the (image-)averaged flux function at a given time. The removal of the averaged flux function allows for clearer identification of plasmoids in \(\widetilde{\Psi}_{\rm B}\). Now that we have a suitable quantity from which to start identifying the plasmoids, we need a method that is able to classify the magnetic island structure reliably. For this purpose, we have developed an algorithm that consists of four steps:

1. All simulations contain a lot of fine-structure in the magnetic flux function. This makes it hard to differentiate between (magnetic) turbulence and more global features that correspond to the presence of a plasmoid. Therefore, to filter out much of the turbulence, we apply a blurring (Gaussian or flat) kernel to the flux function (\(\widetilde{\Psi}_{\rm B}\)). This also gives us control over the size of the features we want to be sensitive to. The blurring step, however, requires (manual) fine-tuning depending on the resolution and nature of the setup. Interestingly, extracting the global structure of the highly turbulent primary plasmoids requires the most blurring, while the GRMHD simulations are well-served with a fairly light blurring method.
2. Following the blurring step, we identify the local minima or maxima that will correspond to the plasmoid's center.
3. Then, we apply a watershed algorithm (well-described in, e.g., Beucher & Meyer, 2018) to isolate the domain of interest around the local minimum. We have chosen an implementation that is based on Vincent & Soille (1991). The watershed segmentation is then used to make an informed cut-out of the domain that will contain a single (local) extremum, so that we have control over what is being fitted while simultaneously improving the quality of the fit.
4. Lastly, we draw the largest possible closed contour within the isolated segment.
Utilizing the inherent symmetry in the systems, we sample the space efficiently by means of a binary search from opposite sides (i.e., left and right from center along \(\hat{x}\) for the Harris sheet and inner and outer radii along \(\hat{r}\) for the GRMHD setups). The resulting contour enables us to gauge the plasmoid's size and orientation, and enables calculations of the plasma quantities associated with the plasmoid and its direct vicinity. In Fig. 1, one finds a schematic summary of the points discussed above. Additionally, it becomes clear that both setups differ fundamentally from one another and therefore warrant a different configuration of the algorithm. The main differences will be summarized below. (i) As the Harris sheet setups have periodic boundaries, one needs to be careful to catch plasmoids that are on the boundary. (ii) Additionally, capturing both "big" and "small" plasmoids in the Harris sheet setup required two different approaches, mainly concerning the blurring kernel. For the big features, one has to apply to a relatively small kernel many times (several hundred times works well in our experience) to not flatten out the global structure too much. To capture the smallest features, one only has to apply the small blurring kernel a few times. One acquires the master set by combining the output from both described configurations following the ULS criterion. (iii) For GRMHD, one has to take into account that resolution is concentrated near the black hole and in the equatorial plane and therefore has non-uniform cell-sizes. (iv) Due to this non-uniform grid layout (for the GRMHD simulations), applying a kernel blur manifests itself differently in various regions of the simulation domain. When applying a relatively small blurring kernel, this effect is minor and manageable. If this is not sufficient, then we interpolate the data to an uniform grid structure. Lastly, we would like to note more explicitly how plasmoids are identified using the magnetic flux function in other works. In essence, one identifies plasmoids via the so-called "O"- and "X"-points. O-points corresponds to the local minima and maxima of the magnetic flux function and denote the center of a plasmoid. X-points are saddle points and lie in between O-points. Along a current sheet one therefore expects these points to succeed one another. One typically finds the extrema by calculating the Hessian matrix of the magnetic flux function (Servidio et al., 2009; Servidio et al., 2010; Zhdankin et al., 2013; Kadowaki et al., 2018; Zhou et al., 2020) via; \[H_{ij}^{\Psi_{\rm B}}({\bf x})=\frac{\partial^{2}\Psi_{\rm B}({\bf x})}{ \partial x_{i}\partial x_{j}}. \tag{8}\] Then, one calculates the matrix determinant of the Hessian (\(|H^{\Psi_{\rm B}}|\)) to find the critical points that correspond to \(|H^{\Psi_{\rm B}}({\bf x})|=0\) at a given coordinate \({\bf x}\). The eigenvalues of the Hessian then determined if we have an O-point if it is a local minima (positive definite Hessian) or maxima (negative definite Hessian). For an X-point, one finds both positive and negative eigenvalues of the Hessian (Servidio et al., 2010). However, in our methodology, there is no need to explicitly calculate the Hessian to identify the O- and X-points as these are naturally picked up by the watershed algorithm. The X-points, which are typically harder to identify (Zhdankin et al., 2013), will lie on the border of a watershed segment. For the O-points, we straight-forwardly calculate the local extrema in a segment. 
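The four identification steps and the O-/X-point logic described above map almost one-to-one onto standard image-processing routines. The following is a minimal sketch, assuming \(\widetilde{\Psi}_{\rm B}\) has been interpolated onto a uniform 2D grid and that the plasmoid centres of interest appear as local maxima; the kernel width, peak separation and number of trial iso-levels are illustrative parameters, not the values used for the production runs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import find_contours
from matplotlib.path import Path

def o_points_and_basins(psi, sigma=2.0, min_distance=10):
    """Steps 1-3: blur the flux function, find O-point candidates as local
    maxima of the blurred map, and carve out one watershed basin per candidate
    (the basin borders approximate the X-points)."""
    psi_s = gaussian_filter(psi, sigma)                       # step 1: blur
    peaks = peak_local_max(psi_s, min_distance=min_distance)  # step 2: extrema
    markers = np.zeros(psi.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    basins = watershed(-psi_s, markers)                       # step 3: segment
    return psi_s, peaks, basins

def largest_closed_contour(psi_s, peak, basin_mask, n_levels=50):
    """Step 4 (simplified): lower the iso-level away from the O-point value and
    keep the last level whose contour is closed, stays inside the basin and
    still encloses the O-point."""
    masked = np.where(basin_mask, psi_s, psi_s[basin_mask].min())
    levels = np.linspace(psi_s[tuple(peak)], psi_s[basin_mask].min(), n_levels)
    best = None
    for level in levels[1:-1]:
        hit = None
        for c in find_contours(masked, level):
            if np.allclose(c[0], c[-1]) and Path(c).contains_point(peak):
                hit = c
                break
        if hit is None:
            break          # contour opened up or left the basin: stop here
        best = hit         # otherwise keep growing the contour outwards
    return best
```

Plasma quantities inside the resulting contour can then be averaged with metric-weighted cell areas, which is how the plasmoid characteristics and energetics are extracted later on.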
Finding the critical points in these turbulent maps is a complicated endeavour, as is also illustrated by the computationally intensive mitigation techniques employed in Servidio et al. (2010). Our methodology works around this problem in a relatively natural manner, but this implies that we do not know the exact orientation of the current sheet as the X-point locations are not calculated (other than being on the watershed segment's border). Additionally, one can end up with two O-points per watershed segment, but this is straightforwardly mitigated by the contour-finding algorithm as it only selects the contour enclosing the O-point in question. Even though we may sacrifice some accuracy, our methodology saves us from having to employ (relatively) computationally and memory intensive mitigation strategies and will therefore provide a significant speed-up with respect to, e.g., Servidio et al. (2010). ### Harris sheet configuration To validate the methodology for a well-known case, we investigate a relativistic 2D Harris sheet in resistive MHD. The implementation is broadly based on what was prescribed for the Geospace Environmental Modeling (GEM) challenge (Birn et al., 2001; Birn & Hesse, 2001, also in Goedbloed et al., 2010). We start with a (wide) rectangular box with periodic boundary conditions on all sides and initialize two sheets of matter on top of an uniform background density that is scaled with \(\rho_{0}\); \[\rho=\rho_{0}\left[\cosh^{-2}\left(\frac{y+L_{y}/2}{\delta}\right)+\cosh^{-2} \left(\frac{y-L_{y}/2}{\delta}\right)+f_{\rm Bg}\right]\,, \tag{9}\] where \(L_{x}\), \(L_{y}\), \(f_{\rm Bg}\), and \(\delta\) are the box half-size in \(\hat{x}\) and \(\hat{y}\), the background factor, and the layer half-thickness, respectively. The initial values that were used for these parameters (and others) are denoted in Table 1. We assume an uniform resistivity; \(\eta=5\cdot 10^{-5}\), and an initialized magnetic and electric field according to \[B^{x}=\begin{cases}\ B_{0}\tanh\left(\frac{y-L_{y}/2}{\delta}\right)+B_{0} \epsilon_{p}&\text{for}\quad y>0\\ -B_{0}\tanh\left(\frac{y-L_{y}/2}{\delta}\right)+B_{0}\epsilon_{p}&\text{for} \quad y<0\end{cases}\,, \tag{10}\] \[B^{y}=B_{0}\epsilon_{p}, \tag{11}\] \[B^{z}=0, \tag{12}\] \[E^{x}=E^{y}=E^{z}=0. \tag{13}\] Here, \(\epsilon_{p}\) denotes a (1%) white noise perturbation to the magnetic field that varies between \(-0.01\) and \(0.01\). This perturbation is similar to what is introduced (more naturally) for PIC simulations. Note Figure 1: A schematic decomposition of the plasmoid identification algorithm. In the _top_ panels (a-d), we display a snapshot of a GRMHD simulation (\(\pi\)N3 at \(T=3000\leavevmode\nobreak\ r_{\rm g}/c\)) at various points in the pipeline. In the _bottom_ panels (e-h), we find the same but for one of the Harris sheet cases (lb at \(T=2.93\leavevmode\nobreak\ t_{\rm c}\)). In the left column (panels a&e), one finds the base magnetic flux function \(\nabla_{\rm B^{-}}\) the starting point. To apply the watershed (panels c&g), one needs to make sure that the plasmoid corresponds to a local minimum which is done with the quantity -\(\vec{\nabla}_{\rm B}\) (panels b&g). The last column (panels d&h) showcases how the maximal contour is found for the watershed segment and how the plasmoid’s width and height are determined (between the orange diamonds). The evaluated O-point is denoted by the black circles. Other O-points in the displayed simulation domain are denoted by the open grey circles. 
that we do not apply the typical guiding magnetic field perturbation that guides the initial plasmoids to the edges and creates a well-controlled reconnection region in the middle of the simulation domain (as prescribed for the GEM challenge, also in Keppens et al., 2013). To acquire pressure equilibrium at initialization we define the fluid pressure to be \[p=\frac{B_{0}^{2}}{2}\frac{\rho}{\rho_{0}}. \tag{14}\] Additionally, we define the length and time scale as a function of system length \(L=2\,L_{x}\), so that \((x,y)\in[-0.5L,0.5L]\times[-0.125L,0.125L]\) for Hb and \((x,y)\in[-0.5L,0.5L]\times[-0.25L,0.25L]\) for Hs with a typical time unit of \(t_{c}=L/c\) (see Table 1). For completeness, we note that the computational length unit is \(l=1\) with corresponding time-scale \(l/c=1\), which both reduce to unity due to the geometrical unit assumption (\(G=c=1\)). If one were interested in relating the initial layer half-thickness \(\delta\) (see Table 1) to the resistivity \(\eta\), then one finds that \(\delta/\eta=1000\) (\(\delta/\eta=2000\)) for Hb (Hs). Nevertheless, we will connect it to a more intrinsic plasma-physical timescale in our unit set in the following paragraph. This is typically determined by the upstream Alfven velocity \(v_{a}\), which is defined as \[v_{a}=\frac{B}{\sqrt{\rho h+B^{2}}}=\frac{\sqrt{\sigma}}{\sqrt{1+\sigma}}, \tag{15}\] where \(h=1+\hat{\gamma}p/[(\hat{\gamma}-1)\rho]\) is the specific enthalpy with adiabatic index \(\hat{\gamma}=13/9\) and \(B=\sqrt{B^{i}B_{i}}\) denotes the magnetic field strength. Additionally, the ('hot') magnetization is defined as \(\sigma=B^{2}/\rho h\). While we will primarily use the light-crossing time, it is worthwhile to connect it to the Alfven and (resistive) diffusion timescales of the system, which then become \(\tau_{a}\approx L^{\prime}/v_{a}\) and \(\tau_{d}\approx L^{\prime 2}/\eta\) with \(L^{\prime}\) being the current sheet's length (Ripperda et al., 2019). Figure 2 gives an overview of the evolution of the Harris sheet (for the Hb case). From the magnetization (\(\sigma\)) panels, we find that \(\sigma\sim 5\) near the sheet, which indicates an upstream Alfven velocity \(v_{a}\sim c\). Then, one can determine the Lundquist number via \(S=\tau_{d}/\tau_{a}\), but it becomes clear that \(\tau_{d}\) is very large and \(\tau_{a}\sim L^{\prime}\), which indicates that \(S\) will be similarly large. Lastly, we would like to note that all boundaries are fully periodic (similar to Keppens et al., 2013; Takamoto, 2013; Cerutti et al., 2013, 2014, and some quasi-periodic works in Sironi and Spitkovsky, 2014; Petropoulou and Sironi, 2018). This implies that no matter is lost, so that the evolution eventually saturates after having formed several 'monster' plasmoids that effectively act as a reservoir spanning a large part of the simulation domain. Up to a point, each sheet will evolve independently and uniquely due to the minor non-uniform perturbation to the initialized magnetic field, but when the primary plasmoids become too large the sheets are influenced by one another. Another approach has outflowing boundaries at the short sides of the box, corresponding to the y-boundaries in our simulation (Loureiro et al., 2012; Sironi et al., 2016). This tends to give less chaotic current sheets and allows for longer evolution times as, for periodic boundaries, the large plasmoids will eventually affect the opposing current sheet.
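For orientation, the initial conditions (9)-(14) and the derived quantities (15) can be set up in a few lines. The sketch below uses placeholder parameter values (the actual ones are listed in Table 1, which is not reproduced here), and it assumes the lower current sheet is centred on \(y=-L_{y}/2\), i.e. the \(y<0\) branch of eq. (10) is written with \((y+L_{y}/2)/\delta\) so that the field reverses across that sheet.

```python
import numpy as np

# Placeholder setup values (see the paper's Table 1 for the actual numbers)
L_x, L_y, delta, f_bg = 1.0, 0.25, 0.01, 0.1
B0, rho0, eta, gamma_hat = 1.0, 1.0, 5e-5, 13.0 / 9.0

x = np.linspace(-L_x, L_x, 4096)
y = np.linspace(-L_y, L_y, 1024)
X, Y = np.meshgrid(x, y)

# Eq. (9): two sheets on top of a uniform background density
rho = rho0 * (np.cosh((Y + L_y / 2) / delta) ** -2
              + np.cosh((Y - L_y / 2) / delta) ** -2 + f_bg)

# Eqs. (10)-(12): reversing in-plane field plus 1% white-noise perturbation
# (lower branch centred on the lower sheet; assumption, see the note above)
eps_p = 0.01 * (2.0 * np.random.default_rng(0).random(X.shape) - 1.0)
Bx = np.where(Y > 0,
              B0 * np.tanh((Y - L_y / 2) / delta),
              -B0 * np.tanh((Y + L_y / 2) / delta)) + B0 * eps_p

# Eq. (14): isothermal pressure balancing the upstream magnetic pressure
p = 0.5 * B0 ** 2 * rho / rho0

# Eq. (15): specific enthalpy, 'hot' magnetization and upstream Alfven speed
h = 1.0 + gamma_hat * p / ((gamma_hat - 1.0) * rho)
sigma = B0 ** 2 / (rho * h)
v_a = np.sqrt(sigma / (1.0 + sigma))

# Lundquist number S = tau_d / tau_a ~ L' v_a / eta for a sheet of length L'
L_sheet = 1.0
print(f"sigma_max ~ {sigma.max():.1f}, v_a ~ {v_a.max():.2f} c, "
      f"S ~ {L_sheet * v_a.max() / eta:.1e}")
```

With numbers of this order, \(S\) exceeds \(S_{\rm crit}\sim 10^{4}\), consistent with the estimate quoted above.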
The periodic Harris sheet simulations are primarily meant to have another verification case for the identification algorithm, but tend to display more complex behavior than what is found for the outflowing variety, especially combined with a global magnetic field perturbation (so that \(\mathrm{sign}(x)\cdot u_{x}\gtrsim 0\); Loureiro et al., 2012; Sironi et al., 2016). Nevertheless, we did make sure that the magnetization was comparable to the GRMHD simulations. ### GRMHD configuration To evolve the accretion disc surrounding the BH we utilize the Modified Kerr-Schild (MKS) coordinate system (that is clearly described in McKinney and Gammie, 2004; Porth et al., 2017). As the Kerr-Schild (KS) metric is well-documented (Misner et al., 1973), we will only comment on the modification from the standard KS coordinates \((t,r,\theta,\phi)\), which is done via; \[r=R_{0}+e^{s}, \tag{16}\] \[\theta=\vartheta+\frac{h}{2}\sin(2\vartheta). \tag{17}\] Here, \(s\) and \(\vartheta\) are the code's internally used coordinates, which can be converted to KS coordinates with the listed relations. We will exclusively show results in KS coordinates \(r\) and \(\theta\). All our GRMHD simulation use user-defined parameters \(h=0.25\) and \(R_{0}=0\), which implies that the resolution of the underlying grid will be more concentrated in the equatorial plane. Before continuing, we would like to outline a few specifics about the \(3+1\) split that is employed in BHAC. The line element is described as follows; \[ds^{2}=-\alpha^{2}dt^{2}+\gamma_{ij}(dx^{i}+\beta^{i}dt)(dx^{j}+\beta^{j}dt), \tag{18}\] with \(\alpha\), \(\beta\), \(\gamma\) denoting the lapse, shift, and geometric part of the metric (\(g^{\mu\nu}\)), where Roman characters \(i,j\in\{1,2,3\}\) and Greek characters \(\mu,\nu\in\{0,1,2,3\}\). The metric determinant is then defined as \(\sqrt{-g}=\alpha\sqrt{\gamma}\). Consistent with the conventions introduced in Porth et al. (2017), we denote electromagnetic quantities in the Eulerian frame with capitalized letters while lower-case letters denote quantities in the co-moving fluid (or plasma) frame. With Eulerian frame, we imply an Eulerian observer that is moving with four-velocity \(n_{\mu}=\{-\alpha,0,0,0\}\) (or contravariantly; \(n^{\mu}=\{1/\alpha,\beta^{i}/\alpha\}\)). In this work, we will only consider Magnetically Arrested Disc (MAD, Igumenschchev et al., 2003; Narayan et al., 2003) models which are initialized via the vector potential \[A_{\phi}\propto\max\left(\frac{\rho}{\rho_{\mathrm{max}}}\left(\frac{r}{r_{ \mathrm{in}}}\right)^{3}\sin^{3}\theta\exp\left(-\frac{r}{400}\right)-0.2,\ 0\right). \tag{19}\] The simulations are initialize with a torus that is in hydrodynamic equilibrium (Fishbone and Moncrief, 1976), except for a perturbation to the fluid pressure \(p\), and is threaded by a single poloidal magnetic field loop (that is initialized via \(\mathbf{B}=\nabla\times\mathbf{A}\) with \(\mathbf{A}=(0,0,A_{\phi})\)). The inner and pressure maximum radii of the torus that determine the size and available matter are set to \(r_{\mathrm{in}}=20r_{\mathrm{g}}\) and \(r_{\mathrm{max}}=41r_{\mathrm{g}}\) for a black hole spin of \(a_{*}=0.9375\). The scaling of the vector potential is set so that \(\beta=p/p_{\mathrm{mag}}=100\), with \(p\) being the gas pressure and \(p_{\mathrm{mag}}\) the magnetic pressure. Other user-defined parameters of the evaluated configurations can be found in Table 2. 
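The coordinate stretching (16)-(17) and the MAD seed field (19) are compact enough to restate directly. The sketch below is illustrative only: the normalization of \(A_{\phi}\) (fixed afterwards by the \(\beta=100\) requirement), the staggered-grid curl and the Fishbone-Moncrief torus itself are omitted, and the defaults \(h=0.25\), \(R_{0}=0\) are the values quoted in the text.

```python
import numpy as np

def mks_to_ks(s, vartheta, h=0.25, R0=0.0):
    """Eqs. (16)-(17): map the code-internal MKS coordinates (s, vartheta)
    to Kerr-Schild (r, theta); h > 0 concentrates cells near the midplane."""
    r = R0 + np.exp(s)
    theta = vartheta + 0.5 * h * np.sin(2.0 * vartheta)
    return r, theta

def mad_seed_vector_potential(r, theta, rho, rho_max, r_in=20.0):
    """Eq. (19): single poloidal loop (up to an overall normalization) used
    to thread the initial torus; negative values are clipped to zero."""
    a_phi = (rho / rho_max) * (r / r_in) ** 3 * np.sin(theta) ** 3 \
            * np.exp(-r / 400.0) - 0.2
    return np.maximum(a_phi, 0.0)
```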
For completeness, we will note that the less magnetized accretion scenario is known as the Standard And Normal Evolution model (hereafter SANE, De Villiers et al., 2003; Narayan et al., 2012; Sadowski et al., 2013). ### Energetics An important objective in this work is to quantify whether plasmoids are able to produce flaring events or create hot spots that would stand out with respect to the background. Therefore, we associate the electromagnetic, kinetic, and thermal fluid energies with their corresponding components of the stress-energy tensor \(T^{\mu\nu}\); \[\epsilon_{\rm em}=-T_{\rm EM}{}^{t}{}_{t}=-(b^{2}+e^{2})\left(u^{t}u_{t}+\tfrac{1}{2}g^{t}{}_{t}\right)+b^{t}b_{t}+e^{t}e_{t}+\frac{u_{\lambda}e_{\beta}b_{\kappa}}{\sqrt{-g}}\left(u^{t}\eta_{t}{}^{\lambda\beta\kappa}+u_{t}\eta^{t\lambda\beta\kappa}\right), \tag{20}\] \[\epsilon_{\rm kin}=-T_{\rm PAKE}{}^{t}{}_{t}=-(u_{t}+1)\,\rho u^{t}, \tag{21}\] \[\epsilon_{\rm th}=-T_{\rm EN}{}^{t}{}_{t}=-(\epsilon+p)u^{t}u_{t}-p. \tag{22}\] Here, the hereto unexplained quantities are \(\epsilon\), \(p\), and \(\eta^{\nu\lambda\beta\kappa}\), which are the specific internal energy, the fluid pressure, and the fully antisymmetric symbol, respectively. \(\epsilon_{\rm em}\) denotes the electro-magnetic energy density (Qian et al., 2017), \(\epsilon_{\rm kin}\) the kinetic energy density, and \(\epsilon_{\rm th}\) the thermal energy density (McKinney et al., 2012; Ripperda et al., 2019). The subscripts "EM", "PAKE" and "EN" correspond to the electro-magnetic, free particle, and enthalpy terms of the stress-energy tensor \(T^{\mu\nu}\) (primarily following McKinney et al., 2012). The free thermokinetic energy (denoted as "MAKE" in McKinney et al., 2012) is the sum of \(\epsilon_{\rm kin}\) ("PAKE") and \(\epsilon_{\rm th}\) ("EN"). This is important to note because \(\epsilon_{\rm kin}\) is predominantly negative in our GRMHD simulations, which can be interpreted from the geometric Bernoulli criterion (\(u_{t}\leqslant-1\)) corresponding to unbound matter. The term (\(u_{t}+1\)) will therefore be negative (positive) when the fluid element is unbound (bound) and, as \(\rho u^{t}\) is positive, we will end up with a negative \(\epsilon_{\rm kin}\) for bound matter that is typically found within the accretion disc. Lastly, note that the minus sign in front of \(T^{t}{}_{t}\) is due to the metric signature \((-,+,+,+)\) and is needed to get positive values. Next, we define the covariant surface average (denoted by a bar, \(\bar{\cal Q}\), over a given fluid variable) by \[\bar{\cal Q}=\frac{\int{\cal Q}\,\sqrt{\gamma}\,dx^{1}dx^{2}}{S} \tag{23}\] with the surface \(S\), in an arbitrary coordinate system, denoted as \[S=\int\sqrt{\gamma}\,dx^{1}dx^{2}. \tag{24}\] The \(\gamma\) corresponds to the geometric part of the metric as explained in section 2.4. Note that by surface average we imply that we take the average of a given quantity over the region enclosed by a plasmoid-describing contour found by the algorithm. All quantities are calculated in the Eulerian (or laboratory) frame. ## 3 Results ### Harris sheet #### 3.1.1 General evolution In Fig. 2, a well-developed and representative state of the Hb case is shown. Before this state is reached, the current sheet needs to evolve for some time before it becomes ("plasmoid-" or "tearing-")unstable enough, as the sheet becomes thinner, to break up and form the first magnetic islands.
This first tearing mode creates the first plasmoids, known as "primary" plasmoids (see, e.g., Loureiro et al., 2007; Uzdensky & Loureiro, 2016; Comisso et al., 2016; Petropoulou & Sironi, 2018), which have significantly different plasma characteristics than the ones created at later times in the secondary tearing-unstable regions of the sheets. First, they have a higher density and, second, they possess a characteristic magnetic field profile with a lower magnetic field strength at the center than in the rings further outside. This results in a lower overall magnetization, but also a relatively lower magnetic field strength in relation to the surface. Their composition is primarily determined by the initial conditions. Following the initial break-up of the layer (at \(\sim\)1.56 \(t_{\rm c}\) for Hb and \(\sim\)5.27 \(t_{\rm c}\) for Hs), a continuous and steady creation of "secondary" plasmoids sets in within the reconnection layer between the primary islands, and this remains active until the very end of the simulation window. These plasmoids do probe the underlying plasma characteristics and are relatively unaffected by the initial conditions. Two animations are attached to Fig. 2, showing both a window corresponding to the figure and the entire simulation domain over time.

We follow the formalism of Uzdensky, Loureiro & Schekochihin (2010) (hereafter ULS): when a plasmoid coalesces with a larger plasmoid, the smaller one is considered to be part of the larger body and is therefore no longer taken into account. In practice, however, the small plasmoid will retain its structure for some time, depending on its size (ranging over several 0.05 \(t_{\rm c}\)), before conforming to the global structure of the primary plasmoid. This is clearly illustrated in Fig. 2 and the accompanying animations: the coalescence of the plasmoid on the left-hand side (at \(X=0.135\,L\), roughly 0.02 \(L\) in width initially) takes approximately \({\cal O}(0.1\,t_{\rm c})\) from the moment of impact to being fully absorbed by the primary plasmoid. When two plasmoids of similar size coalesce, this timescale tends to be even longer and a significant perturbation is needed before one of the two loses its structure. Generally, it is not simple to enforce the ULS criterion, which is reflected by the two-step approach outlined in section 2.2. Starting with secondary plasmoids, the minimum size for which we identify this population is set to \(\sim\)\(10^{-4}\,L\) (0.005 \(l\)), but in practice the algorithm tends to detect a plasmoid as soon as it starts to deviate from the straight current sheet configuration (i.e., gains some width). Overall, we find that the secondary plasmoids are identified with very high fidelity. The primary plasmoids are typically much harder to identify as they are the end point of the inverse cascade (or plasmoid coalescence) and therefore act as highly turbulent plasma reservoirs that never relax, since smaller plasmoids keep colliding and merging into them. These continuous perturbations also give rise to some magnetic reconnection within the primary plasmoid structure. As described in section 2.2, we need to apply an aggressive blurring kernel to identify the global primary plasmoid structure, but we still want to pick up on distinct plasmoid structures if they have not fully merged.
This implies that two plasmoids with a similar magnetic flux signature (an example is seen at \(X=0.34L\)) are still picked up as two separate entities, even though one can argue that they are actually part of one global body, especially when following the ULS criterion. At the interface of these two plasmoids one often finds new plasmoids forming. Naturally, all previously mentioned points become less pronounced at lower resolutions, as the current sheets are resolved less well, which results in fewer plasmoids and less fine-structure. The end of the evaluated window (at 4.1 \(t_{\rm c}\) for Hb and 8.79 \(t_{\rm c}\) for Hs) is determined by the amount of interference the current sheets have on one another. Beyond these times, the few primary plasmoids become of sufficient size that they start to incorporate the opposing current sheet. This brings about an interesting new turbulent mode that is similar to the ABC structure described in Lyutikov et al. (2016). Magnetic reconnection is then no longer confined to the current sheets but occurs at the interfaces between the primary plasmoids, which have lost their elliptical shape and have become more hexagonal. The simulation then bears a closer resemblance to a turbulent box simulation than to the initial double current sheet configuration. This is beyond the scope of this work, and we therefore chose the evaluated time windows to correspond to a clear current sheet structure.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
_Name_ & _Type_ & \(\eta\) & _Effective Resolution_ (\(N_{r}\times N_{\theta}\)) & _AMR levels_ \\
\hline
iM3 & Ideal & - & \(2048\times 2048\) & 3 \\
iM4 & Ideal & - & \(4096\times 4096\) & 4 \\
iM5 & Ideal & - & \(8192\times 8192\) & 5 \\
rH3 & Resistive & \(5\cdot 10^{-5}\) & \(2048\times 2048\) & 3 \\
rH4 & Resistive & \(5\cdot 10^{-5}\) & \(4096\times 4096\) & 4 \\
rH5 & Resistive & \(5\cdot 10^{-5}\) & \(8192\times 8192\) & 5 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: The model names and corresponding resolutions of the GRMHD simulations. These simulations are all run with a dimensionless black hole spin \(a_{*}=0.9375\), adiabatic index \(\hat{\gamma}=13/9\), and simulation domain \(r\in[1.185r_{\rm g},\,1500r_{\rm g}]\), \(\theta\in[0,\,\pi]\). The density floor and magnetization ceiling are set to \(\rho_{\rm min}=10^{-4}\) and \(\sigma_{\rm max}=10^{3}\), respectively.

Figure 2: Representative state of the evolution of the Harris sheet for the Hb case, corresponding to \(T=2.46\,t_{c}\). Rows (a) through (g) show the density \(\rho\), 'hot' magnetization \(\sigma=B^{2}/\rho h\), plasma \(\beta=p/(B^{2}/2)\), electromagnetic energy density \(\epsilon_{\rm em}\), kinetic energy density \(\epsilon_{\rm kin}\), thermal energy density \(\epsilon_{\rm th}\), and magnetic flux function \(\Psi_{\rm B}\). The _purple_ contours denote plasmoid detections corresponding to local maxima in the flux function (\(\Psi_{\rm B}\)), while _green_ contours correspond to local minima. The evolution over time is displayed in two animations, one for the zoom-in corresponding to this figure and another displaying the entire simulation domain, which can be found in the following repository: [https://doi.org/10.5281/zenodo.8318522](https://doi.org/10.5281/zenodo.8318522).

#### 3.1.2 Plasmoid statistics

Figure 3 displays two-dimensional histograms with various plasmoid quantities as a function of width for both Harris sheet (Hs and Hb) cases.
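The quantities entering these histograms are the covariant surface averages of Eqs. (23)-(24), evaluated over the cells enclosed by each plasmoid contour. A minimal sketch of such an evaluation is given below; the array names (`q`, `sqrt_gamma`, `mask`) and cell sizes are illustrative assumptions rather than the variables of the actual implementation.

```python
import numpy as np

def surface_average(q, sqrt_gamma, mask, dx1, dx2):
    """Covariant surface average (Eq. 23) of a 2D quantity q over the cells enclosed
    by a plasmoid contour (boolean `mask`), with the surface S defined as in Eq. (24)."""
    dA = sqrt_gamma * dx1 * dx2         # covariant area element per cell
    S = np.sum(dA[mask])                # Eq. (24)
    q_bar = np.sum((q * dA)[mask]) / S  # Eq. (23)
    return q_bar, S
```

The per-plasmoid averages obtained in this way are then binned by half-width and stacked over snapshots to produce distributions such as those in Fig. 3.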
First, we would like to point out that the distributions show the same general trends. Starting with the surface-averaged density (\(\bar{\rho}\)) panels, one finds a main triangular distribution that spans \(-1.25<\log_{10}\bar{\rho}<-0.25\). In addition to the main distribution, there is a secondary channel at \(-0.25<\log_{10}\bar{\rho}<0.25\) that corresponds to the densest plasmoids, which occur over the entire width range. These plasmoids are, in summary, partly due to minor misclassifications and partly due to the simulation conditions shortly after break-up of the initial layer. For the former, we find plasmoids that often correspond to fluctuations in the magnetic flux function within a large primary plasmoid. This population corresponds to a small plasmoid half-width. For the latter scenario, there are a number of high-density reservoirs of matter that will eventually contract into the primary plasmoid population and generally correspond to a large plasmoid half-width. Returning to the "true" plasmoid population, spanned by \(-1.25<\log_{10}\bar{\rho}<-0.25\), we find that the smallest detected plasmoids have a half-width \(w\approx 2\cdot 10^{-4}\,L\) for both the Hb and Hs cases. This lower limit is partially set by an identification requirement that either the width or height of the contour spans at least 5 cells (which equates to a minimal width or height of \(\Delta x\approx 0.02\,l\)) for the evaluated data.

For the surface-averaged magnetization (\(\bar{\sigma}\)), we find that the main population spans \(-2.5<\log_{10}\bar{\sigma}<0.5\). As is also seen in Fig. 2, the secondary plasmoids have a remarkably similar \(\sigma\) profile, with the outer shells being more magnetized than the interior (similar to the findings in Petropoulou & Sironi, 2018). Nevertheless, we do find a trend where \(\bar{\sigma}\) rises with half-width, up to \(w\approx 6.3\cdot 10^{-3}L\). For \(\log_{10}w/L>-2\), the mean \(\bar{\sigma}\) plateaus and even seems to decrease slightly for the largest plasmoids. After the growth phase (in \(\log_{10}w/L\leq-2\)), it seems that the increases in density and magnetic field strength roughly match each other. Lastly, for \(\bar{\beta}\), we find a similar but inverse trend to what we described for \(\bar{\sigma}\). The part of the distribution with the largest plasmoids (\(w\sim 0.1\,L\)) seems to deviate significantly from the main population and possesses a relatively high \(\bar{\beta}\gtrsim 10^{3}\). This happens because at the center of the plasmoid the magnetic field strength becomes very small due to the circular configuration. This generates some very high \(\beta\) values that in turn affect the surface-averaged quantity (\(\bar{\beta}\)).

For the energies (\(\bar{\epsilon}_{\rm em}\), \(\bar{\epsilon}_{\rm kin}\), and \(\bar{\epsilon}_{\rm th}\)), we find that the thermal energy (\(\bar{\epsilon}_{\rm th}\)) is the leading term in the total energy budget of the plasmoids, with a mean (denoted by the green line) that remains fairly constant (\(0.0<\log_{10}\bar{\epsilon}_{\rm th}<0.25\)) as a function of half-width (\(w\)). At the smallest \(w\), the second most significant term appears to be the electromagnetic energy (at \(\bar{\epsilon}_{\rm em}\approx 10^{-1.5}\)), which steadily becomes more significant for increasing width.
As the kinetic energy (\(\epsilon_{\rm kin}\)) is closely tied to the velocity of the plasmoid, we find that it can actually become a competing term for the electromagnetic energy, especially in the active reconnection regions and in merging (or colliding) plasmoids (see Fig. 2). The distributions of \(\bar{\epsilon}_{\rm th}\) and \(\bar{\epsilon}_{\rm kin}\) are wide, indicating significant variance, while \(\bar{\epsilon}_{\rm em}\) closely follows the distribution of \(\bar{\sigma}\) and shows a more consistent trend. This trend is explained by secondary plasmoids becoming more magnetized with time until they grow to a size of \(w\sim 0.01L\), after which they generally encounter a primary plasmoid and are absorbed, so that the growth in magnetization (\(\bar{\epsilon}_{\rm em}\)) stagnates. The high variance in \(\bar{\epsilon}_{\rm kin}\) is explained by the fact that acceleration of plasmoids only happens in very localized regions, predominantly in active reconnection regions and just before plasmoids coalesce. As soon as the secondary plasmoids are absorbed by the primary plasmoids, \(\bar{\epsilon}_{\rm th}\) becomes the leading term by a significant factor. Even though \(\bar{\epsilon}_{\rm th}\) is still most dominant in the secondary plasmoids, both \(\bar{\epsilon}_{\rm kin}\) and, especially, \(\bar{\epsilon}_{\rm em}\) can come close in significance.

Lastly, we would like to briefly comment on the differences between the two cases, Hb and Hs. So far, we have mainly discussed Hb, shown in the left-most panels of Fig. 3. Nevertheless, we find that all findings also apply to Hs. Both simulations are outlined in Table 1; the main differences lie in the initial layer half-thickness (\(\delta\)), which is twice as large, and the resolution, which is lower by a factor of two. This also explains why the evolution starts later for Hs: it takes longer for the perturbations to create a sufficiently thin current sheet to activate the tearing instability. Additionally, the simulation box length (in \(\hat{x}\), the long side) is halved, and the box contains more matter due to the thicker initial layers when compared to the Hb case. As these are only minor differences, we find that the evolution is similar, which is also reflected by the results here, except that the primary plasmoids seem to span a greater part of the simulation domain for Hs. To gain insight into the dependence of the plasmoid dynamics on the starting conditions, a more detailed study is needed, but that lies beyond the scope of this work.

#### 3.1.3 Plasmoid distribution functions

Figures 4 and 5 display the probability density function (\(f\)) of plasmoid half-width and of the absolute surface-averaged magnetic flux function (\(|\bar{\Psi}_{\rm B}|\)), respectively. A distribution is calculated at each time, starting at the beginning of the evaluated window at \(T=1.76\,t_{c}\) for Hb (\(T=5.47\,t_{c}\) for Hs) in dark blue, up to \(T=4.1\,t_{c}\) (\(T=8.79\,t_{c}\)) in bright yellow. Starting with Fig. 4, we find that there is reasonable variation among the probability density functions over time, but a consistent picture emerges as well. Generally speaking, at the smallest plasmoid half-widths (up to \(w/L\approx 10^{-3}\)), we find a plateau followed by a steady decrease in occurrence frequency as the plasmoids become larger, up to the largest plasmoids that span a tenth of the simulation domain (\(w\sim 0.1\,L\)).
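The slope of these density functions, used below to quantify the growth of plasmoids, can be estimated with a straightforward linear fit in log-log space. The sketch below assumes that the half-widths from the evaluated window have been pooled into a single array; the binning is an illustrative choice.

```python
import numpy as np

def powerlaw_slope(w_over_L, n_bins=25):
    """Estimate p = -dlog f / dlog(w/L) from a pooled sample of plasmoid half-widths."""
    bins = np.logspace(np.log10(w_over_L.min()), np.log10(w_over_L.max()), n_bins)
    f, edges = np.histogram(w_over_L, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    ok = f > 0                                  # exclude empty bins from the fit
    slope, _ = np.polyfit(np.log10(centers[ok]), np.log10(f[ok]), 1)
    return -slope                               # p, such that f(w) ~ w^(-p)
```

For a distribution scaling as \(f(w)\sim w^{-2}\), this returns \(p\approx 2\) up to sampling and binning errors, comparable to the values quoted below.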
Mainly via the slope \(p=-\mathrm{d}\log f/\mathrm{d}\log(w/L)\), we are able to quantify the growth rate of plasmoids in the system. These scaling laws have been studied in detail in the past (Uzdensky et al., 2010; Loureiro et al., 2012; Huang & Bhattacharjee, 2012; Sironi et al., 2016). The density function of plasmoid width was predicted and verified to scale according to \(f(w)\sim w^{-2}\) (Uzdensky et al., 2010; Loureiro et al., 2012), while for the magnetic flux both \(f(\Psi_{\rm B})\sim\Psi_{\rm B}^{-2}\) (following the same works) and \(f(\Psi_{\rm B})\sim\Psi_{\rm B}^{-1}\) (Huang & Bhattacharjee, 2012) were established. The main difference between the scalings found by Uzdensky et al. (2010) and Huang & Bhattacharjee (2012) lies in how they treat the relative velocity between plasmoids. While Uzdensky et al. (2010) assumed it to be \(\sim\)\(v_{a}\), Huang & Bhattacharjee (2012) evaluate a size-dependent relative velocity (see also Sironi et al., 2016). As our simulations have no guiding magnetic field perturbation (or outflowing boundaries), relative velocities between plasmoids are stochastically determined and relatively low, so we expect a greater similarity with Huang & Bhattacharjee (2012).

Overall, we find that \(\Psi_{\rm B}\) and \(w\) do not scale with the same \(p\), which is at odds with earlier works (Loureiro et al., 2012; Sironi et al., 2016). However, there are clear explanations for this perceived discrepancy that will be outlined in the next paragraphs. For the half-width, we find \(p=1.81\pm 0.05\) for the Hb case and \(p=1.48\pm 0.06\) for the Hs case. Overall, for \(w\), we recover a scaling that is close to \(f(w)\sim w^{-2}\), corresponding to \(p=2\). For the mean trend in magnetic flux (in dark grey), we find \(p=0.64\pm 0.10\) for the Hb case and \(p=0.59\pm 0.06\) for the Hs case. However, the trend described by the smallest values per bin (in light grey) is \(p\approx 1\), which indicates agreement with Huang & Bhattacharjee (2012). The evolution of the distributions is characterized by a relative over-representation of large plasmoids, with \(|\Psi_{\rm B}/B_{0}L|\in[5\cdot 10^{-2},10^{-1}]\), which spreads both to the left (lower \(|\Psi_{\rm B}|\), smaller plasmoids) and to the right (higher \(|\Psi_{\rm B}|\), larger plasmoids) over time. The smallest plasmoids have the lowest magnetic fluxes (as is also verified in Fig. 2) and the largest plasmoids increase in \(|\Psi_{\rm B}|\) over time. This evolution also creates the sizable one-sigma error (visually exaggerated by the log-scale), as the density function evolves significantly over time. So, in short, the magnetic flux distributions evolve with \(p=1\) over time (especially for low \(|\Psi_{\rm B}|\)), but this relation is affected by a high-\(|\Psi_{\rm B}|\) population that is present from the start. This population exists because of the periodic boundary conditions and would not be over-represented when utilizing outflowing boundary conditions, as was done in the comparative studies.

It is important to note that our simulation setup differs substantially on at least two fronts from the previously mentioned scaling law studies, namely that it is relatively unperturbed and that it has no outflowing boundaries. By unperturbed, we mean that there is no guiding magnetic field perturbation present; such a perturbation creates a clean reconnection layer in the middle of the box and guides the primary plasmoids to the edge of the simulation domain (also discussed in detail in section 2.3).
In practice, this implies that: (i) coalescence of plasmoids is a relatively more prominent growth channel in our simulations, and (ii) large plasmoids can disproportionately affect the distribution. The latter point is two-fold: as the primary plasmoids become larger, they effectively shrink the domain where the (secondary) current sheets can form, and they will eventually start interfering with the opposing current sheet. Especially for the Hs case these points are influential, which is also accentuated by the larger deviations. All these effects are likely to play a role in explaining the differences between the scaling found in this work and previous works. Additionally, the informed (but arbitrary) choice regarding which bins to include for the fit, combined with the imperfect sampling of the distribution by the bins, introduces an \(O(5\%)\) error on the values of \(p\). Nevertheless, even despite the differences in simulation configuration (and the numerical uncertainties), we still reach a remarkable consistency with previous studies that employed more idealized configurations for finding plasmoid scaling.

Figure 3: Two-dimensional distributions \(N(w,x)\) of the plasma quantities \(x\in\{\bar{\rho},\bar{\sigma},\bar{\beta},\bar{\epsilon}_{\rm em},\bar{\epsilon}_{\rm kin},\bar{\epsilon}_{\rm th}\}\) as a function of plasmoid half-width (with \(L=2L_{x}\), as per Table 1) for both the Hb (left panels) and Hs (right panels) cases. We stack the distributions as a function of time and divide by the maximum. The green line denotes the mean per width bin that has more than ten counts in total.

Figure 4: Probability density function \(f\left(w/L\right)\) of plasmoid half-width \(w\) (along \(\hat{x}\)), scaled according to the total simulation box width (\(L=2L_{\rm x}\)), for both Harris sheet cases, Hb and Hs. From scaling arguments (Uzdensky et al., 2010; Loureiro et al., 2012), the distribution is expected to scale \(\propto w^{-2}\). The various probability density function profiles are colored according to the time at which they occur, over a range of \(T\in[1.76,4.10]\ t_{\rm c}\) for Hb and \(T\in[5.47,8.79]\ t_{\rm c}\) for Hs. The mean density profile and one-sigma error over time are denoted by the dashed dark grey line. The power law slope is determined via \(p=-\,d\log f/\,d\log(w/L)\).

Figure 5: Probability density function \(f(|\bar{\Psi}_{\rm B}|/B_{0}L)\) of the absolute surface-averaged magnetic flux function \(|\bar{\Psi}_{\rm B}|/B_{0}L\), scaled according to the total simulation box width (\(L=2L_{\rm x}\)) and initial magnetic field strength (\(B_{0}\)), for both Harris sheet cases, Hb and Hs. The light grey dashed lines denote the mean of the lowest (1%) values per bin, with a corresponding linear fit shown in the same color. For the rest, the description of Fig. 4 also applies here.

### GRMHD

#### 3.2.1 General evolution

Figures 6 and 7 display the typical structure of the (two-dimensional) MAD (Tchekhovskoy et al., 2011; McKinney et al., 2012) simulations. After having evolved sufficiently, they saturate in the magnetic flux that penetrates the event horizon (see section 3.2.4). Following such a saturation event, the accretion flow is completely halted in axisymmetric (2.5D) simulations, while in 3D a so-called "flux tube" forms (Dexter et al., 2020; Porth et al., 2021). There, instead of halting the accretion flow completely, a localized, less dense, more magnetized cavity moves outward from the black hole.
These outbursts occur semi-periodically and seem to be even more prevalent in the relatively more confining 2D simulations. Another feature is that the Magneto-Rotational Instability (MRI, Balbus & Hawley, 1991), responsible for angular momentum transport, is suppressed, as the main magnetic field component is strongly poloidal in MAD simulations (Porth et al., 2021). The MRI does play a role in the early developing phase of the simulation, when it is less magnetized, but afterwards one of the leading causes of turbulence (close to the BH) is the Rayleigh-Taylor Instability (RTI), which causes intrusions into the disc structure (Marshall et al., 2018, and references therein). The Kelvin-Helmholtz Instability (KHI) becomes important in regions with strong shear flows, such as the conditions at the jet-disc interface, and is characterized by swirl-like vortices (see, e.g., Begelman et al., 1984; Hillier, 2019). All these instabilities are perturbative channels that are able to set off magnetic reconnection in the accretion disc. Therefore, we find a much more turbulent environment than for the Harris current sheet, for which reconnection is only determined by the tearing instability (Ripperda et al., 2017) triggered in a relatively controlled scenario. As we are mainly interested in the plasmoids' ability to produce flares, which are known to originate close to the central black hole, we apply our algorithm only within the inner \(25~{}r_{\rm g}\).

In Figs. 6 and 7, we show the plasma quantities and energies (similar to Fig. 2) for the iM5 and rH5 cases. The magenta and green contours denote identified plasmoids corresponding to local maxima and local minima in the magnetic flux function, respectively. Both figures show typical phases of MAD evolution that occur in all the GRMHD simulations in this work. The panels (\(a-h\)) of Fig. 6 correspond to a flux eruption during which the accretion flow is entirely halted. The panels (\(a-h\)) of Fig. 7 show a fairly generic accretion state with the turbulent accretion flow extending up to the horizon. Even though the density is low near the BH, one does find a reconnection layer along the equatorial plane (denoted by the magenta contours). These plasmoids are the collisional (non-pair-production plasma) equivalent of what has been seen in GRPIC simulations of diffuse collisionless magnetospheres around BHs (Crinquand et al., 2021; Bransgrove et al., 2021). The overall structure and location of the plasmoid chains indicate that at the disc-jet boundary one finds plasmoids that correspond to local maxima (magenta), while plasmoids that occur within the disc correspond to local minima (green). The magenta contours seem to have a lower density (\(\rho\)) and higher magnetization (\(\sigma\)) than the ones in the disc. They also seem to be smaller when compared to the green contours. Their location and smaller size indicate that they are likely created by the shear-induced KHI. These contours also tend to leave the identification domain (\(r\lesssim 25~{}r_{\rm g}\)) on short timescales (5-10 \(r_{\rm g}/c\)) as they rapidly move outwards with the turbulent jet-disc layer (also referred to as the jet sheath). The green contours are tied to the bulk motion of the disc's fluid, giving them more time and matter to interact with, which explains their larger size. The energy and plasma parameter distributions will be explained in more detail in the next section (3.2.2).
However, before we continue, we would like to point out that the extreme values (visible in the \(\rho\), \(|\epsilon_{\rm kin}|\), and \(\epsilon_{\rm th}\) maps) near the vertical axis (\(x=0~{}r_{\rm g}\)) are due to floor violations, which happen sufficiently far from our areas of interest and will therefore not interfere with the analysis.

#### 3.2.2 Plasmoid statistics

Figure 8 shows two-dimensional histograms with various plasmoid quantities as a function of size for the two GRMHD simulations (iM5 and rH5). Before we comment on the general findings from the histograms, we would like to point out that we find a significantly lower plasmoid count for the iM5 case when compared to the rH5 case. The ideal distributions are therefore more sparsely sampled. We will address this point in more detail in section 3.2.4. Overall, however, we do find that the distributions of iM5 and rH5 are consistent with one another, except for the aforementioned difference in occurrence rate. Before we start describing the distributions, we note that we can no longer use the Euclidean width for the GRMHD cases, as it does not inherently take into account the spacetime curvature. Therefore, we have chosen to display the distributions as a function of the "circular" radius \(R_{\rm S}=\sqrt{S/\pi}\), as the surface \(S\) calculation does take the curvature into account. As plasmoids are generally elliptical, we lose information about the plasmoid shape, since the ratio between width and length is no longer defined.

Starting with the distribution of \(\bar{\rho}\) (panels \(l.a\) & \(r.a\)), we find that the surface-averaged density is highest for the smallest plasmoids, at \(\log_{10}\bar{\rho}\approx 0.75\), and then plateaus at \(\log_{10}\bar{\rho}\approx-0.5\) over \(-0.5<\log_{10}R_{\rm S}<1.0\). For \(\bar{\sigma}\) (\(l.b\) & \(r.b\)), we find a roughly constant mean value of \(\log_{10}\bar{\sigma}\approx-1\), but a wide spread in values is also present. For \(\bar{\beta}\) (\(l.c\) & \(r.c\)), one finds a very elongated distribution centered around a mean of roughly \(\log_{10}\bar{\beta}\approx 0.5\), which has a complicated origin. This behavior is largely explained by the 'green' (local minima in \(\Psi_{\rm B}\)) and 'purple' (local maxima in \(\Psi_{\rm B}\)) plasmoid populations. For the purple contours, we find the origin of the elongated \(\bar{\beta}\) distribution: the plasmoids detected in the jet sheath correspond to a distribution centered on a relatively low \(\log_{10}\bar{\beta}\approx-1\), while the typical \(\bar{\beta}\) distribution is fairly uniform, with \(-2\lesssim\log_{10}\bar{\beta}\lesssim 2\), centered on the mean value of \(\log_{10}\bar{\beta}\approx 0-0.5\). Similar arguments apply to the distributions of \(\bar{\rho}\) and \(\bar{\sigma}\), where both populations have near-identical means but a larger variance is present for the purple contours. For the green contours, we find more uniform and compact distributions overall, located around the means of the entire (both green and purple) distribution as shown in Fig. 8. For the energies (\(\bar{\epsilon}_{\rm em}\), \(|\bar{\epsilon}_{\rm kin}|\), and \(\bar{\epsilon}_{\rm th}\)), there are only minor differences between the green and purple distributions, so we will just discuss the combined distributions for the energies in panels (\(l.f\)-\(l.h\)) and (\(r.f\)-\(r.h\)).
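For reference, the "circular" radius on the horizontal axis of these distributions follows directly from the covariant surface of Eq. (24); a minimal sketch, with the same illustrative inputs as the surface-average sketch above, is:

```python
import numpy as np

def circular_radius(sqrt_gamma, mask, dx1, dx2):
    """R_S = sqrt(S / pi), with S the covariant surface enclosed by the contour (Eq. 24)."""
    S = np.sum((sqrt_gamma * dx1 * dx2)[mask])
    return np.sqrt(S / np.pi)
```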
Interestingly, the means of all energy distributions describe an almost identical path, starting at \(\log_{10}\bar{\epsilon}\approx 0\) and ending at \(\log_{10}\bar{\epsilon}\approx-2\) for increasing \(R_{\rm S}\). After a rapid decline up to \(\log_{10}R_{\rm S}\approx-0.5\), we find that the \(\log_{10}\bar{\epsilon}\) means plateau, especially for \(\bar{\epsilon}_{\rm kin}\) and \(\bar{\epsilon}_{\rm th}\). Additionally, the distributions indicate that the various surface-averaged energies are of similar strength. Nevertheless, \(\bar{\epsilon}_{\rm em}\) does stand out with respect to the other energies, as it has a more compact distribution with a clear, gradually declining trend. Generally speaking, we find that all energy densities are of similar strength independent of the plasmoid size. Continuing with \(\bar{\epsilon}_{\rm kin}\), we note that \(\bar{\epsilon}_{\rm kin}\) is negative, except in the jet sheath where \(\bar{\epsilon}_{\rm kin}\sim\mathcal{O}(1)\). This is explained in detail in section 2.5. Here, we will look at the absolute value \(|\bar{\epsilon}_{\rm kin}|\) (\(l.g\) & \(r.g\)). The dashed purple and green lines in these panels correspond to the means of the distributions containing only the positive or negative values of \(\bar{\epsilon}_{\rm kin}\), respectively. It thus becomes clear that the vast majority of plasmoids has negative \(\bar{\epsilon}_{\rm kin}\) values, as the global mean (in solid green) lies close to the dashed green line. Lastly, for \(\bar{\epsilon}_{\rm th}\), we find similar behavior as for the other energies, combined with a relatively larger contribution at the lowest plasmoid sizes.

Figure 6: Overview of the iM5 simulation during a flux eruption. The panels show the plasma quantities and energy densities, analogous to Fig. 2, with identified plasmoid contours overplotted (magenta: local maxima in \(\Psi_{\rm B}\); green: local minima). The corresponding animations can be found in the following repository: [https://doi.org/10.5281/zenodo.8318522](https://doi.org/10.5281/zenodo.8318522).

We define the magnetic Bernoulli factor as \(Be_{m}=-(h+\sigma/2)u_{t}\), which incorporates the contribution of the magnetic pressure (\(\sigma/2\)) and therefore deviates slightly from the standard relativistic Bernoulli factor \(Be=-hu_{t}\) (Rezzolla & Zanotti, 2013). The Bernoulli criterion states that the fluid is unbound when \(Be_{m}>1\). Note that we have taken the liberty to incorporate a minus sign within the Bernoulli factor. Returning to the distributions in panels (\(l.i\) & \(r.i\)), we find that the majority of plasmoids is unbound, as their surface averages pass the criterion, but there is still a significant number that lies below and close to the critical value of \(\bar{B}e_{m}=1\) and is therefore bound. The mean of the distribution does, however, indicate \(\bar{B}e_{m}\approx 1\), with a small number going up to relatively high values of \(\bar{B}e_{m}\approx 2\). In the panels next to \(\bar{B}e_{m}\), we find the distributions of \(\bar{\Psi}_{\rm B}\), which seem elongated and somewhat non-uniform. However, they are easily explained, as the accretion disc is still undergoing a global evolution over the duration of the evaluated time window (\(\Delta T=|3000-4000|~{}r_{\rm g}/c\)). At the beginning (\(T=3000~{}r_{\rm g}/c\)), we find a mean of \(\bar{\Psi}_{\rm B}\approx 6.5\), while at the end (\(T=4000~{}r_{\rm g}/c\)) we find a mean of \(\bar{\Psi}_{\rm B}\approx 11.5\). The last unexplained panels of Fig. 8 are two variations on the orbital velocity \(\Omega=u^{\phi}/u^{t}\).
Figure 7: Overview of the rH5 simulation at \(T=3500~{}r_{\rm g}/c\). Here, we find an accretion state that is standard for MAD simulations, with a turbulent but fairly steady flow. The rest of the description is analogous to Fig. 6. The corresponding animations can be found in the following repository: [https://doi.org/10.5281/zenodo.8318522](https://doi.org/10.5281/zenodo.8318522).

Figure 8: Two-dimensional distributions \(N(R_{\rm S},x)\) of the plasma quantities \(x\in\{\bar{\rho},\,\bar{\sigma},\,\bar{\beta},\,\bar{\Psi}_{\rm B},\,\bar{\Omega}_{\rm in}/\bar{\Omega}_{\rm shell},\,\bar{\epsilon}_{\rm em},\,|\bar{\epsilon}_{\rm kin}|,\,\bar{\epsilon}_{\rm th},\,\bar{B}e_{m},\,\bar{\Omega}_{\rm in}/\Omega_{\rm K}\}\) as a function of the "circular" radius \(R_{\rm S}=\sqrt{S/\pi}\) of the plasmoid, for both the iM5 (left panels) and rH5 (right panels) cases.

First, in panels (\(l.e\) & \(r.e\)), we investigate the ratio between the surface average within the plasmoid contour (\(\bar{\Omega}_{\rm in}\)) and the surface average over a shell directly outside the plasmoid contour (\(\bar{\Omega}_{\rm shell}\)). The outer edge of the shell corresponds to one-and-a-half times the distance to the central O-point. From this quantity we can gauge whether the plasmoid moves with its surroundings (\(\bar{\Omega}_{\rm in}/\bar{\Omega}_{\rm shell}=1\)) or disconnected from them (\(\bar{\Omega}_{\rm in}/\bar{\Omega}_{\rm shell}\neq 1\)). From the distributions, we find that the mean is consistent with \(\bar{\Omega}_{\rm in}/\bar{\Omega}_{\rm shell}=1\), but there
Even though the distributions of iM3, iM3, iM4, and rH4 are not explicitly shown, we have confirmed that the general trends described for iM5 and rH5 are consistent with the lower resolution simulations. The quantitative differences in plasmoid identification rate (\(N_{\rm P}\)) will, however, be outlined explicitly for all cases in section 3.2.4. #### 3.2.3 Plasmoid distribution functions Figure 9 displays the probability density function (\(f\)) of plasmoid radius \(R_{\rm S}\), while Fig. 10 displays the probability density function of plasmoid half-width \(w\). We show both distributions to illustrate the general-relativistic effects in Fig. 9, while Fig. 10 is straightforwardly compared with the Harris sheet's density function (in section 3.1.3) and reflects the plasmoids shown in Figs. 6 and 7. Interestingly, we recover the power law indices of \(p=1.88\pm 0.06\) (\(p=1.90\pm 0.05\)) and \(p=2.15\pm 0.11\) (\(p=2.09\pm 0.09\)) for iM5 and rH5 in Fig. 9 (10), respectively. These are similar to the results described in section 3.1.3, which gives an indication that plasmoid formation is driven by the same principles even while taking into account the curvature of the spacetime. So, even with the additional perturbations by the plasma instabilities outlined in section 3.2.1, we still find scaling that is consistent with \(p\approx 2\). While the onset of magnetic reconnection in the isolated Harris sheet simulations occurs somewhat spontaneously, in GRMHD it is subjected to global dynamics (such as the RTI and KHI) that trigger magnetic reconnection. Although one clearly sees Harris-sheet-like structures forming in GRMHD, they also rapidly fall apart which interestingly does not affect the trends in the density functions. One therefore concludes that the width distributions are robust features of reconnection, no matter how it is triggered. In a way, the identification strategy we employ for the GRMHD simulations is more consistent with the aforementioned works that have outflowing boundary conditions as we stop identifying plasmoids when \(r>25~{}r_{\rm g}\). Another "outflowing" boundary lies at the horizon but the vast majority of plasmoids moves outwards (in \(\dot{r}\)) in the jet-disc region. Some plasmoids, typically associated with green contours, even move into the identification domain along the equatorial plane to then exit via the upper or lower identification boundaries. Only a relatively minor fraction of plasmoids is accreted onto the BH and the majority of those are created in close proximity to the BH in the equatorial current sheet. Close examination of Fig. 9 yields that plasmoid radius goes all the way up to \(R_{\rm S}\approx 10r_{\rm g}\). The Cartesian projection equivalent in Fig. 10 yields a radius of \(w\approx 2~{}r_{\rm g}\). These largest plasmoids are visible in Fig. 6. The smallest detected plasmoid radii correspond to \(R_{\rm S}\approx 10^{-1}~{}r_{\rm g}\) (and \(w\approx 10^{-2}~{}r_{\rm g}\)). Especially the largest plasmoids seem to be comparable in size to the 'hot spots' that were used to interpret flares around Sgr A\({}^{*}\)(Gravity Collaboration et al., 2020; Wielgus et al., 2022; Vos et al., 2022). From our simulations, we find that the plasmoids are of sufficient size to give a physical origin to these hot spots. However, currently, we do not explicitly interpret their emission potential, but as plasmoids are typically hot (\(p/\rho\gtrsim 1\)) and magnetized (\((\dot{\varpi})\gtrsim 0.1\), as per Fig. 
8), they are likely to create an emission feature, albeit undetermined whether predominantly thermal or non-thermal (Werner et al., 2017; Petropoulou and Sironi, 2018). Nevertheless, the occurrence rate of these large, and potentially bright, plasmoids is still quite low. More specifically, for rH5, plasmoids with radii \(R_{\rm S}>2.5~{}r_{\rm g}\) occur at least once, and three times on average, in every evaluated time instance (corresponding to 8.2% of all identified plasmoids), while plasmoids with (Cartesian-projected) half-widths \(w>1~{}r_{\rm g}\) are much less common, as they occur in only half (51.4%) of the evaluated snapshots (corresponding to 1.8% of all identified plasmoids). This perceived discrepancy is partially due to the spacetime curvature (not taken into account for \(w\)) and the mixing of plasmoid length and width in the \(R_{\rm S}\) quantity. For iM5, the occurrence rates of at least one plasmoid passing the \(R_{\rm S}\) and \(w\) criteria are 57.7% and 16.5% (with 6.6% and 2.3% of all identified plasmoids over the entire time window), respectively. Overall, if we take into account the much lower plasmoid counts for iM5, we find that the percentages for the two cases are comparable, except for having at least one \(w>1~{}r_{\rm g}\) plasmoid per evaluated time. This is, however, well explained in section 3.2.4.

Lastly, we note that the power law gradient \(p=-\mathrm{d}\log f/\mathrm{d}\log(R_{\rm S}/r_{\rm g})\) is less steep for iM5 than for rH5. We believe this is largely explained by the lower plasmoid number, but we also note that the colors indicate that at later times (more yellow) the plasmoid density function spans more radius (or width) bins and therefore lies slightly lower than at earlier times (dark purple to black). This indicates there is some evolution in the density function, as is confirmed in section 3.2.4. For rH5, we find a relatively consistent density function over time. Next to a potential difference in evolution, we find that a single linear relation (in log-log space) is not the best description of the downward power law. Even though it is close to \(p\approx 2\), there is a minor break visible and the gradient becomes shallower at \(R_{\rm S}/r_{\rm g}\approx 4\). As especially the larger plasmoid size bins contain more counts, this naturally pushes \(p\) to slightly lower values for iM5. Nevertheless, it is interesting that rH5 indicates a somewhat steeper gradient with \(p=2.15\pm 0.11\). However, combined with the points raised at the end of section 3.1.3, we conclude that iM5 and rH5 are consistent with a power law with \(p\approx 2\), as more robust claims cannot be made without further investigation.

Figure 9: Probability density function \(f(R_{\rm S}/r_{g})\) of "circular" plasmoid radius \(R_{\rm S}\) for both high-resolution cases, iM5 (_left_) and rH5 (_right_). All identification takes place within a circle of radius \(R=25r_{g}\) and we evaluate a time window of \(T\in[3000,3001,\ldots,3999,4000]\,r_{g}/c\). The rest of the description of Fig. 4 also applies here, except that we now utilize \(R_{\rm S}\).

Figure 10: Probability density function \(f(w/r_{g})\) of plasmoid half-width \(w\) for both high-resolution cases, iM5 (_left_) and rH5 (_right_). The rest of the description of Fig. 4 also applies here. Note that the quantities here do not correctly take into account the spacetime curvature, which is the case for Fig. 9.

#### 3.2.4 Flux eruptions and plasmoid formation

MAD models are known to saturate in horizon-penetrating magnetic flux. This implies that magnetic energy builds up and is eventually released in a sudden flux eruption that partly and temporarily halts the accretion flow onto the BH. In two-dimensional simulations, the accretion flow is stopped completely due to the constraining nature of the setup. The parameter that is used to quantify this behavior is the so-called MAD parameter \(\phi_{\rm BH}=\Phi_{\rm B}/\sqrt{|\dot{M}|}\), which corresponds to the normalized magnetic flux. The MAD parameter saturates (in 3D) at \(\phi_{\rm BH}\approx 15\) (cf. Yuan & Narayan 2014). In our simulations, as shown in Fig. 11, we find that \(\phi\) occasionally
rises to \(\phi_{\rm BH}\sim 120\). This is due to the confining nature of the 2D simulation, which allows for a greater accumulation of magnetic flux before an eruption, and it is consistent with the behaviour found for the simulations in Ripperda et al. (2020). As we used a different adiabatic index, \(\hat{\gamma}=13/9\) (vs. \(\hat{\gamma}=4/3\) in Ripperda et al., 2020), we have a thicker disc at initialization, which allows for a greater accumulation of magnetic flux.

The middle to lower panels (\(d\)-\(f\)) of Fig. 11 display the number of identified plasmoids \(N_{\rm P}\) per simulation. While not shown explicitly in the figure, we confirm that plasmoids of either polarity (i.e., purple and green contours in Figs. 6 and 7) are equally abundant. As we already indicated (in section 3.2.2), a significantly lower number of plasmoids is detected for the ideal simulations than for the resistive ones, where a factor \(2-10\) difference (in \(N_{\rm P}\)) is common. The mechanism that triggers plasmoid formation, via the tearing instability, is not well-defined in ideal simulations and is, more specifically, resolution-dependent (\(\propto\Delta x^{2}\), with \(\Delta x\) being the cell size). This implies that the numerical resistivity (\(\eta_{\rm ideal}=\eta_{\rm num}\)) is lower close to the black hole than further away, due to the MKS coordinate system, and is significantly smaller than \(\eta_{\rm res}=\eta\) (\(\eta_{\rm num}\ll\eta\)). Overall, the tearing instability is triggered less often, due to the relatively lower resistivity, and less reliably, as it is determined by (stochastic) numerical effects. Visually, the ideal simulations are significantly calmer, which is explained by the suppression of the MRI after the initial few thousand time-steps. Starting from \(T\approx 3700\)\(r_{\rm g}/c\), however, a sudden increase in the plasmoid formation rate is visible, which roughly corresponds to the state shown in Fig. 6 for iM5. After this "flaring" event, the rate at which plasmoids are created remains somewhat increased (except for iM5). The resistive simulations possess a surprisingly constant number of plasmoids (\(N_{\rm P}\)), indicating a steady rate of plasmoid formation. As the MRI is also suppressed for the resistive simulations, we can assume that the tearing instability is a sufficient perturbation in itself to keep plasmoid formation up.

To get an indication of how the flux eruptions could contribute to this process, we verified whether there are significant changes in the modulation index \(M_{\Delta T}\equiv\sigma_{\Delta T}/\mu_{\Delta T}\) (see EHTC et al., 2022b, and description therein), with \(\sigma_{\Delta T}\) and \(\mu_{\Delta T}\) being the standard deviation and mean, respectively, over a time interval \(\Delta T=|3000-4000|\)\(r_{\rm g}/c\). We calculated the modulation index for both the accretion rate \(\dot{M}\) and the magnetic flux \(\Phi_{\rm B}\) that penetrate the spherical shell at \(r=2.5\)\(r_{\rm g}\).
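For reference, the modulation index and the MAD parameter are simple to evaluate from the shell-penetrating time series; a sketch is given below, where `time`, `mdot`, and `phi_B` are assumed one-dimensional arrays of the quantities measured at \(r=2.5~r_{\rm g}\).

```python
import numpy as np

def modulation_index(time, q, t_min=3000.0, t_max=4000.0):
    """M_Q = sigma_Q / mu_Q over the evaluated window Delta T (times in r_g/c)."""
    sel = (time >= t_min) & (time <= t_max)
    return np.std(q[sel]) / np.mean(q[sel])

# M_mdot = modulation_index(time, mdot)
# M_phiB = modulation_index(time, phi_B)
# The MAD parameter of Fig. 11(c) is simply phi = phi_B / np.sqrt(np.abs(mdot)).
```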
The modulation indices for our simulations are listed in Table 3. There seems to be little difference between \(M_{\dot{M}}\) or \(M_{\Phi_{\rm B}}\) for the ideal and resistive simulations. This is surprising, as \(N_{\rm P}\) indicates a more turbulent disc for the resistive cases, which would give rise to the greater plasmoid count. Nevertheless, one is not able to ascertain this directly from the shell-penetrating fluxes.

Another consequence of setting a fixed resistivity is that there is a fixed length-scale (i.e., the width of the current sheet) that determines when the tearing instability is triggered. When this length-scale is sufficiently resolved, one finds consistent results starting from a certain critical resolution and upwards. It is therefore interesting to see this verified in panels (d)-(f). For rH3, the lowest resolution case, we find a mean plasmoid count of \(\langle N_{\rm P}\rangle\sim 75\), while for the higher resolution cases rH4 and rH5 we find \(\langle N_{\rm P}\rangle\sim 100\). As we find converging plasmoid numbers for the two higher-resolution cases, we conclude that the current sheet width (set by \(\eta=5\times 10^{-5}\)), within the \(25\)\(r_{\rm g}\) domain, is fully resolved starting from a resolution of \(4096^{2}\).

Figure 11: Timeseries of the mass accretion rate \(\dot{M}\) (panel \(a\)), magnetic flux \(\Phi_{\rm B}\) (\(b\)), normalized magnetic flux \(\phi=\Phi_{\rm B}/\sqrt{|\dot{M}|}\) (\(c\)), number of identified plasmoids \(N_{\rm P}\) per simulation (panels \(d\), \(e\), and \(f\)), and normalized cross-correlation function (corr) between \(-\nabla\Phi_{\rm B}\) and \(N_{\rm P}\) (\(g\)). The fluxes are calculated at \(2.5\)\(r_{\rm g}\). We display both the ideal (iM3, iM4, and iM5 in shades of _orange_) and resistive (rH3, rH4, and rH5 in shades of _purple_) GRMHD simulations.

In the last panel (\(g\)), we cross-correlate the plasmoid number (\(N_{\rm P}\)) with the negative gradient of the shell-penetrating magnetic flux (\(-\nabla\Phi_{\rm B}\)) and find a positive relation for most cases. Except for rH3, which is the uncorrelated component on the background (in lightest purple), we find a clear correlation between the most prominent peak in \(N_{\rm P}\) and a decrease in \(\Phi_{\rm B}\). The maxima of the correlation function coincide with the beginning of a drop in the magnetic flux and are denoted by vertical dashed lines in their corresponding panels. This is a consistent trend as long as there is a clear flux eruption, which also explains the uncorrelated rH3 results, as there is no clear decrease in \(\Phi_{\rm B}\) present. For iM5 at \(T\approx 3780~{}r_{\rm g}/c\), the flux eruption is rather large, as indicated by the decrease in \(\Phi_{\rm B}\), which has pushed the maximum of corr\((-\nabla\Phi_{\rm B},N_{\rm P})\) further to the right. Just before the flux eruption, we find an increase (of several factors) in \(\Phi_{\rm B}\), after which it starts to drop. The positive correlation is naturally explained by the fact that the flux eruption, which is accompanied by the temporary halting of the accretion flow, is a significant perturbation to the accretion disc that is able to initiate reconnection in numerous places. Even though this general picture applies, we find that the dynamics are likely also stochastic in nature, as the rH4 case displays different behavior, with a drop in \(N_{\rm P}\) directly after the flux eruption. This is in part explained by our identification strategy, which only identifies plasmoids within \(25~{}r_{\rm g}\): as the disc has receded during the flux eruption, the domain in which plasmoids can form also shrinks, which effectively delays the peak in \(N_{\rm P}\). Additionally, the shell-penetrating magnetic flux (\(\Phi_{\rm B}\)) only shows a relatively minor depression, which indicates a relatively minor flux eruption and subsequent perturbation of the disc structure. So, in short, one can expect a reaction in the plasmoid formation rate following a flux eruption, which tends to increase the plasmoid count as it perturbs the disc and triggers reconnection.
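One way to compute the normalized cross-correlation of panel (\(g\)) of Fig. 11 is sketched below; the normalization convention is ours, and the time series are assumed to live on a uniform time grid.

```python
import numpy as np

def crosscorr_fluxdrop_plasmoids(time, phi_B, N_P):
    """Normalized cross-correlation between -d(Phi_B)/dt and the plasmoid count N_P."""
    a = -np.gradient(phi_B, time)              # negative gradient of the shell-penetrating flux
    a = (a - a.mean()) / a.std()
    b = (N_P - N_P.mean()) / N_P.std()
    corr = np.correlate(a, b, mode='full') / len(a)
    lags = np.arange(-len(a) + 1, len(a)) * (time[1] - time[0])
    return lags, corr
```

A positive peak then indicates that a drop in \(\Phi_{\rm B}\) (a flux eruption) is accompanied by an increase in the number of identified plasmoids.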
## 4 Discussion

In this section, we discuss our results in the context of earlier works, following five main points: (i) a direct comparison to GRMHD-related plasmoid detection methods, (ii) specifics of our simulation library, (iii) implications for three-dimensional (3D) results, (iv) the effects of resistivity, and (v) a discussion of the flaring potential of plasmoids.

**GRMHD plasmoid detection.** Comparison with earlier works that have identified plasmoid structures in GRMHD (Nathanail et al., 2020; Jiang et al., 2023) suggests that the approach outlined in this work finds \(5-10\times\) more plasmoids. Both aforementioned works utilize the Bernoulli factor (\(Be=-hu_{t}\)) as the underlying identification medium and use a canny-edge detection algorithm on a Gaussian-blurred segment (as provided by the scikit-image Python package). We made initial attempts with this proposed method but did not reach the desired efficacy or fidelity, which motivated the development of the algorithm outlined in this work. The difference in plasmoid counts is not entirely attributable to the detection method: other potential causes for the discrepancy are the identification medium, resolution, simulation configuration, and the inherent differences between resistive and ideal GRMHD. A number of these points will be discussed in detail in the following paragraphs. We start with the identification medium, which we took to be the magnetic flux function \(\Psi_{\rm B}\), as it naturally identifies places with a circular magnetic field structure. When we compare this with using the Bernoulli factor \(Be\), it is clear from the results in this work that not all plasmoids are unbound, as demonstrated in Fig. 8. One is likely to miss the plasmoids created in the equatorial plane, as those tend to be bound (as was also pointed out in Jiang et al., 2023). There are also clear advantages to using \(Be\): one can apply well-established image-recognition algorithms if one is able to increase the contrast (i.e., only show a limited color range), to which \(Be\) lends itself well. Nevertheless, this comes at the cost that one can only identify a subset of the plasmoid population.

**Simulation library.** When visually comparing our simulations to those of Ripperda et al. (2020), with a highest resolution of \(6144\times 3072\) with respect to our \(8192\times 8192\), we infer that the number of plasmoids does not differ significantly based on the presented figures, except perhaps at the smallest scales. More importantly, one may even draw the conclusion that SANE simulations produce clearer and more abundant plasmoid structures. Nathanail et al.
(2020) utilize initial magnetic field configurations ranging from a single dipolar loop up to intricate multi-polar configurations, with an evolution that can be described as SANE-like (with a low \(\phi_{\rm BH}\sim 2\)). The multi-polar configurations, especially, are expected to produce a lot of plasmoids, as is confirmed in their Fig. 6. However, they do not show any statistics. This is done, however, in Jiang et al. (2023) using the same methodology, but their configuration has a multi-polar initial magnetic field and evolves to be heavily magnetized (i.e., MAD-like). The evolution is very chaotic and consistent with MAD, but only relatively few plasmoids are visible, indicating that the lower resolution (up to \(4096\times 2048\)) and the identification technique are likely to play a role. It is important to note that those simulations used ideal MHD, so we only compare them to the iM3, iM4, and iM5 cases. The differences between resistive and ideal GRMHD are discussed in detail below.

**3D.** How applicable are 2D results to a 3D reality? A number of arguments come into play here. First, the plasmoids in our simulations describe predominantly elliptical (close to circular) structures and have long merging chains. This is in part explained by the confined nature of the 2D simulations. As, due to this confining nature, plasmoids have a greater probability to interact and merge, they are likely to become larger. Adding an additional dimension (in \(\hat{\phi}\)) would significantly complicate the situation. First, the plasmoid morphology would change and gain the resemblance of a flux rope. Second, the chance for interaction would decrease significantly, as it is simply less likely to come across another flux rope. Third, the definition of flux rope coalescence becomes difficult, as flux ropes likely merge in a single place but not in their entirety. These points are clearly demonstrated for the 3D equivalent of the Harris sheet as presented in, e.g., Sironi and Spitkovsky (2014); Cerutti et al. (2014). There, one finds complex behavior of, and interaction between, flux ropes that is partially due to the presence of the kink instability (e.g., Bromberg et al., 2019; Davelaar et al., 2020), which is absent in axisymmetric simulations. For high-resolution 3D GRMHD simulations, some evidence for the presence of plasmoids, or flux ropes, was presented in Ripperda et al. (2022). Nevertheless, the typical appearance of such structures and how much they stand out with respect to their environment is relatively unknown in 3D.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
_Name_ & \(\mu_{\dot{M}}\) & \(\sigma_{\dot{M}}\) & \(M_{\dot{M}}\) & \(\mu_{\Phi_{\rm B}}\) & \(\sigma_{\Phi_{\rm B}}\) & \(M_{\Phi_{\rm B}}\) \\
\hline
iM3 & 6.86 & 7.05 & 1.03 & 55.99 & 8.87 & 0.16 \\
iM4 & 7.36 & 11.46 & 1.56 & 54.19 & 8.93 & 0.16 \\
iM5 & 7.29 & 17.35 & 2.38 & 50.45 & 10.17 & 0.2 \\
rH3 & 6.34 & 8.16 & 1.29 & 59.38 & 6.27 & 0.11 \\
rH4 & 7.29 & 9.23 & 1.27 & 59.37 & 6.55 & 0.11 \\
rH5 & 10.78 & 17.28 & 1.6 & 57.79 & 7.53 & 0.13 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The modulation index \(M_{\rm Q}\equiv\sigma_{\rm Q}/\mu_{\rm Q}\), with \(\sigma_{\rm Q}\) and \(\mu_{\rm Q}\) denoting the standard deviation and mean of the quantity \({\rm Q}\in\{\dot{M},\Phi_{\rm B}\}\). This index gives a measure of the variability in the simulations' timeseries.

**Resistivity.** In essence, setting a resistivity (\(\eta\)) allows for consistently resolving the underlying current sheets in the simulation, which
in ideal (GR)MHD is ill-defined, as it is numerically determined and therefore has a stochastic (and coordinate-dependent) component. As is clearly outlined in section 3.2.4, there is a clear discrepancy between the resistive and ideal simulations. While the former have a relatively consistent plasmoid number of \(N_{\rm P}\sim 100\), the latter have a non-flaring count of about \(N_{\rm P}\sim 10\). So, even though these discrepancies were expected, they had not been verified in terms of plasmoid count until now. In part it can be a selection effect, as the ideal simulation(s) entered a 'quiet' phase with few perturbations to the disc structure, but it is interesting that this does not happen for the resistive case. However, in light of recent findings by the Event Horizon Telescope Collaboration (EHTC et al., 2022), where it was pointed out that the (ideal) GRMHD simulations produce too variable emission signatures, one can draw the tentative conclusion that this is further worsened by the use of resistive MHD. Additionally, the physical interpretation of resistivity is that it is a proxy for kinetic effects, which are simulated self-consistently with PIC methods, but assessing what the 'correct' resistivity is for our physical scenario is a non-trivial question (Selvi et al., 2022). A rigorous (GRMHD) study including several resistivities is therefore needed to make more robust claims, but this is rather computationally expensive, as one needs to ensure that the current sheets are well-resolved.

**Misidentification.** For the approach outlined in this work, we are indiscriminate as to what properties a plasmoid should have, except that it should correspond to a circular magnetic field geometry. Even though this allows us to obtain a rather complete distribution, it is slightly sensitive to misclassifications, which happen mainly for overly dense regions. This is explained by the sensitivity of both the local extrema finder and the watershed algorithm: even a minor deviation from the background is treated as if it were a plasmoid. Overall, this happens only rarely. What occurs more often is that plasmoids in close vicinity to each other are grouped, as they have very similar \(\Psi_{\rm B}\) signatures. Other than somewhat diminishing the detected plasmoid count, this does not influence the surface-averaged quantities (and distributions), as they still probe the plasmoid structure. As with all identification problems, the difficulty lies in finding a strategy that is able to bridge the various length-scales while not picking up on erroneous features. This is largely determined by the blurring layer, which dictates the minimal size-scale to which one is sensitive and gives a handle on how much fine-structure one wants to include. As the large plasmoids tend to have a lot of fine-structure, one should apply a more aggressive blurring strategy for them. Even though our algorithm is accurate, it is by no means computationally fast to run, despite parallelization attempts, which should be intensified in the future. At present, we do not give an exact number of misclassifications, but one is able to find a few in most snapshots, while the vast majority (of \(\mathcal{O}(100)\)) is classified correctly. The number of plasmoids that were not classified is also of \(\mathcal{O}(1)\); these are predominantly caused by numerical instabilities in the contour-finding step of the algorithm, which typically occur for relatively unclear 'plasmoid' structures.
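To make the roles of the blurring layer, the local extrema finder, and the watershed step more concrete, a schematic sketch of such a pipeline is given below. All function choices and parameter values here are our own illustrative assumptions; the actual algorithm (section 2.2) additionally includes the contour-finding step and the two-step blurring strategy for primary and secondary plasmoids.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def candidate_regions(psi_B, sigma_blur=2.0, min_distance=5):
    """Blur the flux function, locate local extrema (O-points), and assign each
    extremum a watershed region; returns label maps for maxima and minima."""
    psi_s = gaussian_filter(psi_B, sigma=sigma_blur)   # blurring sets the minimal size-scale
    labels = {}
    for name, field in (('maxima', psi_s), ('minima', -psi_s)):
        peaks = peak_local_max(field, min_distance=min_distance)
        seeds = np.zeros(field.shape, dtype=int)
        for i, (row, col) in enumerate(peaks, start=1):
            seeds[row, col] = i
        labels[name] = watershed(-field, markers=seeds)  # one basin per detected O-point
    return labels
```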
**Flaring potential.** While we started this paper by talking about plasmoids as a potential connection to flares, it is nevertheless difficult to make direct emission interpretations. The main reason for this is that the emission properties of plasmoids in the BH accretion environment are still largely unknown, especially as one would expect a significant non-thermal contribution. The utilization of a thermal synchrotron proxy (as in, e.g., Porth et al., 2019) would therefore likely give an unrealistic picture. Ripperda et al. (2020) gave estimates of the synchrotron emission and its potential to explain flares, and our estimates are of the same order. Nevertheless, it would be beneficial to conduct a full radiative transfer study to accurately assess the flaring potential of plasmoids, including a non-thermal electron population or a reconnection-dedicated description (Rowan et al., 2017). This is an interesting avenue to pursue in the future, as it is possible to pin-point the plasmoid's location with the algorithm. ## 5 Conclusions We have been able to identify plasmoids in highly turbulent accretion discs surrounding SMBHs with a higher fidelity than has been achieved before, which allows for creating complete time-series and distributions with sufficient counts to assess the statistics. Additionally, we have verified our methodology with a set of previously well-investigated Harris current sheet simulations and found them to be consistent with findings from previous studies (Uzdensky et al., 2010; Loureiro et al., 2012; Huang and Bhattacharjee, 2012; Sironi et al., 2016). Interestingly, the scaling laws (outlined in sections 3.1.3 and 3.2.3) for both the Harris sheet and the GRMHD simulation are very similar, which indicates that plasmoid formation in the more complex accretion disc environment does not differ fundamentally from the Harris sheet picture. Using this newly developed algorithm has enabled us to better study the plasmoid population within MAD accretion discs, and has clearly laid bare discrepancies in plasmoid occurrence rates between ideal and resistive MHD that warrant further investigation with a more systematic study that includes other accretion scenarios (e.g., SANEs). The typical plasmoid in a MAD GRMHD simulation is equally dense and somewhat under-magnetized with respect to its surroundings, moves with its surroundings, and is likely to be unbound according to the Bernoulli criterion. Nevertheless, this behavior describes the averages of distributions and does not capture the deviations, which occur frequently. Especially for the orbital velocities and boundedness of the plasmoids, one finds large spreads in the distributions. This indicates that plasmoids can occur both as super- and sub-Keplerian features, which is currently still an active point of investigation within the community. Magnetic saturation at the BH event horizon produces flux tubes in a violent event that (partially) pushes back the accretion flow for MAD simulations. Even though this is one of the leading theories to explain flares around SMBHs (Dexter et al., 2020; Porth et al., 2021), they are established to orbit with strongly sub-Keplerian velocities, which is at odds with some observations. The formation of plasmoids is, therefore, still a strong candidate for explaining both Keplerian (Gravity Collaboration et al., 2018, 2020) and super-Keplerian (Matsumoto et al., 2020) near-infrared observations of flares around Sgr A\({}^{*}\). 
More specifically, we regularly recover plasmoid sizes that are comparable to the hot spots that were used to interpret flares at both NIR- and mm-wavelengths (Gravity Collaboration et al., 2020, 2020; Wielgus et al., 2022; Vos et al., 2022). Also, as we outlined in section 3.2.4, flux eruptions (corresponding to a decrease in horizon-penetrating magnetic flux \(\Phi_{\rm B}\)) and plasmoid formation are likely strongly correlated with one another, indicating that flux eruptions act as an instigator of magnetic reconnection. The flux tube and plasmoid (or flux rope) pictures therefore do not have to be mutually exclusive, but rather have a complementary co-existence. Lastly, we would like to point out that the identification algorithm is much more universally applicable, as its function can be well characterised as a 'closed contour-detector around local extrema'. So, in the future, we are planning to apply our methodology to mapping the 3D structure of plasmoids and/or flux tubes for accretion onto SMBHs. It would also lend itself well to other MHD or PIC identification applications, such as shearing or turbulent box simulations. ## Acknowledgements We thank Bart Ripperda, Jordy Davelaar, Fiorenze Stoppa, Alejandra Jimenez-Rosales, and Aristomenis Yfantis for the helpful discussions and comments on the manuscript. JV acknowledges support from the Dutch Research Council (NWO) supercomputing grant No. 2021.013. MM acknowledges support by the NWO grant No. OCENW.KLEIN.113 and support by the NWO Science Athena Award. BC acknowledges the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 863412). HO acknowledges funding from Radboud University through a Virtual Institute of Accretion (VIA) postdoctoral fellowship from the Netherlands Research School for Astronomy (NOVA). Software used to create the results in this work: BHAC (Porth et al., 2017; Olivares et al., 2019), Python (Van Rossum & Drake, 2009), NumPy (Harris et al., 2020), matplotlib (Hunter, 2007), GNU parallel (Tange, 2022) ## Data Availability The data used for this work will be shared following a reasonable request to the authors. The plasmoid identification algorithm will be made publicly available in the following repository: [https://github.com/JesseVos/Plasmoid_Finder](https://github.com/JesseVos/Plasmoid_Finder), in due time, but will also be shared following a reasonable request before that time.
2309.11893
On the Performance Analysis of RIS-Empowered Communications Over Nakagami-m Fading
In this paper, we study the performance of wireless communications empowered by Reconfigurable Intelligent Surfaces (RISs) over Nakagami-m fading channels. We consider two phase configuration designs for the RIS, one random and another one based on coherent phase shifting. For both phase configuration cases, we present single-integral expressions for the outage probability and the bit error rate of binary modulation schemes, which can be efficiently evaluated numerically. In addition, we propose accurate closed-form approximations for the ergodic capacity of the considered system. For all considered metrics, we have also derived simple analytical expressions that become tight for large numbers of RIS reflecting elements. Numerically evaluated results compared with Monte Carlo simulations are presented in order to verify the correctness of the proposed analysis and showcase the impact of various system settings.
Dimitris Selimis, Kostas P. Peppas, George C. Alexandropoulos, Fotis I. Lazarakis
2023-09-21T08:56:49Z
http://arxiv.org/abs/2309.11893v1
# On the Performance Analysis of RIS-Empowered Communications Over Nakagami-\(m\) Fading ###### Abstract In this paper, we study the performance of wireless communications empowered by Reconfigurable Intelligent Surfaces (RISs) over Nakagami-\(m\) fading channels. We consider two phase configuration designs for the RIS, one random and another one based on coherent phase shifting. For both phase configuration cases, we present single-integral expressions for the outage probability and the bit error rate of binary modulation schemes, which can be efficiently evaluated numerically. In addition, we propose accurate closed-form approximations for the ergodic capacity of the considered system. For all considered metrics, we have also derived simple analytical expressions that become tight for large numbers of RIS reflecting elements. Numerically evaluated results compared with Monte Carlo simulations are presented in order to verify the correctness of the proposed analysis and showcase the impact of various system settings. Reconfigurable intelligent surface, phase configuration design, outage probability, ergodic capacity, bit error rate, Nakagami-\(m\) fading. ## I Introduction Reconfigurable Intelligent Surfaces (RISs) have recently emerged as a promising candidate technology for next generation wireless systems due to their ability to reconfigure the wireless communication environment in an intelligent manner, thus increasing reception reliability at low cost [1, 2, 3]. An RIS consists of a large number of ultra-low power consumption elements which are capable of electronically controlling the phase of the incident electromagnetic waves. Recent research results indicate that RISs have great potential due to their promising gains achieved in terms of spectral and energy efficiency without the need for higher hardware complexity and cost [1]. Moreover, RISs have been envisioned as a key enabling technology to support sixth generation (6G) wireless communications [1]. Some typical RIS applications in emerging 6G systems include multi-user multicast systems, simultaneous wireless information and power transfer (SWIPT), non-orthogonal multiple access (NOMA) systems, physical-layer security and cognitive radio (CR) networks, e.g. see [1] and references therein. Because of their promising properties, the performance of RISs over fading channels has been addressed in several recent research works. Representative examples can be found in [4, 5, 6, 7] and references therein. For example, [4] presented tight bounds and asymptotic results on the performance of RIS systems in the presence of mixed Rayleigh and Ricean fading channels. In [5], bounds on the outage and Ergodic Capacity (EC) performance of RIS-empowered systems operating over fading channels have been presented. In [6], accurate approximations to the end-to-end (e2e) Signal-to-Noise Ratio (SNR) distribution of RIS-assisted systems operating over Rayleigh fading channels have been presented. An approximate performance evaluation of RIS-empowered systems over Nakagami-\(m\) fading channels is available in [7]. All of the above-cited works have considered Optimal Phase Shifting (OPS), i.e., the phase shift of each RIS element is matched with the phases of its incoming and outgoing fading channels. Although such a design yields the optimal performance, it may not be applicable in many cases of practical interest. 
Specifically, on the one hand, perfect Channel State Information (CSI) at the source is required, resulting in an extensive system overhead. On the other hand, the finite resolution of practical RIS phase shifters poses additional difficulties to the OPS implementation. In [8], exact expressions and asymptotic results for the Bit Error Rate (BER) of large RIS-assisted systems over Nakagami-\(m\) fading have been presented, assuming a von Mises distribution for the phase error. In [9], a moment-based method was employed to analyze the performance of RIS-assisted NOMA under Nakagami-\(m\) fading with OPS and Random Phase Shifting (RPS). In [10], a random passive beamforming scheme for RIS-assisted multi-user multicast systems has been proposed and approximate performance evaluation expressions have been deduced. In [11], energy-efficient schemes for RIS-aided communications via random phase rotations have been proposed. The impact of OPS and RPS on the performance of RIS-based NOMA systems has been addressed in [12], where tight approximations for the outage and ergodic capacity performance have been presented. In [13], large RIS-assisted MIMO systems have been analyzed using a stochastic geometry approach. Furthermore, because of the inherent difficulty of finding closed-form expressions for the e2e SNR statistics, these works mainly resort to accurate closed-form approximations, tight bounds, or asymptotic analysis. Therefore, as it has also been pointed out in [12], finding exact analytical expressions for such statistics is an important research direction. Motivated by the above facts, the novel contributions of
2307.16488
Model-free Grasping with Multi-Suction Cup Grippers for Robotic Bin Picking
This paper presents a novel method for model-free prediction of grasp poses for suction grippers with multiple suction cups. Our approach is agnostic to the design of the gripper and does not require gripper-specific training data. In particular, we propose a two-step approach, where first, a neural network predicts pixel-wise grasp quality for an input image to indicate areas that are generally graspable. Second, an optimization step determines the optimal gripper selection and corresponding grasp poses based on configured gripper layouts and activation schemes. In addition, we introduce a method for automated labeling for supervised training of the grasp quality network. Experimental evaluations on a real-world industrial application with bin picking scenes of varying difficulty demonstrate the effectiveness of our method.
Philipp Schillinger, Miroslav Gabriel, Alexander Kuss, Hanna Ziesche, Ngo Anh Vien
2023-07-31T08:33:23Z
http://arxiv.org/abs/2307.16488v1
# Model-free Grasping with Multi-Suction Cup Grippers for Robotic Bin Picking ###### Abstract This paper presents a novel method for model-free prediction of grasp poses for suction grippers with multiple suction cups. Our approach is agnostic to the design of the gripper and does not require gripper-specific training data. In particular, we propose a two-step approach, where first, a neural network predicts pixel-wise grasp quality for an input image to indicate areas that are generally graspable. Second, an optimization step determines the optimal gripper selection and corresponding grasp poses based on configured gripper layouts and activation schemes. In addition, we introduce a method for automated labeling for supervised training of the grasp quality network. Experimental evaluations on a real-world industrial application with bin picking scenes of varying difficulty demonstrate the effectiveness of our method. ## I Introduction Model-free grasping with multi-suction grippers is a key challenge for fully automating many pick-and-place tasks in industry and logistics. Fig. 1 shows a typical robotic bin picking system for warehouse order fullfilment including an industrial robot equipped with a multi-suction gripper, an overhead RGB-D camera and bins containing diverse objects positioned on a conveyor belt. Recent research proposes machine learning methods that enable model-free grasp prediction for a wide variety of unseen objects in unstructured environments [18, 11]. Most approaches focus on grasp prediction for parallel-jaw grippers or single-suction grippers. However, suction grippers with multiple suction cups are rarely studied so far, although they are capable of balancing torques which is favorable for dynamic movements of objects with large dimensions [16]. They also enable lifting of heavier objects without damaging their surfaces by distributing the required grasp force over multiple suction cups. Some multi-suction grippers allow the activation of different suction cup groups to offer flexibility in diverse object portfolios. However, multi-suction grippers are more challenging for grasp prediction approaches to deal with due to their complex geometry and alternative activation patterns. In this paper, we propose a method for model-free prediction of grasp qualities for suction grippers that is independent of the actual gripper design, e.g., the number and size of suction cups. More specifically, we present a gripper-agnostic grasp quality prediction including a procedure for automatic labeling and supervised training, allowing for generalization to new object shapes. Furthermore, we present a method for optimal gripper selection and rotation based on the inferred, gripper-agnostic pixel-wise grasp quality prediction combined with footprint images to represent specific gripper design configurations. Our contributions are as follows: (1) Method for model-free prediction of grasp qualities agnostic to the suction gripper design, including automated labeling for supervised training. (2) Method for optimal gripper selection and orientation by matching of arbitrary suction gripper footprints. We evaluate contribution (1) by a comparison of our proposed pixel-wise grasp quality prediction with existing methods based on various bin picking scenes of different levels of difficulty. In addition, we demonstrate contribution (2) by real-world experiments of performing multi-suction cup grasp prediction on an industrial bin picking application. 
## II Related Work Modern robot grasping methods often use deep learning techniques trained on large datasets to predict grasps. Most grasp prediction approaches rely on some form of predicting a pixel-wise grasp quality map that represents the grasp success probability at each pixel. Mahler et al. [15] and Zeng et al. [28] propose to learn a grasp map for suction and parallel-jaw grasps from a supervised dataset using RGB-D or depth as input. Following this approach, GG-CNN [17] and FC-GQ-CNN [22] propose to predict a grasp quality map and a 4 DoF parallel-jaw grasp configuration for each pixel. Fig. 1: Robotic bin picking system with six-axis industrial robot, overhead RGB-D camera, tool changer, single-suction gripper, multi-suction gripper, bins containing diverse objects and conveyor belt. Cao et al. [2] propose a pixel-wise grasp map and a grasp configuration prediction for single-suction grippers. Breyer et al. [1] propose to generate voxel-wise grasps using a truncated signed distance function for a parallel-jaw gripper. A follow-up work optimizes grasp prediction jointly with object shape reconstruction [9]. Other approaches use point clouds as input and predict point-wise grasp qualities and gripper configurations [29, 20, 26, 7, 19, 4, 12]. Various gripper designs are available for robotic grasping. Parallel-jaw and suction grippers are commonly used and effective for regular object sets [3, 6]. In bin picking scenarios, suction grippers have an advantage over parallel-jaw grippers for object reachability. Dex-Net 3.0 [14] improves success rates of Dex-Net 2.0 [15] by training a grasp quality neural network specifically for suction grippers. Shao et al. [24] propose a self-supervised learning method for simulated suction-based picking. Suctionnet-1billion predicts grasps for single-suction grippers through end-to-end training of a pixel-wise prediction network [2]. Jiang et al. [8] jointly learn pixel-wise grasp quality and robot reachability maps for suction vacuum cups. Zeng et al. [27], winners of the Amazon picking challenge, propose learning a pixel-wise grasp map for a hybrid gripper combining parallel-jaw and suction cup functions. Although there are advanced multi-suction gripper designs in industrial products [13, 16], there is little research on explicitly modelling and using them for robotic grasping. Recent efforts focus on optimizing a single network for the prediction of grasps for different gripper types [10, 23, 25]. However, no prior work has explored learning a pixel-wise grasp map that can be used for both single- and multi-suction grippers without altering the network architecture or the training process. ## III Problem Statement Given an image of the scene and a set of multi-suction grippers, our goal is to predict a set of feasible grasp poses which can be used to transfer arbitrary objects from a source to a target bin. In particular, our method receives an RGB-D image as input and infers grasp poses from a multi-channel grasp map where each channel per gripper type encodes pixel-wise grasp quality and rotations. We denote by \(S\) an RGB-D scene image of size \(\mathbb{R}^{4\times h\times w}\), and by \(g=(x,y,z,\alpha,\beta,\gamma,t)\) a grasp configuration predicted at position \((x,y,z)\) with orientation \((\alpha,\beta,\gamma)\) and gripper \(t\in T\). As grippers, we assume that there is a set \(T\) of different types of multi-suction grippers that can be used for a grasp. A minimal illustration of these quantities is given below. 
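To keep the notation concrete, the scene image \(S\), the grasp configuration \(g\), and the gripper set \(T\) could be represented as follows. This is purely illustrative; the class and field names are our assumptions and not code from the authors' system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class GripperType(Enum):
    """Illustrative gripper set T: available multi-suction activation patterns."""
    SINGLE_CUP = auto()
    TWO_CUP = auto()
    THREE_CUP = auto()


@dataclass
class Grasp:
    """Grasp configuration g = (x, y, z, alpha, beta, gamma, t)."""
    x: float            # position in the camera/world frame
    y: float
    z: float
    alpha: float        # roll
    beta: float         # pitch
    gamma: float        # yaw around the gripper axis
    t: GripperType      # selected gripper type / footprint
```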
The problem of predicting multi-suction grasps is to find a mapping \(f:S\mapsto g\) for every input image \(S\). We propose a two-step approach for solving this problem as illustrated in Fig. 2. First, a neural network predicts a pixel-wise "graspability" property for the image, denoting how well each individual pixel can be grasped and in the following referred to as grasp quality. Second, an optimization step determines the best gripper and corresponding grasp pose based on the predicted grasp quality. In contrast to methods that directly approximate \(f\), this two-step approach has the benefit that no gripper-specific training data is needed. To the best of our knowledge, a grasp dataset of multi-suction cups does not exist in the literature and, in particular, not for our specific multi-suction cup grippers. Fig. 2: Summary of our proposed approach. Our method receives an RGB-D scene recording and camera information to derive a pixel-wise grasp quality. Afterwards, by providing gripper geometry information as footprints, a gripper optimization finds the best poses and corresponding grippers. ## IV Model-free Grasp Quality Prediction The first part of our proposed method consists of a generic, pixel-wise prediction of graspable surfaces. This prediction can be obtained for a wide range of unknown objects and does not require gripper geometry-specific information. As input, we use high-resolution RGB-D images, e.g. from industry-grade cameras like _Zivid Two_ or _Photoneo PhoXi_, and assume that the intrinsic camera parameters are known. The output is a pixel-wise grasp quality prediction \(Q\) where each pixel ranges from \(0\) (not graspable) to \(1\) (perfectly graspable) and indicates how suitable the respective pixel is for attaching a suction cup. ### _Grasp Quality Inference_ We use a U-Net [21] architecture with a ResNet-34 [5] encoder and a single-channel output \(Q\) to infer the pixel-wise grasp quality. The network input is a three-channel image, \(I:=(S_{\mathrm{Gray}},S_{\mathrm{Depth}},S_{\mathrm{Std}})\), which consists of grayscale \(S_{\mathrm{Gray}}\), depth \(S_{\mathrm{Depth}}\), and the standard deviation of the surface normal vectors \(S_{\mathrm{Std}}\). Fig. 3 shows an example of each channel and the resulting network output \(Q\). Using a single grayscale channel instead of three RGB channels largely retains texture information, but reduces the number of channels that provide this information and, more specific to our application, prevents the network from overfitting to colored bins as background. Using \(S_{\mathrm{Std}}\) as a third channel is motivated by the fact that graspability for suction grippers highly depends on the local surface structure. If the surface is very irregular, a suction cup is less likely to form a sealed vacuum at that point. To obtain \(S_{\mathrm{Std}}\), we first calculate the ordered point cloud from \(S_{\mathrm{Depth}}\) and the configured camera matrix \(K\) for each pixel \((u,v)\in h\times w\) as \[S_{\mathrm{Pts}}(u,v)=K^{-1}\big{(}S_{\mathrm{Depth}}(u,v)[u,v,1]^{\mathrm{T}} \big{)}\] and derive the pixel-wise surface normals, \(S_{\mathrm{Normals}}\). Then, \(S_{\mathrm{Std}}\) is computed for a small neighborhood and normalized to a value range \(S_{\mathrm{Std}}(u,v)\in[0,1],\forall u,v\). One practical challenge in calculating \(S_{\mathrm{Normals}}\) is that this calculation is susceptible to errors and inaccuracies in the depth image. 
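Before turning to how those depth artifacts are handled, the construction of the ordered point cloud and of the \(S_{\mathrm{Std}}\) channel described above can be summarized in a short sketch. This is a minimal NumPy illustration under our own assumptions (finite-difference normals, a box-filter neighborhood of five pixels, max-normalization); note that the finite-difference normals inherit exactly the depth-noise sensitivity just mentioned.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def depth_to_points(depth, K):
    """Back-project an ordered depth image: S_Pts(u, v) = K^-1 (S_Depth(u, v) * [u, v, 1]^T)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)   # (h, w, 3)
    return (depth[..., None] * pix) @ np.linalg.inv(K).T


def normal_std_channel(depth, K, win=5):
    """S_Std: local standard deviation of surface normals, normalized to [0, 1]."""
    pts = depth_to_points(depth, K)
    # Finite-difference surface normals from neighboring points of the ordered cloud.
    du, dv = np.gradient(pts, axis=1), np.gradient(pts, axis=0)
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-9
    # Sliding-window means of n and n^2 give the per-component variance over the window.
    mean = uniform_filter(n, size=(win, win, 1))
    mean_sq = uniform_filter(n ** 2, size=(win, win, 1))
    std = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None)).sum(axis=-1)
    return std / (std.max() + 1e-9)
```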
To address this, we employ two pre-processing steps where the first one fills missing pixels with depth approximations and the second one reduces noise resulting from outlier pixels. ### _Labeling and Supervised Training_ For supervised training of the grasp quality network, we require a dataset that contains RGB-D input data \(S\), as well as pixel-wise ground-truth for grasp quality \(Q^{*}\). Annotation of \(Q^{*}\) requires a high effort if done manually for cluttered scenes with many objects or complex geometry. We therefore choose an alternative approach for obtaining approximate labels, \(L\), such that \(L(u,v)\approx Q^{*}(u,v)\). This automatic labeling approach is based on the insight that for singulated objects with simple geometries, the suitability for a pixel to be grasped inversely correlates with the \(S_{\mathrm{Std}}\), similar to our initial motivation for using \(S_{\mathrm{Std}}\) as an input to the network. Consequently, we calculate one component of the labels by \(L_{\mathrm{Std}}:=1-S_{\mathrm{Std}}\). Furthermore, we expect grasps closer to the center of mass of an object to be more stable and thus, these pixels should receive a higher grasp quality. To include this property in the training labels, we cluster pixels of graspable surfaces as given by \(L_{\mathrm{Std}}\) and calculate a second component of the labels \(L_{\mathrm{Dist}}\) as the distance to the respective cluster center. One limitation of this labeling method is that also the bin and other non-object geometries of the scenes may be considered as graspable areas, solely depending on their surface. For training, we can circumvent this issue by recording a background image of an empty bin, i.e., a scene recording without any objects before placing the training objects in the scene. We then use background subtraction based on depth to mask all non-object pixels \(M_{\mathrm{bg}}\) and only consider non-zero labels for object pixels. The grasp quality labels \(L\) for training are thus given by \[L=\begin{cases}w_{\mathrm{Std}}L_{\mathrm{Std}}+w_{\mathrm{Dist}}L_{\mathrm{ Dist}}&\text{where }M_{\mathrm{bg}}>0\\ 0&\text{otherwise}\end{cases} \tag{1}\] where the weights \(w_{\mathrm{Std}}\) and \(w_{\mathrm{Dist}}\) can balance the influence of the different label components, but are chosen to be equal in our experiments. The quality metric defined in Eq. (1) distinguishes itself from the approach presented in [8] by using an unnormalized \(L_{\mathrm{Dist}}\) score and excluding the residual error of local plane fitting at each pixel, making Eq. (1) more computationally efficient. Fig. 4 shows three examples from our training dataset. While we observe that this labeling approach works sufficiently well for objects with simple geometries in scenes with only a few instances, the approximation \(L(u,v)\approx Q^{*}(u,v)\) becomes significantly worse for complex geometries and scenes with object instances that are close together. Consequently, we limit training data to such simpler scenes that allow for a good approximation. The approach can be extended to scenes with a larger number of objects or overlapping objects if instance mask annotations are available, which are much easier to obtain by manual annotation than grasp quality labels. 
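Before turning to that extension, the label construction of Eq. (1) for the background-recording setup described above can be sketched as follows. The clustering step, the depth-difference threshold, and the concrete (decreasing) functional form chosen for \(L_{\mathrm{Dist}}\) are illustrative assumptions on our part.

```python
import numpy as np
from scipy import ndimage


def make_labels(s_std, depth, bg_depth, w_std=0.5, w_dist=0.5, bg_tol=0.005, std_thresh=0.5):
    """Approximate grasp-quality labels L following Eq. (1).

    s_std    : normalized std. of surface normals per pixel, in [0, 1]
    depth    : depth image of the scene containing the training objects
    bg_depth : depth image of the empty bin (background recording)
    """
    # Background subtraction on depth; the mask is positive on object pixels,
    # matching the role of M_bg in Eq. (1).
    m_obj = np.abs(depth - bg_depth) > bg_tol

    l_std = 1.0 - s_std

    # Cluster graspable surface pixels and reward proximity to the cluster center.
    clusters, n = ndimage.label((l_std > std_thresh) & m_obj)
    l_dist = np.zeros_like(l_std)
    for idx in range(1, n + 1):
        ys, xs = np.nonzero(clusters == idx)
        d = np.hypot(ys - ys.mean(), xs - xs.mean())
        l_dist[ys, xs] = 1.0 - d / (d.max() + 1e-9)   # illustrative decreasing form

    return np.where(m_obj, w_std * l_std + w_dist * l_dist, 0.0)
```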
If such instance masks are available, a background image and object clustering are not required, since the background is given by all pixels that are not included in any of the object masks, and all other labeling steps can be performed in the same way, including the computation of \(S_{\mathrm{Std}}\) and the final labels according to Eq. (1). During training of the grasp quality prediction, the loss \(\mathcal{L}\) for some predicted grasp quality \(Q\) and target labels \(L\) is given by a pixel-wise mean-squared error \[\mathcal{L}=w_{\mathrm{bg}}\textit{MSE}\left(M_{\mathrm{bg}}\circ E\right)+w _{\mathrm{fg}}\textit{MSE}\big{(}(1-M_{\mathrm{bg}})\circ E\big{)} \tag{2}\] for some prediction error \(E:=L-Q\) and \(\circ\) denoting pixel-wise multiplication. The background mask \(M_{\mathrm{bg}}\) balances the loss received for on-object pixels with the one for background pixels with weights \(w_{\mathrm{bg}}\) and \(w_{\mathrm{fg}}\) calculated per image to reflect the ratio of background and foreground. Fig. 3: Input to the grasp quality network is given by a grayscale channel \(S_{\mathrm{Gray}}\), a depth channel \(S_{\mathrm{Depth}}\), and the standard deviations of surface normals \(S_{\mathrm{Std}}\). Output is a single-channel rating \(Q\) of how well a suction gripper can be attached to each pixel. Fig. 4: Three training examples, RGB input shown in the top row and generated labels \(L\) shown in the bottom row. Labels become worse for more complex geometries. Missing labels mainly result from invalid depth information. For the experiments in this paper, we generated a dataset of around 2,000 proprietary recordings of bin picking scenes collected across various robotic cells with a mixed object portfolio similar to those shown in Fig. 4. Training was then performed for \(30\) epochs on a _Nvidia V100_ GPU with a batch size of \(16\) and images being down-scaled to a resolution of \(1280\times 800\) pixels. We used stochastic gradient descent with an initial learning rate of \(10^{-4}\) and a cosine annealing schedule implemented in _PyTorch_. ## V Grasp Pose Detection The second part of our proposed method is deriving the full grasp pose from the pixel-wise grasp quality prediction described in Sec. IV. For this, we assume minor application knowledge about the types of available grippers, which are manually specified as gripper footprints. This does not include further scene understanding or context knowledge such as robot kinematics, bin dimensions, or object models. ### _Gripper Footprint Matching_ We propose a gripper selection and matching based on specified gripper footprints, such as the ones shown in Fig. 5. In this work, we assume that a footprint is always centered at the end-effector pose and that the footprint size is scaled to match the correct pixel-per-mm resolution, for example, around two pixels per \(\mathrm{mm}\) in our experimental application. To identify grasp poses, including a selection of the best gripper and its orientation, we perform a convolution over the inferred grasp quality \(Q\). For this, \(n_{\mathrm{r}}\) different discrete rotation steps of \(n_{\mathrm{f}}\) different gripper footprints are encoded as separate channels in one combined convolution kernel \(F\in\mathbb{R}^{n_r\cdot n_{\mathrm{f}}\times h_{F}\times w_{F}}\) of size \(h_{F}\times w_{F}\). The result of a convolution of \(Q\) with \(F\) is thus a multi-channel pixel-wise prediction of how well each gripper type in each rotation can perform a successful grasp at the respective pixel; a minimal sketch of this accumulation step is given below. 
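As referenced above, the following PyTorch-style sketch illustrates the plain accumulation of grasp quality under a stack of footprint channels. The stacking order, the area normalization, and the assumption of odd-sized footprints are our own choices; under the assumed stacking, the winning channel index decodes into a gripper type and a rotation step.

```python
import torch
import torch.nn.functional as tfn


def accumulate_grasp_quality(q, footprints, n_r):
    """Accumulate pixel-wise grasp quality Q under every footprint channel.

    q          : (H, W) tensor with the inferred grasp quality
    footprints : (n_f * n_r, hF, wF) tensor, one binary footprint per
                 combination of gripper type and discrete rotation step
    """
    c, hf, wf = footprints.shape
    area = footprints.sum(dim=(1, 2)).clamp(min=1.0)
    kern = footprints[:, None] / area[:, None, None, None]               # area-averaged
    acc = tfn.conv2d(q[None, None], kern, padding=(hf // 2, wf // 2))[0]  # (C, H, W)
    best_val, best_ch = acc.max(dim=0)
    gripper_type, rotation = best_ch // n_r, best_ch % n_r               # decode channel index
    return acc, best_val, gripper_type, rotation
```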
However, accumulating grasp quality like this leaves one issue which we denote by the term "edge wrapping": Consider a box which can be grasped well on two sides with different orientations, but not at the edge between these sides. Using a gripper footprint with two suction cups that have enough space between the cups might now match one suction cup to one side and the other cup to the other side. This would result in a grasp with a high theoretical graspability but which will fail in practice. A similar example for edge wrapping is shown in Fig. 6. Edge wrapping again motivates the use of normal vectors for avoiding infeasible grasp proposals. We therefore perform another convolution with the same kernel \(F\) over the three-channel image of normal vectors \(S_{\mathrm{Normals}}\) to compute the standard deviation of normal vectors for each of the respective gripper footprint areas as \[F_{\mathrm{Std}}:=\left|F*S_{\mathrm{Normals}}^{2}-(F*S_{\mathrm{Normals}})^{ 2}\right|^{\frac{1}{2}}. \tag{3}\] The product of the accumulated grasp quality \(F*Q\) with the inverse of the above standard deviation \(F_{\mathrm{Std}}\) then denotes the grasp feasibility. Finally, we compute three single-channel pixel-wise results of this operation. The first result \(O_{\mathrm{Type}}\) gives the pixel-wise gripper type and the second result \(O_{\mathrm{Rot}}\) denotes the respective gripper rotation, both given by the pixel-wise argmax over all \(n_{\mathrm{r}}\cdot n_{\mathrm{f}}\) channels of the convolution result. The last result is given by a pixel-wise grasp quality \(O_{\mathrm{Q}}\), calculated as the \(\max\) over all channels of the convolution result. This is similar to the previously computed grasp quality \(Q\), but \(O_{\mathrm{Q}}\) now denotes for each pixel how feasible the best possible grasp configuration would be at that pixel, in contrast to how well a pixel itself is suited for being grasped. Fig. 5: Example footprints of two different multi-suction grippers that both provide multiple activation patterns to choose from. White areas denote full surface contact, black areas mean no contact. Fig. 6: Example for the edge wrapping issue. For the purple grasp in the left image, a large three-suction cup gripper is incorrectly selected and ignores scene geometry. Instead, a smaller footprint aligned with the object surface would be correct and is the result when applying the proposed method, as shown in the right image. ### _Pixel-to-Pose Transformation_ To obtain a list of grasps from the previous pixel-wise results, grasp quality values \(Q\) are clustered such that clusters correspond to graspable areas of objects, and pixels near the cluster center with the highest \(O_{\mathrm{Q}}\) are selected for grasping. This has been shown to improve robustness compared to directly using the pixels with highest values and ensures that grasps are detected for all objects that receive high (but not necessarily the highest) grasp quality values in a scene. For each selected pixel \((u,v)\), the grasp pose position is then given by the respective point in the ordered point cloud \(x,y,z:=S_{\mathrm{Pts}}(u,v)\). The roll and pitch rotations (around the \(x\)- and \(y\)-axes of the gripper) are determined by the surface of the object and are thus given by the negative normal vector at the respective pixel \(\alpha,\beta:=-S_{\mathrm{Normals}}(u,v)\). The yaw rotation around the gripper axis is given by the gripper rotation determined during footprint matching \(\gamma:=O_{\mathrm{Rot}}(u,v)\). 
Finally, the gripper type of the grasp is given by the determined footprint \(t=O_{\mathrm{Type}}(u,v)\). ## VI Experiments We perform two different types of experiments to evaluate the performance of our approach. First, we evaluate grasp quality prediction performance by detecting single-suction grasps for an evaluation dataset and compare it with related methods. Second, we demonstrate our multi-suction grasp detection on an industrial bin picking robot cell. ### _Single-Suction Grasp Comparison_ Due to the lack of a comparable multi-suction gripper work, we determine single-suction grasps on an annotated reference dataset from the target bin picking cell and quantify the performance from the given pixel-wise ground-truth grasp success. We compare our work to the following two methods for single-suction grasp detection: **Dex-Net.** Satish et al. [22] propose a fully convolutional grasp quality CNN (FC-GQ-CNN) for grasp prediction. We use the pretrained model FCQCNN-4.0-SUCTION provided by the authors, which was trained on synthetic depth images of objects in clutter with parameters for a Photoneo PhoXi S camera. Published values are based on a very specific combination and placement of RGB-D camera and gripper. Therefore, we apply static cropping close to the bounding box of the bin and a depth value shift to make the depth images similar to the provided example images of the authors. Input images are resized to \(640\times 480\). **Zeng et al. [28]** introduce a multi-modal grasping framework. They use a separate FCN with residual connections that generates a suctionability map for each pixel. The framework combines RGB and depth data of size \(640\times 480\) using two pre-trained ResNet-101 towers, then concatenates the output features to predict suction affordances. In this paper, we evaluate the framework using two variations: First, we replicated and trained the network with the original dataset of the authors. Second, we retrained on a combination of the authors' dataset and our own dataset with labels and loss as in Sec. IV-B. The dataset for evaluation is created from eleven types of common objects, see Fig. 8 for examples. For detailed results, we split the dataset into three levels of difficulty. **Simple:** Bins filled with a small number of objects such as boxes or cylinders. **Typical:** Bins containing a heap of a larger number of the objects. **Complex:** Bins filled with a large number of challenging objects like partially transparent objects, stacked and texture-less boxes, or blister packs. Each category consists of ten RGB-D images and corresponding ground-truth pixel-wise grasp success. The grasp success is based on manually annotated object instance masks, combined with a similar method as described in Sec. IV-B for consideration of surface and weight, but with higher emphasis on the distance to the center. To achieve meaningful results, we manually reviewed and corrected the grasp quality to ensure reasonable ground-truth labels. For each scene, each method is queried for the top 20 grasps based on their grasp quality prediction. Results are listed in Tab. I. 
We state for each category as _Quality_ the average ground-truth grasp quality of feasible grasps (between \(0\) and \(1\)), as _Success_ the percentage of feasible grasps among predicted grasps (non-zero grasp quality), as _Objects_ the percentage of objects for which grasps are detected (counting at most 20 objects per scene due to top 20 grasps), as _Multi_ the average number of grasps predicted for the same object (closer to one is better), and as _None_ the percentage of scenes without a single feasible grasp. It can be seen that our method performs well for the considered application, which motivates its use for multi-suction grasp detection. In particular, it can be observed that our method spreads grasps across objects compared to Dex-Net and Zeng et al., which often predict multiple alternatives for the same object. Especially Zeng et al. achieve a presumably high success rate, but do so by detecting multiple close grasps per object, which has limited usefulness in practice. One likely contribution to this difference is the clustering of surface graspability for deriving poses (instead of directly predicting grasp success for pixels), which is important for robustly placing multi-suction grippers. We also achieve a high average grasp quality, and while this does not directly indicate the feasibility for multi-suction grasps, it suggests that considering a larger footprint geometry also improves the overall quality of single-suction grasps. This might be because the larger gripper geometry forces resulting grasp poses to be more consistently located on highly graspable areas and close to object centers. Finally, we would like to emphasize that we do not claim to generally outperform any of the other methods. We verified that our method works sufficiently well on the intended use case and object portfolio compared to related work. In general, applying the proposed gripper footprint optimization can also be done on grasp quality maps predicted by other methods. However, we observed that it works most robustly for the procedure presented in this paper. Still, especially Dex-Net shows a remarkable performance in the presented evaluation, considering that we were able to directly apply pretrained weights with minimal finetuning or preprocessing. Fig. 7: Summary of intermediate steps of grasp detection on an example scene from an application (outside of the training distribution). From left to right: RGB input, calculated surface normals, inferred grasp quality, clustered graspable objects, and matched footprints. Fig. 8: Examples for categories Simple (_top left_), Typical (_top right_) and Complex (_bottom row_) of the evaluation dataset. ### _Real-World Multi-Suction Grasp Experiments_ We finally demonstrate our approach by detecting poses for multi-suction grasps in the target application of industrial bin picking. The experiment is performed on an industry-grade robotic bin picking cell as shown in Fig. 1. The cell is equipped with two _Zivid Two_ overhead RGB-D cameras with a resolution of \(1944\times 1200\) pixels, located around \(1.2\,\mathrm{m}\) above the bins. It includes a _Kuka KR10 R1420_ robot with a custom-made linear gripper with multiple suction cups and activation schemes, the same as shown in Fig. 5 (left). The cell is operated by an industrial software stack based on the _Nexeed Automation_ framework with grasp poses provided by a _PyTorch_-based implementation of our method that runs on a dedicated IPC of the cell for perception. 
The IPC runs Ubuntu \(20.04\) and is equipped with an _Intel Xeon W-1290T_ CPU and a _Nvidia Quadro RTX 4000_ GPU. A short video of our experiment is provided online1. Various object types similar to those from the grasp quality evaluation are provided in six different bins by the conveyor belt system. We set an arbitrary sequence of picking orders in which the system composes deliveries from the available objects, simulating typical operations in a logistics center. Footnote 1: See experiment video: [https://youtu.be/UZikmSjQy3M](https://youtu.be/UZikmSjQy3M) In the experiment, detecting multi-suction grasps requires on average \(628\,\mathrm{ms}\), of which grasp quality inference takes \(53\,\mathrm{ms}\) and gripper footprint optimization takes \(372\,\mathrm{ms}\) for the configured four footprints, a maximum rotation of \(180^{\circ}\) (due to symmetry) and a rotation resolution of \(5^{\circ}\). As in the comparison, each detection results in a list of up to 20 grasps from which subsequently, the path planner can select one to execute and considers the respective gripper activation. Fig. 9 shows the grasp predictions and the performed grasp for one of the scenes from the video. The selected grasp (black) places the footprint centrally on the object and fits three suction cups to increase the robustness. The performed grasp matches the predicted footprint within the error margins of the overall system. For the complete experiment, the system executed 38 grasps with three failures, i.e., a success rate of \(92\%\). In case of a failed grasp attempt, the system automatically executes the next feasible grasp pose. Note that there are no pixel-wise ground-truth labels available in such real-world runs, thus we cannot determine all metrics as in Tab. I for the evaluation dataset. Still, this qualitative experiment verifies practical applicability of the method in an industrial setting, and metrics such as the success rate match the expectations from our previous evaluation. Overall, it can be seen from the video that choosing multi-suction grasps for larger objects indeed leads to an improved robustness for the grasps. Still, we also observe that the overall system often selects grasps with a single suction cup. This can be attributed to the fact that the path planner is allowed to freely rotate poses for the single-suction grasps due to being rotation-symmetric, which significantly increases the likelihood to find a feasible trajectory. For smaller objects, the identified graspable areas are sometimes too small for fitting a footprint, a failure case that can be observed for the narrow cylindrical objects where once in the video, only a single grasp pose is detected but deemed infeasible by the path planner. Finally, one concern was that the simple projection of a 2D footprint onto the scene surface might create projection issues on strongly tilted or bent surfaces, but we did not observe practical issues resulting from it. Fig. 9: Visualization of the grasps and corresponding footprints predicted by our approach for an exemplary scene. The executed grasp with selected three-cup footprint is colored in black in the visualization. ## VII Conclusions In this paper, we proposed a method for detecting and optimizing multi-suction grasp poses for bin picking tasks based on a model-free, gripper-agnostic prediction of pixel-wise graspability values. 
In addition, we presented an automated procedure for labeling of images for supervised training of the grasp quality network, allowing for a trade-off between annotation quality and labeling effort. For optimizing the selection of an activation pattern and the orientation of a multi-suction grasp, we described a procedure based on a convolution of grasp quality and surface normals with gripper-specific footprints. In our evaluation and real-world experiments, we observed that the approach reliably predicts poses for one or more suction cups in a realistic setting, leading to feasible and robust grasps performed by the system. To address a broader portfolio of objects and surface properties, future work can include multi-channel grasp quality predictions to denote different surface requirements for gripper selection. On the system side, future work may allow for a closer integration of grasp optimization and motion planning.
2309.14733
Fractional Kolmogorov equations with singular paracontrolled terminal conditions
We consider backward fractional Kolmogorov equations with singular Besov drift of low regularity and singular terminal conditions. To treat drifts beyond the socalled Young regime, we assume an enhancement assumption on the drift and consider paracontrolled terminal conditions. Our work generalizes previous results on the equation from Cannizzaro, Chouk 2018 and Kremp, Perkowski 2022 to the case of singular paracontrolled terminal conditions and simultaneously treats singular and non-singular data in one concise solution theory. We introduce a paracontrolled solution space, that implies parabolic time and space regularity on the solution without introducing the socalled "modified paraproduct" from Gubinelli, Perkowski 2017. The tools developed in this article apply for general linear PDEs that can be tackled with the paracontrolled ansatz.
Helena Kremp, Nicolas Perkowski
2023-09-26T07:53:38Z
http://arxiv.org/abs/2309.14733v1
# Fractional Kolmogorov equations with singular paracontrolled terminal conditions ###### Abstract We consider backward fractional Kolmogorov equations with singular Besov drift of low regularity and singular terminal conditions. To treat drifts beyond the socalled Young regime, we assume an enhancement assumption on the drift and consider paracontrolled terminal conditions. Our work generalizes previous results on the equation from [1, 2] to the case of _singular paracontrolled terminal conditions_ and simultaneously treats singular and non-singular data in one concise solution theory. We introduce a paracontrolled solution space, that implies parabolic time and space regularity on the solution without introducing the socalled "modified paraproduct" from [1]. The tools developed in this article apply for general linear PDEs that can be tackled with the paracontrolled ansatz. _Keywords: fractional Laplace operator, paracontrolled distributions, singular terminal conditions MSC2020: 35A21, 60L40_ ## 1 Introduction Kolmogorov equations are second order parabolic differential equations. Their connection to stochastic processes was already investigated by Kolmogorov in the seminal work [16]. There exist analytic and probabilistic methods to study Kolmogorov equations. We refer to the books [11, 2, 3, 4, 5] for an overview on Kolmogorov equations in both finite and infinite dimensional spaces. In the finite dimensional setting, Kolmogorov equations with bounded and measurable coefficients and uniformly elliptic diffusion coefficients can be treated as a special case of the infinite dimensional Dirichlet form methods of [20], see also [11, Section 2.4.1] and the connection to the martingale problem in [11, Section 6.1.2]. We remain in the finite-dimensional setting, but consider distributional drifts in Besov spaces. Besov spaces play well with the paracontrolled calculus that defines products of distributions, cf. the Littlewood-Paley theory in [1]. Previous articles that consider distributional drifts are [2, 13], as well as [14] in the setting of rougher distributional drifts. Heat kernel estimates for the solution to the Kolmogorov equation were established in [15, 2]. In the article [2], the Laplace operator is replaced by a generalized fractional Laplacian. We extend our previous results on the equation from [2] to allow for irregular terminal conditions. That is, we consider the fractional parabolic Kolmogorov backward equation \[\big{(}\partial_{t}-\mathfrak{L}_{\nu}^{\alpha}+V\cdot\nabla\big{)}u=f,\quad u (T,\cdot)=u^{T},\] on \([0,T]\times\mathbb{R}^{d}\), where \(\mathfrak{L}_{\nu}^{\alpha}\) generalizes the fractional Laplace operator \((-\Delta)^{\alpha/2}\) for \(\alpha\in(1,2]\) and \(V\) is a vector-valued Besov drift with negative regularity \(\beta\in(\frac{2-2\alpha}{3},0)\), i.e. \(V\in C([0,T],(B_{\infty,\infty}^{\beta})^{d})=C_{T}\mathscr{C}_{\mathbb{R}^{ d}}^{\beta}\). Since \(V\) is a distribution, we need to be careful with well-definedness of the product \(V\cdot\nabla u\). The regularity obtained from \(-(-\Delta)^{\alpha/2}\) suggests that \(u(t,\cdot)\in\mathscr{C}^{\alpha+\beta}\) if right-hand side \(f\) and terminal condition \(u^{T}\) are regular enough. Therefore we have \(\nabla u(t,\cdot)\in\mathscr{C}^{\alpha+\beta-1}\). Since the product \(V(t,\cdot)\cdot\nabla u(t,\cdot)\) is well-defined if and only if the sum of the regularities of the factors is strictly positive, we obtain the condition \(\alpha+2\beta-1>0\), equivalently \(\beta>(1-\alpha)/2\). 
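For reference, the regularity counting of the preceding paragraph can be condensed into a single line (this merely restates the condition just derived):

\[u(t,\cdot)\in\mathscr{C}^{\alpha+\beta}\;\Longrightarrow\;\nabla u(t,\cdot)\in\mathscr{C}^{\alpha+\beta-1},\qquad \beta+(\alpha+\beta-1)>0\;\Longleftrightarrow\;\beta>\tfrac{1-\alpha}{2}.\]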
We call this the _Young regime_, in analogy to the regularity requirements that are needed for the construction of the Young integral. However, we go beyond the Young regime, considering also the so-called _rough regime_ \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\). In the rough case, we employ paracontrolled distributions (cf. [1]) to solve the equation. The idea is to gain some regularity by treating \(u\) as a perturbation of the solution of the linearized equation with additive noise, \(\partial_{t}w=\mathfrak{L}_{\nu}^{\alpha}w-V\). The techniques work as long as the nonlinearity \(V\cdot\nabla u\) is of lower order than the linear operator \(\mathfrak{L}_{\nu}^{\alpha}\), i.e. for \(\alpha>1\) or equivalently \((2-2\alpha)/3<(1-\alpha)/2\). The price one has to pay to go beyond the Young regime is a stronger assumption on \(V\). That is, we assume that certain resonant products involving \(V\) are a priori given. Those play the role of the iterated integrals in rough paths theory (cf. [13]). We then enhance \(V\) by that resonant product component and call the enhancement \(\mathscr{V}\). In [12, 13], only regular terminal conditions were considered, i.e. \(u^{T}\in\mathscr{C}^{\alpha+\beta}\) in the Young regime and \(u^{T}\in\mathscr{C}^{2(\alpha+\beta)-1}\) in the rough regime. The right-hand side \(f\) can either be an element of \(C_{T}L^{\infty}\) or \(f=V^{i}\) for \(i=1,\ldots,d\). There are techniques available to treat less regular terminal conditions, cf. [1, Section 6]. With the help of those techniques, one can allow for terminal conditions \(u^{T}\in\mathscr{C}^{(1-\gamma)\alpha+\beta}\) in the Young case and \(u^{T}\in\mathscr{C}^{(2-\gamma)\alpha+2\beta-1}\) in the rough case for \(\gamma\in[0,1)\), obtaining a solution \(u_{t}\in\mathscr{C}^{\alpha+\beta}\) for \(t<T\) and blow-up \(\gamma\) for \(t\to T\). In this work, we moreover consider _singular paracontrolled_ right-hand sides \(f\) as well as _singular paracontrolled terminal conditions_ \(u^{T}\), which include all cases mentioned above. Moreover, we can consider \(f\) and \(u^{T}\) more generally as elements of Besov spaces \(\mathscr{C}_{p}^{\theta}=B_{p,\infty}^{\theta}\) with integrability parameter \(p\in[1,\infty]\). Examples for terminal conditions that we cover in the rough case include the Dirac measure, that is \(u^{T}=\delta_{0}\in\mathscr{C}_{1}^{0}\), and \(u^{T}=V(T,\cdot)\). To be more precise, in the rough regime, we assume paracontrolled right-hand sides and terminal conditions, \[f=f^{\sharp}+f^{\prime}\otimes V,\quad u^{T}=u^{T,\sharp}+u^{T,\prime}\otimes V_{T},\] with \(u^{T,\prime},f^{\prime}_{t}\in\mathscr{C}_{p}^{\alpha+\beta-1}\) and remainders \(f^{\sharp}_{t}\in\mathscr{C}_{p}^{\alpha+2\beta-1}\), \(u^{T,\sharp}\in\mathscr{C}_{p}^{(2-\gamma)\alpha+2\beta-1}\) for \(\gamma\in[0,1)\). For \(f^{\prime}_{t}\) and \(f^{\sharp}_{t}\) we also allow a blow-up \(\gamma\) for \(t\to T\). We prove existence and uniqueness of mild solutions of the Kolmogorov backward equation for singular paracontrolled data \((f,u^{T})\). The paracontrolled solution is an element of the solution space with blow-up \(\gamma\) at terminal time \(T\). As a byproduct, we prove a new commutator estimate for the \((-\mathfrak{L}_{\nu}^{\alpha})\)-semigroup, cf. Lemma 3.4, that allows one to gain not only space regularity but also time regularity. Thanks to Lemma 3.4 there is no need for the so-called "modified paraproduct" from [13, Section 6.1]. 
Moreover, we prove continuity of the Kolmogorov solution map and a uniform bound for the solutions considered on subintervals of \([0,T]\) for bounded sets of data. It is important to mention that the techniques we develop in this article are not limited to that particular equation and can be used to treat other linear singular PDEs with the paracontrolled approach. In this sense we see the Kolmogorov PDE as a model example. The work is structured as follows. In Section 2 we introduce the generalized fractional Laplacian \(\mathfrak{L}_{\nu}^{\alpha}\) and its semigroup. We prove semigroup and commutator estimates. In Section 3 we introduce the solution spaces and prove generalized Schauder and commutator estimates thereon. Finally, we solve the Kolmogorov equation with singular paracontrolled data \((f,u^{T})\) in Section 4 and prove continuity of the solution map, as well as a uniform bound for the solutions on subintervals. ## 2 Preliminaries Below we introduce some technical ingredients about Besov spaces and paraproducts, that we will need in the sequel. We study estimates for the generalized fractional Laplacian and its semigroup, as well as, commutator estimates involving the paraproducts and the fractional semigroup. Let \((p_{j})_{j\geqslant-1}\) be a smooth dyadic partition of unity, i.e. a family of functions \(p_{j}\in C^{\infty}_{c}(\mathbb{R}^{d})\) for \(j\geqslant-1\), such that 1. \(p_{-1}\) and \(p_{0}\) are non-negative radial functions (they just depend on the absolute value of \(x\in\mathbb{R}^{d}\)), such that the support of \(p_{-1}\) is contained in a ball and the support of \(p_{0}\) is contained in an annulus; 2. \(p_{j}(x):=p_{0}(2^{-j}x)\), \(x\in\mathbb{R}^{d}\), \(j\geqslant 0\); 3. \(\sum_{j=-1}^{\infty}p_{j}(x)=1\) for every \(x\in\mathbb{R}^{d}\); and 4. \(\operatorname{supp}(p_{i})\cap\operatorname{supp}(p_{j})=\emptyset\) for all \(|i-j|>1\). We then define the Besov spaces for \(p,q\in[1,\infty]\), \[B^{\theta}_{p,q}:=\{u\in\mathscr{S}^{\prime}:\|u\|_{B^{\theta}_{p,q}}=\big{\|} (2^{j\theta}\|\Delta_{j}u\|_{L^{p}})_{j\geqslant-1}\big{\|}_{\ell^{q}}<\infty\}, \tag{2.1}\] where \(\Delta_{j}u=\mathscr{F}^{-1}(p_{j}\mathscr{F}u)\) are the Littlewood-Paley blocks, and the Fourier transform is defined with the normalization \(\hat{\varphi}(y):=\mathscr{F}\varphi(y):=\int_{\mathbb{R}^{d}}\varphi(x)e^{-2 \pi i\langle x,y\rangle}dx\) (and \(\mathscr{F}^{-1}\varphi(x)=\hat{\varphi}(-x)\)); moreover, \(\mathscr{S}\) are the Schwartz functions and \(\mathscr{S}^{\prime}\) are the Schwartz distributions. Let \(C^{\infty}_{b}=C^{\infty}_{b}(\mathbb{R}^{d},\mathbb{R})\) denote the space of bounded and smooth functions with bounded partial derivatives. For \(q=\infty\), the space \(B^{\theta}_{p,\infty}\) has the unpleasant property that \(C^{\infty}_{b}\subset B^{\theta}_{p,\infty}\) is not dense. Therefore, we rather work with the following space: \[B^{\theta}_{p,\infty}:=\{u\in\mathscr{S}^{\prime}\mid\lim_{j\to\infty}2^{j \theta}\|\Delta_{j}u\|_{L^{p}}=0\}, \tag{2.2}\] for which \(C^{\infty}_{b}\) is a dense subset (cf. [1, Remark 2.75]). We also use the notation \(\mathscr{C}^{\theta}_{\mathbb{R}^{d}}:=(\mathscr{C}^{\theta})^{d}=\mathscr{C} ^{\theta}(\mathbb{R}^{d},\mathbb{R}^{d})\), \(\mathscr{C}^{\theta-}:=\bigcap_{\gamma<\theta}\mathscr{C}^{\gamma}\) and \(\mathscr{C}^{\theta+}=\bigcup_{\gamma>\theta}\mathscr{C}^{\gamma}\). 
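As a quick illustration of definition (2.1) — and of the claim in the introduction that the Dirac measure is an admissible terminal condition, \(u^{T}=\delta_{0}\in\mathscr{C}_{1}^{0}\) — one can check directly that \(\delta_{0}\in B^{0}_{1,\infty}\):

\[\Delta_{j}\delta_{0}=\mathscr{F}^{-1}(p_{j}\mathscr{F}\delta_{0})=\mathscr{F}^{-1}p_{j},\qquad \|\Delta_{j}\delta_{0}\|_{L^{1}}=\|\mathscr{F}^{-1}p_{j}\|_{L^{1}}=\|\mathscr{F}^{-1}p_{0}\|_{L^{1}}\quad\text{for }j\geqslant 0,\]

since \(\mathscr{F}^{-1}p_{j}(x)=2^{jd}\,\mathscr{F}^{-1}p_{0}(2^{j}x)\) and the \(L^{1}\)-norm is invariant under this rescaling. Hence \(\sup_{j\geqslant-1}2^{j\cdot 0}\|\Delta_{j}\delta_{0}\|_{L^{1}}<\infty\), i.e. \(\delta_{0}\in B^{0}_{1,\infty}\) in the sense of (2.1).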
Furthermore, we introduce the notation \(\mathscr{C}^{\theta}_{p}:=B^{\theta}_{p,\infty}\) for \(\theta\in\mathbb{R}\) and \(p\in[1,\infty]\), where \(\mathscr{C}^{\theta}:=\mathscr{C}^{\theta}_{\infty}\) with norm denoted by \(\|\cdot\|_{\theta}:=\|\cdot\|_{\mathscr{C}^{\theta}}\). For \(1\leqslant p_{1}\leqslant p_{2}\leqslant\infty\), \(1\leqslant q_{1}\leqslant q_{2}\leqslant\infty\) and \(s\in\mathbb{R}\), the Besov space \(B^{s}_{p_{1},q_{1}}\) is continuously embedded in \(B^{s-d(1/p_{1}-1/p_{2})}_{p_{2},q_{2}}\) (cf. [1, Proposition 2.71]). Furthermore, we will use that for \(u\in B^{s}_{p,q}\) and a multi-index \(n\in\mathbb{N}^{d}\), \(\|\partial^{n}u\|_{B^{s-|n|}_{p,q}}\lesssim\|u\|_{B^{s}_{p,q}}\), which follows from the more general multiplier result from [1, Proposition 2.78]. We recall from Bony's paraproduct theory (cf. [1, Section 2]) that in general for \(u\in\mathscr{C}^{\theta}\) and \(v\in\mathscr{C}^{\beta}\) with \(\theta,\beta\in\mathbb{R}\), the product \(uv:=u\otimes v+u\odot v+v\otimes u\) is well defined in \(\mathscr{C}^{\min(\theta,\beta,\theta+\beta)}\) if and only if \(\theta+\beta>0\). Denoting \(S_{i}u=\sum_{j=-1}^{i-1}\Delta_{j}u\), the paraproducts are defined as follows \[u\otimes v:=\sum_{i\geqslant-1}S_{i-1}u\,\Delta_{i}v,\quad v\otimes u:=\sum_{i\geqslant-1}S_{i-1}v\,\Delta_{i}u,\quad u\odot v:=\sum_{|i-j|\leqslant 1}\Delta_{i}u\,\Delta_{j}v.\] Here, we use the notation of [19, 20] for the paraproduct \(\otimes\) and the resonant product \(\odot\). In estimates we often use the notation \(a\lesssim b\), which means, that there exists a constant \(C>0\), such that \(a\leqslant Cb\). In the case that we want to stress the dependence of the constant \(C(d)\) in the estimate on a parameter \(d\), we write \(a\lesssim_{d}b\). The paraproducts satisfy the following estimates for \(p,p_{1},p_{2}\in[1,\infty]\) with \(\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}\leqslant 1\) and \(\theta,\beta\in\mathbb{R}\) (cf. [23, Theorem A.1] and [1, Theorem 2.82, Theorem 2.85]) \[\|u\odot v\|_{\mathscr{C}_{p}^{\theta+\beta}} \lesssim\|u\|_{\mathscr{C}_{p_{1}}^{\theta}}\|v\|_{\mathscr{C}_{p_{2}}^{\beta}}, \text{ if }\theta+\beta>0, \tag{2.3}\] \[\|u\otimes v\|_{\mathscr{C}_{p}^{\beta}} \lesssim\|u\|_{L^{p_{1}}}\|v\|_{\mathscr{C}_{p_{2}}^{\beta}} \lesssim\|u\|_{\mathscr{C}_{p_{1}}^{\theta}}\|v\|_{\mathscr{C}_{p_{2}}^{\beta}}, \text{ if }\theta>0,\] \[\|u\otimes v\|_{\mathscr{C}_{p}^{\beta+\theta}} \lesssim\|u\|_{\mathscr{C}_{p_{1}}^{\theta}}\|v\|_{\mathscr{C}_{p_{2}}^{\beta}}, \text{ if }\theta<0.\] So if \(\theta+\beta>0\), we have \(\|uv\|_{\mathscr{C}_{p}^{\gamma}}\lesssim\|u\|_{\mathscr{C}_{p_{1}}^{\theta}} \|v\|_{\mathscr{C}_{p_{2}}^{\beta}}\) for \(\gamma:=\min(\theta,\beta,\theta+\beta)\). We define the generalized fractional Laplacian \(\mathfrak{L}_{\nu}^{\alpha}\) via Fourier analysis as follows. **Definition 2.1**.: _Let \(\alpha\in(0,2)\) and let \(\nu\) be a symmetric (i.e. \(\nu(A)=\nu(-A)\)), finite and non-zero measure on the unit sphere \(S\subset\mathbb{R}^{d}\). 
We define the operator \(\mathfrak{L}_{\nu}^{\alpha}\) as_ \[\mathfrak{L}_{\nu}^{\alpha}\mathscr{F}^{-1}\varphi=\mathscr{F}^{-1}(\psi_{\nu }^{\alpha}\varphi)\qquad\text{for }\varphi\in C_{b}^{\infty}, \tag{2.4}\] _where \(\psi_{\nu}^{\alpha}(z):=\int_{S}\lvert\langle z,\xi\rangle\rvert^{\alpha}\nu( d\xi).\) For \(\alpha=2\), we set \(\mathfrak{L}_{\nu}^{\alpha}:=-\frac{1}{2}\Delta\)._ **Remark 2.2**.: _If we take \(\nu\) as a suitable multiple of the Lebesgue measure on the sphere, then \(\psi_{\nu}^{\alpha}(z)=\lvert 2\pi z\rvert^{\alpha}\) and thus \(\mathfrak{L}_{\nu}^{\alpha}\) is the fractional Laplace operator \((-\Delta)^{\alpha/2}\)._ **Assumption 2.3**.: _Throughout the paper, we assume that the measure \(\nu\) from Definition 2.1 has \(d\)-dimensional support, in the sense that the linear span of its support is \(\mathbb{R}^{d}\)._ So far we defined \(\mathfrak{L}_{\nu}^{\alpha}\) on \(C_{b}^{\infty}\), so in particular on Schwartz functions. But the definition of \(\mathfrak{L}_{\nu}^{\alpha}\) on Schwartz distributions by duality is problematic, because for \(\alpha\in(0,2)\) the function \(\psi_{\nu}^{\alpha}\) has a singularity in \(0\). This motivates the next proposition. **Proposition 2.4** (Continuity of the operator \(\mathfrak{L}_{\nu}^{\alpha}\)).: _Let \(\alpha\in(0,2]\). Then for \(\beta\in\mathbb{R}\) and \(u\in C_{b}^{\infty}\), \(p\in[1,\infty]\), we have_ \[\|\mathfrak{L}_{\nu}^{\alpha}u\|_{\mathscr{C}_{p}^{\beta-\alpha}}\lesssim\|u \|_{\mathscr{C}_{p}^{\beta}}.\] _In particular, \(\mathfrak{L}_{\nu}^{\alpha}\) can be uniquely extended to a continuous operator from \(\mathscr{C}_{p}^{\beta}\) to \(\mathscr{C}_{p}^{\beta-\alpha}\)._ Proof.: For \(j\geqslant 0\) it follows from [1, Lemma 2.2] the estimate \(\|\mathfrak{L}_{\nu}^{\alpha}\Delta_{j}u\|_{L^{p}}\lesssim 2^{-j(\beta-\alpha)}\|u\|_{ \mathscr{C}_{p}^{\beta}}\). This uses that \(\psi_{\nu}^{\alpha}\) is infinitely continuously differentiable in \(\mathbb{R}^{d}\setminus\{0\}\) with \(\lvert\partial^{\mu}\psi_{\nu}^{\alpha}(z)\rvert\lesssim\lvert z\rvert^{ \alpha-\lvert\mu\rvert}\) for a multi-index \(\mu\in\mathbb{N}_{0}^{d}\) with \(\lvert\mu\rvert:=\mu_{1}+\dots+\mu_{d}\leqslant\alpha\) and that \(\Delta_{j}u\) has a Fourier transform, which is supported in \(2^{j}\mathscr{A}\), where \(\mathscr{A}\) is the annulus, where \(p_{0}\) is supported. For \(j=-1\) we use that \(-\mathfrak{L}_{\nu}^{\alpha}\varphi=A\varphi\) for test functions \(\varphi\in C_{b}^{\infty}\) and \(A\) defined as \[A\varphi(x)=\int_{\mathbb{R}^{d}}\bigl{(}\varphi(x+y)-\varphi(x)-\mathbf{1}_{ \{|y|\leqslant 1\}}(y)\nabla\varphi(x)\cdot y\bigr{)}\mu(dy)\qquad\text{for }\varphi\in C_{b}^{\infty}. \tag{2.5}\] and therefore \[-\mathfrak{L}_{\nu}^{\alpha}\mathscr{F}^{-1}\tilde{p}_{-1}(x) =\int_{\mathbb{R}^{d}}\left(\mathscr{F}^{-1}\tilde{p}_{-1}(x+y)- \mathscr{F}^{-1}\tilde{p}_{-1}u(x)-\nabla\mathscr{F}^{-1}\tilde{p}_{-1}u(x) \cdot y\mathds{1}_{\{|y|\leqslant 1\}}\right)\mu(dy)\] \[\lesssim\int_{B(0,1)}\|D^{2}\mathscr{F}^{-1}\tilde{p}_{-1}\|_{L^{ \infty}}\lvert y\rvert^{2}\mu(dy)+\|\mathscr{F}^{-1}\tilde{p}_{-1}\|_{L^{ \infty}}\mu(B(0,1)^{c})\lesssim 1\] where \(B(0,1)=\{|y|\leqslant 1\}\) and \(\tilde{p}_{-1}\) is smooth and compactly supported in a ball and such that \(\tilde{p}_{-1}p_{-1}=p_{-1}\). 
Then we obtain with \(-\mathfrak{L}_{\nu}^{\alpha}\Delta_{-1}u=-\mathfrak{L}_{\nu}^{\alpha} \mathscr{F}^{-1}\tilde{p}_{-1}*\Delta_{-1}u\) and Young's convolution inequality, \[\|-\mathfrak{L}_{\nu}^{\alpha}\Delta_{-1}u\|_{L^{p}} \leqslant\|-\mathfrak{L}_{\nu}^{\alpha}\mathscr{F}^{-1}\tilde{p}_{-1} \|_{L^{1}}\|\Delta_{-1}u\|_{L^{p}}\leqslant\|-\mathfrak{L}_{\nu}^{\alpha} \mathscr{F}^{-1}\tilde{p}_{-1}\|_{L^{\infty}}\|\Delta_{-1}u\|_{L^{p}}\lesssim \|u\|_{\mathscr{C}_{p}^{\beta}}.\] For \(z\in\mathbb{R}^{d}\setminus\{0\}\), we also have \[\psi_{\nu}^{\alpha}(z)=|z|^{\alpha}\int_{S}\Big{|}\Big{\langle} \frac{z}{|z|},\xi\Big{\rangle}\Big{|}^{\alpha}\nu(d\xi)\geqslant|z|^{\alpha} \min_{|y|=1}\int_{S}|\langle y,\xi\rangle|^{\alpha}\nu(d\xi),\] and by Assumption 2.3 the minimum on the right hand side is strictly positive. Otherwise, there exists some \(y_{0}\neq 0\) with \(\int_{S}|\langle y_{0},\xi\rangle|^{\alpha}\nu(d\xi)=0\) and this would mean that the support of \(\nu\) (and thus also its span) is contained in the orthogonal complement of \(\operatorname{span}(y_{0})\). Therefore, \(e^{-\psi_{\nu}^{\alpha}}\) decays faster than any polynomial at infinity and outside of \(0\) it even behaves like a Schwartz function. **Lemma 2.5** (Semigroup estimates).: _Let \(\nu\) be a finite, symmetric measure on the sphere \(S\subset\mathbb{R}^{d}\) satisfying Assumption 2.3. Let \(P_{t}\varphi:=\mathscr{F}^{-1}(e^{-t\psi_{\nu}^{\alpha}}\dot{\varphi})=\rho_{ t}*\varphi\), where \(t>0\), \(\rho_{t}=\mathscr{F}^{-1}e^{-t\psi_{\nu}^{\alpha}}\in L^{1}\), and \(\varphi\in C_{b}^{\infty}\). Then we have for \(\vartheta\geqslant 0\), \(\beta\in\mathbb{R}\), \(p\in[1,\infty]\)_ \[\|P_{t}\varphi\|_{\mathscr{C}_{p}^{\beta+\vartheta}}\lesssim(t^ {-\vartheta/\alpha}\lor 1)\|\varphi\|_{\mathscr{C}_{p}^{\beta}}, \tag{2.6}\] _and for \(\vartheta\in[0,\alpha]\)_ \[\|(P_{t}-\mathrm{Id})\varphi\|_{\mathscr{C}_{p}^{\beta-\vartheta }}\lesssim t^{\vartheta/\alpha}\|\varphi\|_{\mathscr{C}_{p}^{\beta}}. \tag{2.7}\] _Furthermore, for \(\beta\in(0,1)\), \(p=\infty\),_ \[\|(P_{t}-\mathrm{Id})\varphi\|_{L^{\infty}}\lesssim t^{\beta/ \alpha}\|\varphi\|_{\mathscr{C}^{\beta}}. \tag{2.8}\] _Therefore, if \(\vartheta\geqslant 0\), then \(P_{t}\) has a unique extension to a bounded linear operator in \(L(\mathscr{C}^{\beta},\mathscr{C}^{\beta+\vartheta})\) and this extension satisfies the same bounds._ Proof.: In the case \(\theta\in[0,\alpha)\), this follows from [1, Lemma A.5], see also [1, Lemma A.7], whose generalization to integrability \(p\in[1,\infty]\) is immediate. For the case \(\vartheta=\alpha\) in (2.7), we estimate \[\|(P_{t}-\mathrm{Id})\varphi\|_{\mathscr{C}_{p}^{\beta-\alpha}} =\bigg{\|}\int_{0}^{t}(-\mathfrak{L}_{\nu}^{\alpha})P_{r}\varphi d \bigg{\|}_{\mathscr{C}_{p}^{\beta-\alpha}}\] \[\leqslant\int_{0}^{t}\|(-\mathfrak{L}_{\nu}^{\alpha})P_{r}\varphi \|_{\mathscr{C}_{p}^{\beta-\alpha}}dr\] \[\lesssim\int_{0}^{t}\|P_{r}\varphi\|_{\mathscr{C}_{p}^{\beta}}dr \lesssim t\|\varphi\|_{\mathscr{C}_{p}^{\beta}}\] using Proposition 2.4 and (2.6) for \(\vartheta=0\). (2.8) follows from [1, Lemma A.8]. The next three lemmas deal with commutators between the \((-\mathfrak{L}_{\nu}^{\alpha})\) operator, its semigroup and the paraproduct. The proofs can be found in Appendix A. **Lemma 2.6**.: _Let \(\alpha\in(1,2]\), \(f\in\mathscr{C}_{p}^{\sigma}\) and \(g\in\mathscr{C}^{\varsigma}\) with \(\sigma\in(0,1)\) and \(\varsigma\in\mathbb{R}\), \(p\in[1,\infty]\). 
Then the commutator for \((-\mathfrak{L}_{\nu}^{\alpha})\) follows:_ \[\|(-\mathfrak{L}_{\nu}^{\alpha})(f\vartriangleleft g)-f\vartriangleleft(- \mathfrak{L}_{\nu}^{\alpha})g\|_{\mathscr{C}_{p}^{\sigma+\varsigma-\alpha}} \lesssim\|f\|_{\mathscr{C}_{p}^{\sigma}}\|g\|_{\mathscr{C}^{\varsigma}}.\] **Lemma 2.7**.: _Let \((P_{t})\) be as in Lemma 2.5. Then, for \(\sigma\in(0,1)\), \(\varsigma\in\mathbb{R}\), \(p\in[1,\infty]\) and \(\vartheta\geqslant-\alpha\) the following commutator estimate holds true:_ \[\|P_{t}(u\vartriangleleft v)-u\vartriangleleft P_{t}v\|_{\mathscr{C}_{p}^{ \sigma+\varsigma+\vartheta}}\lesssim(t^{-\vartheta/\alpha}\lor 1)\|u\|_{ \mathscr{C}_{p}^{\sigma}}\|v\|_{\mathscr{C}^{\varsigma}}. \tag{2.9}\] **Lemma 2.8**.: _Let \(\mathfrak{L}_{\nu}^{\alpha}\) and \((P_{t})_{t\geqslant 0}\) be defined as in Definition 2.1 and Lemma 2.5 and let \(\alpha\in(1,2]\). Let \(T>0\), \(\sigma\in(0,1)\), \(\varsigma\in\mathbb{R}\), \(p\in[1,\infty]\) and \(\theta\geqslant 0\). Then the commutator on the operator \((-\mathfrak{L}_{\nu}^{\alpha})P_{t}\) follows:_ \[\|(-\mathfrak{L}_{\nu}^{\alpha})P_{t}(u\vartriangleleft v)-u\vartriangleleft(- \mathfrak{L}_{\nu}^{\alpha})P_{t}v\|_{\mathscr{C}_{p}^{\sigma+\varsigma-\alpha +\theta}}\lesssim(t^{-\theta/\alpha}\lor 1)\|u\|_{\mathscr{C}_{p}^{\sigma}}\|v\|_{ \mathscr{C}^{\varsigma}}.\] The mild formulation of the Kolmogorov equation is given by \[u_{t}=P_{T-t}u_{T}+\int_{t}^{T}P_{r-t}(V_{r}\cdot\nabla u_{r}-f_{r})dr=:P_{T-t }u_{T}+J^{T}(V\cdot\nabla u-f)(t). \tag{2.10}\] Due to the Schauder estimates, considering a singular terminal condition with \(u_{T}\in\mathscr{C}_{p}^{\beta+}\), we obtain that \(\|P_{T-t}u_{T}\|_{\mathscr{C}_{p}^{\alpha+\beta}}\) blows up for \(t\to T\) and the blow-up is of order \(\gamma\in(0,1)\). This motivates the definition of blow-up spaces below, from which we can build the solution space in the next section. For \(\gamma\in(0,1)\), \(T>0\) and \(\overline{T}\in(0,T]\), and a Banach space \(X\), let us define the blow-up space \[\mathscr{M}_{T,T}^{\gamma}X:=\{u:[T-\overline{T},T)\to X\mid t\mapsto(T-t)^{ \gamma}u_{t}\in C([T-\overline{T},T),X)\},\] with \(\|u\|_{\mathscr{M}_{T,T}^{\gamma}X}:=\sup_{t\in[T-\overline{T},T)}(T-t)^{ \gamma}\|u_{t}\|_{X}\) and \(\mathscr{M}_{\overline{T},T}^{0}X:=C([T-\overline{T},T),X)\). For \(\overline{T}=T\), we use the notation \(\mathscr{M}_{T}^{\gamma}X:=\mathscr{M}_{T,T}^{\gamma}X\). For \(\vartheta\in(0,1]\), \(\gamma\in(0,1)\), we furthermore define \[C_{\overline{T},T}^{\gamma,\vartheta}X:=\left\{u:[T-\overline{T},T)\to X \biggm{|}\|f\|_{C_{T}^{\vartheta}X}:=\sup_{0\leqslant s<t<T}\frac{(T-t)^{ \gamma}\|f_{t}-f_{s}\|_{X}}{|t-s|^{\vartheta}}<\infty\right\}\] and \(C_{T}^{\gamma,\vartheta}X:=C_{T,T}^{\gamma,\vartheta}X\). Let us also define for \(\vartheta\in(0,1]\), \(\overline{T}\in(0,T]\), the space of \(\vartheta\)-Holder continuous functions on \([T-\overline{T},T]\) with values in \(X\), \[C_{\overline{T},T}^{\vartheta}X:=\left\{u:[T-\overline{T},T]\to X\biggm{|}\|u \|_{C_{T}^{\vartheta}X}:=\sup_{T-\overline{T}\leqslant s<t\leqslant T}\frac {\|u_{t}-u_{s}\|_{X}}{|t-s|^{\vartheta}}<\infty\right\}\] and \(C_{T}^{\vartheta}X:=C_{T,T}^{\vartheta}X\). We set \(C_{\overline{T},T}^{\vartheta,\vartheta}X:=C^{\vartheta}([T-\overline{T},T),X)\). 
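To make the order of the blow-up explicit, consider for the moment a terminal condition \(u^{T}\in\mathscr{C}_{p}^{(1-\gamma)\alpha+\beta}\) for some \(\gamma\in(0,1)\). The semigroup estimate (2.6) with \(\vartheta=\gamma\alpha\) yields, for \(t\in[0,T)\),

\[\|P_{T-t}u^{T}\|_{\mathscr{C}_{p}^{\alpha+\beta}}\lesssim\big((T-t)^{-\gamma}\lor 1\big)\|u^{T}\|_{\mathscr{C}_{p}^{(1-\gamma)\alpha+\beta}},\]

so that \(\sup_{t\in[0,T)}(T-t)^{\gamma}\|P_{T-t}u^{T}\|_{\mathscr{C}_{p}^{\alpha+\beta}}<\infty\), i.e. \(P_{T-\cdot}u^{T}\in\mathscr{M}_{T}^{\gamma}\mathscr{C}_{p}^{\alpha+\beta}\). This is precisely the weight built into \(\mathscr{M}_{T}^{\gamma}X\), and it reappears as the case \(\vartheta=\gamma\alpha\) of the Schauder estimate (3.4) below.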
We have the trivial estimates \[\|u\|_{\mathscr{M}_{\overline{T},T}^{\gamma_{1}}X}\leqslant\overline{T}^{ \gamma_{1}-\gamma_{2}}\|u\|_{\mathscr{M}_{\overline{T},T}^{\gamma_{2}}X},\quad \|u\|_{C_{\overline{T},T}^{\gamma_{1},\vartheta_{1}}X}\leqslant\overline{T}^{ (\gamma_{1}-\gamma_{2})+(\vartheta_{2}-\vartheta_{1})}\|u\|_{C_{\overline{T},T}^{ \gamma_{2},\vartheta_{2}}X} \tag{2.11}\] for \(0\leqslant\gamma_{2}\leqslant\gamma_{1}<1\) and \(0<\vartheta_{1}\leqslant\vartheta_{2}\leqslant 1\). Moreover, we have that for a subinterval \([T-2\overline{T},T-\overline{T}]\subset[0,T]\) with \(0<\overline{T}\leqslant\frac{T}{2}\), \[\|u\|_{\mathscr{M}_{\overline{T},T-\overline{T}}^{0}X}\leqslant\overline{T}^{- \gamma}\|u\|_{\mathscr{M}_{T}^{\gamma}X}. \tag{2.12}\] ## 3. Schauder theory and commutator estimates for blow-up spaces In this section, we define the solution space \(\mathscr{L}_{T}^{\gamma,\alpha+\beta}\) and prove Schauder and commutator estimates. We conclude the section with interpolation estimates for the solution spaces. Heuristically, the solution space shall combine maximal space regularity (i.e. \(\alpha+\beta\)) in a time-blow-up space with maximal time regularity (i.e. Lipschitz) in a space of low space regularity. By interpolation, the solution will then also admit all time and space regularities "in between". Let us thus define for \(\gamma\in(0,1)\) and \(\theta\in\mathbb{R}\), \(p\in[1,\infty]\), the space \[\mathscr{L}_{T}^{\gamma,\theta}:=\mathscr{M}_{T}^{\gamma}\mathscr{C}_{p}^{ \theta}\cap C_{T}^{1-\gamma}\mathscr{C}_{p}^{\theta-\alpha}\cap C_{T}^{\gamma,1}\mathscr{C}_{p}^{\theta-\alpha}. \tag{3.1}\] We moreover define for \(\gamma=0\), \[\mathscr{L}_{T}^{0,\theta}:=C_{T}^{1}\mathscr{C}_{p}^{\theta-\alpha}\cap C_{ T}\mathscr{C}_{p}^{\theta}, \tag{3.2}\] where \(C_{T}^{1}X\) denotes the space of \(1\)-Holder or Lipschitz functions with values in \(X\). For \(\overline{T}\in(0,T)\), we define \(\mathscr{L}_{\overline{T},T}^{\gamma,\theta}:=\mathscr{M}_{T,T}^{\gamma} \mathscr{C}_{p}^{\theta}\cap C_{T,T}^{1-\gamma}\mathscr{C}_{p}^{\theta-\alpha} \cap C_{\overline{T},T}^{\gamma,1}\mathscr{C}_{p}^{\theta-\alpha}\) and similarly \(\mathscr{L}_{\overline{T},T}^{0,\theta}\). The spaces \(\mathscr{L}_{T}^{\gamma,\theta}\) are Banach spaces equipped with the norm \[\|u\|_{\mathscr{L}_{T}^{\gamma,\theta}}:=\|u\|_{\mathscr{L}_{T}^{ \gamma}\mathscr{C}_{p}^{\theta}}+\|u\|_{C_{T}^{1-\gamma}\mathscr{C}_{p}^{ \theta-\alpha}}+\|u\|_{C_{T}^{\gamma,1}\mathscr{C}_{p}^{\theta-\alpha}}\] \[\qquad\qquad=\sup_{t\in[0,T)}(T\!-\!t)^{\gamma}\|u_{t}\|_{\mathscr{ C}_{p}^{\theta}}+\sup_{0\leq s<t\leqslant T}\frac{\|u_{t}\!-\!u_{s}\|_{ \mathscr{C}_{p}^{\theta-\alpha}}}{|t\!-\!s|^{1-\gamma}}\!+\!\sup_{0\leq s<t<T} \frac{(T\!-\!t)^{\gamma}\|u_{t}\!-\!u_{s}\|_{\mathscr{C}_{p}^{\theta-\alpha}} }{|t\!-\!s|}.\] Notice, that \(u\in\mathscr{L}_{T}^{\gamma,\theta}\) in particular implies that \(t\mapsto\|u_{t}\|_{\mathscr{C}^{\theta-\alpha}}\) is \((1-\gamma)\)-Holder continuous at \(t=T\). The next corollary proves estimates for the semigroup \((P_{t})\) of \((-\mathfrak{L}_{\nu}^{\alpha})\) acting on the spaces \(\mathscr{L}_{T}^{\gamma,\theta}\). We will need the following auxillary lemma. In particular, the lemma can be applied, to show that the inverse fractional Laplacian improves space regularity by \(\alpha\) (and not only by \(\theta<\alpha\)). It is a slight generalization of [1, Lemma A.9, (A.1)]. Its proof can be found in Appendix A. 
**Lemma 3.1**.: _Let \(\sigma\in\mathbb{R}\), \(p\in[1,\infty]\), \(\gamma\in[0,1)\), \(\varepsilon\in(0,1)\) and \(\varsigma\geqslant 0\). Let moreover \(f:\dot{\Delta}_{T}\to\mathscr{S}^{\prime}\), \(\dot{\Delta}_{T}:=\{(t,r)\in[0,T]^{2}\ |\ t<r\}\), be such that there exists \(C>0\) such that for all \(j\geqslant-1\) and \(0\leqslant t<r\leqslant T\), for the Littlewood-Paley blocks holds_ \[\|\Delta_{j}f_{t,r}\|_{L^{p}}\leqslant C(T-r)^{-\gamma}\min(2^{-j\sigma},2^{-j (\sigma+\varsigma+\varepsilon\varsigma)}(r-t)^{-(1+\varepsilon)}).\] _Then it follows that for all \(t\in[0,T]\)_ \[\bigg{\|}\int_{t}^{T}f_{t,r}dr\bigg{\|}_{\mathscr{C}_{p}^{\theta+\varsigma}} \leqslant[2C\max(\varepsilon^{-1},(1-\gamma)^{-1})](T-t)^{-\gamma}. \tag{3.3}\] **Corollary 3.2** (Schauder estimates).: _Let \((P_{t})\) and \(\nu\) be as in Lemma 2.5. Let \(T>0\), \(\overline{T}\in(0,T]\). For \(t\in[T-\overline{T},T]\) we define \(J^{T}v(t)=J^{T}(v)(t):=\int_{t}^{T}P_{r-t}v(r)dr\). Then we have for \(\beta\in\mathbb{R}\), \(\vartheta\in[0,\alpha]\), \(\gamma\in[\vartheta/\alpha,1]\),_ \[\|P_{T-.}w\|_{\mathscr{L}_{T,T}^{\gamma,\beta+\vartheta}}\lesssim\overline{T}^{ (\gamma\alpha-\vartheta)/\alpha}\|w\|_{\mathscr{C}_{p}^{\beta}} \tag{3.4}\] _and for \(0\leqslant\gamma^{\prime}\leqslant\gamma<1\),_ \[\|J^{T}v\|_{\mathscr{L}_{T,T}^{\gamma,\beta+\alpha}}\lesssim\overline{T}^{ \gamma-\gamma^{\prime}}\|v\|_{\mathscr{M}_{T,T}^{\gamma^{\prime}}\mathscr{C}_{p} ^{\beta}}. \tag{3.5}\] Proof.: For (3.4) we only prove the estimate in \(C^{1-\gamma}_{\overline{T},T}\mathscr{C}_{p}^{\beta+\vartheta-\alpha}\) and in \(C^{\gamma,1}_{\overline{T},T}\mathscr{C}_{p}^{\beta+\vartheta-\alpha}\), the estimate in \(\mathscr{M}^{\gamma}_{\overline{T},T}\mathscr{C}_{p}^{\beta+\vartheta}\) follows from a direct application of Lemma 2.5. Therefore we write \(P_{T-t}w-P_{T-s}w=P_{T-t}(\operatorname{Id}-P_{t-s})w\) for \(T-\overline{T}\leqslant s<t\leqslant T\) and use Lemma 2.5 to conclude \[\|P_{T-t}w-P_{T-s}w\|_{\mathscr{C}_{p}^{\beta+\vartheta-\alpha}} \lesssim\|(\operatorname{Id}-P_{t-s})w\|_{\mathscr{C}_{p}^{\beta+ \vartheta-\alpha}} \lesssim(t-s)^{1-\vartheta/\alpha}\|w\|_{\mathscr{C}_{p}^{\beta}}\] \[\lesssim\overline{T}^{\gamma\alpha-\vartheta)/\alpha}(t-s)^{1- \gamma}\|w\|_{\mathscr{C}_{p}^{\beta}}\] using \(0\leqslant\vartheta\leqslant\alpha\) and \(\gamma\geqslant\vartheta/\alpha\). This controls \(\|P_{T-.}w\|_{C^{1-\gamma}_{\overline{T},T}\mathscr{C}_{p}^{\beta+\gamma- \alpha}}\). 
To bound the norm \(\|P_{T-.}w\|_{C^{\gamma,1}_{\overline{T},T}\mathscr{C}_{p}^{\beta+\gamma- \alpha}}\), we note that \[\|P_{T-t}w-P_{T-s}w\|_{\mathscr{C}_{p}^{\beta+\vartheta-\alpha}} \lesssim(T-t)^{-\vartheta/\alpha}\|(\operatorname{Id}-P_{t-s})w \|_{\mathscr{C}_{p}^{\beta-\alpha}}\] \[\lesssim(T-t)^{-\vartheta/\alpha}(t-s)\|w\|_{\mathscr{C}_{p}^{ \beta}}\] \[\lesssim\overline{T}^{(\gamma\alpha-\vartheta)/\alpha}(T-t)^{- \gamma}(t-s)\|w\|_{\mathscr{C}_{p}^{\beta}}.\] To estimate the \(\mathscr{M}^{\gamma}_{\overline{T},T}\mathscr{C}_{p}^{\beta+\alpha}\)-norm in (3.5), we use Lemma 3.1 with \(f_{t,r}=P_{r-t}v_{r}\) and \(\sigma=\beta\), \(\varsigma=\alpha\), to obtain for \(t\in[T-\overline{T},T]\) \[(T-t)^{\gamma}\|J^{T}v(t)\|_{\mathscr{C}_{p}^{\beta+\alpha}}=(T-t)^{\gamma- \gamma^{\prime}}(T-t)^{\gamma^{\prime}}\bigg{\|}\int_{t}^{T}P_{r-t}v_{r}dr \bigg{\|}_{\mathscr{C}_{p}^{\beta+\alpha}}\lesssim\overline{T}^{\gamma-\gamma^ {\prime}}\|v\|_{\mathscr{M}^{\gamma^{\prime}}_{\overline{T},T}\mathscr{C}_{p}^ {\beta}}.\] To prove the bounds on the time regularity in (3.5) we write \[J^{T}(v)_{t}-J^{T}(v)_{s}=\int_{s}^{t}P_{r-s}v_{r}dr-(P_{t-s}- \operatorname{Id})\bigg{(}\int_{t}^{T}P_{r-t}v_{r}dr\bigg{)},\] for \(T-\overline{T}\leqslant s<t\leqslant T\). We can estimate by Lemma 2.5 \[\bigg{\|}\int_{s}^{t}P_{r-s}v_{r}dr\bigg{\|}_{\mathscr{C}_{p}^{ \beta}} \leqslant\int_{s}^{t}\|P_{r-s}v_{r}\|_{\mathscr{C}_{p}^{\beta}}dr\] \[\lesssim\|v\|_{\mathscr{M}^{\gamma^{\prime}}_{\overline{T},T} \mathscr{C}_{p}^{\beta}}\int_{s}^{t}|T-r|^{-\gamma^{\prime}}dr\] \[\lesssim\|v\|_{\mathscr{M}^{\gamma^{\prime}}_{\overline{T},T} \mathscr{C}_{p}^{\beta}}(|T-s|^{1-\gamma^{\prime}}-|T-t|^{1-\gamma^{\prime}})\] \[\leqslant\overline{T}^{\gamma-\gamma^{\prime}}|t-s|^{1-\gamma}\|v \|_{\mathscr{M}^{\gamma^{\prime}}_{\overline{T},T}\mathscr{C}_{p}^{\beta}},\] using that \(0\leqslant\gamma^{\prime}\leqslant\gamma<1\) and the estimate \[|T-t|^{1-\gamma^{\prime}}-|T-s|^{1-\gamma^{\prime}}\leqslant|t-s|^{1-\gamma^{ \prime}}\leqslant\overline{T}^{\gamma-\gamma^{\prime}}|t-s|^{1-\gamma}.\] On the other hand, we can also estimate that term by \[\bigg{\|}\int_{s}^{t}P_{r-s}v_{r}dr\bigg{\|}_{\mathscr{C}_{p}^{ \beta}} \lesssim\|v\|_{\mathscr{M}^{\gamma^{\prime}}_{\overline{T},T} \mathscr{C}_{p}^{\beta}}\int_{s}^{t}|T-r|^{-\gamma^{\prime}}dr\] \[\leqslant\|v\|_{\mathscr{M}^{\gamma^{\prime}}_{\overline{T},T} \mathscr{C}_{p}^{\beta}}|T-t|^{-\gamma^{\prime}}\int_{s}^{t}dr\] \[\leqslant\overline{T}^{\gamma-\gamma^{\prime}}\|v\|_{\mathscr{M}^ {\gamma^{\prime}}_{\overline{T},T}\mathscr{C}_{p}^{\beta}}|T-t|^{-\gamma}|t-s|.\] Moreover, by Lemma 2.5 for \(\vartheta=\alpha\) and Lemma 3.1, we obtain that \[\left\|(P_{t-s}-\mathrm{Id})\biggl{(}\int_{t}^{T}P_{r-t}v_{r}dr \biggr{)}\right\|_{\mathscr{C}_{p}^{\beta}} \lesssim|t-s|\biggl{\|}\int_{t}^{T}P_{r-t}v_{r}dr\biggr{\|}_{ \mathscr{C}_{p}^{\beta+\alpha}}\] \[\lesssim|t-s|\|v\|_{\mathscr{M}_{T,T}^{\gamma^{\prime}}}\varepsilon _{p}^{\phi}(T-t)^{-\gamma^{\prime}}\] \[\lesssim|t-s|\overline{T}^{\gamma-\gamma^{\prime}}\|v\|_{ \mathscr{M}_{T,T}^{\gamma^{\prime}}\mathscr{C}_{p}^{\beta}}(T-t)^{-\gamma},\] and on the other hand we can estimate by Lemma 2.5 for \(\vartheta=(1-\gamma)\alpha\), \[\left\|(P_{t-s}-\mathrm{Id})\biggl{(}\int_{t}^{T}P_{r-t}v_{r}dr \biggr{)}\right\|_{\mathscr{C}_{p}^{\beta}}\] \[\lesssim|t-s|^{(\alpha-\gamma\alpha)/\alpha}\biggl{\|}\int_{t}^{ T}P_{r-t}v_{r}dr\biggr{\|}_{\mathscr{C}_{p}^{\beta+\alpha-\gamma\alpha}}\] 
\[\lesssim|t-s|^{(\alpha-\gamma\alpha)/\alpha}\|v\|_{\mathscr{M}_{ T,T}^{\gamma^{\prime}}\mathscr{C}_{p}^{\beta}}\int_{t}^{T}(T-r)^{-\gamma^{ \prime}}(t-r)^{(\gamma\alpha-\alpha)/\alpha}dr\] \[\lesssim|t-s|^{1-\gamma}\|v\|_{\mathscr{M}_{T,T}^{\gamma^{\prime} }\mathscr{C}_{p}^{\beta}}(T-t)^{\gamma-\gamma^{\prime}}\] \[\lesssim|t-s|^{1-\gamma}\|v\|_{\mathscr{M}_{T,T}^{\gamma^{\prime} }\mathscr{C}_{p}^{\beta}}\overline{T}^{\gamma-\gamma^{\prime}},\] where we used that \(\gamma>0\) and that \(\gamma^{\prime}\leqslant\gamma<1\) (if \(\gamma=0\), we can use the previous estimate instead). **Remark 3.3**.: _A less general approach for dealing with singular initial conditions in paracontrolled equations was developed in [1]. The function spaces above (3.1) seem more flexible, and actually there is a mistake in the singular Schauder estimates in [1, Lemma 6.6]: Equation (49) therein is only true for \(\beta\in(0,2-\alpha)\), i.e. only for distributional initial conditions3, and \(\beta\in(-\alpha,0)\) would force \(u_{0}=0\)._ Footnote 3: We thank Ruhong Jin for pointing out this mistake. Next, we prove a commutator estimate for the \(J^{T}\)-operator and the paraproduct. **Lemma 3.4** (Commutator estimates).: _Let \(T>0\) and \(\overline{T}\in(0,T]\) and let \(\varsigma\in\mathbb{R}\), \(\sigma\in(0,1)\) and \(p\in[1,\infty]\). Let \(\alpha\in(1,2]\) and \(\gamma\in[0,1)\). Then for \(u\in\mathscr{C}_{p}^{\sigma}\), \(v\in\mathscr{C}^{\varsigma}\) the following semigroup commutator estimate holds_ \[\|t\mapsto P_{T-t}(u\otimes v)-u\otimes P_{T-t}(v)\|_{\mathscr{L}_{T}^{\gamma, \sigma+\varsigma+\gamma\alpha}}\lesssim\|u\|_{\mathscr{C}_{p}^{\sigma}}\|v\|_ {\mathscr{C}^{\varsigma}}. \tag{3.6}\] _Furthermore, for \(g\in\mathscr{L}_{T,T}^{\gamma^{\prime},\sigma}\) with \(0\leqslant\gamma^{\prime}\leqslant\gamma<1\) and \(h\in C_{T}\mathscr{C}^{\varsigma}\), we have_ \[\|J^{T}(g\otimes h)-g\otimes J^{T}(h)\|_{\mathscr{L}_{T,T}^{\gamma,\sigma+ \varsigma+\alpha}}\lesssim\overline{T}^{\gamma-\gamma^{\prime}}\|g\|_{ \mathscr{L}_{T}^{\gamma^{\prime},\sigma}}\|h\|_{C_{T}\mathscr{C}^{\varsigma}}. \tag{3.7}\] **Remark 3.5**.: _It was already known that the commutator for the \(J^{T}\)-operator from the lemma allows for more space regularity than both of its summands. The above commutator estimate moreover yields a gain in time regularity, i.e. \(J^{T}(g\otimes h)-g\otimes J^{T}(h)\in C_{T}^{1-\gamma}\mathscr{C}_{p}^{\sigma +\varsigma}\cap C_{T}^{\gamma,1}\mathscr{C}_{p}^{\sigma+\varsigma}\), provided that \(g\in\mathscr{L}_{T}^{\gamma,\sigma}\)._ Proof.: Recall that \(\mathscr{L}_{T}^{\gamma,\sigma+\varsigma+\gamma\alpha}\) is equipped with the sum of the norms in \[\mathscr{M}_{T}^{\gamma}\mathscr{C}_{p}^{\sigma+\varsigma+\alpha \gamma},\quad C_{T}^{\gamma,1}\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma- \alpha}\quad\text{ and }\quad C_{T}^{1-\gamma}\mathscr{C}_{p}^{\sigma+ \varsigma+\alpha\gamma-\alpha},\] that we need to estimate below. For (3.6), the estimate in \(\mathscr{M}_{T}^{\gamma}\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma}\) follows directly by the semigroup commutator Lemma 2.7 applied to \(\vartheta=\gamma\alpha\). 
For the estimate in \(C_{T}^{\gamma,1}\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma-\alpha}\cap C _{T}^{1-\gamma}\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma-\alpha}\) we write for \(0\leqslant s\leqslant t\leqslant T\), \[P_{T-t}(u\vartriangleleft v)-u\vartriangleleft P_{T-t}(v)-(P_{T- s}(u\vartriangleleft v)-u\vartriangleleft P_{T-s}(v))\] \[=(\operatorname{Id}-P_{t-s})[P_{T-t}(u\vartriangleleft v)-u \vartriangle P_{T-t}v]\] \[\qquad+[u\vartriangleleft P_{t-s}P_{T-t}v-P_{t-s}(u\vartriangleleft P _{T-t}v)].\] The first summand we can estimate by the semigroup estimates (Lemma 2.5) for \(\operatorname{Id}-P_{t-s}\) and the commutator estimate in \(\mathscr{M}_{T}^{\gamma}\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma}\), obtaining \[\|(\operatorname{Id}-P_{t-s})[P_{T-t}(u\vartriangleleft v)-u \vartriangleleft P_{T-t}v]\|_{\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma- \alpha}}\] \[\lesssim|t-s|\|[P_{T-t}(u\vartriangleleft v)-u\vartriangle P_{T-t }v]\|_{\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma}}\] \[\lesssim(T-t)^{-\gamma}|t-s|\|u\|_{\mathscr{C}_{p}^{\sigma}}\|v \|_{\mathscr{C}^{\varsigma}}.\] This gives the estimate in \(C_{T}^{\gamma,1}\mathscr{C}_{p}^{\sigma+\varsigma+\alpha(\gamma-1)}\). Analogously we estimate the \(C_{T}^{1-\gamma}\mathscr{C}_{p}^{\sigma+\varsigma+\alpha(\gamma-1)}\). norm using the Schauder estimates for \(\operatorname{Id}-P_{t-s}\) (obtaining a factor of \(|t-s|^{1-\gamma}\)) and the commutator in \(C_{\overline{T},T}\mathscr{C}_{p}^{\sigma+\varsigma}\), i.e. \[\|(\operatorname{Id}-P_{t-s})[P_{T-t}(u\vartriangleleft v)-u \vartriangleleft P_{T-t}v]\|_{\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma- \alpha}}\] \[\lesssim|t-s|^{1-\gamma}\|[P_{T-t}(u\vartriangleleft v)-u\vartriangleleft P _{T-t}v]\|_{\mathscr{C}_{p}^{\sigma+\varsigma}}\] \[\lesssim|t-s|^{1-\gamma}\|u\|_{\mathscr{C}_{p}^{\sigma}}\|v\|_{ \mathscr{C}^{\varsigma}}.\] The second summand can be estimated using the semigroup commutator (Lemma 2.7) for \(\vartheta=(\gamma-1)\alpha\geqslant-\alpha\) and the semigroup estimate (2.7), such that \[\left\|P_{t-s}(u\vartriangleleft P_{T-t}v)-u\vartriangleleft P_{t- s}P_{T-t}v\right\|_{\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma-\alpha}}\] \[\lesssim|t-s|^{1-\gamma}\|u\|_{\mathscr{C}_{p}^{\sigma}}\|P_{T- t}v\|_{\mathscr{C}^{\varsigma}}\lesssim|t-s|^{1-\gamma}\|u\|_{\mathscr{C}_{p}^{ \sigma}}\|v\|_{\mathscr{C}^{\varsigma}}.\] Using instead the semigroup commutator for \(\vartheta=-\alpha\geqslant-\alpha\) and again the semigroup estimate (2.7) yields \[\left\|P_{t-s}(u\vartriangleleft P_{T-t}v)-u\vartriangleleft P_{t- s}P_{T-t}v\right\|_{\mathscr{C}_{p}^{\sigma+\varsigma+\alpha\gamma-\alpha}}\right.\right.\] \[\lesssim|t-s|\|u\|_{\mathscr{C}_{p}^{\sigma}}\|P_{T-t}v\|_{ \mathscr{C}^{\varsigma+\alpha\gamma}}\] \[\lesssim|t-s|(T-t)^{-\gamma}\|u\|_{\mathscr{C}_{p}^{\sigma}}\|v \|_{\mathscr{C}^{\varsigma}}. \tag{3.8}\] Together, we obtain (3.6). For (3.7), we first prove that \(C(g,h):=J^{T}(g\vartriangleleft h)-g\vartriangleleft J^{T}(h)\in\mathscr{M}_{ \overline{T},T}^{\gamma}\mathscr{C}_{p}^{\sigma+\varsigma+\alpha}\). To that end, we write \[C(g,h)_{t}= \int_{t}^{T}\bigl{(}P_{r-t}(g_{r}\vartriangleleft h_{r})-g_{r} \vartriangleleft P_{r-t}h_{r}\bigr{)}dr+\int_{t}^{T}(g_{r}-g_{t})\vartriangleleft P _{r-t}h_{r}dr=:I_{1}(t)+I_{2}(t).\] To estimate \(I_{1}\), we utilize Lemma 3.1 for \(f_{t,r}=P_{r-t}(g_{r}\otimes h_{r})-g_{r}\otimes P_{r-t}h_{r}\), where the assumptions of the lemma are satisfied by the semigroup commutator estimate (Lemma 2.7). 
Then, we obtain \[\|I_{1}(t)\|_{\mathscr{C}_{p}^{\sigma+\varsigma+\alpha}} \lesssim\|g\|_{\mathscr{\widetilde{T}},T^{\prime}}\mathscr{C}_{p}^{ \sigma}\|h\|_{C_{T}\mathscr{C}^{\varsigma}}(T-t)^{-\gamma^{\prime}} \lesssim\overline{T}^{\gamma-\gamma^{\prime}}\|g\|_{\mathscr{\widetilde{T}},T ^{\prime}}\mathscr{C}_{p}^{\sigma}\|h\|_{C_{T}\mathscr{C}^{\varsigma}}(T-t)^{- \gamma}.\] For \(I_{2}\), we apply Lemma 3.1 for \(f_{t,r}:=(g_{r}-g_{t})\otimes P_{r-t}h_{r}\). We check the assumptions on \(f_{t,r}\) of that lemma, using the time regularity of \(g\), as well as the paraproduct estimate (using \(\sigma-\alpha<0\)) and the semigroup estimates. Then, choosing \(\theta=0\) or \(\theta=(1+\varepsilon)\alpha\) for \(\varepsilon\in[0,1]\), we estimate (the estimate is in fact valid for all \(\theta\geqslant-\alpha\)) \[\|(g_{r}-g_{t})\otimes P_{r-t}h_{r}\|_{\mathscr{C}_{p}^{\sigma+ \varsigma+\theta}} =\|(g_{r}-g_{t})\otimes P_{r-t}h_{r}\|_{\mathscr{C}_{p}^{(\sigma- \alpha)+(\varsigma+\theta+\alpha)}}\] \[\lesssim(T-r)^{-\gamma^{\prime}}(r-t)^{-\theta/\alpha}\|h\|_{C_{ T}\mathscr{C}^{\varsigma}}\|g\|_{C_{T,T}^{\gamma^{\prime},1}\mathscr{C}_{p}^{ \sigma-\alpha}}\] Applying Lemma 3.1 yields then the estimate for \(I_{2}\): \[\|I_{2}(t)\|_{\mathscr{C}_{p}^{\sigma+\varsigma+\alpha}} \lesssim\overline{T}^{\gamma-\gamma^{\prime}}\|g\|_{C_{T,T}^{\gamma^{ \prime},1}\mathscr{C}_{p}^{\sigma-\alpha}}\|h\|_{C_{T}\mathscr{C}_{\mathbb{R} ^{d}}^{\varsigma}}(T-t)^{-\gamma}.\] Next, we prove the time regularity estimates on the commutator \(C(g,h)\). For that, we write for \(T-\overline{T}\leqslant s\leqslant t\leqslant T\), \[J^{T}(g \otimes h)_{t}-g_{t}\otimes J^{T}(h)_{t}-(J^{T}(g\otimes h)_{s}- g_{s}\otimes J^{T}(h)_{s})\] \[=-\int_{s}^{t}P_{r-s}(g_{r}\otimes h_{r})dr-(P_{t-s}-\mathrm{Id}) \bigg{(}\int_{t}^{T}P_{r-t}(g_{r}\otimes h_{r})dr\bigg{)}\] \[\quad+g_{s}\otimes\int_{s}^{t}P_{r-s}h_{r}dr-g_{s,t}\otimes\int_ {t}^{T}P_{r-t}h_{r}dr+g_{s}\otimes(P_{t-s}-\mathrm{Id})\bigg{(}\!\int_{t}^{T}P_ {r-t}h_{r}dr\bigg{)}\] \[=A_{st}+B_{st}+C_{st},\] where we define \[A_{st}:=g_{s}\otimes\int_{s}^{t}P_{r-s}h_{r}dr-\int_{s}^{t}P_{r-s}(g_{r} \otimes h_{r})dr\] and \[B_{st}:=-g_{s,t}\otimes\int_{t}^{T}P_{r-t}h_{r}dr,\] where \(g_{s,t}:=g_{t}-g_{s}\) and \[C_{st}:=g_{s}\otimes P_{t-s}\bigg{(}\!\int_{t}^{T}P_{r-t}h_{r}dr \bigg{)}-P_{t-s}\bigg{(}\!\int_{t}^{T}P_{r-t}(g_{r}\otimes h_{r})dr\bigg{)}.\] We will consider the terms \(A_{st},B_{st}\) and \(C_{st}\) separately and estimate each term in the \(C_{\overline{T},T}^{1-\gamma}\mathscr{C}^{\sigma+\varsigma}\)-norm and in the \(C_{\overline{T},T}^{\gamma,1}\mathscr{C}^{\sigma+\varsigma}\)-norm. We start with \(B_{st}\), using the time regularity of \(g\), obtaining on the one hand \[\|B_{st}\|_{\mathscr{C}_{p}^{\sigma+\varsigma}} =\bigg{\|}g_{s,t}\otimes\int_{t}^{T}P_{r-t}h_{r}dr\bigg{\|}_{ \mathscr{C}_{p}^{(\sigma-\alpha)+(\varsigma+\varsigma)}}\] \[\lesssim\|g\|_{C_{\overline{T},T}^{1-\gamma^{\prime}}\mathscr{C} _{p}^{\sigma-\alpha}}|t-s|^{1-\gamma^{\prime}}\bigg{\|}\int_{t}^{T}P_{r-t}h_{r} dr\bigg{\|}_{\mathscr{C}^{\alpha+\varsigma}}\] \[\lesssim|t-s|^{1-\gamma^{\prime}}\|g\|_{C_{\overline{T},T}^{1- \gamma^{\prime}}\mathscr{C}_{p}^{\sigma-\alpha}}\|h\|_{C_{T}\mathscr{C}^{ \varsigma}}\] \[\lesssim\overline{T}^{\gamma-\gamma^{\prime}}|t-s|^{1-\gamma}\|g \|_{C_{\overline{T},T}^{1-\gamma^{\prime}}\mathscr{C}_{p}^{\sigma-\alpha}}\|h \|_{C_{T}\mathscr{C}^{\varsigma}},\] using \(\sigma-\alpha<0\) and Lemma 3.1 for \(f_{t,r}=P_{r-t}h_{r}\) to bound the time integral. 
On the other hand, along the same lines, using instead \(g\in C_{\overline{T},T}^{\gamma^{\prime},1}\mathscr{C}_{p}^{\sigma-\alpha}\), we can estimate \(B_{st}\) by \[\|B_{st}\|_{\mathscr{C}_{p}^{\sigma+\varsigma}}\lesssim\overline{T}^{\gamma- \gamma^{\prime}}\|h\|_{C_{T}\mathscr{C}}\|g\|_{C_{\overline{T},T}^{\gamma^{ \prime},1}\mathscr{C}_{p}^{\sigma-\alpha}}|t-s|(T-t)^{-\gamma}.\] For \(A_{st}\), we use the semigroup commutator (Lemma 2.7) for \(\vartheta=0\), as well as the time regularity of \(g\), which yields \[\|A_{st}\|_{\mathscr{C}_{p}^{\sigma+\varsigma}}\] \[\qquad=\bigg{\|}\int_{s}^{t}P_{r-s}(g_{r}\otimes h_{r})dr-g_{s} \otimes\int_{s}^{t}P_{r-s}h_{r}dr\bigg{\|}_{\mathscr{C}_{p}^{\sigma+\varsigma}}\] \[\qquad\leqslant\bigg{\|}\int_{s}^{t}(P_{r-s}(g_{r}\otimes h_{r}) \!-\!g_{r}\!\otimes\!P_{r-s}h_{r})dr\bigg{\|}_{\mathscr{C}_{p}^{\sigma+ \varsigma}}\!+\!\bigg{\|}\int_{s}^{t}(g_{r}\!-\!g_{s})\!\otimes\!P_{r-s}h_{r} dr\bigg{\|}_{\mathscr{C}_{p}^{(\sigma-\alpha)+(\varsigma+\alpha)}}\] \[\qquad\leqslant\|g\|_{\mathscr{M}_{\overline{T},T}^{\gamma^{ \prime},\sigma^{\sigma^{\sigma}}}}\|h\|_{C_{T}\mathscr{C}_{p}^{\varsigma}}\int _{s}^{t}(T-r)^{-\gamma^{\prime}}dr+\|h\|_{C_{T}\mathscr{C}}\|g\|_{C_{\overline {T},T}^{1-\gamma^{\prime}}\mathscr{C}_{p}^{\sigma-\alpha}}\int_{s}^{t}|r-s|^{ -\gamma^{\prime}}dr\] \[\qquad\lesssim\|g\|_{\mathscr{C}_{\overline{T},T}^{\gamma^{ \prime},\sigma^{\sigma}}}\|h\|_{C_{T}\mathscr{C}^{\varsigma}}\big{(}(T-s)^{1- \gamma^{\prime}}-(T-t)^{1-\gamma^{\prime}}+|t-s|^{1-\gamma^{\prime}}\big{)}\] \[\qquad\lesssim\|g\|_{\mathscr{C}_{\overline{T},T}^{\gamma^{ \prime},\sigma^{\sigma}}}\|h\|_{C_{T}\mathscr{C}^{\varsigma}}\overline{T}^{ \gamma-\gamma^{\prime}}|t-s|^{1-\gamma}.\] We can also estimate the term \(A_{st}\) by \[\|A_{st}\|_{\mathscr{C}_{p}^{\sigma+\varsigma}} \leqslant\|g\|_{\mathscr{M}_{\overline{T},T}^{\gamma^{\prime}} \mathscr{C}_{p}^{\sigma}}\|h\|_{C_{T}\mathscr{C}^{\varsigma}}\int_{s}^{t}(T\!- \!r)^{-\gamma^{\prime}}dr\!+\!\|h\|_{C_{T}\mathscr{C}^{\varsigma}}\|g\|_{C_{ \overline{T},T}^{\gamma^{\prime},1}\mathscr{C}_{p}^{\sigma-\alpha}}\int_{s}^{ t}(T\!-\!r)^{-\gamma^{\prime}}dr\] \[\lesssim(T-t)^{-\gamma^{\prime}}|t-s|\|h\|_{C_{T}\mathscr{C}^{ \varsigma}}\bigg{(}\|g\|_{\mathscr{M}_{\overline{T},T}^{\gamma^{\prime}, \sigma^{\sigma}}}+\|g\|_{C_{\overline{T},T}^{\gamma^{\prime},1}\mathscr{C}_{p }^{\sigma-\alpha}}\bigg{)}\] \[\lesssim\overline{T}^{\gamma-\gamma^{\prime}}(T-t)^{-\gamma}|t-s| \|h\|_{C_{T}\mathscr{C}^{\varsigma}}\|g\|_{\mathscr{M}_{\overline{T},T}^{ \gamma^{\prime},\sigma}},\] using \((T-r)^{-\gamma}\leqslant(T-t)^{-\gamma}\) for \(r\in[s,t]\). It is left to estimate the term \(C_{st}\), that we first rewrite: \[C_{st} =P_{t-s}\bigg{(}\int_{t}^{T}P_{r-t}(g_{r}\otimes h_{r})dr\bigg{)} -g_{s}\otimes P_{t-s}\bigg{(}\int_{t}^{T}P_{r-t}h_{r}dr\bigg{)}\] \[=(P_{t-s}-\mathrm{Id})\bigg{(}\int_{t}^{T}P_{r-t}(g_{r}\otimes h_{ r})dr-g_{s}\otimes\int_{t}^{T}P_{r-t}h_{r}dr\bigg{)} \tag{3.9}\] \[\quad+P_{t-s}\bigg{(}g_{s}\otimes\int_{t}^{T}P_{r-t}h_{r}dr \bigg{)}-g_{s}\otimes P_{t-s}\bigg{(}\int_{t}^{T}P_{r-t}h_{r}dr\bigg{)}. 
\tag{3.10}\] To estimate the term in line (3.9), we use Lemma 2.5 and the estimate for \(I_{1}(t)+I_{2}(t)\) from above to obtain \[\bigg{\|}(P_{t-s}-\mathrm{Id})\bigg{(}\int_{t}^{T}(P_{r-t}(g_{r} \otimes h_{r})-g_{s}\otimes P_{r-t}h_{r})dr\bigg{)}\bigg{\|}_{\mathscr{C}_{p}^ {\varsigma+\sigma+\alpha-\alpha}}\] \[\qquad\lesssim|t-s|(T-t)^{-\gamma}\overline{T}^{\gamma-\gamma^{ \prime}}\|g\|_{\mathscr{M}_{\overline{T},T}^{\gamma^{\prime},\sigma}}\|h\|_{C_{T }\mathscr{C}^{\varsigma}}.\] The term in line (3.9), we can also estimate differently using Lemma 2.5 and an easier estimate for \(I_{1}(t),I_{2}(t)\) using the semigroup estimates and \(\alpha(1-\gamma^{\prime})<\alpha\) to obtain \[\bigg{\|}(P_{t-s}-\mathrm{Id})\bigg{(}\int_{t}^{T}(P_{r-t}(g_{r} \otimes h_{r})-g_{s}\otimes P_{r-t}h_{r})dr\bigg{)}\bigg{\|}_{\mathscr{C}^{s+ \sigma}_{p}}\] \[\qquad\lesssim|t-s|^{1-\gamma^{\prime}}\bigg{\|}\int_{t}^{T}(P_{ r-t}(g_{r}\otimes h_{r})-g_{s}\otimes P_{r-t}h_{r})dr\bigg{\|}_{\mathscr{C}^{s+ \sigma+\alpha(1-\gamma^{\prime})}_{p}}\] \[\qquad\lesssim|t-s|^{1-\gamma^{\prime}}(\|I_{1}(t)\|_{\mathscr{C }^{s+\sigma+\alpha(1-\gamma^{\prime})}_{p}}+\|I_{2}(t)\|_{\mathscr{C}^{s+ \sigma+\alpha(1-\gamma^{\prime})}_{p}})\] \[\qquad\lesssim|t-s|^{1-\gamma^{\prime}}\|h\|_{C_{T}\mathscr{C}^{ \varsigma}}\bigg{(}\|g\|_{\mathscr{A}^{\gamma^{\prime}}_{T,T}\mathscr{C}^{ \sigma}_{p}}+\|g\|_{\mathscr{C}^{\gamma^{\prime},1}_{T,T}\mathscr{C}^{\sigma- \alpha}_{p}}\bigg{]}\int_{t}^{T}(T-r)^{-\gamma^{\prime}}(r-t)^{-1+\gamma^{ \prime}}dr\bigg{)}\] \[\qquad\lesssim\overline{T}^{\gamma-\gamma^{\prime}}|t-s|^{1- \gamma}\|h\|_{C_{T}\mathscr{C}^{\varsigma}}\|g\|_{\mathscr{C}^{\gamma^{\prime },\sigma}_{T,T}}\int_{0}^{1}(1-r)^{-\gamma^{\prime}}r^{-1+\gamma^{\prime}}dr\] \[\qquad\lesssim\overline{T}^{\gamma-\gamma^{\prime}}|t-s|^{1- \gamma}\|h\|_{C_{T}\mathscr{C}^{\varsigma}}\|g\|_{\mathscr{C}^{\gamma^{\prime },\sigma}_{T,T}}.\] To estimate the term in line (3.10), we use the commutator for \(P_{t-s}\) for \(\vartheta=-\alpha\) and again Lemma 3.1 for \(f_{t,r}=P_{r-t}h_{r}\), yielding \[\bigg{\|}P_{t-s}\bigg{(}g_{s}\otimes\int_{t}^{T}P_{r-t}h_{r}dr \bigg{)}-g_{s}\otimes P_{t-s}\bigg{(}\int_{t}^{T}P_{r-t}h_{r}dr\bigg{)}\bigg{\|} _{\mathscr{C}^{\sigma+\varsigma+\alpha-\alpha}_{p}}\] \[\qquad\lesssim|t-s|\|h\|_{C_{T}\mathscr{C}^{\varsigma}}\|g\|_{ \mathscr{A}^{\gamma^{\prime}}_{T,T}\mathscr{C}^{\sigma}_{p}}(T-s)^{-\gamma^{ \prime}}\bigg{\|}\int_{t}^{T}P_{r-t}h_{r}dr\bigg{\|}_{\mathscr{C}^{\varsigma+ \alpha}}\] \[\qquad\lesssim\overline{T}^{\gamma-\gamma^{\prime}}|t-s|\|h\|_{C _{T}\mathscr{C}^{\varsigma}}\|g\|_{\mathscr{A}^{\gamma^{\prime}}_{T,T} \mathscr{C}^{\sigma}_{p}}(T-s)^{-\gamma}.\] Applying instead the semigroup commutator for \(\vartheta=-(1-\gamma^{\prime})\alpha\) yields \[\bigg{\|}P_{t-s}\bigg{(}g_{s}\otimes\int_{t}^{T}P_{r-t}h_{r}dr \bigg{)}-g_{s}\otimes P_{t-s}\bigg{(}\int_{t}^{T}P_{r-t}h_{r}dr\bigg{)}\bigg{\|} _{\mathscr{C}^{\sigma+\varsigma}_{p}}\] \[\qquad\lesssim|t-s|^{1-\gamma^{\prime}}\|g\|_{\mathscr{A}^{ \gamma^{\prime}}_{T,T}\mathscr{C}^{\sigma}_{p}}(T-s)^{-\gamma^{\prime}}\bigg{\|} \int_{t}^{T}P_{r-t}h_{r}dr\bigg{\|}_{\mathscr{C}^{\varsigma+\alpha(1-\gamma^{ \prime})}}\] \[\qquad\lesssim|t-s|^{1-\gamma^{\prime}}\|h\|_{C_{T}\mathscr{C}^{ \varsigma}}\|g\|_{\mathscr{A}^{\gamma^{\prime}}_{T,T}\mathscr{C}^{\sigma}_{p}} (T-s)^{-\gamma^{\prime}}(T-t)^{\gamma^{\prime}}\] \[\qquad\lesssim\overline{T}^{\gamma-\gamma^{\prime}}|t-s|^{1- \gamma}\|h\|_{C_{T}\mathscr{C}^{\varsigma}}\|g\|_{\mathscr{A}^{\gamma^{\prime 
}}_{T,T}\mathscr{C}^{\sigma}_{p}},\] where to bound the time integral, we used that \(\alpha(1-\gamma^{\prime})<\alpha\) and \(s\leqslant t\). Together we obtain the desired estimates for \(C_{st}\), which yield together with the estimates for \(A_{st}\) and \(B_{st}\) the claim. **Remark 3.6**.: _The proof of the commutator estimate does not apply if we consider instead of \(g\in\mathscr{L}^{\gamma,\sigma}_{T}\), a function \(g\in\mathscr{M}^{\gamma}_{T}\mathscr{C}^{\sigma}\cap C^{1-\gamma}_{T}\mathscr{ C}^{\sigma-\alpha}\). The reason is the estimate for the term \(I_{2}\) in the above proof, for which we need to employ that \(g\in C^{\gamma,1}_{T}\mathscr{C}^{\sigma-\alpha}\)._ We conclude this section with interpolation estimates for the spaces \(\mathscr{L}^{\gamma,\theta}_{T}\). **Lemma 3.7** (Interpolation estimates).: _Let \(\gamma\in[0,1)\), \(\theta\in[0,\alpha]\), \(p\in[1,\infty]\). Let moreover \(v\in\mathscr{L}^{\gamma,\theta}_{T}\). Then the following estimates hold true: It follows that for \(\theta\in(0,\alpha)\),_ \[\|v\|_{C^{\theta/\alpha}_{T}L^{p}}\lesssim\|v\|_{\mathscr{L}^{\alpha,\theta}_{T}}. \tag{3.11}\] _Furthermore, for \(\tilde{\theta}\in[0,\alpha]\), it holds that_ \[\|v\|_{C_{T}^{\gamma,\delta/\alpha}\mathscr{C}_{p}^{\theta-\tilde{ \theta}}}\lesssim\|v\|_{\mathscr{L}_{T}^{\gamma,\theta}} \tag{3.12}\] _and_ \[\|v\|_{\mathscr{L}_{T}^{(1-\tilde{\theta}/\alpha)}\mathscr{C}_{ p}^{\theta-\tilde{\theta}}}\lesssim\|v\|_{\mathscr{L}_{T}^{\gamma,\theta}}. \tag{3.13}\] _If \(v_{T}\in\mathscr{C}_{p}^{\theta-\tilde{\theta}}\) and \(\tilde{\theta}\in[\alpha\gamma,\alpha]\), then the following estimate holds true_ \[\|v_{t}\|_{\mathscr{C}_{p}^{\theta-\tilde{\theta}}}\lesssim(T-t )^{\tilde{\theta}/\alpha-\gamma}\|v\|_{\mathscr{L}_{T}^{\gamma,\theta}}+\|v_ {T}\|_{\mathscr{C}_{p}^{\theta-\tilde{\theta}}}. \tag{3.14}\] **Remark 3.8**.: _For \(\gamma=0\), \(\theta\in(0,1]\) and a Banach space \(X\), we recall that \(C_{T}^{0,\theta}X=C_{T}^{\theta}X\)._ Proof.: To prove (3.11) we let \(0\leqslant s\leqslant t\leqslant T\) and estimate \[\|v_{t}-v_{s}\|_{L^{p}} \leqslant\sum_{j}\|\Delta_{j}(v_{t}-v_{s})\|_{L^{p}}\] \[\lesssim|t-s|^{\theta/\alpha}\|v\|_{C_{T}\mathscr{C}_{p}^{\theta }}+|t-s|^{\theta/\alpha}\|v\|_{C_{T}^{1}\mathscr{C}_{p}^{\theta-\alpha}},\] using that \(\theta>0\) for the convergence of the geometric sum and that \(\theta<\alpha\). To prove (3.12) and (3.13), we let \(\tilde{\theta}\in[0,\alpha]\). 
Then we estimate for \(s<t\), \[\|\Delta_{j}(v_{t}-v_{s})\|_{L^{p}}\lesssim(T-t)^{-\gamma}\min \left(2^{-j\theta}\|v\|_{\mathscr{L}_{T}^{\gamma}\mathscr{C}_{p}^{\theta}},2^ {-j(\theta-\alpha)}|t-s|\|v\|_{C_{T}^{\gamma,1}\mathscr{C}_{p}^{\theta-\alpha }}\right)\] and for \(t\in[0,T)\), \[\|\Delta_{j}v_{t}\|_{L^{p}}\lesssim\min(2^{-j\theta}(T-t)^{-\gamma}\|v\|_{ \mathscr{L}_{T}^{\gamma}\mathscr{C}_{p}^{\theta}},2^{-j(\theta-\alpha)}\|v \|_{C_{T}\mathscr{C}_{p}^{\theta-\alpha}}).\] Thus by interpolation (that is, \(\min(a,b)\leqslant a^{\varepsilon}b^{1-\varepsilon}\) for \(a,b\geqslant 0\), \(\varepsilon\in[0,1]\)) and using that \(\|v\|_{C_{T}\mathscr{C}_{p}^{\theta-\alpha}}\lesssim\|v\|_{\mathscr{L}_{T}^{ \gamma,\theta}}\), we obtain \[\|\Delta_{j}(v_{t}-v_{s})\|_{L^{p}} \lesssim(T-t)^{-\gamma}2^{-j\theta(1-\tilde{\theta}/\alpha)}2^{- j(\theta-\alpha)\tilde{\theta}/\alpha}|t-s|^{\tilde{\theta}/\alpha}\|v\|_{ \mathscr{L}_{T}^{\gamma,\theta}}\] \[=(T-t)^{-\gamma}2^{-j(\theta-\tilde{\theta})}|t-s|^{\tilde{ \theta}/\alpha}\|v\|_{\mathscr{L}_{T}^{\theta}},\] from which (3.12) follows, and \[\|\Delta_{j}v_{t}\|_{L^{p}} \lesssim 2^{-j\theta(1-\tilde{\theta}/\alpha)}(T-t)^{-\gamma(1- \tilde{\theta}/\alpha)}\|v\|_{\mathscr{L}_{T}^{\alpha}\mathscr{C}_{p}^{ \theta}}^{1-\tilde{\theta}/\alpha}\ 2^{-j(\theta-\alpha)\tilde{\theta}/\alpha}\|v\|_{C_{T} \mathscr{C}_{p}^{\theta-\alpha}}^{\tilde{\theta}/\alpha}\] \[\leqslant 2^{-j(\theta-\tilde{\theta})}\|v\|_{\mathscr{L}_{T}^{ \gamma,\theta}}(T-t)^{-\gamma(1-\tilde{\theta}/\alpha)},\] which yields (3.13). Finally, if \((T-t)\geqslant 1\), then (3.14) follows from (3.13) as \[\|v_{t}\|_{\mathscr{C}_{p}^{\theta-\tilde{\theta}}}\lesssim\|v\|_ {\mathscr{L}_{T}^{\gamma,\theta}}(T-t)^{-\gamma(1-\tilde{\theta}/\alpha)} \leqslant\|v\|_{\mathscr{L}_{T}^{\gamma,\theta}}\] \[\leqslant(T-t)^{\tilde{\theta}/\alpha-\gamma}[\|v\|_{\mathscr{L}_{ T}^{\gamma,\theta}}+\|v_{T}\|_{\mathscr{C}_{p}^{\theta-\tilde{\theta}}}^{1-\tilde{ \theta}/\alpha}]+\|vT\|_{\mathscr{C}_{p}^{\theta-\tilde{\theta}}}\] using that \(\hat{\theta}/\alpha\geqslant\gamma\). If \((T-t)\leqslant 1\), then (3.14) follows from \[\|v_{t}\|_{\mathscr{C}^{\theta-\delta}_{p}}\leqslant\|v_{t}-v_{T}\|_{\mathscr{C} ^{\theta-\delta}_{p}}+\|v_{T}\|_{\mathscr{C}^{\theta-\delta}_{p}}\] and \[\|\Delta_{j} (v_{t}-v_{T})\|_{L^{p}}\] \[\lesssim\min\left(2^{-j\theta}(T-t)^{-\gamma}\|v\|_{\mathscr{M}^ {\gamma}_{T}\mathscr{C}^{\theta}_{p}}+2^{-j(\theta-\tilde{\theta})}\|v_{T}\|_ {\mathscr{C}^{\theta-\tilde{\theta}}_{p}},2^{-j(\theta-\alpha)}(T-t)^{1- \gamma}\|v\|_{C^{1-\gamma}_{T}\mathscr{C}^{\theta-\alpha}_{p}}\right)\] \[\leqslant\min\left(2^{-j\theta}(T-t)^{-\gamma}\|v\|_{\mathscr{M}^ {\gamma}_{T}\mathscr{C}^{\theta}_{p}},2^{-j(\theta-\alpha)}(T-t)^{1-\gamma}\| v\|_{C^{1-\gamma}_{T}\mathscr{C}^{\theta-\alpha}_{p}}\right)+2^{-j(\theta- \tilde{\theta})}\|v_{T}\|_{\mathscr{C}^{\theta-\tilde{\theta}}_{p}}.\] By interpolation as above, we thus have \[\|\Delta_{j}(v_{t}-v_{T})\|_{L^{p}}\lesssim 2^{-j(\theta-\tilde{\theta})}(T-t )^{\tilde{\theta}/\alpha-\gamma}\|v\|_{\mathscr{L}^{\gamma,\theta}_{T}},\] such that together (3.14) follows. ## 4 Solving the Kolmogorov backward equation In this section, we develop a concise solution theory that simultaneously treats singular and non-singular terminal condition for the Kolmogorov backward equation. We start by solving the Kolmogorov equation in the Young regime, that is \(\beta>(1-\alpha)/2\). 
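Before stating the result, let us briefly indicate where this threshold comes from. In the mild formulation (2.10) the only problematic term is the product \(V\cdot\nabla u\). If \(u\) has the space regularity \(\alpha+\beta\) suggested by the Schauder estimates, then \(\nabla u\in C_{T}\mathscr{C}_{p}^{\alpha+\beta-1}\), and by the paraproduct estimates (2.3) the product with \(V\in C_{T}\mathscr{C}_{\mathbb{R}^{d}}^{\beta}\) is well defined precisely when

\[(\alpha+\beta-1)+\beta>0,\qquad\text{that is}\qquad\beta>\frac{1-\alpha}{2}.\]

In this case \(V\cdot\nabla u\in C_{T}\mathscr{C}_{p}^{\beta}\), the Schauder estimate (3.5) returns \(\alpha+\beta\) space regularity, and a fixed point argument closes. For \(\beta\leqslant\frac{1-\alpha}{2}\) the resonant product \(\nabla u\odot V\) is no longer well defined by (2.3) alone, which is where the enhanced drift and the paracontrolled ansatz introduced further below enter.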
**Theorem 4.1**.: 4 _Let \(\alpha\in(1,2]\), \(\beta\in(\frac{1-\alpha}{2},0)\) and \(p\in[1,\infty]\). Let \(V\in C_{T}\mathscr{C}^{\beta}_{\mathbb{R}^{d}}\), \(f\in C_{T}\mathscr{C}^{\beta}_{p}\) and \(u^{T}\in\mathscr{C}^{\alpha+\beta}_{p}\). Then the PDE_ Footnote 4: The theorem is a generalization of [KP22, Theorem 3.1] to regularity \(\theta=\alpha+\beta\) and integrability \(p\in[1,\infty]\). \[\partial_{t}u=\mathfrak{L}^{\alpha}_{\nu}u-V\cdot\nabla u+f,\quad u(T,\cdot)= u^{T}, \tag{4.1}\] _has a unique mild solution \(u\in C_{T}\mathscr{C}^{\alpha+\beta}\cap C^{1}_{T}\mathscr{C}^{\beta}\) (i.p. by (3.11), \(u\in C^{(\alpha+\beta)/\alpha}_{T}L^{p}\)). Moreover, the solution map_ \[\mathscr{C}^{\alpha+\beta}_{p}\times C_{T}\mathscr{C}^{\beta}_{p}\times C_{T} \mathscr{C}^{\beta}_{\mathbb{R}^{d}}\ni(u^{T},f,V)\mapsto u\in\mathscr{L}^{0, \alpha+\beta}_{T}\] _is continuous. Furthermore, for a singular terminal conditions \(u^{T}\in\mathscr{C}^{(1-\gamma)\alpha+\beta}_{p}\) for \(\gamma\in[0,1)\), the solution \(u\) is obtained in \(\mathscr{L}^{\gamma,\alpha+\beta}_{T}\)._ Proof.: Let \(u^{T}\in\mathscr{C}^{\alpha+\beta}_{p}\). We first prove, that the solution exists in \(\mathscr{L}^{\gamma,\alpha+\beta}_{T}\) for any \(\gamma\in(0,1)\). Afterwards we argue that indeed \(u\in\mathscr{L}^{0,\alpha+\beta}_{T}\). The proof follows from the Banach fixed point theorem applied to the map \[\mathscr{L}^{\gamma,\alpha+\beta}_{T,T}\ni u\mapsto\Phi^{\overline{T},T}(u) \in\mathscr{L}^{\gamma,\alpha+\beta}_{T,T}\text{ with }\Phi^{\overline{T},T}u(t)=P_{T-t}u^{T}+J^{T}(\nabla u\cdot V-f)(t),\] where \(J^{T}(v)(t)=\int_{t}^{T}P_{r-t}v(r)dr\). We show below, that for \(\overline{T}\in(0,T]\) small enough, the map is a contraction. By the Schauder estimates (Corollary 3.2), we obtain that \(t\mapsto P_{T-t}u^{T}\in\mathscr{L}^{0,\alpha+\beta}_{T,T}\) and \(J^{T}(f)\in\mathscr{L}^{0,\alpha+\beta}_{T,T}\). Furthermore, the Schauder estimates (Corollary 3.2) and the interpolation estimate (3.13) from Lemma 3.7 yield that for \(\gamma^{\prime}\in(0,\gamma)\) chosen, such that \(\gamma=\gamma^{\prime}(1-\theta/\alpha)\) for a \(\theta\in(0,\alpha+2\beta-1)\), \[\|J^{T}(\nabla u\cdot V)\|_{\mathscr{L}_{T,T}^{\alpha+\beta}} \lesssim\overline{T}^{\gamma-\gamma^{\prime}}\|\nabla u\cdot V\|_ {\mathscr{M}_{T,T}^{\gamma^{\prime}}\mathscr{C}_{p}^{\beta}}\] \[\lesssim\overline{T}^{\gamma-\gamma^{\prime}}\|\nabla u\|_{ \mathscr{M}_{T,T}^{\gamma^{\prime}}(\mathscr{C}_{p}^{\alpha+\beta-1-\theta}) ^{d}}\|V\|_{C_{T}\mathscr{C}_{\mathbb{R}^{d}}^{\beta}}\] \[\lesssim\overline{T}^{\gamma-\gamma^{\prime}}\|u\|_{\mathscr{L}_ {\overline{T},T}^{\alpha+\beta}}\|V\|_{C_{T}\mathscr{C}_{\mathbb{R}^{d}}^{ \beta}}.\] Notice that due to the choice of \(\theta\) the regularity of the resonant product \(\nabla u\odot V\) is strictly positive. Thus, for \(\overline{T}\in(0,T]\) sufficiently small, \(\Phi^{\overline{T},T}\) is a contraction on \(\mathscr{L}_{\overline{T},T}^{\gamma,\alpha+\beta}\) and we obtain a solution \(u\in\mathscr{L}_{\overline{T},T}^{\gamma,\alpha+\beta}\) (i.e. the fixed point of the map). 
By plugging the solution back into the contraction map and using the interpolation estimate (3.14) for \(\theta=\alpha+\beta\), \(\tilde{\theta}=\gamma\alpha\) and \(\gamma\in(0,(\alpha+2\beta-1)/\alpha)\), we then obtain

\[\|u\|_{\mathscr{L}_{\overline{T},T}^{0,\alpha+\beta}} =\|\Phi^{\overline{T},T}(u)\|_{\mathscr{L}_{\overline{T},T}^{0,\alpha+\beta}}\]
\[\lesssim\|P_{T-\cdot}u^{T}+J^{T}(f)\|_{\mathscr{L}_{\overline{T},T}^{0,\alpha+\beta}}+\|u\|_{C_{\overline{T},T}\mathscr{C}_{p}^{\alpha+\beta-\gamma\alpha}}\|V\|_{C_{T}\mathscr{C}_{\mathbb{R}^{d}}^{\beta}}\]
\[\lesssim\|P_{T-\cdot}u^{T}+J^{T}(f)\|_{\mathscr{L}_{\overline{T},T}^{0,\alpha+\beta}}+\big[\|u\|_{\mathscr{L}_{\overline{T},T}^{\gamma,\alpha+\beta}}+\|u^{T}\|_{\mathscr{C}_{p}^{\alpha+\beta}}\big]\|V\|_{C_{T}\mathscr{C}_{\mathbb{R}^{d}}^{\beta}}. \tag{4.2}\]

This implies that indeed \(u\in\mathscr{L}_{\overline{T},T}^{0,\alpha+\beta}\), and we have constructed the solution on \([T-\overline{T},T]\). Moreover, the choice of \(\overline{T}\) does not depend on the terminal condition \(u^{T}\), and therefore we can iterate the construction of the solution on subintervals \([T-k\overline{T},T-(k-1)\overline{T}]\) for \(k\in\{1,\ldots,n\}\) and \(n\in\mathbb{N}\) such that \(T-n\overline{T}\leqslant 0\). Here, we choose the terminal condition of the solution on \([T-k\overline{T},T-(k-1)\overline{T}]\) equal to the initial value of the solution constructed in the previous iteration step. We then obtain the solution \(u\in\mathscr{L}_{T}^{0,\alpha+\beta}\) on \([0,T]\) by patching the solutions on the subintervals together. Indeed, \(u\) is the fixed point of \(\Phi^{T,T}\), due to the semigroup property \(P_{t}P_{s}=P_{t+s}\) for \(t,s\geqslant 0\). The continuity of the solution map follows from

\[\|u\|_{\mathscr{L}_{T}^{0,\alpha+\beta}}\leqslant\sum_{k=0}^{n}\|u\|_{\mathscr{L}_{\overline{T},T-k\overline{T}}^{0,\alpha+\beta}}\]

for \(n\in\mathbb{N}\) such that \(T-(n+1)\overline{T}\leqslant 0\), together with (4.2) applied to each of the terms on the right-hand side and the contraction property on each of the spaces \(\mathscr{L}_{\overline{T},T-k\overline{T}}^{\gamma,\alpha+\beta}\).

For a terminal condition \(u^{T}\in\mathscr{C}_{p}^{(1-\gamma)\alpha+\beta}\), the above arguments show that we obtain a solution in \(\mathscr{L}_{T}^{\gamma,\alpha+\beta}\). Notice that the blow-up only occurs for the solution on the last subinterval \([T-\overline{T},T]\). That is, the solutions on \([T-k\overline{T},T-(k-1)\overline{T}]\) for \(k=2,\ldots,n\) have a regular terminal condition in \(\mathscr{C}_{p}^{\alpha+\beta}\). 

Next, we define the space of enhanced distributions and afterwards the solution space for solving the generator equation with paracontrolled terminal condition and right-hand side in the rough regime \(\beta\leqslant\frac{1-\alpha}{2}\). For that, we define for a Banach space \(X\) the blow-up space

\[\mathscr{M}_{\dot{\Delta}_{T}}^{\gamma}X=\{g:\dot{\Delta}_{T}\to X\mid\sup_{0\leqslant s<t\leqslant T}(t-s)^{\gamma}\|g(s,t)\|_{X}<\infty\}\]

for the triangle without diagonal \(\dot{\Delta}_{T}:=\{(s,t)\in[0,T]^{2}\mid s<t\}\). Below we take \(g(s,t)=P_{t-s}(\partial_{j}\eta_{t}^{i})\odot\eta_{s}^{j}\) for \(\eta\in C_{T}C_{b}^{\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) and \(i,j\in\{1,...,d\}\). **Definition 4.2** (Enhanced drift).: _Let \(T>0\). 
For \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\) and \(\gamma\in[\frac{2\beta+2\alpha-1}{\alpha},1)\), we define the space of enhanced drifts \(\mathscr{X}^{\beta,\gamma}\) as the closure of_ \[\{(\eta,\mathcal{K}(\eta)):=\big{(}\eta,\big{(}\sum_{i=1}^{d}P_{ \cdot}(\partial_{i}\eta^{j})\odot\eta^{i}\big{)}_{j=1,\ldots,d}\big{)}:\,\eta \in C_{T}C_{b}^{\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\}\] _in \(C_{T}\mathscr{C}_{\mathbb{R}^{d}}^{\beta+(1-\gamma)\alpha}\times\mathscr{M}_{ \Delta_{T}}^{\gamma}\mathscr{C}_{\mathbb{R}^{d+d}}^{2\beta+\alpha-1}\). We say that \(\mathscr{V}\) is a lift or an enhancement of \(V\) if \(\mathscr{V}_{1}=V\) and we also write \(V\in\mathscr{X}^{\beta,\gamma}\) identifying \(V\) with \((\mathscr{V}_{1},\mathscr{V}_{2})\)._ _For \(\beta\in(\frac{1-\alpha}{2},0)\) and \(\gamma\in[\frac{\beta-1}{\alpha},1)\), we set \(\mathscr{X}^{\beta,\gamma}=C_{T}\mathscr{C}^{\beta+(1-\gamma)\alpha}\)._ **Remark 4.3**.: _For \(\mathscr{V}\in\mathscr{X}^{\beta,\gamma}\), we assume on the first component \(\mathscr{V}_{1}\in C_{T}\mathscr{C}^{\beta+\alpha(1-\gamma)}\). We think of \(\gamma\sim 1\), that is \(\gamma<1\), but very close to \(1\). The assumptions on \(\mathscr{V}\) in particular imply by the semigroup estimates, that \(t\mapsto P_{T-t}V_{T}^{i}\in\mathscr{M}_{T}^{\gamma}\mathscr{C}^{\alpha+\beta}\). Furthermore, from \(\sum_{i}P(\partial_{i}V^{j})\odot V^{i}\in\mathscr{M}_{\Delta_{T}}^{\gamma} \mathscr{C}^{2\beta+\alpha-1}\) follows that \(t\mapsto\sum_{i}J^{T}(\partial_{i}V^{j})_{t}\odot V_{t}^{i}=\sum_{i}\int_{t}^ {T}P_{r-t}(\partial_{i}V_{r}^{j})\odot V_{t}^{i}dr\in C_{T}\mathscr{C}^{2\beta +\alpha-1}\). Indeed, as \(\gamma<1\), we can estimate_ \[\sup_{t\in[0,T]}\big{\|}\sum_{i=1}^{d}J^{T}(\partial_{i}V^{j})_{t }\odot V_{t}^{i}\big{\|}_{\alpha+2\beta-1} \leqslant\sup_{t\in[0,T]}\int_{t}^{T}\big{\|}\sum_{i}P_{u-t}( \partial_{i}V_{u}^{j})\odot V_{t}^{i}\big{\|}_{\alpha+2\beta-1}du\] \[\leqslant\big{\|}\sum_{i}P_{\cdot}(\partial_{i}V^{j})\odot V^{i} \big{\|}_{\mathscr{M}_{\Delta_{T}}^{\gamma}\mathscr{C}_{\mathbb{R}^{d}}^{2 \beta+\alpha-1}}\sup_{t\in[0,T]}\int_{t}^{T}(u-t)^{-\gamma}du\] \[\lesssim\big{\|}\sum_{i}P_{\cdot}(\partial_{i}V^{j})\odot V^{i} \big{\|}_{\mathscr{M}_{\Delta_{T}}^{\gamma}\mathscr{C}_{\mathbb{R}^{d}}^{2 \beta+\alpha-1}}\times T^{1-\gamma},\] _using that \(\gamma<1\). Analogously we obtain that \(\sum_{i}J^{T}(\partial_{i}V^{j})\odot V^{i}\in C_{[0,r]}\mathscr{C}^{\alpha+ 2\beta-1}\) with a uniform bound in \(r\in(0,T]\). The assumptions on the enhancement will become handy, as soon as we consider paracontrolled solutions on subintervals of \([0,T]\)._ **Remark 4.4**.: _We assume the lower bound on \(\gamma\) to ensure, that the regularity of \(V\), respectively the regularity of the resonant products \(\sum_{i}J^{T}(\partial_{i}V^{j})_{t}\odot V_{t}^{i}\) are negative. That is, for \(\gamma<(2\beta+2\alpha-1)/\alpha\), we obtain that \(\sum_{i}J^{T}(\partial_{i}V^{j})_{t}\odot V_{t}^{i}\in C_{T}\mathscr{C}^{2 \beta+(2-\gamma)\alpha-1}\) due to \(V\in C_{T}\mathscr{C}^{\beta+(1-\gamma)\alpha}\) with \(2\beta+(2-\gamma)\alpha-1\geqslant 0\). In this case, \(V\) has enough regularity, so that the Kolmogorov PDE can be solved with the classical approach. We exclude this case here, as we explicitly treat the singular case._ **Definition 4.5**.: _Let \(\alpha\in(1,2]\) and \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\). 
Let \(T>0\) and \(V\in\mathscr{X}^{\beta,\gamma^{\prime}}\) for \(\gamma^{\prime}\in[\frac{2\beta+2\alpha-1}{\alpha},1)\) and let \(u^{T,\prime}\in\mathscr{C}_{p}^{\alpha+\beta-1}\). For \(\gamma\in(\gamma^{\prime},\frac{\alpha}{2-\alpha-3\beta}\gamma^{\prime})\) and \(\overline{T}\in(0,T]\), we define the space of paracontrolled distributions \(\mathscr{D}_{\overline{T},T}^{\gamma}=\mathscr{D}_{\overline{T},T}^{\gamma,\gamma^{\prime}}(\mathscr{V},u^{T,\prime})\) as the set of tuples \((u,u^{\prime})\in\mathscr{L}_{\overline{T},T}^{\gamma^{\prime},\alpha+\beta}\times(\mathscr{L}_{\overline{T},T}^{\gamma^{\prime},\alpha+\beta-1})^{d}\) such that_

\[u^{\sharp}:=u-u^{\prime}\vartriangleleft J^{T}(V)-u^{T,\prime}\vartriangleleft P_{T-\cdot}V_{T}\in\mathscr{L}_{\overline{T},T}^{\gamma,2(\alpha+\beta)-1}.\]

_We define a metric on \(\mathscr{D}_{\overline{T},T}^{\gamma}\) by_

\[d_{\mathscr{D}_{\overline{T},T}^{\gamma}}((u,u^{\prime}),(v,v^{\prime})):=\|u-v\|_{\mathscr{D}_{\overline{T},T}^{\gamma}}:=\|u-v\|_{\mathscr{L}_{\overline{T},T}^{\gamma^{\prime},\alpha+\beta}}+\|u^{\prime}-v^{\prime}\|_{(\mathscr{L}_{\overline{T},T}^{\gamma^{\prime},\alpha+\beta-1})^{d}}+\|u^{\sharp}-v^{\sharp}\|_{\mathscr{L}_{\overline{T},T}^{\gamma,2(\alpha+\beta)-1}}.\]

_Then, \((\mathscr{D}_{\overline{T},T}^{\gamma},d_{\mathscr{D}_{\overline{T},T}^{\gamma}})\) is a complete metric space. If moreover \((v,v^{\prime})\in\mathscr{D}_{\overline{T},T}^{\gamma,\gamma^{\prime}}(\mathscr{W},v^{T,\prime})\) for different data \((\mathscr{W},v^{T,\prime})\in\mathscr{X}^{\beta,\gamma^{\prime}}\times\mathscr{C}_{p}^{\alpha+\beta-1}\), then we use the same definition for \(\|u-v\|_{\mathscr{D}_{\overline{T},T}^{\gamma}}\), despite the fact that \((u,u^{\prime})\) and \((v,v^{\prime})\) do not live in the same space._

**Remark 4.6**.: _The intuition behind the paracontrolled ansatz is as follows. Assume for simplicity regular data \((u^{T},f)\in\mathscr{C}^{2(\alpha+\beta)-1}_{p}\times\mathscr{L}^{0,\alpha+2\beta-1}_{T}\). Assume also that we have found a solution \(u\in\mathscr{L}^{0,\alpha+\beta}_{T}\) and that we can make sense of the resonant product \(\nabla u\odot V\) in such a way that it has its natural regularity \(C_{T}\mathscr{C}^{2\beta+\alpha-1}_{p}\), despite the fact that \(2\beta+\alpha-1\leqslant 0\). Then we would get that_

\[u^{\sharp}:=u-\nabla u\vartriangleleft J^{T}(V)=P_{T-\cdot}u^{T}-J^{T}(f)+J^{T}(\nabla u\vartriangleright V)+J^{T}(\nabla u\odot V)+\big(J^{T}(\nabla u\vartriangleleft V)-\nabla u\vartriangleleft J^{T}(V)\big)\]

_is more regular than \(u\). Indeed, by the Schauder estimates for the first four terms and by the commutator estimate from Lemma 3.4, we obtain that \(u^{\sharp}\in\mathscr{L}^{0,2(\alpha+\beta)-1}_{T}\). This explains why the paracontrolled ansatz might be justified. The reason why the ansatz is useful is that it isolates the singular part of \(u\) in a paraproduct, which we can handle by commutator estimates and the assumptions on \(V\)._

Our main theorem of this section is the following. We give its proof after the corollary below.

**Theorem 4.7**.: _Let \(T>0\), \(\alpha\in(1,2]\), \(p\in[1,\infty]\) and \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\) and \(\mathscr{V}\in\mathscr{X}^{\beta,\gamma^{\prime}}\) for \(\gamma^{\prime}\in[\frac{2\beta+2\alpha-1}{\alpha},1)\). 
Let_

\[f=f^{\sharp}+f^{\prime}\vartriangleleft V\]

_for \(f^{\sharp}\in\mathscr{L}^{\gamma^{\prime},\alpha+2\beta-1}_{T}\), \(f^{\prime}\in(\mathscr{L}^{\gamma^{\prime},\alpha+\beta-1}_{T})^{d}\) and_

\[u^{T}=u^{T,\sharp}+u^{T,\prime}\vartriangleleft V_{T}\]

_for \(u^{T,\sharp}\in\mathscr{C}^{(2-\gamma^{\prime})\alpha+2\beta-1}_{p}\), \(u^{T,\prime}\in(\mathscr{C}^{\alpha+\beta-1}_{p})^{d}\). Then for \(\gamma\in(\gamma^{\prime},\frac{\alpha}{2-\alpha-3\beta}\gamma^{\prime})\) there exists a unique mild solution \((u,u^{\prime})\in\mathscr{D}^{\gamma}_{T}(\mathscr{V},u^{T,\prime})\) of the singular Kolmogorov backward PDE_

\[\mathscr{G}^{\mathscr{V}}u=f,\qquad u(T,\cdot)=u^{T}.\]

**Remark 4.8**.: _As \(\mathscr{L}^{\tilde{\gamma},\theta}_{T}\subset\mathscr{L}^{\gamma^{\prime},\theta}_{T}\) and \(\mathscr{C}^{(2-\tilde{\gamma})\alpha+2\beta-1}_{p}\subset\mathscr{C}^{(2-\gamma^{\prime})\alpha+2\beta-1}_{p}\) for \(\tilde{\gamma}\in[0,\gamma^{\prime}]\), we can in particular treat \(f^{\sharp}\in\mathscr{L}^{\tilde{\gamma},\alpha+2\beta-1}_{T}\), \(f^{\prime}\in\mathscr{L}^{\tilde{\gamma},\alpha+\beta-1}_{T}\) and \(u^{T,\sharp}\in\mathscr{C}^{(2-\tilde{\gamma})\alpha+2\beta-1}_{p}\)._

**Remark 4.9**.: _Examples of right-hand sides and terminal conditions which are paracontrolled by \(V\), respectively \(V_{T}\), are the following. Clearly we can take as a right-hand side \(f=V^{i}\), i.e. \(f^{\prime}=e_{i}\) for the \(i\)-th unit vector \(e_{i}\). Another example would be \(f=J^{T}(\nabla V^{i})\cdot V\) for \(i\in\{1,\ldots,d\}\), where \(f^{\sharp}=J^{T}(\nabla V^{i})\odot V+J^{T}(\nabla V^{i})\vartriangleright V\) and \(f^{\prime}=J^{T}(\nabla V^{i})\). Furthermore, as a terminal condition, we can take \(u^{T}=V^{i}_{T}\), i.e. \(u^{T,\prime}=e_{i}\)._

In the case of \(u^{T,\prime}=0\), the terminal condition can still be irregular, but it is such that \(t\mapsto P_{T-t}u^{T}=P_{T-t}u^{T,\sharp}\in\mathscr{M}^{\gamma^{\prime}}_{T}\mathscr{C}^{2(\alpha+\beta)-1}_{p}\). As \(\frac{2\alpha+2\beta-1}{\alpha}\leqslant\gamma^{\prime}\) and thus \((2-\gamma^{\prime})\alpha+2\beta-1\leqslant 0\), another example of a terminal condition that can be treated with our approach is a distribution \(u^{T}=u^{T,\sharp}\in\mathscr{C}^{0}_{p}\). An example would be \(u^{T}=\delta_{0}\in\mathscr{C}^{0}_{1}\), where \(\delta_{0}\) denotes the Dirac measure at \(x=0\). In the case of \(u^{T,\prime}=0\) and \(u^{T,\sharp}\in\mathscr{C}^{2(\alpha+\beta)-1}\), the terminal condition is sufficiently regular that we can prove that the solution of the equation is an element of the solution space without blow-up (provided that \(f\) admits zero blow-up). We define, in the case of \(u^{T,\prime}=0\) and \(u^{T,\sharp}\in\mathscr{C}^{2(\alpha+\beta)-1}\), the paracontrolled solution space as

\[D_{T}:=\mathscr{D}^{0}_{T}=\{(u,u^{\prime})\in\mathscr{L}^{0,\alpha+\beta}_{T}\times(\mathscr{L}^{0,\alpha+\beta-1}_{T})^{d}\mid u^{\sharp}:=u-u^{\prime}\vartriangleleft J^{T}(V)\in\mathscr{L}^{0,2(\alpha+\beta)-1}_{T}\}.\]

**Corollary 4.10** (Regular terminal condition).: _Let \(T>0\), \(\alpha\in(1,2]\), \(p\in[1,\infty]\) and \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\) and \(\mathscr{V}\in\mathscr{X}^{\beta,\gamma^{\prime}}\) for \(\gamma^{\prime}\in[\frac{2\beta+2\alpha-1}{\alpha},1)\). 
Let \(f=f^{\sharp}+f^{\prime}\odot V\) for \(f^{\sharp}\in\mathscr{L}_{T}^{0,\alpha+2\beta-1}\) and \(f^{\prime}\in\mathscr{L}_{T}^{0,\alpha+\beta-1}\) and let \(u^{T}=u^{T,\sharp}\in\mathscr{C}_{p}^{2\alpha+2\beta-1}\) be non-singular._ _Then, there exists a unique mild solution \(u\in D_{T}\) of the generator equation_ \[\mathscr{G}^{\mathscr{V}}u=f,\qquad u(T,\cdot)=u^{T}.\] The proof is deferred to page 23. **Remark 4.11**.: _The proof of the corollary only uses that \(\mathscr{V}=(V,(J^{T}(\partial_{i}V^{j})\odot V^{i})_{i,j})\in C_{T}\mathscr{C }_{\mathbb{R}^{d\times d}}^{\beta}\times C_{T}\mathscr{C}_{\mathbb{R}^{d\times d }}^{2\beta+\alpha-1}\), which is implied by the stronger assumption \(\mathscr{V}\in\mathscr{X}^{\beta,\gamma^{\prime}}\) (cf. Remark 4.3)._ Proof of Theorem 4.7.: Let \(\mathscr{V}\in\mathscr{X}^{\beta,\gamma^{\prime}}\) with \(\mathscr{V}_{1}=V\), \(\mathscr{V}_{2}=(\sum_{i}P.(\partial_{i}V^{j})\odot V^{i})_{j}\) for \(\gamma^{\prime}\in[\frac{2\beta+2\alpha-1}{\alpha},1)\). Let \(\overline{T}\in(0,T]\) to be chosen later and \(\gamma\in(\gamma^{\prime},\frac{\alpha}{2-\alpha-3\beta}\gamma^{\prime})\). Then we define the contraction mapping as \[\phi=\phi^{\overline{T},T}:\mathscr{D}_{\overline{T},T}^{\gamma}\to\mathscr{ D}_{\overline{T},T}^{\gamma},\quad(u,u^{\prime})\mapsto(\psi(u),\nabla u-f^{ \prime}) \tag{4.3}\] for \[\psi(u)(t) =P_{T-t}u^{T}+J^{T}(-f)(t)+J^{T}(\nabla u\cdot\mathscr{V})(t), \quad t\in[T-\overline{T},T]\] \[=P_{T-}u^{T,\sharp}+J^{T}(-f^{\sharp})+J^{T}(\nabla u\odot \mathscr{V})+J^{T}(V\odot\nabla u)\] \[\quad+C_{1}(u^{T,\prime},V_{T})+C_{2}(-f^{\prime},V)+C_{2}( \nabla u,V)\] \[\quad+(\nabla u-f^{\prime})\odot J^{T}(V)+u^{T,\prime}\odot P_{T -}.V_{T}, \tag{4.4}\] where we define \[\nabla u\odot\mathscr{V} =\sum_{i=1}^{d}\partial_{i}u\odot\mathscr{V}^{i}\] \[:=\sum_{i=1}^{d}[u^{\prime}\cdot(J^{T}(\partial_{i}V)\odot V^{i} )+C_{3}(u^{\prime},J^{T}(\partial_{i}V),V^{i})+U^{\sharp}\odot V^{i} \tag{4.5}\] \[\qquad+u^{T,\prime}\odot(P_{T-}.\partial_{i}V_{T}\odot V^{i})+C_ {3}(u^{T,\prime},P_{T-}.\partial_{i}V_{T},V^{i})], \tag{4.6}\] with \(U^{\sharp}:=\partial_{i}u^{\sharp}+\partial_{i}u^{\prime}\odot J^{T}(V)+ \partial_{i}u^{T,\prime}\odot P_{T-}.V_{T}\). The commutators are defined as follows: \[C_{1}(f,g):=P_{T-}.(f\odot g)-f\odot P_{T-}.g,\quad C_{2}(u,v):=J^{T}(u\odot v )-u\odot J^{T}(v),\] where \(C_{1}\) denotes the commutator on the semigroup \(P_{T-}\). and \(C_{2}\) is the commutator from Lemma 3.4. 
Furthermore, \(C_{3}\) denotes the commuator from [12, Lemma 2.4], that is \[C_{3}(f,g,h):=(f\odot g)\odot h-f(g\odot h).\] For the terms in (4.5), we obtain with Remark 4.3, the paraproduct estimates and [12, Lemma 2.4] using that \(3\beta+2\alpha-2>0\) and \(2\beta+\alpha-1\leqslant 0\), \[\big{\|}\sum_{i} \big{(}u^{\prime}\cdot(J^{T}(\partial_{i}V)\odot V^{i})+C_{3}(u^{ \prime},J^{T}(\partial_{i}V),V^{i})+U^{\sharp}\odot V^{i}\big{)}\big{\|}_{ \mathscr{M}_{\overline{T},T}^{\gamma^{\prime}}\mathscr{C}^{\alpha+2\beta-1}}\] \[\lesssim\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma}}(1+\|\mathscr{V }\|_{\mathscr{X}^{\beta,\gamma}})[\|u^{\sharp}\|_{\mathscr{M}_{\overline{T},T}^ {\gamma^{\prime}}\mathscr{C}_{p}^{2(\alpha+\beta)-1}}+\|u^{\prime}\|_{ \mathscr{M}_{\overline{T},T}^{\gamma^{\prime}}(\mathscr{C}_{p}^{\alpha+\beta-1 })^{d}}]\] For the terms in (4.6), we have by the estimate on the paraproduct and the definition of the enhanced distribution space \(\mathscr{X}^{\beta,\gamma^{\prime}}\) \[\big{\|}\!\sum_{i}u^{T,\prime}\vDash(P_{T-}\partial_{i}V_{T}\cap V^ {i})\big{\|}_{\mathscr{A}_{\overline{T},T}^{\gamma^{\prime}}\mathscr{C}_{p}^{ \beta\beta+\alpha-1}} \lesssim\|u^{T,\prime}\|_{(\mathscr{C}_{p}^{\rho+\beta-1})d}\|\! \sum_{i}P_{T-}\partial_{i}V_{T}\odot V^{i}\big{\|}_{\mathscr{A}_{\overline{T},T }^{\gamma^{\prime}}(\mathscr{C}^{2\beta+\alpha-1})^{d}}\] \[\lesssim\|u^{T,\prime}\|_{(\mathscr{C}_{p}^{\rho+\beta-1})d}\| \mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}},\] where we used that \(\alpha+\beta-1>0\). By the commutator estimate for \(C_{3}\) from [15, Lemma 2.4] and the estimates for the semigroup to control \(P_{T-}.\nabla V_{T}\), we obtain \[\|C_{3}(u^{T,\prime},P_{T-}\partial_{i}V_{T},V^{i})\|_{\mathscr{A}_{\overline{ T},T}^{\gamma^{\prime}}\mathscr{C}_{p}^{\beta\beta+2\alpha-2}}\lesssim\|u^{T, \prime}\|_{(\mathscr{C}_{p}^{\alpha+\beta-1})d}\|\mathscr{V}\|_{\mathscr{X}^{ \beta,\gamma^{\prime}}}^{2},\] using again \(2\alpha+3\beta-2>0\) by the assumption on \(\beta\). Define \(\varepsilon:=\alpha-\alpha\frac{\gamma}{\gamma}\). Then it follows that \(\varepsilon\in(0,3\beta+2\alpha-2)\) by the assumption on \(\gamma\). Subtracting \(\varepsilon\) regularity for \(u^{\prime}\) and \(u^{\sharp}\), we can estimate the resonant product along the same lines as above, due to \(3\beta+2\alpha-2-\varepsilon>0\), obtaining \[\|\nabla u\vartriangle\mathscr{V}\|_{\mathscr{A}_{\overline{T},T}^ {\gamma^{\prime}}\mathscr{C}_{p}^{2\beta+\alpha-1}}\] \[\qquad\lesssim\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime} }}(1+\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\big{(}\|u^{\sharp} \|_{\mathscr{A}_{\overline{T},T}^{\gamma^{\prime}}\mathscr{C}_{p}^{2(\alpha+ \beta)-1-\varepsilon}}+\|u^{\prime}\|_{\mathscr{A}_{\overline{T},T}^{\gamma^{ \prime}}(\mathscr{C}_{p}^{\alpha+\beta-1-\varepsilon})d}\big{)}\] \[\qquad\qquad+\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime} }}(1+\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\|u^{T,\prime}\|_{( \mathscr{C}_{p}^{\alpha+\beta-1})d}\] \[\qquad\lesssim\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime} }}(1+\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\|(u,u^{\prime})\|_ {\underline{\mathscr{O}}_{\overline{T},T}^{\gamma}}+\|u^{T,\prime}\|_{( \mathscr{C}_{p}^{\alpha+\beta-1})d}]. 
\tag{4.7}\] In (4.7), we moreover used the interpolation bound (3.13) for the norm of \(u^{\prime}\), that is \[\|u^{\prime}\|_{\mathscr{A}_{\overline{T},T}^{\gamma^{\prime}}\mathscr{C}_{p }^{\alpha+\beta-1-\varepsilon}}=\|u^{\prime}\|_{\mathscr{M}_{\overline{T},T}^{ \gamma(1-\varepsilon/\alpha)}\mathscr{C}_{p}^{\alpha+\beta-1-\varepsilon}} \lesssim\|u^{\prime}\|_{\mathscr{L}_{\overline{T},T}^{\gamma,\alpha+\beta-1}}\] by the definition of \(\varepsilon\), and analogously for \(u^{\sharp}\). For \((u,u^{\prime}),(v,v^{\prime})\in\mathscr{D}_{\overline{T},T}^{\gamma}( \mathscr{V},u^{T,\prime})\), this also implies the Lipschitz bound: \[\|\nabla u\vartriangle\nabla v\odot\mathscr{V}\|_{\mathscr{A}_{ \overline{T},T}^{\gamma^{\prime}}\mathscr{C}_{p}^{2\beta+\alpha-1}}\] \[\qquad\lesssim\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime} }}(1+\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\|(u,u^{\prime})-(v,v^{\prime})\|_{\underline{\mathscr{O}}_{\overline{T},T}^{\gamma}}.\] Next, we show that indeed \(\phi(u,u^{\prime})=(\psi(u),\nabla u-f^{\prime})\in\mathscr{D}_{\overline{T},T}^ {\gamma}\) and that \(\phi\) is a contraction for small enough \(\overline{T}\). Towards the first aim, we note that by (4.4), \[\phi(u,u^{\prime})^{\sharp} =\psi(u)-(\nabla u-f^{\prime})\vDash J^{T}(V)-u^{T,\prime} \vDash P_{T-}.V_{T}\] \[=P_{T-}u^{T,\sharp}+J^{T}(-f^{\sharp})+J^{T}(\nabla u\odot V)+J^ {T}(V\vDash\nabla u)\] \[\qquad+C_{1}(u^{T,\prime},V_{T})+C_{2}(-f^{\prime},V)+C_{2}( \nabla u,V).\] By the Schauder estimates, we obtain \(P_{T-}u^{T,\sharp}+J^{T}(f^{\sharp})\in\mathscr{L}_{\overline{T},T}^{\gamma^{ \prime},2\alpha+2\beta-1}\) and \[\|J^{T}(\nabla u\odot\mathscr{V})+J^{T}(V\vDash\nabla u)\|_{ \mathscr{L}_{\overline{T},T}^{\gamma,2\alpha+2\beta-1}}\] \[\qquad\lesssim\overline{T}^{\gamma-\gamma^{\prime}}[\|\nabla u\odot \mathscr{V}\|_{\mathscr{A}_{\overline{T},T}^{\gamma^{\prime}}\mathscr{C}_{p}^{ \alpha+2\beta-1}}+\|V\vDash\nabla u\|_{\mathscr{A}_{\overline{T},T}^{\gamma^{ \prime}}\mathscr{C}_{p}^{\alpha+2\beta-1}}]\] \[\qquad\lesssim\overline{T}^{\gamma-\gamma^{\prime}}\|\mathscr{V} \|_{\mathscr{X}^{\beta,\gamma^{\prime}}}(1+\|\mathscr{V}\|_{\mathscr{X}^{\beta, \gamma^{\prime}}})[(u,u^{\prime})\|_{\dot{\mathscr{O}}_{\overline{T},T}^{\gamma}}\] \[\qquad\qquad+\overline{T}^{\gamma-\gamma^{\prime}}\|u\|_{\mathscr{L} _{\overline{T},T}^{\gamma^{\prime},\alpha+\beta}}\|V\|_{C_{\mathscr{T}} ^{\mathscr{O}}_{\mathbb{R}^{d}}}\] using the estimate for the resonant product from above. 
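Before turning to the remaining commutator terms, let us record, as a sketch only, where the two conditions \(3\beta+2\alpha-2>0\) and \(2\beta+\alpha-1\leqslant 0\) used above come from. We assume here the standard paraproduct conventions, namely that a resonant product of elements of \(\mathscr{C}^{\sigma}_{p}\) and \(\mathscr{C}^{\varsigma}\) is classically well defined, with regularity \(\sigma+\varsigma\), only if \(\sigma+\varsigma>0\). The upper bound \(\beta\leqslant\frac{1-\alpha}{2}\) from Theorem 4.7 is equivalent to
\[(\alpha+\beta-1)+\beta=\alpha+2\beta-1\leqslant 0,\]
so the resonant product of \(\nabla u\in(\mathscr{C}^{\alpha+\beta-1}_{p})^{d}\) with \(V\in C_{T}\mathscr{C}^{\beta}_{\mathbb{R}^{d}}\) is genuinely singular and the enhanced component \(\mathscr{V}_{2}\) has to be supplied as data. The lower bound \(\beta>\frac{2-2\alpha}{3}\) is equivalent to
\[(2(\alpha+\beta)-2)+\beta=2\alpha+3\beta-2>0,\]
which is exactly what makes the resonant products of the remainder-type term \(U^{\sharp}\) (one derivative below \(2(\alpha+\beta)-1\)) with \(V^{i}\) well defined in the estimate above. As a concrete instance, \(\alpha=1.8\) and \(\beta=-0.45\) lie in the admissible range \((\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]=(-\frac{8}{15},-\frac{2}{5}]\), and then \(\alpha+\beta-1=0.35\), \(\alpha+2\beta-1=-0.1\) and \(2\alpha+3\beta-2=0.25\).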
Utilizing the commutator estimate (Lemma 3.4), we obtain \[\|C_{2}(\nabla u,V)\|_{\mathscr{L}^{\gamma,2(\alpha+\beta)-1}_{\overline{T},T}} \lesssim\overline{T}^{\gamma-\gamma^{\prime}}\|V\|_{C_{T}\mathscr{C}^{\beta}_{ \mathbb{R}^{d}}}\|u\|_{\mathscr{L}^{\gamma^{\prime},\alpha+\beta}_{\overline{T },T}}\lesssim\overline{T}^{\gamma-\gamma^{\prime}}\|V\|_{C_{T}\mathscr{C}^{ \beta}_{\mathbb{R}^{d}}}\|(u,u^{\prime})\|_{\mathscr{L}^{\gamma^{\prime}}_{ \overline{T},T}}.\] By \(V_{T}\in\mathscr{C}^{\beta+(1-\gamma^{\prime})\alpha}_{\mathbb{R}^{d}}\) and \(u^{T_{J}}\in(\mathscr{C}^{\alpha+\beta-1}_{p})^{d}\) and the commutator estimate (2.7) for \(C_{1}\) for \(\vartheta=\gamma^{\prime}\alpha\) and \(\alpha+\beta-1\in(0,1)\) and again Lemma 3.4 for \(C_{2}\), we have that \[\|C_{1}(u^{T,\prime},V_{T})+C_{2}(f^{\prime},V)\|_{\mathscr{L}^{ \gamma^{\prime},2(\alpha+\beta)-1}_{\overline{T},T}}\] \[\qquad\lesssim\|V\|_{C_{T}\mathscr{C}^{\beta+(1-\gamma^{\prime}) \alpha}_{\mathbb{R}^{d}}}(\|u^{T,\prime}\|_{(\mathscr{C}^{\alpha+\beta-1}_{p}) ^{d}}+\|f^{\prime}\|_{\mathscr{L}^{\gamma^{\prime},\alpha+\beta-1}_{T}}).\] Hence, together we obtain \(\phi(u,u^{\prime})^{\sharp}\in\mathscr{L}^{\gamma,2\alpha+2\beta-1}_{\overline {T},T}\). Next, we show that \(\psi(u)\in\mathscr{L}^{\gamma^{\prime},\alpha+\beta}_{\overline{T},T}\). Define \(\gamma^{\prime\prime}:=\gamma^{\prime}(1-\varepsilon_{1}/\alpha)\) for a fixed \(\varepsilon_{1}\in(0,(\alpha+\beta-1)\wedge\frac{(1-\gamma^{\prime})\alpha}{2- \alpha-3\beta})=(0,\frac{(1-\gamma^{\prime})\alpha}{2-\alpha-3\beta})\) and define \(\varepsilon_{2}:=\alpha-\alpha\frac{\gamma^{\prime\prime}}{\gamma}\). Then it follows that \(\varepsilon_{2}\in(0,3\beta+2\alpha-2+(1-\gamma^{\prime})\alpha)\). Using that \(V\in C_{T}\mathscr{C}^{\beta+(1-\gamma^{\prime})\alpha}_{\mathbb{R}^{d}}\) and applying twice the interpolation bound (3.13) (once for \(u\) and once for \(u^{\sharp}\) and \(u^{\prime}\)), an analogue estimate as for the resonant product \(\|J^{T}(\nabla u\odot\mathscr{V})\|_{\mathscr{L}^{\gamma^{\prime},2\alpha+2 \beta-1}_{T}}\) yields that \[\|J^{T}(\nabla u\odot\mathscr{V})\|_{\mathscr{L}^{\gamma^{\prime},\beta+\alpha}_{\overline{T},T}}\] \[\qquad\lesssim\|J^{T}(\nabla u\odot V)\|_{\mathscr{L}^{\gamma^{ \prime},\beta+\alpha}_{\overline{T},T}}+\|J^{T}(\nabla u\odot V+\nabla u\odot \mathscr{V})\|_{\mathscr{L}^{\gamma^{\prime},2\alpha+\beta-1}_{\overline{T},T }}\] \[\qquad\lesssim\overline{T}^{\gamma^{\prime}-\gamma^{\prime\prime}} \|\nabla u\odot V\|_{\mathscr{M}^{\gamma^{\prime\prime}}_{\overline{T},T} \mathscr{C}^{\beta}_{p}}+\|\nabla u\odot V+\nabla u\odot\mathscr{V}\|_{ \mathscr{M}^{\gamma^{\prime\prime}}_{\overline{T},T}\mathscr{C}^{\alpha+2 \beta-1}_{p}}\] \[\qquad\lesssim\overline{T}^{\gamma^{\prime}-\gamma^{\prime\prime} }\|\mathscr{V}\|_{\mathscr{L}^{\beta,\gamma^{\prime}}}(1+\|\mathscr{V}\|_{ \mathscr{L}^{\gamma,\beta^{\prime}}})\] \[\qquad\qquad\quad\times[\|u\|_{\mathscr{L}^{\gamma^{\prime}, \alpha+\beta-1}_{\overline{T},T}\mathscr{C}^{\alpha+\beta-1}_{p}}+\|u^{ \sharp}\|_{\mathscr{M}^{\gamma^{\prime\prime}}_{\overline{T},T}\mathscr{C}^{ \alpha(\alpha+\beta)-1-\varepsilon_{2}}_{p}}+\|u^{\prime}\|_{\mathscr{M}^{ \gamma^{\prime\prime}}_{\overline{T},T}(\mathscr{C}^{\alpha+\beta-1- \varepsilon_{2}}_{p})^{d}}]\] \[\qquad\lesssim\overline{T}^{\gamma^{\prime}-\gamma^{\prime\prime} }\|\mathscr{V}\|_{\mathscr{L}^{\beta,\gamma^{\prime}}}(1+\|\mathscr{V}\|_{ \mathscr{L}^{\beta,\gamma^{\prime}}})\|u\|_{\mathscr{L}^{\gamma^{\prime}, 
\alpha+\beta}_{\overline{T},T}}.\] Thus, we obtain that \[\|\psi(u)\|_{\mathscr{L}^{\gamma^{\prime},\alpha+\beta}_{\overline{T },T}}\] \[\qquad=\|P_{T-}u^{T}+J^{T}(f)+J^{T}(\nabla u\cdot\mathscr{V})\|_{ \mathscr{L}^{\gamma^{\prime},\alpha+\beta}_{\overline{T},T}}\] \[\qquad\leqslant\|P_{T-}u^{T}\|_{\mathscr{L}^{\gamma^{\prime}, \alpha+\beta}_{\overline{T},T}}+\|f\|_{\mathscr{L}^{\gamma^{\prime},\beta}_{ \overline{T},S}}+\|J^{T}(\nabla u\cdot\mathscr{V})\|_{\mathscr{L}^{\gamma^{ \prime},\alpha+\beta}_{\overline{T},T}}\] \[\qquad\lesssim\|u^{T}\sharp\|_{\mathscr{C}^{(2-\gamma^{\prime}) \alpha+2\beta-1}_{p}}+\|u^{T,\prime}\|_{(\mathscr{C}^{\alpha+\beta-1}_{p})^{d}} \|\mathscr{V}\|_{\mathscr{L}^{\beta,\gamma^{\prime}}}+\|C_{1}(u^{T,\prime},V_{T} )\|_{\mathscr{M}^{\gamma^{\prime}}_{T}\mathscr{C}^{2\alpha+2\beta-1}_{p}}\] \[\qquad\qquad+\|f\|_{\mathscr{L}^{\gamma^{\prime},\beta}_{\overline{T }}}+\overline{T}^{\gamma^{\prime}-\gamma^{\prime\prime}}\|\mathscr{V}\|_{ \mathscr{L}^{\beta,\gamma^{\prime}}}(1+\|\mathscr{V}\|_{\mathscr{L}^{\beta, \gamma^{\prime}}})\|u\|_{\mathscr{L}^{\gamma^{\prime},\alpha+\beta}_{ \overline{T},T}},\] which yields in particular \(\psi(u)\in\mathscr{L}^{\gamma^{\prime},\alpha+\beta}_{\overline{T},T}\). The Gubinelli derivative \(\phi(u,u^{\prime})^{\prime}=\nabla u-f^{\prime}\), we estimate as follows \[\|\nabla u -f^{\prime}\|_{(\mathscr{L}^{\gamma,\alpha+\beta-1}_{T,T})^{d}}\] \[\lesssim\|\nabla u\|_{\mathscr{M}^{\gamma}_{T,T}(\mathscr{C}^{ \alpha+\beta-1}_{p})^{d}}+\|\nabla u\|_{C^{1-\gamma}_{T,T}(\mathscr{C}^{\beta -1}_{p})^{d}}+\|\nabla u\|_{C^{\gamma,1}_{T,T}(\mathscr{C}^{\beta-1}_{p})^{d} }+\|f^{\prime}\|_{(\mathscr{L}^{\gamma,\alpha+\beta-1}_{T,T})^{d}}\] \[\lesssim\overline{T}^{\gamma-\gamma^{\prime}}\big{(}\|u\|_{ \mathscr{L}^{\gamma^{\prime},\alpha+\beta}_{T,T}}+\|f^{\prime}\|_{(\mathscr{L} ^{\gamma^{\prime},\alpha+\beta-1}_{T,T})^{d}}\big{)}\] \[\lesssim\overline{T}^{\gamma-\gamma^{\prime}}\big{(}\|u\|_{ \mathscr{L}^{\gamma}_{T,T}}+\|f^{\prime}\|_{(\mathscr{L}^{\gamma^{\prime}, \alpha+\beta-1}_{T,T})^{d}}\big{)},\] where we exploit the fact that \(\gamma-\gamma^{\prime}>0\) to obtain a non-trivial factor depending on \(\overline{T}\). Together with the estimate for \(\psi(u)\) and \(\phi(u,u^{\prime})^{\sharp}\), this yields \(\phi(u,u^{\prime})=(\psi(u),\nabla u-f^{\prime})\in\mathscr{D}^{\gamma}_{ \overline{T},T}\). The contraction property follows using the above estimates for \(\psi(u)\), \(\phi(u,u^{\prime})^{\sharp}\) and \(\phi(u,u^{\prime})^{\prime}\), utilizing linearity of \(\phi\) and \(\psi\) (for \(u^{T}=0,f=0\)), such that \[\|(\psi(u),\nabla u-f^{\prime})-(\psi(v),\nabla v-f^{\prime})\|_{ \mathscr{D}^{\gamma}_{\overline{T},T}}\] \[\qquad=\|\psi(u)-\psi(v)\|_{\mathscr{L}^{\gamma^{\prime},\alpha+ \beta}_{T,T}}+\|\nabla u-\nabla v\|_{(\mathscr{L}^{\gamma,\alpha+\beta-1}_{T,T })^{d}}+\|\phi(u,u^{\prime})^{\sharp}-\phi(v,v^{\prime})^{\sharp}\|_{\mathscr{ L}^{\gamma,2\alpha+2\beta-1}_{T,T}}\] \[\lesssim(\overline{T}^{\gamma-\gamma^{\prime}}\vee\overline{T}^ {\gamma-\gamma^{\prime\prime}})\|\mathscr{V}\|_{\mathscr{L}^{\beta,\gamma^{ \prime}}}(1+\|\mathscr{V}\|_{\mathscr{L}^{\beta,\gamma^{\prime}}})\|(u,u^{ \prime})-(v,v^{\prime})\|_{\mathscr{D}^{\gamma}_{\overline{T},T}}. 
\tag{4.8}\] Now, we can choose \(\overline{T}\) small enough, such that the implicit constant times the factor \((\overline{T}^{\gamma-\gamma^{\prime}}\vee\overline{T}^{\gamma^{\prime}- \gamma^{\prime\prime}})\|\mathscr{V}\|_{\mathscr{L}^{\beta,\gamma^{\prime}}}(1 +\|\mathscr{V}\|_{\mathscr{L}^{\beta,\gamma^{\prime}}})\) is strictly less than \(1\), such that \(\phi=\phi^{\overline{T},T}\) is a contraction on the corresponding space \(\mathscr{D}^{\gamma}_{\overline{T},T}\). It is left to show, that we can obtain a paracontrolled solution in \(\mathscr{D}^{\gamma}_{T}\) on the whole interval \([0,T]\). The solution on \([0,T]\) is obtained by patching the solutions on the subintervals of length \(\overline{T}\) together. Indeed, let inductively \(u^{[T-\overline{T},T]}\) be the solution on the subinterval \([T-\overline{T},T]\) with terminal condition \(u^{T}\) and \(u^{[T-k\overline{T},T-(k-1)\overline{T}]}\) be the solution on \([T-k\overline{T},T-(k-1)\overline{T}]\) with terminal condition \(u^{[T-k\overline{T},T-(k-1)\overline{T}]}_{T-(k-1)\overline{T}}=u^{[T-(k-1) \overline{T},T-(k-2)\overline{T}]}_{T-(k-1)\overline{T}}\) for \(k=2,\ldots,n\) and \(n\in\mathbb{N}\), such that \(T-n\overline{T}\leqslant 0\). There is a small subtlety, as we consider the solution on \([T-k\overline{T},T-(k-1)\overline{T}]\), that is paracontrolled by \(J^{T}(V)\) (and not by \(J^{T-(k-1)\overline{T}}(V)\)). That is, for \(k=2,\ldots,n\), the solution has the paracontrolled structure, \[u^{[T-k\overline{T},T-(k-1)\overline{T}],\sharp}_{t}\] \[\qquad=u^{[T-k\overline{T},T-(k-1)\overline{T}]}_{t}-(\nabla u^{[T -k\overline{T},T-(k-1)\overline{T}]}_{t}-f^{\prime}_{t})\otimes J^{T}(V)_{t}-u ^{T\prime}\otimes P_{T-t}V_{T}\in\mathscr{L}^{\gamma,2(\alpha+\beta)-1}_{ \overline{T},T-(k-1)\overline{T}}\] Notice, that for \(k\geqslant 2\), \(u^{T,\prime}\otimes P_{T-t}V_{T}\in\mathscr{L}^{\gamma,2(\alpha+\beta)-1}_{ \overline{T},T-(k-1)\overline{T}}\), so that term can also be seen as a part of the regular paracontrolled remainder. By assumption we have that \(f^{\sharp}\in\mathscr{L}^{\gamma^{\prime},\alpha+2\beta-1}_{T}\) and \(f^{\prime}\in(\mathscr{L}^{\gamma^{\prime},\alpha+\beta-1}_{T})^{d}\). This implies by (2.12) that \[f^{\sharp}\in\mathscr{M}^{0}_{\overline{T},T-(k-1)\overline{T}} \mathscr{C}^{\alpha+2\beta-1}_{p}\cap C^{1}_{\overline{T},T-(k-1)\overline{T}} \mathscr{C}^{2\beta-1}_{p},\] \[f^{\prime}\in\mathscr{M}^{0}_{\overline{T},T-(k-1)\overline{T}} (\mathscr{C}^{\alpha+\beta-1}_{p})^{d}\cap C^{1}_{\overline{T},T-(k-1) \overline{T}}(\mathscr{C}^{\beta-1}_{p})^{d}\] for \(k=2,\ldots,n\). If \(u^{[T-\overline{T},T]}\) denotes the solution on \([T-\overline{T},T]\), then \(u^{[T-\overline{T},T]}_{T-\overline{T}}\in\mathscr{C}^{\alpha+\beta}_{p}\) and \(u^{[T-\overline{T},T],\sharp}_{T-\overline{T}}\in\mathscr{C}^{2\alpha+2\beta-1}_ {p}\). 
Thus, for the solution on \([T-2\overline{T},T-\overline{T}]\) follows \[u^{[T-2\overline{T},T-\overline{T}],\sharp}_{T-\overline{T}} =u^{[T-2\overline{T},T-\overline{T}]}_{T-\overline{T}}-(\nabla u ^{[T-\overline{T},T-\overline{T}]}_{T-\overline{T}}-f^{\prime}_{T-\overline{T}}) \otimes J^{T}(V)_{T-\overline{T}}-u^{T\prime}\otimes P_{\overline{T}}V\] \[=u^{[T-\overline{T},T],\sharp}_{T-\overline{T}}\in\mathscr{C}^{2 \alpha+2\beta-1}_{p}.\] Because we can trivially bound, \[\sup_{t\in[T-2\overline{T},T-\overline{T}]}\|P_{T-\overline{T}-t}u^{[T-2 \overline{T},T-\overline{T}],\sharp}_{\mathscr{C}^{2(\alpha+\beta)-1}_{p}}\|_{ \mathscr{C}^{2(\alpha+\beta)-1}_{p}}\lesssim\|u^{[T-2\overline{T},T-\overline {T}],\sharp}_{\mathscr{C}^{2(\alpha+\beta)-1}_{p}},\] there is no blow-up for the solution on \([T-2\overline{T},T-\overline{T}]\) at time \(t=T-\overline{T}\). Hence, the Banach fixed point argument for the map \(\phi^{\overline{T},T-\overline{T}}\) yields a solution \(u^{[T-2\overline{T},T-\overline{T}]}\in\mathscr{D}^{\hat{\gamma}}_{\overline{T},T-\overline{T}}\) for any small \(\hat{\gamma}>0\). By plugging the solution back in the fixed point map and using the interpolation estimates (cf. the arguments in the proof of Theorem 4.1 above and Corollary 4.10 below), we obtain that indeed \(u^{[T-2\overline{T},T-\overline{T}]}\in\mathscr{D}^{0}_{\overline{T},T- \overline{T}}\). Proceeding iteratively, we thus obtain solutions \[u^{[T-k\overline{T},T-(k-1)\overline{T}]}\in\mathscr{D}^{0}_{\overline{T},T-(k -1)\overline{T}}\quad\text{ for }\quad k=2,\dots,n\] and \(u^{[T-\overline{T},T]}\in\mathscr{D}^{\gamma}_{\overline{T},T-(k-1)\overline{ T}}\). Then, the solution \(u\), which is patched together on the subintervals \((u_{t}:=u^{[T-k\overline{T},T-(k-1)\overline{T}]}_{t}\) for \(t\in[T-k\overline{T},T-(k-1)\overline{T}]\), \(k=1,\dots,n)\), is indeed a fixed point of the map \(\phi=\phi^{0,T}\) considered on \([0,T]\) and an element of \(\mathscr{D}^{\gamma}_{T}\). Proof of Corollary 4.10.: By assumption, we have that \(u^{T,\prime}=0\) and \(u^{T,\sharp}=u^{T}\in\mathscr{C}^{2(\alpha+\beta)-1}_{p}\) and \(f^{\sharp},f^{\prime}\) have no blow-up. By the assumption on \(\mathscr{V}\), it follows that \(J^{T}(\partial_{i}V^{j})\odot V^{i}\in C_{T}\mathscr{C}^{\alpha+2\beta-1}\) due to \(\gamma^{\prime}\in(0,1)\). Furthermore due to \(u^{T,\prime}=0\) the paraproduct \(u^{T,\prime}\otimes P_{T-}.V_{T}\) in (4.4) vanishes, which previously was the term that introduced a blow-up of at least \(\gamma^{\prime}\) for the solution. Thus, we have that \(P_{T-,u}u^{T}\in C_{T}\mathscr{C}^{2(\alpha+\beta)-1}\). Hence, the arguments from Theorem 4.7 yield a paracontrolled solution \(u\in\mathscr{D}^{\gamma}_{T}\) for any small \(\gamma>0\), i.p. \(u\in\mathscr{L}^{\gamma,\alpha+\beta}_{T}\). It remains to justify that \(u\in D_{T}\). 
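For orientation, we recall (this is merely a restatement of the definition of \(D_{T}\) given before Corollary 4.10) that the remaining claim amounts to verifying the blow-up-free memberships
\[u\in\mathscr{L}_{T}^{0,\alpha+\beta},\qquad u^{\prime}=\nabla u-f^{\prime}\in(\mathscr{L}_{T}^{0,\alpha+\beta-1})^{d},\qquad u^{\sharp}=u-u^{\prime}\odot J^{T}(V)\in\mathscr{L}_{T}^{0,2(\alpha+\beta)-1},\]
where the identity \(u^{\prime}=\nabla u-f^{\prime}\) comes from the fixed point property of the map \(\phi\).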
By the regular terminal condition \(u^{T}\in\mathscr{C}^{2(\alpha+\beta)-1}_{p}\subset\mathscr{C}^{\alpha+\beta}_{p}\) and the interpolation estimate (3.14), we obtain that \[\sup_{t\in[0,T]}\|u_{t}\|_{\mathscr{C}^{\alpha+\beta-\alpha}_{p}} \lesssim\|u\|_{\mathscr{L}^{\gamma,\alpha+\beta}_{T}}+\|u_{T}\|_ {\mathscr{C}^{\alpha+\beta-\alpha}_{p}}\] \[\lesssim\|u\|_{\mathscr{L}^{\gamma,\alpha+\beta}_{T}}+\|u_{T}\|_ {\mathscr{C}^{\alpha+\beta}_{p}}\] and since \(u^{\sharp}_{T}=u_{T}\in\mathscr{C}^{2(\alpha+\beta)-1}_{p}\), \[\sup_{t\in[0,T]}\|u^{\sharp}_{\mathscr{C}^{2(\alpha+\beta)-1- \alpha}_{p}} \lesssim\|u^{\sharp}\|_{\mathscr{L}^{\gamma,2(\alpha+\beta)-1}_{T }}+\|u_{T}\|_{\mathscr{C}^{2(\alpha+\beta)-1-\gamma\alpha}_{p}}\] \[\lesssim\|u^{\sharp}\|_{\mathscr{L}^{\gamma,2(\alpha+\beta)-1}_{T }}+\|u_{T}\|_{\mathscr{C}^{2(\alpha+\beta)-1}_{p}}\] for any small \(\gamma>0\). If \(\gamma\) is small enough, that is \(\gamma\in(0,(3\beta+2\alpha-2)/\alpha)\), we can estimate \[\sup_{t\in[0,T]}\|\nabla u\cdot\mathscr{V}(t)\|_{\beta}\] \[\lesssim\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}(1+ \|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\Big{(}\sup_{t\in[0,T]} \|u_{t}\|_{\mathscr{C}^{\alpha+\beta-\alpha\gamma}_{p}}+\sup_{t\in[0,T]}\|u^{ \sharp}_{t}\|_{\mathscr{C}^{2(\alpha+\beta)-1-\alpha\gamma}_{p}}\Big{)}. \tag{4.9}\] Plugging now the solution \(u\) back in the contraction map using the fixed point, i.e. \(u=P_{T-,u}u^{T}+J^{T}(\nabla u\cdot\mathscr{V})\), and (4.9), we can use the Schauder estimates for \(\gamma=\gamma^{\prime}=0\), such that we obtain that indeed \(u\in\mathscr{L}^{0,\alpha+\beta}_{T}\). By the commutator estimate (3.7) for \(\gamma=\gamma^{\prime}=0\) and \(u\in\mathscr{L}^{0,\alpha+\beta}_{T}\), we then also obtain that \(u^{\sharp}\in\mathscr{L}^{0,2(\alpha+\beta)-1}_{T}\). The next theorem proves the continuity of the solution map. The proof is similar to [12, Theorem 3.8], but adapted to the generalized setting for singular paracontrolled data. There are a few subtleties. First, the space \(\mathscr{D}_{T}^{\gamma}(V,u^{T\prime})\) depends on \(V,u^{T\prime}\). Furthermore due to the blow-up \(\gamma>0\), one cannot simply estimate the norm \(\mathscr{M}_{T}^{\gamma}\mathscr{C}\mathscr{C}^{\theta}\) on the interval \([0,T]\) by the sum of the respective blow-up norms on subintervals of \([0,T]\). In the case of regular terminal condition, that splitting issue does not occure, but we aim for continuity of the solution map in \(\mathscr{L}_{T}^{0,\alpha+\beta}\). This we establish by first proving continuity of the map with values in \(\mathscr{L}_{T}^{\gamma,\alpha+\beta}\) for any small \(\gamma>0\) and conclude from there together with the interpolation estimates. 
**Theorem 4.12**.: _In the setting of Theorem 4.7, the solution map_ \[(u^{T}=u^{T,\sharp}+u^{T,\prime}\owedge V_{T},\,f=f^{\sharp}+f^{\prime}\owedge V,\,\mathscr{V})\mapsto(u,u^{\sharp})\in\mathscr{L}_{T}^{\gamma^{\prime}, \alpha+\beta}\times\mathscr{L}_{T}^{\gamma,2(\alpha+\beta)-1},\] _is locally Lipschitz continuous, that is,_ \[\|u-v\|_{\mathscr{L}_{T}^{\gamma^{\prime},\alpha+\beta}}+\|u^{ \sharp}-v^{\sharp}\|_{\mathscr{L}_{T}^{\gamma,2(\alpha+\beta)-1}}\] \[\qquad\leqslant C[\|u^{T,\sharp}-v^{T,\sharp}\|_{\mathscr{L}_{p} ^{(2-\gamma^{\prime})\alpha+2\beta-1}}+\|u^{T,\prime}-v^{T,\prime}\|_{( \mathscr{C}_{p}^{\alpha+\beta-1})^{d}}\] \[\qquad\qquad+\|f^{\sharp}-g^{\sharp}\|_{\mathscr{L}_{T}^{\gamma^ {\prime},\alpha+2\beta-1}}+\|f^{\prime}-g^{\prime}\|_{(\mathscr{L}_{T}^{ \gamma^{\prime},\alpha+\beta-1})^{d}}+\|\mathscr{V}-\mathscr{W}\|_{\mathscr{ X}^{\beta},\gamma^{\prime}}] \tag{4.10}\] _for a constant \(C=C(T,\|\mathscr{V}\|,\|\mathscr{W}\|,\|u^{T}\|,\|v^{T}\|,\|f\|,\|g\|)>0\). Furthermore, in the setting of Corollary 4.10, the solution map_ \[(u^{T}=u^{T,\sharp},\,f=f^{\sharp}+f^{\prime}\owedge V,\,\mathscr{V})\mapsto( u,u^{\sharp})\in\mathscr{L}_{T}^{0,\alpha+\beta}\times\mathscr{L}_{T}^{0,2( \alpha+\beta)-1},\] _is locally Lipschitz continuous allowing for an analogue bound (4.10) with \(\gamma^{\prime}=0\) for the norms of \(u^{T,\sharp}-v^{T,\sharp},f^{\sharp}-g^{\sharp},f^{\prime}-g^{\prime}\) and \(u^{T,\prime}=v^{T,\prime}=0\)._ Proof.: We first prove the continuity in the case of singular paracontrolled data. Let \(u\) be the solution of the PDE for \(\mathscr{V}\in\mathscr{X}^{\beta,\gamma^{\prime}}\), \(f=f^{\sharp}+f^{\prime}\owedge V\) and \(u^{T}=u^{T,\sharp}+u^{T,\prime}\owedge V_{T}\) and \(v\) the solution corresponding to the data \(\mathscr{W}\), \(g\) and \(v^{T}\). By the fixed point property we have \(\phi(u,u^{\prime})=(u,u^{\prime})\) and \(\phi(v,v^{\prime})=(v,v^{\prime})\) and thus \(u^{\prime}=\nabla u-f^{\prime}\) and \(v^{\prime}=\nabla v-g^{\prime}\). Hence, we can estimate \[\|u^{\prime}-v^{\prime}\|_{(\mathscr{L}_{T}^{\gamma^{\prime}, \alpha+\beta-1})^{d}}\lesssim\|u-v\|_{\mathscr{L}_{T}^{\gamma^{\prime},\alpha+ \beta}}+\|f^{\prime}-g^{\prime}\|_{(\mathscr{L}_{T}^{\gamma^{\prime},\alpha+ \beta-1})^{d}}. \tag{4.11}\] We estimate the terms in (4.10) by itself times a factor less than \(1\), plus a term depending on \(\|f-g\|\), \(\|\mathscr{V}-\mathscr{W}\|\) and \(\|u^{T}-v^{T}\|\). Here we keep in mind that \(u\in\mathscr{D}_{T}^{\gamma}(V,u^{T\prime})\), whereas \(v\in\mathscr{D}_{T}^{\gamma}(W,v^{T,\prime})\), but we explained the notation of \(\|u-v\|_{\mathscr{D}_{T}^{\gamma}}\) in Definition 4.5. 
For that purpose, we estimate the product using re-bracketing like \(ab-cd=a(b-d)+(a-c)d\) and the estimate (4.7) for the product, where \(\gamma^{\prime\prime}<\gamma^{\prime}\),
\[\|\nabla u\cdot\mathscr{V}-\nabla v\cdot\mathscr{W}\|_{\mathscr{M}_{T}^{\gamma^{\prime\prime}}\mathscr{C}_{p}^{\beta}}\]
\[\qquad\lesssim(1+\|\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\|u-v\|_{\mathscr{D}_{T}^{\gamma}}+(1+\|\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\|\mathscr{V}-\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\|v\|_{\mathscr{D}_{T}^{\gamma}}\]
\[\qquad\qquad+\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\|u\|_{\mathscr{D}_{T}^{\gamma}}\|\mathscr{V}-\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\]
\[\qquad\qquad+\tilde{C}(\|\mathscr{V}\|,\|\mathscr{W}\|,\|u^{T,\prime}\|,\|v^{T,\prime}\|)\big{(}\|\mathscr{V}-\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}+\|u^{T,\prime}-v^{T,\prime}\|_{(\mathscr{C}_{p}^{\alpha+\beta-1})^{d}}\big{)}. \tag{4.12}\]
Since the solution \(u\) can be bounded in terms of \(u^{T},f,\mathscr{V}\) by Gronwall's inequality for locally finite measures using that \(\gamma,\gamma^{\prime}\in(0,1)\) (cf. [1, Appendix, Theorem 5.1]), and similarly for \(v\), we conclude that
\[\|\nabla u\cdot\mathscr{V}-\nabla v\cdot\mathscr{W}\|_{\mathscr{M}_{T}^{\gamma^{\prime}}\mathscr{C}^{\beta}_{p}}\lesssim\big{(}(\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}+\|v\|_{\mathscr{D}^{\gamma}_{T}})(1+\|\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})+\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\|u\|_{\mathscr{D}^{\gamma}_{T}}\big{)}\times\]
\[\qquad\big{(}\|\mathscr{V}-\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}+\|u^{T,\prime}-v^{T,\prime}\|_{(\mathscr{C}_{p}^{\alpha+\beta-1})^{d}}+\|u-v\|_{\mathscr{D}_{T}^{\gamma}}\big{)}.\]
Combining this with the estimates for \(\psi(u)-\psi(v)\) and \(\phi(u,u^{\prime})^{\sharp}-\phi(v,v^{\prime})^{\sharp}\) from the proof of Theorem 4.7, together with (4.11), we arrive at an estimate of the form
\[\|u-v\|_{\mathscr{D}_{T}^{\gamma}}\lesssim\|u^{T,\sharp}-v^{T,\sharp}\|_{\mathscr{C}_{p}^{(2-\gamma^{\prime})\alpha+2\beta-1}}+\|u^{T,\prime}-v^{T,\prime}\|_{(\mathscr{C}_{p}^{\alpha+\beta-1})^{d}}+\|f^{\sharp}-g^{\sharp}\|_{\mathscr{L}_{T}^{\gamma^{\prime},\alpha+2\beta-1}}\]
\[\qquad+\|f^{\prime}-g^{\prime}\|_{(\mathscr{L}_{T}^{\gamma^{\prime},\alpha+\beta-1})^{d}}+\|\mathscr{V}-\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}+(T^{\gamma-\gamma^{\prime}}\vee T^{\gamma^{\prime}-\gamma^{\prime\prime}})\|u-v\|_{\mathscr{D}_{T}^{\gamma}}, \tag{4.13}\]
with implicit constants depending on \(\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\), \(\|\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\) and the norms of the data. Assume first that \(T\) is such that \((T^{\gamma-\gamma^{\prime}}\vee T^{\gamma^{\prime}-\gamma^{\prime\prime}})\) times the implicit constant on the right-hand side is \(<1\). Then we can take the last term to the other side and divide by a positive factor, obtaining
\[\|u-v\|_{\gamma,\alpha+\beta}\leqslant C[\|u^{T,\sharp}-v^{T,\sharp}\|_{\mathscr{C}_{p}^{(2-\gamma^{\prime})\alpha+2\beta-1}}+\|u^{T,\prime}-v^{T,\prime}\|_{(\mathscr{C}_{p}^{\alpha+\beta-1})^{d}}\]
\[\qquad\qquad\qquad\qquad+\|f^{\sharp}-g^{\sharp}\|_{\mathscr{L}_{T}^{\gamma^{\prime},\alpha+2\beta-1}}+\|f^{\prime}-g^{\prime}\|_{(\mathscr{L}_{T}^{\gamma^{\prime},\alpha+\beta-1})^{d}}+\|\mathscr{V}-\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}], \tag{4.14}\]
where \(C=C(T,\|\mathscr{V}\|,\|\mathscr{W}\|,\|u^{T}\|,\|v^{T}\|,\|f\|,\|g\|)>0\) is a constant that depends on the norms of the input data. Thus, the map \((u^{T},f,\mathscr{V})\mapsto(u,u^{\sharp})\) is locally Lipschitz continuous, which implies the claim.
If \(T\) is such that \((T^{\gamma-\gamma^{\prime}}\lor T^{\gamma^{\prime}-\gamma^{\prime\prime}})C\) times the implicit constant is at least \(1\), then we want to apply the estimates above on the subintervals \([T-k\overline{T},T-(k-1)\overline{T}]\) of length \(\overline{T}\), where \(\overline{T}\) is chosen, such that \((\overline{T}^{\gamma-\gamma^{\prime}}\lor T^{\gamma^{\prime}-\gamma^{\prime \prime}})C\) times the implicit constant is strictly less than \(1\) and where \(k=1,\ldots,n\) for \(n\in\mathbb{N}\) with \(T-n\overline{T}\leqslant 0\). To obtain the continuity in \(\mathscr{D}_{T}^{\gamma}\), we consider the solutions \(u^{[T-k\overline{T},T-(k-1)\overline{T}]},v^{[T-k\overline{T},T-(k-1) \overline{T}]}\) on the subintervals \([T-k\overline{T},T-(k-1)\overline{T}]\) for \(k=1,\ldots n\), where the terminal condition of the solution \(u^{[T-k\overline{T},T-(k-1)\overline{T}]}\) is the initial value of the solution \(u^{[T-(k-1)\overline{T},T-(k-2)\overline{T}]}\) (analogously for \(v\)), such that, patched together, we obtain the solutions \(u,v\) on \([0,T]\). Let \(\varepsilon>0\) to be chosen below. For \(k=2,\ldots,n\), we have that \(u^{[T-k\overline{T},T-(k-1)\overline{T}]},v^{[T-k\overline{T},T-(k-1) \overline{T}]}\in\mathscr{D}_{T-(k-1)\overline{T}}^{0,\alpha+\beta}\) (see the argument in the proof of Theorem 4.7), such that we can estimate \[\|u-v\|_{\mathscr{M}_{T}^{\gamma^{\prime}}\mathscr{C}_{p}^{\alpha +\beta-\alpha}}\] \[\qquad\leqslant T^{\gamma^{\prime}}\|u-v\|_{\mathscr{M}_{T- \overline{T}}^{0}\mathscr{C}_{p}^{\alpha+\beta-\alpha}}+\|u-v\|_{\mathscr{M}_ {T,T}^{\gamma^{\prime}}\mathscr{C}_{p}^{\alpha+\beta}}\] \[\qquad\leqslant T^{\gamma^{\prime}}\sum_{k=2}^{n}\|u-v\|_{ \mathscr{M}_{T,T-(k-1)\overline{T}}^{0}\mathscr{C}_{p}^{\alpha+\beta-\alpha} }+\|u-v\|_{\mathscr{M}_{T,T}^{\gamma^{\prime}}\mathscr{C}_{p}^{\alpha+\beta}}. \tag{4.15}\] Furthermore, we can estimate for \(\varepsilon\in(0,\gamma^{\prime}]\), \[\|u-v\|_{C_{T}^{1-\gamma^{\prime}}\mathscr{C}_{p}^{\beta}} \leqslant T^{\gamma^{\prime}-\varepsilon}\|u-v\|_{C_{T-T}^{1- \varepsilon}\mathscr{C}_{p}^{\beta}}+\|u-v\|_{C_{\overline{T},T}^{1-\gamma^{ \prime}}\mathscr{C}_{p}^{\beta}}\] \[\leqslant T^{\gamma^{\prime}-\varepsilon}\sum_{k=2}^{n}\|u-v\|_{ \mathscr{L}_{\overline{T},T-(k-1)\overline{T}}^{\varepsilon}}+\|u-v\|_{ \mathscr{L}_{\overline{T},T}^{\gamma^{\prime},\beta+\alpha}}. \tag{4.16}\] Subtracting the terminal condition for each of the terms with \(k=2,\ldots,n\) and applying the interpolation bound (3.14) for \(\theta=\alpha+\beta\), \(\tilde{\theta}=\varepsilon\alpha\) yields for \(k=2,\ldots,n\), \[\|(u-u_{T-(k-1)\overline{T}})-(v-v_{T-(k-1)\overline{T}})\|_{ \mathscr{M}_{T,T-(k-1)\overline{T}}^{0}\mathscr{C}_{p}^{\alpha+\beta- \varepsilon\alpha}}\] \[\qquad\leqslant\|(u-u_{T-(k-1)\overline{T}})-(v-v_{T-(k-1) \overline{T}})\|_{\mathscr{M}_{T,T-(k-1)\overline{T}}^{\varepsilon}\mathscr{C} _{p}^{\alpha+\beta}}. 
\tag{4.17}\] Together with (4.15), (4.16) and (4.17), this then yields \[\|u-v\|_{\mathscr{L}_{T}^{\gamma^{\prime},\alpha+\beta-\varepsilon \alpha}}\] \[\lesssim T^{\gamma^{\prime}}\sum_{k=2}^{n}\bigl{(}\|(u\!-\!u_{T-(k-1 )\overline{T}})\!-\!(v\!-\!v_{T-(k-1)\overline{T}})\|_{\mathscr{L}_{\overline{ T},T-(k-1)\overline{T}}^{0,\alpha+\beta-\varepsilon\alpha}}\!+\!\|u_{T-(k-1) \overline{T}}\!-\!v_{T-(k-1)\overline{T}}\|_{\mathscr{C}_{p}^{\alpha+\beta}} \bigr{)}\] \[\qquad\qquad+\|u-v\|_{\mathscr{L}_{\overline{T},T}^{\prime}}\] \[\lesssim T^{\gamma^{\prime}}\sum_{k=2}^{n}\bigl{(}\|(u\!-\!u_{T-( k-1)\overline{T}})\!-\!(v\!-\!v_{T-(k-1)\overline{T}})\|_{\mathscr{L}_{\overline{ T},T-(k-1)\overline{T}}^{\varepsilon,\alpha+\beta}}\!+\!\|u_{T-(k-1) \overline{T}}\!-\!v_{T-(k-1)\overline{T}}\|_{\mathscr{C}_{p}^{\alpha+\beta}} \bigr{)}\] \[\qquad\qquad+\|u-v\|_{\mathscr{L}_{\overline{T},T}^{\prime}}\] \[\lesssim T^{\gamma^{\prime}}\sum_{k=2}^{n}\!\|u-v\|_{\mathscr{L}_ {\overline{T},T-(k-1)\overline{T}}^{\varepsilon,\alpha+\beta}}+\|u-v\|_{ \mathscr{L}_{\overline{T},T}^{\prime}}, \tag{4.18}\] where in the last estimate, we estimated the norm of the terminal conditions by the norm of the solutions in the previous iteration step. Analogously, we can argue for \(u^{\sharp}-v^{\sharp}\), obtaining \[\|u^{\sharp}-v^{\sharp}\|_{\mathscr{L}_{T}^{\gamma,2(\alpha+ \beta)-1-\varepsilon\alpha}}\] \[\qquad\lesssim T^{\gamma}\sum_{k=2}^{n}\|u^{\sharp}-v^{\sharp}\|_ {\mathscr{L}_{\overline{T},T-(k-1)\overline{T}}^{\varepsilon,2(\alpha+\beta) -1}}+\|u^{\sharp}-v^{\sharp}\|_{\mathscr{L}_{T,T}^{\gamma,2(\alpha+\beta)-1}}. \tag{4.19}\] Now, taking \(\varepsilon:=(\gamma-\gamma^{\prime})\in(0,\gamma^{\prime})\), we can apply the above estimate (4.14) for each of the terms on the right-hand side of the inequalities (4.18) and (4.19). That is, for each of the terms for \(k=2,\ldots,n\), we obtain \[\|u-v\|_{\mathscr{L}_{\overline{T},T-(k-1)\overline{T}}^{\varepsilon,\alpha+\beta}}+\|u^{\sharp}-v^{\sharp}\|_{\mathscr{L}_{\overline{T},T-(k-1) \overline{T}}^{\varepsilon,2(\alpha+\beta)-1}}\] \[\qquad\lesssim\frac{1}{1-\overline{T}^{\varepsilon}\,\|\mathscr{ V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}(1\!+\!\|\mathscr{V}\|_{\mathscr{X}^{ \beta,\gamma^{\prime}}})}\biggl{[}\|u^{T,\sharp}-v^{T,\sharp}\|_{\mathscr{C}_ {p}^{(2-\gamma^{\prime})\alpha+2\beta-1}}+\|u^{T,\prime}-v^{T,\prime}\|_{( \mathscr{C}_{p}^{\alpha+\beta-1})^{d}}\] \[\qquad\qquad+\|f^{\sharp}-g^{\sharp}\|_{\mathscr{L}_{T}^{\gamma^ {\prime},\alpha+2\beta-1}}+\|f^{\prime}-g^{\prime}\|_{(\mathscr{L}_{T}^{ \gamma^{\prime},\alpha+\beta-1})^{d}}+\|\mathscr{V}-\mathscr{W}\|_{\mathscr{ X}^{\beta,\gamma^{\prime}}}\biggr{]}.\] This uses that by the choice of \(\varepsilon\), \(\overline{T}^{\varepsilon}=\overline{T}^{\gamma-\gamma^{\prime}}\leqslant \overline{T}^{\gamma-\gamma^{\prime}}\vee\overline{T}^{\gamma^{\prime}-\gamma ^{\prime\prime}}\) and that \(u,v\in\mathscr{D}_{T-(k-1)\overline{T}}^{0,\alpha+\beta}\) for \(k=2,\ldots,n\). For \(k=1\), we replace \(\varepsilon\) by \(\gamma\), respectively \(\gamma^{\prime}\) for \(u^{\sharp}-v^{\sharp}\), and obtain the estimate (4.14) on the subinterval \([T-\overline{T},T]\). 
Together, this then yields the following estimate on the whole interval \([0,T]\) (with a possibly different constant \(C\)): \[\|u-v\|_{\gamma,\alpha+\beta-\varepsilon\alpha}\leqslant C[\|u^{T,\sharp}-v^{T,\sharp}\|_{\mathscr{C}_{p}^{(2-\gamma^{\prime}) \alpha+2\beta-1}}+\|u^{T,\prime}-v^{T,\prime}\|_{(\mathscr{C}_{p}^{\alpha+\beta- 1})^{d}}\] \[\qquad+\|f^{\sharp}-g^{\sharp}\|_{\mathscr{L}_{T}^{\gamma^{ \prime},\alpha+2\beta-1}}+\|f^{\prime}-g^{\prime}\|_{(\mathscr{L}_{T}^{\gamma^ {\prime},\alpha+\beta-1})^{d}}+\|\mathscr{V}-\mathscr{W}\|_{\mathscr{X}^{ \beta,\gamma^{\prime}}}]. \tag{4.20}\] Plugging now \(u-v\) back in the contraction map on \([0,T]\), we can remove the loss \(\varepsilon\alpha\) in regularity. That is, we can estimate for \(\varepsilon\) small enough, \[\|u-v\|_{\gamma,\alpha+\beta}\lesssim \|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}(1+\|\mathscr{ V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\|u-v\|_{\gamma,\alpha+\beta-\alpha\varepsilon}\] \[+C[\|u^{T,\sharp}-v^{T,\sharp}\|_{\mathscr{C}_{p}^{(2-\gamma^{ \prime})\alpha+2\beta-1}}+\|u^{T,\prime}-v^{T,\prime}\|_{(\mathscr{C}_{p}^{ \alpha+\beta-1})^{d}}\] \[+\|f^{\sharp}-g^{\sharp}\|_{\mathscr{L}_{T}^{\gamma^{\prime}, \alpha+2\beta-1}}+\|f^{\prime}-g^{\prime}\|_{(\mathscr{L}_{T}^{\gamma^{\prime}, \alpha+\beta-1})^{d}}+\|\mathscr{V}-\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{ \prime}}}].\] Thus the local Lipschitz continuity (4.10) on \([0,T]\) follows. In the setting of Corollary 4.10, we obtain from the above, that the Lipschitz estimate (4.10) holds true with \(\gamma^{\prime}=0\) for the norms of \(u^{T,\sharp}-v^{T,\sharp},f^{\sharp}-g^{\sharp},f^{\prime}-g^{\prime}\) and \(u^{T,\prime}=v^{T,\prime}=0\) on the right-hand side and any small \(\gamma>0\) on the left-hand-side of the estimate. 
Similar as in the proof of Corollary 4.10, we can use the fixed point property and the estimate (4.9) for small enough \(\gamma>0\), together with the Schauder estimates and the interpolation bound (3.14), to obtain that \[\|u-v\|_{\mathscr{L}_{T}^{0,\alpha+\beta}}+\|u^{\sharp}-v^{\sharp} \|_{\mathscr{L}_{T}^{0,2(\alpha+\beta)-1}}\] \[\lesssim\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}(1+ \|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\Big{(}\|u-v\|_{ \mathscr{L}_{T}^{0,\alpha+\beta-\gamma\alpha}}+\|u^{\sharp}-v^{\sharp}\|_{ \mathscr{L}_{T}^{0,2(\alpha+\beta)-1-\gamma\alpha}}\Big{)}\] \[\qquad+C[\|u^{T,\sharp}\!-\!v^{T,\sharp}\|_{\mathscr{L}_{P}^{2 \alpha+2\beta-1}}\!+\!\|f^{\sharp}\!-\!g^{\sharp}\|_{\mathscr{L}_{T}^{0,\alpha +2\beta-1}}\!+\!\|f^{\sharp}\!-\!g^{\sharp}\|_{\mathscr{L}_{T}^{0,\alpha+2 \beta-1}}\!+\!\|f^{\prime}\!-\!g^{\prime}\|_{(\mathscr{L}_{T}^{0,\alpha+\beta -1})}\!+\!\|\mathscr{V}\!-\!\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}]\] \[\lesssim\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}(1+ \|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\times\] \[\qquad\qquad\Big{(}\|(u\!-\!v)\!-\!(u^{T,\sharp}\!-\!v^{T,\sharp} )\|_{\mathscr{L}_{T}^{0,\alpha+\beta-\gamma\alpha}}\!+\!\|(u^{\sharp}\!-\!v^{ \sharp})\!-\!(u^{T,\sharp}\!-\!v^{T,\sharp})\|_{\mathscr{L}_{T}^{0,2(\alpha+ \beta)-1-\gamma\alpha}}\Big{)}\] \[\qquad+C[\|u^{T,\sharp}\!-\!v^{T,\sharp}\|_{\mathscr{L}_{P}^{2 \alpha+2\beta-1}}\!+\!\|f^{\sharp}\!-\!g^{\sharp}\|_{\mathscr{L}_{T}^{0, \alpha+2\beta-1}}\!+\!\|f^{\prime}\!-\!g^{\prime}\|_{(\mathscr{L}_{T}^{0, \alpha+\beta-1})}\!+\!\|f^{\prime}\!-\!g^{\prime}\|_{(\mathscr{L}_{T}^{0, \alpha+\beta-1})^{d}}\!+\!\|\mathscr{V}\!-\!\mathscr{W}\|_{\mathscr{X}^{\beta, \gamma^{\prime}}}]\] \[\lesssim\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}(1+ \|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}})\Big{(}\|u-v\|_{ \mathscr{L}_{T}^{0,\alpha+\beta}}+\|u^{\sharp}-v^{\sharp}\|_{\mathscr{L}_{T}^ {0,\gamma,2(\alpha+\beta)-1}}\Big{)}\] \[\quad+C[\|u^{T,\sharp}\!-\!v^{T,\sharp}\|_{\mathscr{L}_{P}^{2 \alpha+2\beta-1}}\!+\!\|f^{\sharp}\!-\!g^{\sharp}\|_{\mathscr{L}_{T}^{0, \alpha+2\beta-1}}\!+\!\|f^{\prime}\!-\!g^{\prime}\|_{(\mathscr{L}_{T}^{0, \alpha+\beta-1})^{d}}\!+\!\|\mathscr{V}\!-\!\mathscr{W}\|_{\mathscr{X}^{\beta, \gamma^{\prime}}}].\] Notice that, to apply the interpolation bound (3.14) in the last estimate above, we subtracted the terminal condition \(u^{T}-v^{T}=u^{T,\sharp}-v^{T,\sharp}\), so that \((u-v)_{T}-(u^{T}-v^{T})=0\). The constant \(C\) above changes in each line. Thus together the Lipschitz continuity of the solution map with values in \(\mathscr{L}_{T}^{0,\alpha+\beta}\times\mathscr{L}_{T}^{0,2(\alpha+\beta)-1}\) follows. **Remark 4.13** (Super-exponential dependency of the Lipschitz constant on \(\mathscr{V},\mathscr{W}\)).: _The Lipschitz constant of the solution map on \([0,T]\) depends super-exponentially on the norms \(\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\), \(\|\mathscr{W}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\). Indeed, to obtain the Lipschitz estimate of the solution map on \([0,T]\), we have to apply the estimate in (4.13) on every subinterval \([T-(k+1)\overline{T},T-k\overline{T}]\), where we have to choose \(\overline{T}\) small enough so that \(\overline{T}^{\kappa}<C^{-1}\) for \(\kappa=\gamma-\gamma^{\prime}\wedge\gamma^{\prime}-\gamma^{\prime\prime}\) and for the constant \(C=C(\|\mathscr{V}\|,\|\mathscr{W}\|,\|u^{T}\|,\|v^{T}\|,\|f\|,\|g\|)\). 
This means that in (4.14) we have to iterate the estimate at least \(T/C^{-\kappa}=TC^{\kappa}\) times, and each time we multiply with the constant \(\tilde{C}\) in (4.14), leading roughly speaking to a factor \(\tilde{C}^{TC^{\kappa}}\). By doing the analysis more carefully we can show that there is (super-)exponential dependence only on \(\|\mathscr{V}\|,\|\mathscr{W}\|\) and that the Lipschitz constant actually depends linearly on \(\|u^{T}\|,\|v^{T}\|,\|f\|,\|g\|\). But the super-exponential dependence on \(\|\mathscr{V}\|,\|\mathscr{W}\|\) is inherent to the problem and we expect that it cannot be significantly improved. By similar arguments, we also see that the norm of the solution \(u\) to the Kolmogorov backward equation in Theorem 4.7 depends super-exponentially on \(\|\mathscr{V}\|\)._ _This will be relevant when we take_ \(\mathscr{V}\) _random, cf. the Brox diffusion with Levy noise in_ _[_10_]__. If we do not have super-exponential moments for_ \(\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{\prime}}}\)_, then we do not know if_ \(u\) _has finite moments. And if_ \(V\) _is Gaussian, then the second component of the lift_ \(\mathscr{V}\) _is a second order polynomial of a Gaussian and therefore it does not have super-exponential moments._ _But note that this only concerns Holder norms of the Kolmogorov backward equation. If we are only interested in the_ \(L^{p}\) _norm,_ \(p\in[1,\infty]\)_, we can always use the trivial bound_ \(\|u\|_{L^{p}}\leq\|u^{T}\|_{L^{p}}+T\|f\|_{L^{p}}\)_, provided that the right-hand side is finite, which holds for smooth_ \(V,f,u^{T}\) _by the stochastic representation (Feynman-Kac) of the Kolmogorov backward equation, and which extends by approximation to the general setting._ Next we consider solutions of the Kolmogorov PDE for \(\mathscr{G}^{\mathscr{V}}\) (for fixed \(\mathscr{V}\)) on subintervals \([0,r]\) of \([0,T]\) for bounded sets of terminal conditions \((y^{r})_{r\in[0,T]}\) and right-hand-sides \((f^{r})_{r\in[0,T]}\). In certain situations one is interested in a uniform bound on the norms of the solutions \((u^{r})\) on \([0,r]\). We prove the latter in the following corollary. The solution \(u^{r}\) on \([0,r]\) has the following paracontrolled structure \[u^{r}=u^{r,\sharp}+(\nabla u^{r}-f^{r,\prime})\otimes J^{r}(V)+y^{r,\prime} \otimes P_{r-}.V_{r} \tag{4.21}\] with \[u^{r,\sharp} =P_{r-}.y^{r,\sharp}+J^{r}(-f^{\sharp})+J^{r}(\nabla u^{r}\odot V )+J^{r}(V\otimes\nabla u^{r})\] \[\quad+C_{1}(y^{r,\prime},V_{r})+C_{2}(-f^{\prime},V)+C_{2}(\nabla u ^{r},V),\] for the commutators from the proof of Theorem 4.7. **Corollary 4.14**.: _Let \(T>0\) and \(\mathscr{V}\in\mathscr{X}^{\beta,\gamma^{\prime}}\) for \(\beta,\gamma^{\prime}\) as in Theorem 4.7. Let \(\gamma\in(\gamma^{\prime},1)\) and \(\gamma^{\prime\prime}\in(0,\gamma^{\prime})\) be as in the proof of Theorem 4.7. 
Let \((y^{r}=y^{r,\sharp}+y^{r,\prime}\otimes V_{r})_{r\in[0,T]}\) be a bounded sequence of singular paracontrolled terminal conditions, that is,_ \[C_{y}:=\sup_{r\in[0,T]}[\|y^{r,\sharp}\|_{\mathscr{C}^{\gamma^{\prime}}_{p} \supseteq\gamma^{\prime})_{\alpha+2\beta-1}}+\|y^{r,\prime}\|_{\mathscr{C}^{ \alpha+\beta-1}_{p}}]<\infty.\] _Let \((f^{r}=f^{r,\sharp}+f^{r,\prime}\otimes V)_{r\in[0,T]}\) be a sequence of right-hand-sides with_ \[C_{f}:=\sup_{r\in[0,T]}[\|f^{r,\sharp}\|_{\mathscr{L}^{\gamma^{\prime},\alpha +2\beta-1}_{r}}+\|f^{r,\prime}\|_{\mathscr{L}^{\gamma^{\prime},\alpha+\beta-1 }_{r}}]<\infty.\] _Let for \(r\in[0,T]\), \((u^{r}_{t})_{t\in[0,r]}\) be the solution of the backward Kolmogorov PDE for \(\mathscr{G}^{\mathscr{V}}\) with terminal condition \(u^{r}_{r}=y^{r}\) and right-hand side \(f^{r}\)._ _Then, the following uniform bound for the solutions \((u^{r})\) holds true_ \[\sup_{r\in[0,T]}[\|u^{r,\sharp}\|_{\mathscr{L}^{\gamma,2(\alpha+ \beta)-1}_{r}}+\|u^{r}\|_{\mathscr{L}^{\gamma^{\prime},\alpha+\beta}_{r}}]\] \[\qquad\qquad\lesssim_{T}\lambda_{\overline{T},\mathscr{V}}^{-1} \bigg{(}\sup_{r\in[0,T]}[\|y^{r,\sharp}\|_{\mathscr{C}^{(2-\gamma^{\prime})_{ \alpha+2\beta-1}}_{p}}+\|f^{r,\sharp}\|_{\mathscr{L}^{\gamma^{\prime},\alpha+2 \beta-1}_{r}}]\] \[\qquad\qquad\qquad+\|\mathscr{V}\|_{\mathscr{X}^{\beta,\gamma^{ \prime}}}\sup_{r\in[0,T]}[\|y^{r,\prime}\|_{\mathscr{C}^{\alpha+\beta-1}_{p}}+ \|f^{r,\prime}\|_{\mathscr{L}^{\gamma^{\prime},\alpha+\beta-1}_{r}}]\bigg{)}, \tag{4.22}\] _where \(\lambda_{\overline{T},\mathscr{V}}:=1-(\overline{T}^{\gamma-\gamma^{\prime}} \vee\overline{T}^{\gamma^{\prime}-\gamma^{\prime\prime}})\|\mathscr{V}\|_{ \mathscr{X}^{\beta,\gamma^{\prime}}}(1+\|\mathscr{V}\|_{\mathscr{X}^{\beta, \gamma^{\prime}}})>0\)._ _In particular, replacing \(y^{r}\) by \(y^{r}_{1}-y^{r}_{2}\) and \(f^{r}\) by \(f^{r}_{1}-f^{r}_{2}\) with analogue bounds, a uniform Lipschitz bound for the solutions \(u^{r}_{1}-u^{r}_{2}\) follows._ _In the setting of Corollary 4.10, the bound (4.22) holds true with \(\gamma=\gamma^{\prime}=0\), under the assumption, that \(C_{f}+C_{y}<\infty\) for \(\gamma^{\prime}=0\)._ **Remark 4.15**.: _In setting of Theorem 4.1 for \(\beta\) in the Young regime and considering bounded sets of terminal conditions \((y^{r})_{r}\subset\mathscr{C}^{(1-\gamma)\alpha+\beta}_{p}\) and right-hand-sides \(f^{r}\subset\mathscr{L}^{\gamma,\beta}_{r}\) for \(\gamma\in[0,1)\), an analogue uniform Lipschitz bound for the solutions \((u^{r})\) on \([0,r]\) holds true. The proof is similar except much easier._ Proof.: The proof follows from Theorem 4.12 replacing \(T\) by \(r\) and considering paracontrolled solutions on \([0,r]\) in the sense of (4.21). 
Then, by (4.12) and (4.13) from the proof of Theorem 4.12 for \(\mathscr{V}=\mathscr{W}\) and splitting the interval \([0,r]\) in subintervals of length \(\overline{T}\), we obtain for every \(r\leqslant T\), \[\|u^{r,\sharp}\|_{\mathscr{L}_{r}^{\gamma,2(\alpha+\beta)-1}}+\|u^{r} \|_{\mathscr{L}_{r}^{\gamma^{\prime},\alpha+\beta}}\] \[\qquad\lesssim_{T}C(r)\lambda_{\overline{T},\mathscr{V}}^{-1} \bigg{(}\sup_{r\in[0,T]}[\|y^{r,\sharp}\|_{\mathscr{C}_{p}^{(2-\gamma^{\prime}) \alpha+2\beta-1}}+\|f^{r,\sharp}\|_{\mathscr{L}_{r}^{\gamma^{\prime},\alpha+2 \beta-1}}]\] \[\qquad\qquad\qquad\qquad+\|\mathscr{V}\|_{\mathscr{X}^{\beta, \gamma^{\prime}}}\sup_{r\in[0,T]}[\|y^{r,\prime}\|_{\mathscr{C}_{p}^{\alpha+ \beta-1}}+\|f^{r,\prime}\|_{\mathscr{L}_{r}^{\gamma^{\prime},\alpha+\beta-1}} ]\bigg{)}.\] The dependence of the constant \(C(r)\) on \(r\leqslant T\) is as follows: \(C(r)\lesssim\frac{r}{T}\leqslant\frac{T}{T}\). Notice that the choice of \(\overline{T}\) only depends on \(\|\mathscr{V}\|\), which is fixed here. Thus we obtain (4.22). As the solution \(u^{r}\) depends linearily on the terminal condition \(y^{r}\) and the right-hand side \(f^{r}\), the uniform Lipschitz bound follows. **Remark 4.16**.: _Let \((V^{m})\) be such that \(\mathscr{V}^{m}:=(V^{m},(\sum_{i}P(\partial_{i}V^{m,j})\odot V^{i})_{j}) \overset{m\to\infty}{\rightarrow}\mathscr{V}\) in \(\mathscr{X}^{\beta,\gamma^{\prime}}\). Let \((f^{r})\), \((y^{r})\) be as in the corollary. Moreover, let \((y^{r,m})\) with \(y^{r,m}=y^{r,\sharp}+y^{r,\prime}\oslash V_{r}^{m}\) be such that \(\sup_{r\in[0,T]}\|y^{r,\sharp}m-y^{r,\sharp}\|_{\mathscr{C}_{p}^{(2-\gamma^{ \prime})\alpha+2\beta-1}}\to 0\) for \(m\to\infty\). Analogously, let \((f^{r,m})\) with \(f^{r,m}=f^{r,\sharp,m}+f^{r,\prime}\oslash V^{m}\) and convergence of \((f^{r,\sharp,m})_{m}\). Let \(u^{r}\) and \(u^{r,m}\) be the solutions for \(\mathscr{G}^{\mathscr{V}}\) with right-hand side \(f^{r}\) and terminal conditions \(y^{r}\) and \(y^{r,m}\), respectively. Then the proof of the corollary furthermore shows that_ \[\sup_{r\in[0,T]}[\|u^{r,\sharp}-u^{r,\sharp,m}\|_{\mathscr{L}_{r} ^{\gamma,2(\alpha+\beta)-1}}+\|u^{r}-u^{r,m}\|_{\mathscr{L}_{r}^{\gamma^{ \prime},\alpha+\beta}}]\] \[\qquad\qquad\lesssim_{T}\lambda_{\overline{T},\mathscr{V}}^{-1} \Big{(}\sup_{r\in[0,T]}[\|y^{r,\sharp}-y^{r,\sharp,m}\|_{\mathscr{C}_{p}^{(2- \gamma^{\prime})\alpha+2\beta-1}}+\|f^{r,\sharp}-f^{r,\sharp,m}\|_{\mathscr{ L}_{r}^{\gamma^{\prime},\alpha+2\beta-1}}]\] \[\qquad\qquad\qquad+\|\mathscr{V}-\mathscr{V}^{m}\|_{\mathscr{X} ^{\beta,\gamma^{\prime}}}\sup_{r\in[0,T]}[\|y^{r,\prime}\|_{\mathscr{C}_{p}^{ \alpha+\beta-1}}+\|f^{r,\prime}\|_{\mathscr{L}_{r}^{\gamma^{\prime},\alpha+ \beta-1}}]\Big{)}\] \[\to 0,\] _for \(m\to\infty\)._ ## Appendix A Appendix Proof of Lemma 2.6.: The proof of the lemma uses ideas from the proof of [15, Lemma 5.3.20]. Let \(\psi=p_{0}\in C_{\infty}^{\infty}\) and let \(j\geqslant 0\) (for \(j=-1\), \(\Delta_{j}(f\otimes g)=0\), so there is nothing to estimate). 
Then we estimate (notation: \(S_{j-1}u:=\sum_{l\leqslant j-1}\Delta_{l}u\)) \[\|\Delta_{j}[(-\mathfrak{L}_{\nu}^{\alpha})(f\otimes g)-f\otimes(- \mathfrak{L}_{\nu}^{\alpha})g]\|_{L^{p}}\] \[\qquad=\bigg{(}\int_{\mathbb{R}^{d}}\biggl{|}\int_{\mathbb{R}^{d} }\mathscr{F}^{-1}(-\psi_{\nu}^{\alpha}p_{j})(x-y)(S_{j-1}f(y)-S_{j-1}f(x)) \Delta_{j}g(y)dy\biggr{|}^{p}dx\biggr{)}^{1/p}\] \[\qquad\lesssim\sum_{|\eta|=1}\|[z\mapsto z^{\eta}\mathscr{F}^{-1} (-\psi_{\nu}^{\alpha}p_{j})(z)]\ast\partial^{\eta}(S_{j-1}f)\|_{L^{p}}\| \Delta_{j}g\|_{L^{\infty}}\] \[\qquad\lesssim\sum_{|\eta|=1}\|z\mapsto z^{\eta}\mathscr{F}^{-1} (-\psi_{\nu}^{\alpha}p_{j})(z)\|_{L^{1}}\|\partial^{\eta}S_{j-1}f\|_{L^{p}} \|\Delta_{j}g\|_{L^{\infty}}\] for a multi-index \(\eta\) and using that \(S_{j-1}f(x)-S_{j-1}f(y)=\int_{0}^{1}DS_{j-1}f(\lambda x+(1-\lambda)y))(x-y)d\lambda\) with \(\lambda x+(1-\lambda)y=(1+\lambda)x-\lambda y-(x-y)\) (and substituting \(y\to x-y\), \(x\to(1+\lambda)x-\lambda y\)) and Young's inequality for the last estimate. We have that, as \(\sigma<1\), \[\|\partial^{\eta}S_{j-1}f\|_{L^{p}}\|\Delta_{j}g\|_{L^{\infty}}\lesssim 2^{-j(\sigma-1+ \varsigma)}\|\partial^{\eta}f\|_{\mathscr{C}_{p}^{\sigma-1}}\|g\|_{\mathscr{C} ^{\varsigma}}\lesssim 2^{-j(\sigma-1+\varsigma)}\|f\|_{\mathscr{C}_{p}^{\sigma}}\|g\|_{ \mathscr{C}^{\varsigma}}.\] Moreover, we obtain \[\|z\mapsto z^{\eta}\mathscr{F}^{-1}(-\psi^{\alpha}_{\nu}p_{j})(z)\|_{L ^{1}} =2^{j\alpha}\|z\mapsto z^{\eta}\mathscr{F}^{-1}(\psi^{\alpha}_{\nu}(2^{-j} \cdot)p_{0}(2^{-j}\cdot))(z)\|_{L^{1}}\] \[=2^{j\alpha}2^{-j}\|\mathscr{F}^{-1}(\partial^{\eta}[\psi^{\alpha }_{\nu}p_{0}](2^{-j}\cdot))\|_{L^{1}}\] \[\lesssim 2^{j(\alpha-1)}\] using that \[\|\mathscr{F}^{-1}(\partial^{\eta}[\psi^{\alpha}_{\nu}p_{0}](2^{-j}\cdot))\|_{L ^{1}}=\|2^{jd}\mathscr{F}^{-1}(\partial^{\eta}[\psi^{\alpha}_{\nu}p_{0}])(2^{- j}\cdot)\|_{L^{1}}=\|\mathscr{F}^{-1}(\partial^{\eta}[\psi^{\alpha}_{\nu}p_{0}])\|_{L ^{1}}<\infty.\] Together we have \[\|\Delta_{j}[(-\mathfrak{L}^{\alpha}_{\nu})(f\vartriangleleft g)-f\vartriangleleft( -\mathfrak{L}^{\alpha}_{\nu})g]\|_{L^{p}}\lesssim 2^{-j(\sigma+\varsigma- \alpha)}\|f\|_{\mathscr{C}^{\sigma}_{p}}\|g\|_{\mathscr{C}^{\varsigma}},\] which yields the claim. Proof of Lemma 2.7.: For \(\vartheta\in[-1,\infty)\), the claim follows from [12, Lemma 5.3.20 and Lemma 5.5.7], applied to \(\varphi(z)=\exp(-\psi^{\alpha}_{\nu}(z))\). Here, [12, Lemma 5.3.20] can be generalized, with the notation from that lemma, to \(u\in\mathscr{C}^{\alpha}_{p}\) for \(p\in[1,\infty]\) arguing analoguously as in the proof of Lemma 2.6. It remains to prove the commutator for \(\vartheta\in[-\alpha,-1)\). 
For that we note that \[P_{t}(u\vartriangleleft v)-u\vartriangleleft P_{t}(v) =(P_{t}-\mathrm{Id})(u\vartriangleleft v)-u\vartriangleleft(P_{t} -\mathrm{Id})v\] \[=\int_{0}^{t}[(-\mathfrak{L}^{\alpha}_{\nu})P_{r}(u\vartriangleleft v )-u\vartriangleleft(-\mathfrak{L}^{\alpha}_{\nu})P_{r}v]dr.\] For the operator \((-\mathfrak{L}^{\alpha}_{\nu})P_{r}\) we have by Lemma 2.8 (whose claim follows from (2.9) for \(\vartheta\geqslant 0\) and Lemma 2.6), that for \(\theta\geqslant 0\) (uniformly in \(r\in[0,t]\)) \[\|(-\mathfrak{L}^{\alpha}_{\nu})P_{r}(u\vartriangleleft v)-u\vartriangleleft(- \mathfrak{L}^{\alpha}_{\nu})P_{r}v\|_{\mathscr{C}^{\alpha+\varsigma-\alpha+ \vartheta}_{p}}\lesssim r^{-\theta/\alpha}\|u\|_{\mathscr{C}^{\alpha}_{p}}\|v \|_{\mathscr{C}^{\varsigma}}\] holds true and thus we obtain (taking \(\theta=\vartheta+\alpha\geqslant 0\)) \[\|P_{t}(u\vartriangleleft v)-u\vartriangleleft P_{t}(v)\|_{ \mathscr{C}^{\varsigma+\sigma+\vartheta}_{p}}\] \[\quad\leqslant\int_{0}^{t}\|(-\mathfrak{L}^{\alpha}_{\nu})P_{r}(u \vartriangleleft v)-u\vartriangleleft(-\mathfrak{L}^{\alpha}_{\nu})P_{r}v\|_{ \mathscr{C}^{(\varsigma+\sigma-\alpha)+(\vartheta+\alpha)}_{p}}dr\] \[\quad\lesssim\|u\|_{\mathscr{C}^{\alpha}_{p}}\|v\|_{\mathscr{C} ^{\varsigma}}\int_{0}^{t}r^{-(\vartheta+\alpha)/\alpha}dr\lesssim t^{- \vartheta/\alpha}\|u\|_{\mathscr{C}^{\alpha}_{p}}\|v\|_{\mathscr{C}^{ \varsigma}},\] where the last two estimates are valid for \(\vartheta\in[-\alpha,0)\). Proof of Lemma 2.8.: We have that \[(-\mathfrak{L}^{\alpha}_{\nu})P_{t}(u\vartriangleleft v)-u\vartriangleleft(- \mathfrak{L}^{\alpha}_{\nu})P_{t}v =(-\mathfrak{L}^{\alpha}_{\nu})\big{(}P_{t}(u\vartriangleleft v)-u \vartriangleleft P_{t}v\big{)}\] \[\quad+(-\mathfrak{L}^{\alpha}_{\nu})(u\vartriangleleft P_{t}v)-u \vartriangleleft(-\mathfrak{L}^{\alpha}_{\nu})P_{t}v.\] The first summand, we estimate by the commutator for \((P_{t})\) from Lemma 2.7, and continuity of the operator \((-\mathfrak{L}^{\alpha}_{\nu})\) from Proposition 2.4, which gives \[\|(-\mathfrak{L}^{\alpha}_{\nu})\big{(}P_{t}(u\vartriangleleft v)- u\vartriangleleft P_{t}v\big{)}\|_{\mathscr{C}^{\sigma+\varsigma+\vartheta- \alpha}_{p}} \lesssim\|P_{t}(u\vartriangleleft v)-u\vartriangleleft P_{t}v\|_{ \mathscr{C}^{\sigma+\varsigma+\vartheta}_{p}}\] \[\lesssim t^{-\vartheta/\alpha}\|u\|_{\mathscr{C}^{\alpha}_{p}}\|v \|_{\mathscr{C}^{\varsigma}}.\] The second summand follows from the commutator for \((-\mathfrak{L}_{\nu}^{\alpha})\). If \(\alpha=2\), then the estimate is immediate due to Leibnitz rule, \(\sigma<1\) and Schauder estimates for \(P_{t}\) as \(\theta\geqslant 0\). If \(\alpha\in(1,2)\), then we apply Lemma 2.6 with \(f=u\) and \(g=P_{t}v\) and use the Schauder estimates with \(\theta\geqslant 0\), Lemma 2.5, to obtain \[\|(-\mathfrak{L}_{\nu}^{\alpha})(u\otimes P_{t}v)-u\otimes(-\mathfrak{L}_{\nu }^{\alpha})P_{t}v\|_{\mathscr{C}_{p}^{\sigma+\varsigma-\alpha+\vartheta}} \lesssim\|u\|_{\mathscr{C}_{p}^{\sigma}}\|P_{t}v\|_{\mathscr{C}^{\varsigma+ \theta}}\lesssim t^{-\theta/\alpha}\|u\|_{\mathscr{C}_{p}^{\sigma}}\|v\|_{ \mathscr{C}^{\varsigma}}.\] Altogether, we obtain the desired bound. Proof of Lemma 3.1.: The proof of the lemma uses the ideas from the proof of [15, Lemma A.9]. Let \(\delta\in(0,\frac{T-t}{2})\) to be chosen later. 
Then we have that for \(j\geqslant-1\) \[\Delta_{j}\int_{t}^{T}f_{t,r}dr=\int_{t}^{T}\Delta_{j}f_{t,r}dr=\int_{t+\delta }^{T}\Delta_{j}f_{t,r}dr+\int_{t}^{t+\delta}\Delta_{j}f_{t,r}dr.\] The first summand we estimate as follows, using Minkowski's inequality, \[\left\|\int_{t+\delta}^{T}\Delta_{j}f_{t,r}dr\right\|_{L^{p}} \leqslant\int_{t+\delta}^{T}\|\Delta_{j}f_{t,r}\|_{L^{p}}dr\] \[\leqslant C2^{-j(\sigma+\varsigma+\varsigma\varsigma)}\int_{t+ \delta}^{T}(T-r)^{-\gamma}(r-t)^{-(1+\varepsilon)}dr\] \[=C2^{-j(\sigma+\varsigma+\varsigma\varsigma)}(T-t)^{-\gamma- \varepsilon}\int_{\delta/(T-t)}^{1}(1-r)^{-\gamma}r^{-(1+\varepsilon)}dr\] \[\leqslant[2\max(\varepsilon^{-1},(1-\gamma)^{-1})C]2^{-j(\sigma +\varsigma)}(T-t)^{-\gamma-\varepsilon}(\delta/(T-t))^{-\varepsilon}\] \[=[2\max(\varepsilon^{-1},(1-\gamma)^{-1})C]2^{-j(\sigma+ \varsigma)}(T-t)^{-\gamma}(2^{j\varsigma}\delta)^{-\varepsilon},\] where we used that for \(\sigma\in(0,\frac{1}{2})\), as \(\varepsilon>0\) and \(\gamma<1\), \[\int_{\sigma}^{1}(1-r)^{-\gamma}r^{-(1+\varepsilon)}dr =\int_{\sigma}^{1/2}(1-r)^{-\gamma}r^{-(1+\varepsilon)}dr+\int_{ 1/2}^{1}(1-r)^{-\gamma}r^{-(1+\varepsilon)}dr\] \[\leqslant[(\frac{1}{2})^{-\gamma}\varepsilon^{-1}+(\frac{1}{2})^ {-\gamma}(1-\gamma)^{-1}]\sigma^{-\varepsilon}\leqslant 2\max(\varepsilon^{-1},(1- \gamma)^{-1})\sigma^{-\varepsilon}.\] For the second summand, we have \[\left\|\int_{t}^{t+\delta}\Delta_{j}f_{t,r}dr\right\|_{L^{p}} \leqslant C2^{-j\sigma}\int_{t}^{t+\delta}(T-r)^{-\gamma}dr\] \[=\frac{C}{1-\gamma}2^{-j\sigma}[(T-t)^{1-\gamma}-(T-t-\delta)^{1- \gamma}]\] \[=\frac{C}{1-\gamma}2^{-j\sigma}(T-t)^{-\gamma}[(T-t)-(T-t-\delta) \big{(}\frac{T-t}{T-t-\delta}\big{)}^{\gamma}]\] \[\leqslant\frac{C}{1-\gamma}2^{-j\sigma}(T-t)^{-\gamma}\delta.\] The goal is to estimate \(\sup_{j\geqslant-1}2^{j(\sigma+\varsigma)}\big{\|}\Delta_{j}\int_{t}^{T}f_{t, r}dr\big{\|}_{L^{p}}\). For that purpose, we use for \(j\) such that \(2^{-j\varsigma}\leqslant\frac{T-t}{2}\) the above estimates for \(\delta=2^{-j\varsigma}\). If \(j\) is such that \(2^{-j\varsigma}>\frac{T-t}{2}\), then we trivially estimate \[\left\|\Delta_{j}\int_{t}^{T}f_{t,r}dr\right\|_{L^{p}}\leqslant C2 ^{-j\sigma}\int_{t}^{T}(T-r)^{-\gamma}dr =\frac{C}{1-\gamma}2^{-j\sigma}(T-t)^{1-\gamma}\] \[\leqslant\frac{C}{1-\gamma}2^{-j(\sigma+\varsigma)}(T-t)^{-\gamma}\] using \(\gamma<1\). Together we thus obtain uniformly in \(t\in[0,T]\) \[\sup_{j\geqslant-1}2^{j(\sigma+\varsigma)}\left\|\Delta_{j}\int_{t}^{T}f_{t,r}dr \right\|_{L^{p}}\leqslant[2\max(\varepsilon^{-1},(1-\gamma)^{-1})C](T-t)^{- \gamma},\] which yields the claim. ## Acknowledgements H.K. is supported by the Austrian Science Fund (FWF) Stand-Alone programme P 34992. Part of the work was done when H.K. was employed at Freie Universitat Berlin and funded by the DFG under Germany's Excellence Strategy - The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689). N.P. gratefully acknowledges financial support by the DFG via Research Unit FOR2402.
2309.14055
Simultaneous High-Speed and Low-Dose 4-D STEM Using Compressive Sensing Techniques
Here we show that compressive sensing allows 4-dimensional (4-D) STEM data to be obtained and accurately reconstructed at both high speed and low fluence. The methodology needed to achieve these results compared to conventional 4-D approaches requires only that a random subset of probe locations is acquired from the typical regular scanning grid, which immediately yields both higher speed and lower fluence experimentally. We also consider downsampling of the detector, showing that oversampling is inherent within convergent beam electron diffraction (CBED) patterns, and that detector downsampling does not reduce precision but allows faster experimental data acquisition. Analysis of an experimental atomic resolution yttrium silicide data-set shows that it is possible to recover over 25dB peak signal-to-noise in the recovered phase using 0.3% of the total data.
Alex W. Robinson, Amirafshar Moshtaghpour, Jack Wells, Daniel Nicholls, Miaofang Chi, Ian MacLaren, Angus I. Kirkland, Nigel D. Browning
2023-09-25T11:40:20Z
http://arxiv.org/abs/2309.14055v3
# Simultaneous High-Speed and Low-Dose 4-D STEM Using Compressive Sensing Techniques ###### Abstract Here we show that compressive sensing allows 4-dimensional (4-D) STEM data to be obtained and accurately reconstructed at both high speed and low fluence. The methodology needed to achieve these results compared to conventional 4-D approaches requires only that a random subset of probe locations is acquired from the typical regular scanning grid, which immediately yields both higher speed and lower fluence experimentally. We also consider downsampling of the detector, showing that oversampling is inherent within convergent beam electron diffraction (CBED) patterns, and that detector downsampling does not reduce precision but allows faster experimental data acquisition. Analysis of an experimental atomic resolution yttrium silicide data-set shows that it is possible to recover over 25dB peak signal-to-noise in the recovered phase using 0.3% of the total data. ## I Introduction The goal of this study is to demonstrate that the application of a compressed acquisition methodology can improve the speed and reduce the fluence associated with 4-dimensional (4-D) scanning transmission electron microscopy (STEM). In this imaging mode a series of diffraction patterns for each probe position in a 2D grid is recorded in the far field on a 2D pixelated detector (Fig. 1) [1]. Subsequently a variety of signals can be extracted by suitable geometric integration of regions at the detector. Prior to the widespread use of aberration correction, Nellist _et al._ demonstrated one of the earliest cases of 4-D STEM where coherent micro-diffraction patterns were collected as a function of probe position and used for a super-resolved ptychographic reconstruction [2]. This allowed the resolution of the Si \(\{004\}\) at \(0.136\)nm; a much higher spatial resolution than was achievable using high-angle annular dark field (HAADF) STEM on the instrument used. Another early demonstration by Zaluzec _et al._ used position resolved diffraction to image distributions of magnetic induction in a Lorentz STEM imaging mode [3, 4]. 4-D STEM has progressed significantly since these early demonstrations, with more recent applications in ptychography used to recover the complex object wavefunction of weakly scattering objects, such as lithium ion cathode materials [5] and biological samples [6]. STEM ptychography has also been used to resolve praseodymium dumbbells at the limit set by thermal atomic motion [7]. 4-D STEM has become popular due to its versatility by way of multi-modal imaging using virtual detectors (VDs) [1], differential phase contrast (DPC) [8], centre of mass (CoM) analysis [9], and ptychography [10, 11, 12, 13, 14]. A major limitation in the application of 4-D STEM has been the need for long integration times to achieve a significant signal-to-noise ratio (SNR) in the presence of noise and dark current. Although most commercially available direct electron detectors that operate in counting mode have effective frame rates of less than ten kHz, there have been recently announced direct electron detectors [15, 16, 17, 18] operating at between 100kHz and 1MHz, albeit with small pixel array sizes. Using these detectors CBED patterns can be acquired with little or no noise at an effective dwell time of 10\(\upmu\)s per probe position [18, 19]. 
While these are significant improvements over earlier indirect scintillator coupled detectors operating at fewer than 30 fps [20, 21], it remains the case that only the most recent detectors match the dwell time of traditional solid state monolithic STEM detectors. Importantly, our approach can also be used with slower large pixel array detectors to provide the required matching speed increase. Hence, 4-D STEM experiments remain susceptible to drift and beam induced damage [22], which potentially limits their applicability to studies of beam sensitive organic and hybrid materials or to investigations of materials dynamics. One option to overcome beam damage is to reduce the electron fluence at the sample [23, 24]. By reducing the fluence below a materials dependent threshold [25], or by using cryogenic temperatures [6], beam damage can be reduced. Furthermore, if combined with alternative methods to increase acquisition speeds such as low bit-depth electron counting [26, 27], the acquisition speed can be increased and sample drift can be reduced. However, given that the SNR is related to the number of detected electrons, and hence to the fluence per probe position, a combination of low fluence and fast acquisition quickly transitions the experiment to conditions that are below the minimum signal-to-noise requirements for 4-D methods such as ptychography [28]. An alternative method to overcome beam damage (as well as to increase the effective frame rate of an existing detector) in STEM is by using techniques based on the theory of compressive sensing (CS) [29, 30], which is referred to here as probe sub-sampling. Probe sub-sampling in this context refers to controlling the set of positions the STEM probe visits within a raster scan to reduce the number of acquisition points, thereby directly creating a faster scan and a lower fluence and flux at the sample. Probe sub-sampling has already been experimentally demonstrated for a variety of experimental STEM and SEM imaging modes [31, 32, 33, 34, 35, 36, 37, 38, 39], and has also been used to speed up the computational time for STEM simulations [40, 41, 42]. The key benefit of probe sub-sampling in STEM is that by acquiring less data, acquisition rates can be increased, which in turn reduces drift artefacts as well as reducing the total cumulative electron fluence over the entire field of view. Thus, samples which are susceptible to beam damage can be imaged at usable SNRs, without overexposure to the incident beam. Although the dose at any acquired probe location is independent of the scan pattern, work by Nicholls _et al._ [35] has shown that the diffusion of radicals due to beam interactions at neighbouring probe locations compounds the damage of samples. By taking larger steps in a random fashion, this cumulative dose can be reduced since radicals are not propagated between successive probe locations. In this paper, we will demonstrate a focused probe acquisition method which reduces beam damage and increases acquisition rate by probe sub-sampling. We acquire only a subset of the CBED patterns and use a Bayesian dictionary learning technique known as Beta Process Factor Analysis (BPFA) to recover the full 4-D STEM data-set from the sub-sampled measurements. The BPFA has been shown to be a robust inpainting algorithm for data containing complex structures such as defects [42], and further evidence is given in the Supplemental Material. 
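As a concrete illustration of probe sub-sampling, the following minimal Python sketch draws a uniformly random subset of probe positions from a regular scan grid; the grid size, the sampling ratio and the uniform random selection are illustrative assumptions rather than the acquisition code used in this work.

```python
import numpy as np

def random_probe_subset(H_p, W_p, sampling_ratio, seed=0):
    """Pick a uniformly random subset of probe positions from an H_p x W_p scan grid.

    Returns a boolean mask of shape (H_p, W_p) that is True at visited positions.
    """
    rng = np.random.default_rng(seed)
    n_total = H_p * W_p
    n_keep = int(round(sampling_ratio * n_total))
    flat_idx = rng.choice(n_total, size=n_keep, replace=False)
    mask = np.zeros(n_total, dtype=bool)
    mask[flat_idx] = True
    return mask.reshape(H_p, W_p)

# Example: 6.25% of a 256 x 256 scan grid is visited, i.e. a 16x reduction
# in the number of CBED patterns that must be acquired.
mask = random_probe_subset(256, 256, 0.0625)
print(mask.sum(), "of", mask.size, "probe positions acquired")
```

Because only the selected positions are visited, the number of CBED patterns acquired, and with it the cumulative fluence over the field of view, scales directly with the sampling ratio.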
We describe simulations of this method applied to a 4-D STEM data-set of yttrium silicide, and demonstrate that 4-D STEM data acquisition can be reduced by at least \(256\times\) without significant quality loss in all imaging modes. Previous work by Stevens _et al._ [34] demonstrated that probe sub-sampling and detector sub-sampling can be employed and that, by inpainting followed by phase retrieval, one can recover functionally identical1 results to a fully sampled experiment. In that work the inpainting of the 4-D data used a Kruskal-factor analysis technique [43]. We extend this approach by using a new implementation of the BPFA algorithm which takes advantage of GPU acceleration. We will also build on the work of Zhang _et al._ [44] who showed that the number of detector pixels required for ptychographic reconstruction can be reduced significantly without loss of resolution. Footnote 1: Functionally identical results are defined as the preservation of features compared to the ground truth, such that the analysis is preserved in determining properties of the sample. ## II Proposed method for sub-sampled 4-D STEM The experimental set-up for the acquisition of a sub-sampled data-set is shown in Fig. 1. We assume a pixelated detector with \(H_{\rm d}\) and \(W_{\rm d}\) pixels in the vertical and horizontal axis, respectively, collecting 2-D CBED patterns of size \(H_{\rm d}\times W_{\rm d}\). Let \(\Omega_{\rm d}\coloneqq\{1,\cdots,H_{\rm d}\}\times\{1,\cdots,W_{\rm d}\}\) be the set of all detector pixel locations and \(\boldsymbol{k}_{\rm d}\coloneqq(k_{\rm d}^{\rm h},k_{\rm d}^{\rm w})\in\Omega_{\rm d}\) denote the coordinates of a detector pixel. We further assume an electron probe scanning a regular grid of \(H_{\rm p}\) and \(W_{\rm p}\) locations in the vertical and horizontal axis, respectively2, collected in a probe locations set \(\Omega_{\rm p}\coloneqq\{1,\cdots,H_{\rm p}\}\times\{1,\cdots,W_{\rm p}\}\). Let \(\boldsymbol{r}_{\rm p}\coloneqq(r_{\rm p}^{\rm h},r_{\rm p}^{\rm w})\in\Omega_{\rm p}\) denote the coordinates of a probe location. Moreover, the total number of probe locations and detector pixels are denoted by \(N_{\rm p}=H_{\rm p}W_{\rm p}\) and \(N_{\rm d}=H_{\rm d}W_{\rm d}\), respectively. Finally, given a scan step parameter \(\Delta_{\rm p}\), in m, of the electron probe and detector pixel size \(\Delta_{\rm d}\), in mrad, the location of the scanning probe and detector pixel can be converted from their index units to real units. Footnote 2: Note that the coordinate axes of the pixelated detector and scanning probe are not necessarily the same. Let \(\mathcal{X}\in\mathbb{R}^{H_{\rm p}\times W_{\rm p}\times H_{\rm d}\times W_{\rm d}}\) be the discretised 4-D representation of fully sampled 4-D STEM data; and \(\mathcal{X}(\boldsymbol{r}_{\rm p},\boldsymbol{k}_{\rm d})\) be the 4-D STEM data observed at probe location \(\boldsymbol{r}_{\rm p}\) and detector pixel \(\boldsymbol{k}_{\rm d}\). A _CBED pattern_ collected at probe location \(\boldsymbol{r}_{\rm p}\) is denoted by \(\boldsymbol{X}_{\boldsymbol{r}_{\rm p}}^{\rm cbed}\coloneqq\mathcal{X}(\boldsymbol{r}_{\rm p},\cdot)\in\mathbb{R}^{H_{\rm d}\times W_{\rm d}}\). In this paper, the _virtual image_ corresponding to a detector pixel \(\boldsymbol{k}_{\rm d}\), represented as \(\boldsymbol{X}_{\boldsymbol{k}_{\rm d}}^{\rm vi}\coloneqq\mathcal{X}(\cdot,\boldsymbol{k}_{\rm d})\in\mathbb{R}^{H_{\rm p}\times W_{\rm p}}\), refers to a matrix collecting the data observed at detector pixel \(\boldsymbol{k}_{\rm d}\) for all probe positions. 
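To make the notation concrete: a CBED pattern \(\mathcal{X}(\boldsymbol{r}_{\rm p},\cdot)\) is a slice over the detector axes at a fixed probe position, while a virtual image \(\mathcal{X}(\cdot,\boldsymbol{k}_{\rm d})\) is a slice over the probe axes at a fixed detector pixel. A minimal sketch, assuming the fully sampled data is held in a NumPy array with axes ordered \((H_{\rm p},W_{\rm p},H_{\rm d},W_{\rm d})\); the array sizes below are illustrative only:

```python
import numpy as np

H_p, W_p, H_d, W_d = 32, 32, 64, 64
X = np.zeros((H_p, W_p, H_d, W_d), dtype=np.float32)  # fully sampled 4-D STEM data

r_p = (10, 20)   # probe location (row, column) in the scan grid
k_d = (32, 32)   # detector pixel (row, column)

cbed = X[r_p[0], r_p[1]]                 # CBED pattern at r_p, shape (H_d, W_d)
virtual_image = X[:, :, k_d[0], k_d[1]]  # virtual image at k_d, shape (H_p, W_p)

print(cbed.shape, virtual_image.shape)
```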
We achieve our compressed 4-D STEM by sub-sampling \(M_{\rm p}\ll N_{\rm p}\) probe locations acquired in the sub-sampling set \(\Omega\subset\Omega_{\rm p}\), which is equivalent to sub-sampling each of the virtual images (sharing a common mask determined by \(\Omega\)). This defines our acquisition model as, \[\boldsymbol{Y}_{\boldsymbol{k}_{\rm d}}^{\rm vl}=\boldsymbol{P}_{\Omega}( \boldsymbol{X}_{\boldsymbol{k}_{\rm d}}^{\rm vl})+\boldsymbol{N}_{\boldsymbol{k }_{\rm d}}\in\mathbb{R}^{H_{\rm p}\times W_{\rm p}},\quad\text{for }\boldsymbol{k}_{\rm d}\in\Omega_{\rm d}, \tag{1}\] where \(\boldsymbol{Y}_{\boldsymbol{k}_{\rm d}}^{\rm vl}\) is the sub-sampled measurements at detector pixel \(\boldsymbol{k}_{\rm d}\) and \(\boldsymbol{P}_{\Omega}\) is a mask operator with \((\boldsymbol{P}_{\Omega}(\boldsymbol{U}))_{(i,j)}=\boldsymbol{U}_{(i,j)}\) if \((i,j)\in\Omega\) and \((\boldsymbol{P}_{\Omega}(\boldsymbol{U}))_{(i,j)}=0\) otherwise, and \(\boldsymbol{N}_{\boldsymbol{k}_{\rm d}}\) is an additive noise. Fig. 1: Operating principles of 4-D STEM are demonstrated (a), electrons are converged to form a probe which is rastered in 2-D across the sample plane. The transmitted electrons are collected using a 2-D detector in the far field for each probe position. (b) Inpainting the 4-D STEM data-set by sequentially inpainting each virtual image using the BPFA algorithm. (c) Application of VDs and DPC at the detector plane. We now estimate virtual images \(\hat{\mathbf{X}}_{\mathbf{k}_{\mathrm{d}}}^{\mathrm{vi}}\approx\mathbf{X}_{\mathbf{k}_{\mathrm{d}}}^ {\mathrm{vi}}\) from sub-sampled measurements \(\mathbf{Y}_{\mathbf{k}_{\mathrm{d}}}^{\mathrm{vi}}\) in (1) for \(\mathbf{k}_{\mathrm{d}}\in\Omega_{\mathrm{d}}\), which defines the inpainting problem. In this work we assume that virtual images are sparse or compressible3 in an unknown dictionary that can be learned during the recovery process. This leads to the development of dictionary learning adopting a Bayesian non-parametric method called Beta-Process Factor Analysis (BPFA) as introduced in [45]. The advantages of this approach include the ability to infer both the noise variance and sparsity level of the signal in the dictionary, and allows for the learning of dictionary elements directly from sub-sampled data. This approach has been tested in previous reports [37, 38, 39, 40, 42] and has shown success when applied to electron microscopy data. Note that this approach learns a different dictionary for each virtual image and a BPFA instance is applied to every virtual image. This is not necessarily optimal, however, we will leave the concept of learning a shared dictionary for all virtual images and applying a single instance of BPFA directly on the sub-sampled 4-D data to a future study (a full description of the BPFA process is provided in the Supplemental Material4). Footnote 3: A signal is sparse if it strictly contains only a few non-zero weights in a dictionary, whereas a signal is compressible if the magnitudes of the weights decay rapidly when in descending order. Footnote 4: See Supplemental Material at [URL will be inserted by publisher] for a full description of the BPFA process In addition to probe sub-sampling, we can also downsample the detector pixels to eliminate redundancy. This can also be inferred as the optimisation of our reciprocal space sampling, \(\Delta_{\mathrm{d}}\), which can be carried out by only reading out the set of rows which are within the sampling set. 
This is different to conventional detector pixel binning (which still requires reading of all rows within the total CBED pattern), since we do not consider nor acquire rows which do not belong to the sampling set. Given the detector downsampling factor \(f_{\mathrm{d}}\in\mathbb{N}\), we first uniformly read-out every \(f_{\mathrm{d}}^{\mathrm{th}}\) row on the detector. This results in faster acquisition of CBED patterns of size \(H_{\mathrm{d}}/f_{\mathrm{d}}\times W_{\mathrm{d}}\) pixels. To further reduce the size of the data-set, we then keep only the data from every \(f_{\mathrm{d}}^{\mathrm{th}}\) column on the detector; resulting in CBED patterns with \(M_{\mathrm{d}}=H_{\mathrm{d}}\cdot W_{\mathrm{d}}/f_{\mathrm{d}}^{2}\) entries. In this paper, we define the detector downsampling ratio as \(M_{\mathrm{d}}/N_{\mathrm{d}}=1/f_{\mathrm{d}}^{2}\). In practice, the camera length could also be varied to optimise \(\Delta_{\mathrm{d}}\) since the camera length is inversely proportional to the reciprocal space sampling. This would account for detectors which cannot read out rows/pixels independently. It would also effectively bin the signal on the detector where hardware binning is limited, improving signal-to-noise. Fig. 3: SSIM of phases with respect to probe and detector sampling ratios. As the probe sub-sampling ratio increases, the quality of the phase increases. However, there is only a small difference in the phase quality as the detector downsampling ratio is decreased. This indicates significant redundancy within the 4-D data-set, which can be omitted through detector downsampling and probe sub-sampling. Example images of this experiment are shown in Fig. 2. Fig. 2: Visual comparison of ptychographic phase retrieval quality for different probe sub-sampling and detector downsampling ratios. The reference data is the full data-set passed through the BPFA algorithm (top row, leftmost column). The scale bar indicates 0.5nm. ## III Results In order to model experimental acquisition, an experimental 4-D STEM data-set of Y\({}_{5}\)Si\({}_{3}\) was used (with all scan positions), to which random sub-sampling of the probe positions and downsampling of the CBED patterns were applied. Y\({}_{5}\)Si\({}_{3}\) is an electride framework composed of cation and anion sublattices. These sublattices have a net positive electric charge which is balanced by loosely bonded, interstitial anionic electrons [46]. Y\({}_{5}\)Si\({}_{3}\) has been proposed as a low Schottky barrier material for _n_-type silicon semiconductors due to its low Schottky barrier height of \(0.27\)eV [47]. Readers are referred to Zheng _et al._ [46] for details on practical applications. The experimental data was acquired using a probe forming aperture semi-angle of 30mrad from a 100kV electron source with a probe current of \(20\)pA and a dwell-time of 1.3ms. A \(\Delta_{\rm p}\) of \(0.0108\)nm was used, giving a theoretical electron fluence of approximately \(1.4\times 10^{9}\)e\({}^{-}\)nm\({}^{-2}\). The camera collected diffraction patterns of size \(128\times 128\) pixels, where \(\Delta_{\rm d}\) is 1 mrad. In this study we applied probe sub-sampling ratios \(M_{\rm p}/N_{\rm p}\in\{6.25,12.5,25,50,100\}\%\), as well as detector downsampling ratios \(M_{\rm d}/N_{\rm d}\in\{6.25,25,100\}\%\). 
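The acquisition model (1) and the detector downsampling scheme described above can be mocked up as follows. This is a hedged sketch: the random sampling set, the noise level and the simple row/column decimation are illustrative assumptions, not the implementation used for the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_operator(U, omega):
    """P_Omega from Eq. (1): keep entries at sampled probe positions, zero elsewhere."""
    return np.where(omega, U, 0.0)

def downsample_detector(cbed, f_d):
    """Keep every f_d-th detector row and column, so M_d / N_d = 1 / f_d**2."""
    return cbed[::f_d, ::f_d]

H_p, W_p, H_d, W_d = 32, 32, 64, 64
X = rng.random((H_p, W_p, H_d, W_d))        # stand-in for a fully sampled data-set
omega = rng.random((H_p, W_p)) < 0.0625     # ~6.25% probe sub-sampling mask

# Sub-sampled (noisy) virtual image at one detector pixel, as in Eq. (1)
k_d = (32, 32)
noise = 0.01 * rng.standard_normal((H_p, W_p))
Y_vi = mask_operator(X[:, :, k_d[0], k_d[1]] + noise, omega)

# Detector downsampling of one CBED pattern: f_d = 4 gives M_d / N_d = 6.25%
f_d = 4
cbed_ds = downsample_detector(X[10, 20], f_d)
print(Y_vi.shape, cbed_ds.shape, 1 / f_d**2)
```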
LAADF and annular BF (ABF) [48] virtual detector images, \((r_{\rm i},\ r_{\rm o})=(30,\ 60)\) mrad and \((r_{\rm i},\ r_{\rm o})=(10,\ 22)\) mrad, were simulated together with DPC images with \((r_{\rm i},\ r_{\rm o})=(10,\ 22)\) mrad and \((\theta,\ \delta)=(3\pi/4,\ \pi/2)\) rad. In addition we simulated the recovered ptychographic phase (Fig. 2). For this there are a number of analytical and iterative algorithms [49, 50, 51, 52, 53, 54] that recover the complex ptychographic wavefunction, and here we used a modification of the Wigner distribution deconvolution (WDD) algorithm [5, 55, 56, 57, 58, 59] within the _ptychoSTEM_ package for MATLAB [14]. Details on the analysis methods used can be found in the Supplemental Material. Fig. 3 shows the quality of the ptychographic phase (using the structural similarity index measure (SSIM) [60] as our chosen metric) with respect to different probe sub-sampling and detector downsampling ratios. There is only a small degradation in the quality as the sampling at the detector is decreased; this implies the detector is over-sampled. We further observe that probe sub-sampling can be used with BPFA to recover visually identical results in the phase. Similarly, Fig. 4(a) shows a comparison of the quality of CoM field analysis as a function of sub-sampling ratio, where visually identical results are achieved with respect to the reference data. Comparing Fig. 3 and Fig. 4(a) suggests that ptychographic phase recovery is more robust in this case. This is possibly due to the fact that the WDD operates on a full 4-D data-set, while the CoM field is computed from individual CBED patterns. Fig. 4(b) is a direct image comparison between our reference data and reduced sampling data (\(M_{\rm p}/N_{\rm p}=M_{\rm d}/N_{\rm d}=6.25\%\)) when applied to CoM field analysis, DPC, ABF, and LAADF. It is clear that there is very little difference in the quality of the images from a visual perspective, and this is supported by comparison of the corresponding peak signal-to-noise (PSNR) and SSIM values. Fig. 2 is a visual comparison of the data in Fig. 3. As can be seen, the recovered phase data is almost indistinguishable, with all showing the expected location of yttrium and silicon atoms. Fig. 4: (a) SSIM values as a quality metric for CoM field images. (b) CoM field, DPC, ABF, and LAADF images for \(6.25\%\) probe sampling and \(6.25\%\) detector downsampling after inpainting. The reference data is the full data passed through the BPFA algorithm (top row). The PSNR and SSIM values are overlaid, the spatial scale bar indicates 0.5nm, and the detector scale bar indicates 30 mrad. The left-most column is an example data-point from the plot in (a), and the corresponding plots similar to (a) for DPC, ABF, and LAADF can be found in the Supplemental Material. ## IV Conclusions Our results demonstrate the inherent redundancy within the 4-D STEM data-set. By utilising inpainting algorithms, it is possible to discard over \(99.6\%\) (see Fig. 2 bottom-right) of the original data-set whilst still recovering qualitatively identical results in the reconstructed phase, CoM field, DPC and VD images, to those obtained from processing the full data-set. This method has also been shown to be robust for 4-D STEM data containing an interface, and the results are given in Fig. S5 in the Supplemental Material. However, given the inherent redundancy in 4-D STEM data, we propose that even lower sampling ratios could be employed using a multi-dimensional recovery algorithm. The benefit of this is that by using a multi-dimensional recovery algorithm we can leverage more data during the training process as well as the similarity between virtual images during the recovery step. It may be possible to also include sparse detector sampling followed by inpainting the 4-D STEM data-set with minor modifications to the acquisition model. This could further increase acquisition speeds by assuming that each pixel has a fixed read-out time, and potentially allow for multiple 4-D STEM data-sets to be acquired rapidly. We postulate that time-resolved 4-D STEM is now not limited by the detector read-out speed, but can instead be acquired through reduced sampling strategies. ## V Acknowledgments This work was performed at the Albert Crewe Centre (ACC) for Electron Microscopy, a shared research facility (SRF) fully supported by the University of Liverpool. This work was also funded by the EPSRC Centre for Doctoral Training in Distributed Algorithms (EP/S023445/1), Sivananthan Labs, and the Rosalind Franklin Institute. M.C. would like to acknowledge the support by the US DOE Office of Science Early Career project FWP# ERKCZ55 and the Center for Nanophase Materials Sciences (CNMS), a US DOE Office of Science User Facility. Initial experiments were carried out using MagTEM, a JEOL ARM200F STEM in the Kelvin Nanocharacterisation Centre, which was installed with support from the University of Glasgow and the Scottish Universities Physics Alliance. A.W.R. would like to thank Jordan A. Hatchel (ORNL) for his knowledge and insights of 4-D STEM analysis. ## Data Availability Statement The data that support the findings of this study are available within the article and its supplementary material.
2309.04246
Search for shower's duplicates at the IAU MDC. Methods and general results
Observers submit both new and known meteor shower parameters to the database of the IAU Meteor Data Center (MDC). It may happen that a new observation of an already known meteor shower is submitted as a discovery of a new shower. Then, a duplicate shower appears in the MDC. On the other hand, the observers may provide data which, in their opinion, is another set of parameters of an already existing shower. However, if this is not true, we can talk about a shower that is a false-duplicate of a known meteor shower. We aim to develop a method for objective detection of duplicates among meteor showers and apply it to the MDC. The method will also enable us to verify whether various sets of parameters of the same shower are compatible and, thus, reveal the false-duplicates. We suggest two methods based on cluster analyses and two similarity functions among geocentric and heliocentric shower parameters collected in the MDC. Seven new showers represented by two or more parameter sets were discovered. In 30 cases there was full agreement between our results and those reported in the MDC. In 20 cases, the same duplicates as given in the MDC were found by only one method. We found 34 multi-solution showers for which the number of the same duplicates found by both methods is close to the corresponding number in the MDC database. However, for 56 multi-solution showers listed in the MDC no duplicates were found by any of the applied methods. The obtained results confirmed the effectiveness of the proposed approach to identifying duplicates. We have shown that, in order to detect and verify duplicate meteor showers, it is possible to apply an objective procedure instead of the subjective approach used so far.
T. J. Jopek, L. Neslušan, R. Rudawska, M. Hajduková
2023-09-08T10:23:49Z
http://arxiv.org/abs/2309.04246v2
# Search for shower's duplicates at the IAU MDC. ###### Abstract Context:The meteor shower database of the IAU Meteor Data Center (MDC) is used by the whole community of meteor astronomers. Observers submit both new and known meteor shower parameters to the MDC. It may happen that a new observation of an already known meteor shower is submitted as a discovery of a new shower. Then, a duplicate shower appears in the MDC. On the other hand, the observers may provide data which, in their opinion, is another set of parameters of an already existing shower. However, if this is not true, we can talk about a shower that is a false-duplicate of a known meteor shower. The MDC database contains such duplicates and false-duplicates, so it is desirable to detect them among the streams already in the database and those delivered to the database as new streams. Aims:We aim to develop a method for objective detection of duplicates among meteor showers and apply it to the MDC. The method will also enable us to verify whether various sets of parameters of the same shower are compatible and, thus, reveal the false-duplicates. Methods:We suggest two methods based on cluster analyses and two similarity functions among geocentric and heliocentric shower parameters collected in the MDC. Results:A number of results of varying significance were obtained. Seven new showers represented by two or more parameter sets were discovered. 30 times there was full agreement between our results and those reported in the MDC database. 20 times the same duplicates as given in the MDC, were found only by one method. We found 34 multi-solution showers for which the number of the same duplicates found by both method is close to the corresponding number in the MDC database. However for 56 multi-solution showers listed in the MDC no duplicates were found by any of the applied methods. Conclusions:The obtained results confirmed the effectiveness of the proposed approach of identifying duplicates. We have shown that in order to detect and verify duplicate meteor showers, it is possible to apply the objective proposal instead of the subjective approach used so far. We consider the identification of 87 problematic cases in the MDC database, among which at least some duplicates were misclassified, to be a particularly important result. The correction of these cases will significantly improve the content of the MDC database. Methods and general results. ## 1 Introduction In the case of asteroids (comets), it sometimes happens that observations of a new object, after some time, are linked to its observations made in the past. Thus, the 'new' object turned out to be an object already known. In meteor astronomy, we do not know this kind of cases. In fact, meteoroids falling into the Earth's atmosphere are crushed and in the form of dust, and sometimes larger fragments fall onto the Earth's surface.1 Footnote 1: However, there are exceptions — the occurrence of an Earth-grazing fireball, a very bright meteoroid that enters Earth’s atmosphere and leaves again, see e.g. Spurný et al. (1991). But in case of the meteoroid streams, we are dealing with an analogy called -- duplicates. In particular, we intend to deal with the duplicate showers in the IAU MDC database. This database also includes showers whose average parameters were determined by two or more author teams. Hereafter, we refer to each set of the mean parameters, published by one author team, as a "solution". 
Hence, in the remainder of this article, if we have more than one solution for a given shower in the MDC database, we will use the term multi-solution shower (MSS) and we refer to each of such solutions as "duplicates". Otherwise, we will refer to a single-solution shower (SSS). When the MDC list was created, no general criteria were adopted to distinguish whether a newly found solution is another solution of already existing shower or if it is the first solution of new shower; the individual author teams proceeded according to their own approaches, but evidently used different criteria. However, an MSS solution might have been incorrectly assigned in the MDC and is a stand-alone shower, i.e. not a duplicate of the shower in question. Such a solution is called a "false duplicate". The duplicate showers we are searching for (the redundant showers in the MDC) are showers which were submitted as new, autonomous showers to the MDC, but, actually, are other solutions of previously submitted showers. To each such solution, it is also referred to as a "duplicate". Hence, a duplicate can be a redundant old (already existing in the database) or new (just supplied to the database) solution of a given shower. And it is something obvious that a correct list of showers in the MDC should not contain MSS that consist of false duplicates. If an autonomous shower is found to be a duplicate of another shower, its solution will be appended to that shower and the autonomous shower will be moved to the List of Removed Showers. Of course, it is possible that some MSS solutions will be found to be duplicates of other showers and the rest of the MSS solutions will remain as duplicates of the original showers. In that case, the original shower will be retained, but with fewer solutions. Within the MDC shower database, one can now find solutions that have been both correctly and incorrectly classified. To provide a clearer understanding of the terms used, we present a summary of them. The correctly classified solutions are: _(i) an autonomous solution of an SSS_, _(ii) a duplicate solution within an MSS_ (or, simply, a duplicate). The solutions that were classified incorrectly and need to be re-classified are: _(iii) a false duplicate solution_ -- a solution incorrectly classified as a duplicate solution of an MSS; after correction, it will be re-classified as a new SSS or as another solution of a different MSS, _(iv) a false autonomous solution_ -- after correction, it will be re-classified as a duplicate solution of a different MSS. In order to determine whether we are dealing with duplicate meteor showers, the methods developed to identify comets and asteroids observed in successive apparitions are proving ineffective. The main obstacle are the single-apparition nature of the meteor phenomenon and the markedly lower precision of its observation. Moreover, the geocentric and heliocentric parameters of the repeatedly observed meteoroid stream can differ significantly. The different observational techniques and weather conditions can cause that independent observers detect the meteors of different size distributions and/or in the different periods of the shower's activity. As result, the new and old mean characteristics of a given shower can be quite different. Hence, the question whether we are dealing with a duplicate of a shower or not is not trivial. One can find several publications mentioning the occurrence of similar showers (shower's duplicates in our terminology) in the MDC database, e.g. 
Holman and Jenniskens (2012); Andreic et al. (2014); Koukul (2017); Koseki (2020). Usually, authors provide a list of pairs that they consider to be similar showers, and recommend removing the shower in question from the database and introducing it as another solution to an already known shower, (e.g. Holman and Jenniskens, 2012). To date, however, the identification of a duplicate meteor shower and the associated recommendation have been based on more or less subjective considerations. Therefore, our work has focused on developing of an objective method for detecting duplicates among meteor showers in the MDC database. Two approaches were used: the method based on orbital similarity and the method developed by us called "maximum-\(\sigma\)"s" criterion based on direct comparison of selected geocentric and heliocentric shower parameters. ## 2 Meteor shower data used As to December 2022, the MDC database contained data of 920 meteor showers, represented by 1385 solutions. For 252 showers, two or more solutions are available. The meteor data available from the MDC are not uniform, varying in the completeness of the averaged shower data. For all showers, the obligatory parameters are: the moment of activity, expressed by the ecliptic longitude of the Sun at the time the shower observation, the geocentric equatorial coordinates of its radiant and the corresponding geocentric velocity. But for many showers the corresponding averaged values of the orbital elements are also given. And for a relatively small number of showers, individual shower member data are available alongside the averaged data. Hence, in order to make a full comparison of the results obtained in this study, we were limited to a subset of the MDC data allowing the application of each method used. We utilized only the data from the 'List of Established Showers' and the 'Working List'. 195 solutions on these lists were rejected either due to incomplete orbital data or because the orbital eccentricities corresponded to open orbits. We assume that interstellar meteoroid stream solutions are not real and are the result of measurement uncertainties (Hajdukova and Kornos, 2020). Additional 8 solutions were removed due to incorrect values of some parameters, e.g. orbital inclinations exceeding 180 degrees or because of a clear inconsistency between geocentric and heliocentric parameters. This inconsistency is due to the fact that, although orbits with eccentricities \(e\geq 1\) were removed from our MDC sample, for some meteoroids the values of the \(\hat{\Theta}\)pik \(U\) and \(\theta\) variables (see Opik, 1976; Valsecchi et al., 1999), calculated on the basis of the geocentric parameters clearly fall in the region corresponding to open orbits, see Figure 1. The \(\hat{\Theta}\)pik variables, \(U\) (in the units of the Earth's velocity) and \(\theta\) are defined in an instantaneous geocentric reference frame, and correspond to the geocentric velocity of the meteoroid when encountering the Earth, and to the angular elongation of the meteoroid geocentric radiant from the Earth' apex, respectively. The values of the \(\theta\) and \(U\) variables shown in Figure 1 were calculated using the geocentric coordinates of the radiant and the speed of the meteoroids using the formulas given in Jopek et al. (1999b). In the selected MDC dataset, 835 showers are represented by 1182 meteor-shower solutions, since the parameters of many showers were determined by more than one author team. 185 showers are represented by more than one solution. 
There are 532 solutions (45.0% of the total sample examined) which belong to these 185 MSS. However, it has to be taken into account that we have no assurance that the grouping of the solutions in the MDC is, in all the listed showers, correct, since no objective method was available to detect the duplicates. Table 1 provides more details on the MSS that occur in the surveyed sample of 1182 meteor shower solutions. Similarly to the individual meteoroid orbits, the density of meteor showers in near-ecliptical orbits is clearly higher, so we decided (following the idea given by Galligan (2001)) to split the sample of 1182 shower solutions into two partitions. As can be seen in Figure 2, the area containing orbits with small inclinations significantly dominates the study sample. It also contains many orbits with perihelion distances greater than 0.9 [au]. Partition one (P1) contained 498 shower solutions with orbits with inclinations in the range 0\(-\)40 [deg], partition P2 684 other shower solutions. In the remainder of this paper, we will refer to these subsets of showers as P1- and P2- components, respectively. Table 2 and 3 gives a full list of P1- and P2- MSS (86 and 99, respectively) that were provided in the MDC database. The minimum number of solutions (duplicates) is, of course, 2, with a maximum of 11 solutions for the Southern Tau-rids shower (02/STA). 2 SSS showers are not listed in the Tables 2 and 3. Footnote 2: All showers we used were named using the old rules of naming and coding meteoroid streams valid until August 2022, see Jopek et al. (2023). In the following sections, we describe methodology and the results of the study, which aimed to find new MSS and to assess whether the MSS listed in Table 2, Table 3 contains duplicates correctly classified. In Tables 2, 3, we introduced \(DH_{Min}\) and \(<\)\(DH\)\(>\) columns. The \(DH_{Min}\) column contains the smallest threshold values of orbital similarity corresponding to the DH-function (Jopek 1993a), with which all members (all duplicates) of the MSS given in Tables 2, 3, can be identified by cluster analysis using the single linking method. The \(<\)\(DH\)\(>\) column gives the arithmetic mean of the DH orbital similarity values calculated for all pairs of the given MSS. These values are related to the compactness of the MSS in question. From a cursory analysis of the contents of the \(DH_{Min}\) column, it is easy to see that for a many of MSS consisted of only two duplicates, we are most likely dealing with cases of false duplicates. Using formulas (21) and (22) given in Table 9 in the Jopek & Bronikowska (2017), the plausible threshold values of orbital similarity for pairs of orbits taken from a set of 600 orbits are 0.022-0.025. In contrast, in Table 2 and 3 we see that for many MSS with only two duplicates the \(DH_{Min}\) values are much larger. This observation justifies the revision we have undertaken of the MDC meteor base in terms of the MSS contained therein. Verification and search for duplicates is an important topic, abandoning such research leaves us with artifacts in the MDC database. can introduce some level of inconsistency between the geometric and heliocentric parameters, and even between the heliocentric parameters themselves. For example the individually averaged orbital elements \(q\), \(a\), \(e\) may not necessarily satisfy the known formula \(q=a(1-e)\). The same problem occurs when the medians of the meteoroid stream parameters are used. 
In our reduced sample of 1182 meteor shower data, for each shower averaged geocentric and corresponding heliocentric parameters are given. Assuming that the impact of individual averaging of meteoroid parameters is not a significant obstacle to our idea, using these parameters, a search for duplicates among the MDC shower data can be done in a manner analogous to the search for streams among orbits of individual meteoroids. For this purpose, it is sufficient to use the cluster analysis methods, which have been used for years in a search for meteoroid streams, with the difference that, this time, the identification of groups in the MDC will be made among the mean radiants or orbital elements or their combinations. As a result, the groups (MSS) thus identified will consist of duplicates (the shower multi-solutions) we are looking for. We performed several cluster analyses among 494 solutions in the P1 partition, and among 688 solutions collected in the P2 partition. We decided to search the two partitions separately so that the cluster analysis would be performed with threshold values of orbital similarity determined for each partition separately. This solution reduced the unfavourable influence coming from the domination of orbits with relatively small inclinations to the ecliptic in the studied sample of 1182 showers. We used a single linkage method (a variant of the general hierarchical cluster analysis method) successfully used by a number of authors for the meteoroid stream identification or searching for grouping among the asteroids: Southworth & Hawkins (1963); Lindblad (1971); \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline No & Shower Code & \(N\) & \(DH_{Min}\) & \(<DH>\) & No & Shower Code & \(N\) & \(DH_{Min}\) & \(<DH>\) \\ \hline \hline 1 & 0001/OAP & 9 & 0.062 & 0.048 & 51 & 0197/OAVO & 2 & 0.316 & 0.316 \\ 2 & 2000/00/STA & 11 & 0.182 & 0.205 & 52 & 0199/OADC & 2 & 0.107 & 0.107 \\ 3 & 0003/00/SIA & 2 & 0.065 & 0.065 & 53 & 0220/00/ZCA & 2 & 0.434 & 0.434 \\ 4 & 0004/0GEM & 5 & 0.020 & 0.022 & 54 & 0212/01/X & 3 & 0.303 & 0.221 \\ 5 & 0005/00/SDA & 8 & 0.078 & 0.097 & 55 & 0215/00/NPI & 4 & 0.170 & 0.138 \\ 6 & 0009/00/DRA & 4 & 0.186 & 0.126 & 56 & 0216/00/SPI & 5 & 0.460 & 0.255 \\ 7 & 0010/00/EVI & 3 & 0.322 & 0.284 & 57 & 0219/00/SAR & 4 & 0.373 & 0.398 \\ 8 & 0012/00/KCG & 4 & 0.117 & 0.114 & 58 & 0220/00/NDR & 2 & 0.144 & 0.144 \\ 9 & 0017/00/NTA & 10 & 0.240 & 0.157 & 59 & 0221/00/DSX & 5 & 0.115 & 0.082 \\ 10 & 0018/00/AND & 3 & 0.044 & 0.048 & 60 & 0233/00/OCC & 2 & 0.160 & 0.160 \\ 11 & 0019/00/MON & 5 & 0.044 & 0.047 & 61 & 0250/00/NOO & 3 & 0.053 & 0.066 \\ 12 & 0021/00/AVB & 3 & 0.124 & 0.097 & 62 & 0253/00/CMI & 2 & 0.347 & 0.347 \\ 13 & 0025/00/NAO & 2 & 0.210 & 0.210 & 63 & 0254/00/PHO & 2 & 0.199 & 0.199 \\ 14 & 0026/00/NDA & 7 & 0.131 & 0.106 & 64 & 0256/00/RCN & 4 & 0.201 & 0.192 \\ 15 & 0028/00/SOA & 2 & 0.113 & 0.113 & 65 & 0257/00/ORS & 5 & 0.123 & 0.161 \\ 16 & 0033/00/NIA & 5 & 0.158 & 0.144 & 66 & 0388/01/OER & 2 & 0.023 & 0.023 \\ 17 & 0047/00/DLI & 2 & 0.096 & 0.096 & 67 & 0343/02/NPI & 6 & 0.103 & 0.050 \\ 18 & 0061/00/THA & 2 & 0.071 & 0.071 & 68 & 0388/00/CTA & 3 & 0.127 & 0.133 \\ 19 & 006/00/SSG & 3 & 0.214 & 0.168 & 69 & 0390/00/THA & 2 & 0.135 & 0.135 \\ 20 & 0076/00/KAQ & 2 & 0.142 & 0.142 & 70 & 0446/00/DPC & 2 & 0.010 & 0.010 \\ 21 & 0088/00/ORR & 2 & 0.362 & 0.362 & 71 & 0451/00/CAM & 3 & 0.107 & 0.086 \\ 22 & 009/00/NCC & 6 & 0.162 & 0.147 & 72 & 0456/00/MPS & 4 & 0.041 & 0.042 \\ 23 & 0097/00/SCC & 4 & 0.218 & 0.207 & 73 
& 0459/00/JEO & 3 & 0.050 & 0.041 \\ 24 & 0100/00/XSA & 2 & 0.209 & 0.209 & 74 & 0460/00/LOP & 3 & 0.044 & 0.052 \\ 25 & 0112/01/NDL & 2 & 0.070 & 0.070 & 75 & 0464/00/KLY & 3 & 0.030 & 0.031 \\ 26 & 0113/00/SDL & 2 & 0.172 & 0.172 & 76 & 0470/00/AMD & 3 & 0.075 & 0.065 \\ 27 & 0115/00/DCS & 5 & 0.257 & 0.264 & 77 & 0486/00/NZP & 2 & 0.041 & 0.041 \\ 28 & 0121/00/NHY & 3 & 0.269 & 0.253 & 78 & 0490/00/DGE & 2 & 0.149 & 0.149 \\ 29 & 0124/00/SVI & 2 & 0.226 & 0.226 & 79 & 0505/00/AIC & 3 & 0.117 & 0.090 \\ 30 & 0127/00/MCA & 2 & 0.041 & 0.041 & 80 & 0525/00/ICY & 2 & 0.031 & 0.031 \\ 31 & 0128/00/MKA & 2 & 0.315 & 0.315 & 81 & 0538/00/FFA & 2 & 0.092 & 0.092 \\ 32 & 0133/00/PU & 2 & 0.139 & 0.139 & 82 & 0644/00/JLL & 2 & 0.256 & 0.256 \\ 33 & 0144/00/APS & 3 & 0.094 & 0.072 & 83 & 0651/00/OAV & 2 & 0.096 & 0.096 \\ 34 & 0150/01/SOP & 2 & 0.343 & 0.343 & 84 & 0709/00/LCZ & 2 & 0.142 & 0.142 \\ 35 & 0152/00/NOC & 4 & 0.644 & 0.447 & 85 & 0757/00/CCY & 4 & 0.035 & 0.033 \\ 36 & 0153/00/OCE & 3 & 0.081 & 0.074 & 86 & 1046/00/FIS & 2 & 0.078 & 0.078 \\ 37 & 0154/00/DEA & 2 & 0.178 & 0.178 & 0.17 Zappala et al. (1990); Lindblad (1992); Jopek (1993b); Zappala et al. (1994); Zappala et al. (1995); Jopek et al. (1999a, 2003, 2010a); Jopek (2020). ### Method-I -- cluster analysis among the orbital data Our first approach, (method-I), takes advantage of the orbital similarity of meteoroids, calculated by the D-function; in the cluster analysis, the single linking method was used, (see Southworth & Hawkins 1963; Jopek & Froeschle 1997). The orbital D-distances were calculated by the orbital similarity function \(DH\) described in Jopek (1993a); Williams et al. (2019). The orbital similarity thresholds were found separately for groups of \(M{=}2,3,4,5,...\) members; threshold for \(M=10\) was applied for all groups for which \(M>10\). All thresholds corresponded to a low probability (less than 1 per cent) of chance grouping. Threshold values were calculated using the method presented in Jopek (2020); for both P1- and P2- partitions they are listed in Table 4. In the cluster analysis, we restricted the individual thresholds to the maximum value corresponding to \(M=10\). Such a decision is justified by the fact that only one of the MSS in Table 1 contain more than 10 duplicates, and our preliminary calculations showed that the threshold values of orbital similarity for the \(DH\) function determined for \(M>10\) were too large and led to results that were difficult to interpret. This may have to do with the limitations of the statistical approach for estimating threshold values as mentioned in the Jopek et al. (2003) paper. As a reminder, the orbital similarity thresholds used in method-I are only applicable in the cluster analysis performed by the single linking method. Of course, this does not apply to groups with only two members. The threshold values given in Table 4 corresponding to \(M=2\) do not depend on the choice of cluster analysis algorithm. We note that method to identify the duplicates will probably have to include a cluster analysis e.g. linking procedure (single or other linking), since a linking of more than two solutions may occur non-trivial. Let us consider, for example, three solutions, A, B, and C. With the help of a method, we find that B is the duplicate of A, therefore the solution B should be one of the solutions of shower solution A is belonging to. 
Further, we find that C is the duplicate of \begin{table} \begin{tabular}{c c c c c c c c c} \hline No & Shower Code & \(N\) & \(DH_{Min}\) & \(<DH>\) & No & Shower Code & \(N\) & \(DH_{Min}\) & \(<DH>\) \\ \hline \hline 1 & 0006/00/DYR & 6 & 0.045 & 0.041 & 51 & 0404/00/D�STO & 3 & 0.132 & 0.097 \\ 2 & 0007/00/PER & 3 & 0.011 & 0.010 & 52 & 0428/00/D�STO & 2 & 0.067 & 0.067 \\ 3 & 0008/00/D�ST & 5 & 0.062 & 0.049 & 53 & 0429/00/ACB & 3 & 0.041 & 0.048 \\ 4 & 0010/00/QUA & 6 & 0.050 & 0.050 & 54 & 0431/00/D�TP & 2 & 0.062 & 0.062 \\ 5 & 0013/00/LEO & 7 & 0.158 & 0.129 & 55 & 0439/00/ASK & 2 & 0.077 & 0.077 \\ 6 & 0015/03/HS & 3 & 0.149 & 0.118 & 56 & 0444/00/2C & 2 & 0.045 & 0.045 \\ 7 & 0016/00/HYD & 3 & 0.095 & 0.106 & 57 & 0450/00/AED & 2 & 0.061 & 0.061 \\ 8 & 0020/04/COM & 3 & 0.259 & 0.207 & 58 & 0458/00/JEC & 3 & 0.081 & 0.086 \\ 9 & 0022/00/LMI & 2 & 0.037 & 0.037 & 59 & 0465/00/AXC & 3 & 0.064 & 0.083 \\ 10 & 0023/00/EGE & 2 & 0.099 & 0.099 & 60 & 0466/01/AOC & 2 & 0.086 & 0.086 \\ 11 & 0031/00/ETA & 5 & 0.109 & 0.104 & 61 & 0479/00/SOO & 3 & 0.072 & 0.065 \\ 12 & 0032/00/D�LT & 2 & 0.321 & 0.321 & 62 & 0480/00/TATA & 3 & 0.066 & 0.056 \\ 13 & 0040/02/CYCY & 2 & 0.259 & 0.259 & 63 & 0481/00/OLM & 3 & 0.031 & 0.026 \\ 14 & 0093/00/VEL & 2 & 0.412 & 0.412 & 64 & 0480/00/NSU & 2 & 0.041 & 0.041 \\ 15 & 0105/00/OCN & 2 & 0.134 & 0.134 & 65 & 0491/00/DCC & 2 & 0.093 & 0.093 \\ 16 & 0106/00/API & 3 & 0.329 & 0.329 & 66 & 0494/00/DEL & 2 & 0.028 & 0.028 \\ 17 & 0107/00/D�CH & 3 & 0.358 & 0.388 & 67 & 0497/01/DAB & 2 & 0.048 & 0.048 \\ 18 & 0108/00/TETO & 3 & 0.344 & 0.344 & 68 & 0498/00/D�MP & 2 & 0.096 & 0.096 \\ 19 & 0110/00/AAN & 5 & 0.109 & 0.094 & 69 & 0500/00/IPV & 3 & 0.114 & 0.105 \\ 20 & 0118/00/GNO & 2 & 0.798 & 0.798 & 70 & 0502/00/DRV & 3 & 0.042 & 0.039 \\ 21 & 0151/01/PAEU & 2 & 0.155 & 0.155 & 71 & 0506/00/FEV & 2 & 0.055 & 0.055 \\ 22 & 0175/02/JIPG & 4 & 0.103 & 0.094 & 72 & 0507/00/LAN & 2 & 0.326 & 0.326 \\ 23 & 0183/00/PATU & 3 & 0.294 & 0.312 & 73 & 0510/00/3RC & 2 & 0.068 & 0.068 \\ 24 & 0184/02/GDR & 2 & 0.006 & 0.006 & 74 & 0512/00/RPU & 2 & 0.370 & 0.370 \\ 25 & 0187/00/PCA & 3 & 0.314 & 0.280 & 75 & 0519/00/BAQ & 2 & 0.041 & 0.041 \\ 26 & 0191/00/EH & 2 & 0.049 & 0.049 & 76 & 0520/00/MBC & 2 & 0.041 & 0.041 \\ 27 & 0319/00/ILE & 3 & 0.143 & 0.122 & 77 & 0523/00/AGC & 2 & 0.030 & 0.029 \\ 28 & 0320/00/CBS & 2 & 0.281 & 0.280 & 78 & 0520/00/SLD & 3 & 0.032 & 0.033 \\ 29 & 0321/00/TCB & 2 & 0.089 & 0.089 & 79 & 0529/00/EOF & 3 & 0.041 & 0.046 \\ 30 & 0322/00/LBO & 2 & 0.072 & 0.072 & 80 & 0530/01/ECV & 2 & 0.069 & 0.069 \\ 31 & 0323/00/XCB & 4 & 0.100 & 0.120 & 81 & 0531/00/GAQ & 3 & 0.253 & 0.238 \\ 32 & 0324/00/EPR & 2 & 0.250 & 0.250 & 82 & 0533/00/3XA & 3 & 0.037 & 0.040 \\ 33 & 0326/00/EPG & 2 & 0.175 & 0.175 & 83 & 0537/00/KAU & 2 & 0.141 & 0.141 \\ 34 & 0327/00/BEO & 2 & 0.377 & 0.377 & 84 & 0545/00/XCAE & 2 & 0.135 & 0.135 \\ 35 & 0330/00/SSE & 3 & 0.157 & 0.121 & 85 & 0549/00/FAN & 2 & 0.105 & 0.105 \\ 36 & 0331/00/AMF & 3 & 0.050 & 0.045 & 86 & 0552/00/PSO & 2 & 0.250 & 0.250 \\ 37 & 033/03/00/UCU & 2 & 0.100 & 0.100 & 87 & 0555/00/OCP & 2 & 0.282 & 0.282 \\ B, therefore solution C should be the solution of the same shower as B, i.e. the shower containing A. But, applying the method to pair A and C, we find that C is not a duplicate of A, therefore a controversy, that C should not be the solution of shower containing A, occurs. The single linking method can solve this problem. 
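Method-I can be sketched compactly in Python as pairwise \(D_{H}\) distances followed by single-linkage clustering with a distance threshold. The \(D_{H}\) expression below follows the commonly quoted form of the hybrid criterion of Jopek (1993a); the sign and common-node conventions should be checked against the original paper, the sample orbits are invented, and the threshold used is the \(M=2\) value of the P2 partition from Table 4.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def d_h(o1, o2):
    """Hybrid orbital similarity D_H between two orbits (q [au], e, i, node, peri),
    angles in radians; written after the commonly quoted form of Jopek (1993a)."""
    q1, e1, i1, n1, w1 = o1
    q2, e2, i2, n2, w2 = o2
    # angle between the two orbital planes
    I21 = 2.0 * np.arcsin(np.sqrt(np.sin((i2 - i1) / 2.0) ** 2
                                  + np.sin(i1) * np.sin(i2) * np.sin((n2 - n1) / 2.0) ** 2))
    # difference of the apsidal-line orientations, measured from the common node
    dnode = n2 - n1
    sign = 1.0 if abs(dnode) <= np.pi else -1.0
    pi21 = (w2 - w1) + 2.0 * sign * np.arcsin(np.cos((i2 + i1) / 2.0)
                                              * np.sin(dnode / 2.0) / np.cos(I21 / 2.0))
    d2 = ((q2 - q1) / (q2 + q1)) ** 2 + (e2 - e1) ** 2 \
        + (2.0 * np.sin(I21 / 2.0)) ** 2 \
        + (((e1 + e2) / 2.0) * 2.0 * np.sin(pi21 / 2.0)) ** 2
    return np.sqrt(d2)

rad = np.radians
# toy mean-shower orbits (q, e, i, node, peri); values invented for illustration
orbits = [(0.98, 0.65, rad(162.5), rad(235.3), rad(172.0)),
          (0.97, 0.66, rad(162.7), rad(234.9), rad(171.5)),
          (0.35, 0.82, rad(22.0), rad(265.0), rad(152.0))]

D = np.array([[d_h(a, b) for b in orbits] for a in orbits])
Z = linkage(squareform(D, checks=False), method="single")
# 0.059 is the M=2 threshold of the P2 partition in Table 4
print(fcluster(Z, t=0.059, criterion="distance"))  # first two orbits group together
```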
### Method-II -- maximum-sigma approach In the second method (method-II), we used the idea of similarity between two showers proposed by Koseki (2020). Koseki compared three shower parameters: the ecliptic coordinates of the shower radiant (the Sun-centered longitude and the radiant latitude) and the ecliptic longitude of the Sun at the time of shower activity. To assess the similarity of the two showers, Koseki calculated the differences between the relevant parameters and compared them with selected critical values. However, the author did not explain why he chose the critical values and not others, as well as he did not apply a cluster analysis to the whole data set, limiting his research to comparing each time only two showers. Which in our view is a very limited approach. Therefore, in this study, we decided to extend Koseki's approach by comparing both geocentric and heliocentric shower parameters, as well as performing cluster analysis. Moreover, in our approach, we have justified the choice of such and not other critical values of differences of the compared quantities. When we want to determine whether a shower solution is an autonomous or a duplicate, we should be able to justify the difference between the parameters of that shower and the others in the database. So, we examine a set of mean parameters and in case of an autonomous shower, we demand that the compared solutions must significantly differ at least in one of these parameters. To evaluate whether the difference among two values of the meteor shower parameter is significant, we propose similar approach which was recently applied in cosmology to reason that the Hubble constant determined by two methods is actually different, (see e.g. Jones et al., 2020; Di Valentino et al., 2021; Perivolaropoulos and Skara, 2021). Namely, the so-called three-sigma rule of thumb (or 3-\(\sigma\) rule). In empirical science, this rule expresses a conventional heuristic that nearly all values lie within three standard deviations of the mean (Kazmier, 2003). With respect to meteor showers, it can therefore be argued that if the critical difference of two determinations of some shower's parameter lies outside the \(\pm 3\)-\(\sigma\) interval, then the two meteor shower solutions being compared are not duplicates. Unfortunately, the method based on the 3-\(\sigma\) difference of a parameter can be used only for a small fraction of the meteor showers, since the determination errors of their parameters are unknown for majority solutions in the MDC database. This circumstance forces us to propose and use a less exact, but generally applicable method, which we refer, hereafter, as the "maximum-\(\sigma\) method" and which is outlined below. In December 2022, in addition to the shower average parameters, the MDC contained so-called LookUp Tables (see Hajdukova et al., 2023) available for 127 showers. The contents of the LookUp Tables made it possible to calculate the standard deviations of the shower parameters of interest and, in a further step, to construct the cumulative distribution of these 1-\(\sigma\) values. In Fig. 3, there are shown these distributions for eight shower's parameters. Specifically, panels (a), (b), (c), (f)-(h) of the figure show the distributions of \(\sigma\) of the sun-centered ecliptic coordinates of mean radiant (\(\lambda_{sc},\beta\)), geocentric velocity (\(V_{g}\)), argument of perihelion (\(\omega\)), longitude of ascending node (\(\Omega\)), and inclination (\(i\)), respectively. 
In these distributions, the steep increase is followed by a quasi constant behaviour (a plateau). It means that an essential part of the considered showers has the 1-\(\sigma\) value of given mean parameter within the interval delimited by the beginning of the plateau. In more detail we found \(\sigma(\lambda_{sc})=5^{\circ},\sigma(\beta)=3^{\circ},\sigma(V_{g})=2.5\, \mathrm{km\,s^{-1}},\,\sigma(\omega)=9^{\circ},\,\sigma(\Omega)=7.5^{\circ},\) and \(\sigma(i)=5^{\circ}\). These values can be regarded as "maximum \(\sigma\)s". In Fig. 3, each maximum \(\sigma\) is indicated with an arrow. In the proposed method to find the duplicate solutions, the maximum \(\sigma\), instead of \(3\sigma\), is regarded as a critical difference of mean values of given parameter. If this difference is larger than maximum \(\sigma\) at least for one parameter, then the examined solutions are autonomous or false-positive. Otherwise these are the duplicate solutions. In the cumulative distributions of the 1-\(\sigma\)s of mean perihelion distance and eccentricity (Fig. 3d,e), there is no clear constant behaviour (maybe, we could consider \(\sigma(q)=0.08\,\mathrm{au}\) as the maximum deviation of \(q\); but this is relatively large value). These two parameters are therefore useless for our purpose and are therefore not taken into account. Finally, we take into account the 1-\(\sigma\)s of sun-centered ecliptic longitude, ecliptic latitude, geocentric velocity, argument of perihelion, longitude of ascending node, and inclination. Hence, we propose that two showers A and B are considered duplicates, if the following conditions apply: \[\begin{array}{ll}|(\lambda_{sc})_{A}-(\lambda_{sc})_{B}|&<\sigma(\lambda_{ sc})=5^{\circ}\\ |\beta_{A}-\beta_{B}|&<\sigma(\beta)=3^{\circ}\\ |V_{gA}-V_{gB}|&<\sigma(V_{g})=2.5\,\mathrm{km\,s^{-1}}\\ |\omega_{A}-\omega_{B}|&<\sigma(\omega)=9^{\circ}\\ |\Omega_{A}-\Omega_{B}|&<\sigma(\Omega)=7.5^{\circ}\\ |i_{A}-i_{B}|&<\sigma(i)=5^{\circ}\end{array} \tag{1}\] The maximum-\(\sigma\) criterion (1) allows us to determine whether we are dealing with a duplicate or a false duplicate for a pair of solutions only. Therefore, in order to determine whether we are dealing with a group of duplicates of a given shower, it is necessary to apply criterion (1) in \begin{table} \begin{tabular}{c c|c c} \hline \multicolumn{2}{c|}{Partition 1} & \multicolumn{2}{c}{Partition 2} \\ \hline M & \(DH\pm\sigma\) & \(DH\pm\sigma\) \\ \hline \hline 2 & 0.0399 & 0.001 & 0.059 & 0.001 \\ 3 & 0.079 & 0.001 & 0.136 & 0.002 \\ 4 & 0.099 & 0.001 & 0.182 & 0.002 \\ 5 & 0.112 & 0.001 & 0.208 & 0.002 \\ 6 & 0.121 & 0.001 & 0.226 & 0.002 \\ 7 & 0.128 & 0.001 & 0.244 & 0.002 \\ 8 & 0.134 & 0.001 & 0.257 & 0.002 \\ 9 & 0.139 & 0.001 & 0.267 & 0.002 \\ 10 & 0.142 & 0.001 & 0.274 & 0.002 \\ \hline \hline \end{tabular} \end{table} Table 4: The orbital \(DH\) similarity thresholds and their uncertainties applied in the search for duplicates among MDC shower data. On the left, threshold values for 498 orbits (P1- partition), on the right for 684 orbits, (P2-partition). The thresholds correspond to the reliability level \(>99\%\) for each group of \(M\) members and the distance functions \(DH\). The values provided in this table are closely related with the single-linkage method and the size of the MDC sample used in this study. the cluster analysis. Analogous to method I, we used a cluster analysis algorithm based on a single linking procedure. 
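Criterion (1) translates directly into code. The sketch below tests two shower solutions against the maximum-\(\sigma\) thresholds and then groups a list of solutions by simple transitive (single-linkage) chaining; the input records are invented, Perseid-like values used purely for illustration.

```python
import numpy as np

# maximum-sigma thresholds from criterion (1)
MAX_SIGMA = {"lam_sc": 5.0, "beta": 3.0, "vg": 2.5, "peri": 9.0, "node": 7.5, "incl": 5.0}

def ang_diff(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def are_duplicates(A, B):
    """Apply criterion (1) to two shower solutions given as dicts of mean parameters
    (angles in degrees, geocentric velocity in km/s)."""
    return (ang_diff(A["lam_sc"], B["lam_sc"]) < MAX_SIGMA["lam_sc"]
            and abs(A["beta"] - B["beta"]) < MAX_SIGMA["beta"]
            and abs(A["vg"] - B["vg"]) < MAX_SIGMA["vg"]
            and ang_diff(A["peri"], B["peri"]) < MAX_SIGMA["peri"]
            and ang_diff(A["node"], B["node"]) < MAX_SIGMA["node"]
            and abs(A["incl"] - B["incl"]) < MAX_SIGMA["incl"])

def single_link_groups(solutions):
    """Group solutions by single linkage: chains of pairwise duplicates end up together."""
    labels = list(range(len(solutions)))
    for i in range(len(solutions)):
        for j in range(i + 1, len(solutions)):
            if are_duplicates(solutions[i], solutions[j]):
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    return labels

# invented example: two near-identical Perseid-like solutions and one unrelated solution
sols = [
    dict(lam_sc=283.0, beta=38.3, vg=59.0, peri=150.0, node=139.5, incl=113.0),
    dict(lam_sc=284.5, beta=38.9, vg=58.1, peri=151.2, node=140.3, incl=112.2),
    dict(lam_sc=210.0, beta=-5.0, vg=28.0, peri=120.0, node=45.0, incl=5.0),
]
print(single_link_groups(sols))  # -> [0, 0, 2]
```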
In this approach, the inequalities (1) play a role identical to that of the D-function in the cluster analysis of orbital data. As in method-I, the cluster analysis was performed separately in P1 (494 orbits) and P2 (688 orbits).

### Method-I and method-II, important differences

The duplicate search methods used in this study are not equivalent to each other. In our opinion, this is an advantageous property, as it strengthens the reliability of convergent results obtained by these methods. Both methods use the same cluster analysis algorithm, but the way of evaluating the similarity of the parameters of two meteor showers is fundamentally different. In method-I, the heliocentric Keplerian elements of the orbits are compared. The \(D_{H}\) function measures the differences in the eccentricities, perihelion distances, and inclinations of the orbits. However, the difference in the orientation of the apsidal lines is calculated as the difference in the angular positions of the perihelion points measured from the common point of intersection of the two orbits. Figure 4 illustrates this issue. This means that in an automatic search by method-I, it will not always be possible to identify as separate groups the twin meteoroid streams or the northern and southern branches of showers. Their separation will have to be done 'manually'. For example, we may encounter such situations in the case of the Orionids and eta-Aquariids showers, or in the case of the Northern and Southern Taurids. For method-II, this kind of situation will not occur. The similarity of the showers is evaluated separately for each parameter, geocentric and heliocentric. We will return to this issue in later sections of our paper.

Figure 4: Illustration of the difference in the orientation of the apsidal lines in the \(D_{H}\)-function. \(E\) — stands for the ecliptic; \(O_{1}\) denotes the orbit of the northern branch of the meteoroid stream; \(O_{2}\) — the orbit of the southern branch of the meteoroid stream; \(N_{1},N_{2}\) are the ascending and descending nodes of the orbits where meteors have been observed; \(P_{1},P_{2}\) denote the perihelion points of the two orbits; \(N\) — the common node of both orbits; \(\pi\) — in red, denotes the difference in the orientation of the apsidal lines of the two orbits, measured from the common node \(N\).

Figure 3: Cumulative distributions of the standard deviations of eight mean parameters of the showers in the IAU MDC with the LookUp Tables available.

## 4 General results

General results are collected in Table 5. Method-I. Using the \(D_{H}\) function, among 1182 orbits a total of 85 MSS were found, comprising 456 duplicates. Some MSS consisted of only 2 solutions; the largest consisted of 62 solutions. The duplicates amounted to 38.7% of the shower solutions taken from our subset of 1182 MDC entries. Method-II. Applying the maximum-sigma approach among the 1182 showers, a total of 142 MSS were identified, comprising 433 duplicates; they accounted for 36.6% of the MDC showers. For the purpose of comparison, the last row of Table 5 shows the corresponding values calculated based on the data provided to the MDC by their authors. According to these authors, our subset of the MDC contains 532 (45.0%) duplicates representing 185 multi-solution meteor showers. This outcome allows us to conclude that method-I and method-II do not produce results fully consistent with what is stated in the MDC.
Compared to the MDC, our methods produce smaller numbers of MSS groups (85 and 142 in comparison to the 185 stated in the MDC), which contain \(\sim 100\) fewer duplicates (456 and 433 in comparison to 532 according to the MDC). These are significant differences in the number of MSS as well as in the overall number of duplicates, suggesting that the authors providing meteor data to the MDC quite often misclassify their showers. We found that dozens of the showers supplied to the MDC are unlikely to be duplicates. In Table 5 we can clearly see that method-I and method-II are not equivalent. This does not mean, however, that the methods used produce completely inadequate results. As far as the total number of duplicates is concerned, methods I and II gave convergent results, 456 and 433 duplicates, respectively. The results differ in the number of MSS identified, which means that the duplicates were classified into groups with different numbers of members, see Table 6. Before presenting and discussing the results obtained, we would like to recall that for method-I it was possible to estimate the probability of identifying MSS groups by chance; it was less than 1 per cent, see Section 3.1. For method-II, we do not have such an estimate. Similarly, we have no idea what the reliability is of the MSS findings made by the authors providing data to the MDC database.

## 5 Results and discussion. Less complex cases.

More detailed results are given in Table 6. In the Table, we see how many of the MSS were identified by method-I and method-II, and how many were reported in the MDC database. One can see that, in the case of a small number of duplicates (\(N_{DU}>3\)), the numbers of MSS reported in the MDC and obtained by method-II are far greater than the number of MSS identified by method-I. On the other hand, the most numerous MSS group, containing 62 duplicates, was identified only by method-I. At this stage, it is clear to us that the methods used in this study (and probably any other methods) cannot identify MSS groups exactly as they are given in the MDC, among other things because the authors providing data to the MDC followed different, unknown criteria from ours in classifying a shower into a particular group. However, we are confident that the proposed approach of verifying the duplicate content of the MDC database allows us to eliminate all obvious problematic cases, thus improving the MDC database. In the following subsections 5.1-5.2, we present results obtained directly (automatically) using our software. In subsections 5.3-6.1 we present results that, due to the nature of method-I and also due to the limitations of the statistical determination of the orbital similarity thresholds, required additional action on our part.

### New MSS identified in the present study

Among the 1182 shower solutions taken from the MDC, we identified 7 new MSS, listed in Table 7. Two duplicates were identified in each new MSS. The reliability of this result is supported by the low threshold values of orbital similarity used in method-I, corresponding to a \(<1\%\) probability of identifying each pair by chance, and by the fact that the pairs were also identified by method-II. \begin{table} \begin{tabular}{c|c c c} & \multicolumn{3}{c}{\(N_{MSS}\)} \\ \(N_{DU}\) & MDC & M-I & M-II \\ \hline \hline... &... &... &... \\ 22 & 0 & 1 & 0 \\ 23 & 0 & 0 & 0 \\... &... &... &...
\\ 62 & 0 & 1 & 0 \\ \hline \hline \end{tabular} \end{table} Table 6: The table shows the numbers \(N_{MSS}\) of multi-solution showers consisting of \(N_{DU}\) duplicates, identified by method-I and method-II. The values given in the second column were obtained based on the classification of the showers given in the MDC. \begin{table} \begin{tabular}{c c c c} & \(N_{M}\) & \(N_{D}\) & \(N_{D}\%\) \\ \hline \hline Method I & 85 & 456 & 38.7 \\ Method II & 142 & 433 & 36.6 \\ In MDC & 185 & 532 & 45.0 \\ \hline \hline \end{tabular} \end{table} Table 5: Total number of multi-solution showers \(N_{M}\), and number and percentage of duplicates \(N_{D}\) of all MSS identified among the 1182 showers in the MDC database by method-I and method-II. The last line lists the values of these figures based on the findings of the data providers to the MDC database.

### MSS fully confirmed in the present search

Table 8 lists the codes of 30 MSS identified identically by the two methods used. In each of these MSS, the same duplicates and the same number of duplicates were identified by both methods, exactly as they are listed in the MDC database. As one can see in Table 8, the \(DH_{M_{l}}\) threshold values with which method-I identified the MSS are always greater than the corresponding \(DH_{min}\) values. This supports the statement that among these MSS we are not dealing with false duplicates. The authors providing the data for these showers classified them in the right way. At the same time, this means that among the remaining 150 cases, some misclassification of duplicates is possible.

### MSS not identified in the present search

Among the 185 MSS reported in the MDC database, 56 groups were not identified as MSS by either of the two methods used. Table 9 states the orbital \(DH\) similarity values of the showers associated with these MSS. \(DH_{min}\) is the smallest threshold that would have to be used in the cluster analysis to identify, by method-I, all members of a given MSS as reported in the MDC database. \(DH_{M_{l}}\) is the maximum threshold with which the given MSS should be identified by method-I. It should be noted that, as mentioned in Section 3.1, the orbital similarity threshold values were determined separately for the P1 and P2 partitions. In Table 9, almost all the threshold values \(DH_{min}\) are too large (the values highlighted in bold) compared to the limiting \(DH_{M_{l}}\) values, the maximum acceptable threshold values to reliably identify, using method-I, a group of the size given in column \(N_{MDC}\). For example, \(DH_{min}=0.315\) for 128/00/MKA or \(DH_{min}=0.798\) for 0118/00/GNO are much too large, because, to identify an MSS with only 2 duplicates, the acceptable threshold value is \(DH_{M_{l}}=0.040\) or \(DH_{M_{l}}=0.063\), depending on the partition. Such small thresholds were used in the cluster analysis in method-I, which ensures that the probability of identifying a group of two duplicates by chance was less than 1%. For 40 MSS, all duplicates were rejected, and the rejected showers did not enter another MSS group. In 13 MSS, on the other hand, after rejecting false duplicates, only single solutions remained, which often entered another MSS group. As a result, we can claim that, except for 3/00/SIA, 127/00/MCA, and 321/00/TCB, the remaining 53 MSS listed in Table 9 should be considered false MSS, and the duplicates assigned to them should receive stand-alone shower status in the MDC database.
We also recognise the possibility that, in the MDC database, there are erroneous values among the parameters of the showers whose codes are given in Table 9. For this reason, it would be appropriate to ask (if possible) the authors providing the data to the MDC to check the correctness of the shower parameter values or the MSS classifications accordingly. As an example of a problematic case, we present the details of the shower 152/NOC (Table 9, line 21), which has a very high value of \(DH_{min}=0.644\). In the MDC database the 152/NOC shower, named the Northern Daytime omega-Cetids, is represented by 4 solutions listed in Table 10. As can be seen in Table 10, compared to the others, solution 2 has clearly different values of the orbital angular elements. \begin{table} \begin{tabular}{r r r r} & \multicolumn{1}{c}{Shower Code} & \(N\) & \(DH_{M_{l}}\) \\ \hline \hline 1 & 0246/00/AMO & 2 & 0.059 \\ & 1196/00/ZCM & & \\ 2 & 0575/00/SAU & 2 & 0.059 \\ & 1151/00/NPA & & \\ 3 & 0658/00/EDR & 2 & 0.039 \\ & 1110/00/CEP & & \\ 4 & 1071/00/HDD & 2 & 0.059 \\ & 1108/00/HTR & & \\ 5 & 1089/00/CTS & 2 & 0.039 \\ & 1157/00/FCD & & \\ 6 & 1097/00/DOH & 2 & 0.059 \\ & 1161/00/THT & & \\ 7 & 1099/00/JED & 2 & 0.059 \\ & 1107/00/JID & & \\ \hline \hline \end{tabular} \end{table} Table 7: List of the 7 new MSS found among the 1182 meteor shower data, identified by both of the applied methods. The individual columns give: the MDC code designation of the MSS members, \(N\) — the number of duplicates provided for each MSS by both methods, \(DH_{M_{l}}\) — for method-I, the maximum acceptable threshold value to reliably identify each pair. \begin{table} \begin{tabular}{r r r r r} & \multicolumn{1}{c}{Shower Code} & \(N_{MDC}\) & \(DH_{Min}\) & \(DH_{M_{l}}\) \\ \hline \hline 1 & 0004/00/GEM & 5 & 0.020 & 0.039 \\ 2 & 0006/00/LYR & 6 & 0.045 & 0.059 \\ 3 & 0007/00/PER & 3 & 0.011 & 0.059 \\ 4 & 0010/00/QUA & 6 & 0.050 & 0.059 \\ 5 & 0016/00/HYD & 3 & 0.095 & 0.136 \\ 6 & 0019/00/MON & 5 & 0.044 & 0.079 \\ 7 & 0022/00/LMI & 2 & 0.037 & 0.059 \\ 8 & 0110/00/AAN & 5 & 0.109 & 0.136 \\ 9 & 0171/00/ARI & 5 & 0.090 & 0.099 \\ 10 & 0184/02/CDR & 2 & 0.006 & 0.059 \\ 11 & 0191/00/ERI & 2 & 0.049 & 0.059 \\ 12 & 0250/00/NOO & 3 & 0.053 & 0.079 \\ 13 & 0323/00/XCB & 4 & 0.100 & 0.136 \\ 14 & 0331/00/AHY & 3 & 0.050 & 0.059 \\ 15 & 0341/02/XUM & 2 & 0.025 & 0.059 \\ 16 & 0429/00/ACB & 3 & 0.041 & 0.059 \\ 17 & 0444/00/ZCS & 2 & 0.045 & 0.059 \\ 18 & 0446/00/DPC & 2 & 0.010 & 0.039 \\ 19 & 0458/00/JEC & 3 & 0.081 & 0.136 \\ 20 & 0464/00/KLY & 3 & 0.030 & 0.039 \\ 21 & 0465/00/XXC & 3 & 0.064 & 0.136 \\ 22 & 0497/01/DAB & 2 & 0.048 & 0.059 \\ 23 & 0506/00/FEV & 2 & 0.055 & 0.059 \\ 24 & 0519/00/BAQ & 2 & 0.041 & 0.059 \\ 25 & 0523/00/AGC & 2 & 0.030 & 0.059 \\ 26 & 0525/00/ICY & 2 & 0.031 & 0.039 \\ 27 & 0526/00/SLD & 3 & 0.032 & 0.059 \\ 28 & 0529/00/EHY & 3 & 0.041 & 0.059 \\ 29 & 0570/00/FBH & 2 & 0.024 & 0.059 \\ 30 & 1044/00/EPU & 2 & 0.034 & 0.059 \\ \hline \hline \end{tabular} \end{table} Table 8: List of the 30 MSS for which exactly the same duplicates were found by both of the applied methods and as given in the MDC. The individual columns give: the MDC code designation of the MSS, \(N_{MDC}\) — the number of duplicates provided for the MSS in the MDC, \(DH_{Min}\) — the minimum threshold required to identify all members of the MSS provided in the MDC. The last column, \(DH_{M_{l}}\), gives the maximum acceptable threshold value to reliably identify, using method-I, a group of the size in column \(N_{MDC}\).
In particular, it has a much smaller orbit inclination value. Hence, in our opinion, solution 2 was wrongly classified as belonging to this MSS. This solution was published by Nilsson (1964); the shower was identified among radio-observed meteors. Nilsson did not propose a name for this shower, and in Table 4 of his paper he identified it as 'Gr. 61.5.3'. This shower consists of 10 members. Nilsson's identification turned out to be correct; it was confirmed by a completely different cluster analysis method by Jopek et al. (1999a); in Table 2 of that work, the stream was named mu-Arietids. It is unclear to us why this shower was classified in the Jenniskens (2006) book as another solution of the 152/NOC shower. An additional cluster analysis performed without this solution showed that the remaining 3 solutions of 152/NOC were identified exactly as given in the MDC database. The example discussed above leads us to believe that we have a similar situation for many of the other showers listed in Table 9 as well. These will be presented and discussed in detail in our next publication.

### MSS identified only by method-II

The results presented in the previous sections were for MSS identically identified, or not identified, by both method-I and method-II. In the current section, we present results obtained only with method-II. In Table 11, we report 20 MSS occurring in the MDC database that are fully confirmed, but by method-II only. At the same time, according to method-I, none of the duplicates in question are members of any MSS. The reason is that the solutions classified in the MDC as duplicates of these showers are too dissimilar for method-I. Indeed, the acceptable \(DH_{M_{l}}\) thresholds given in Table 11 are always less than those that would have to be used in the cluster analysis to consider the duplicates belonging to a given MSS as properly classified. However, unlike for the contents of Table 9, this time the differences between the \(DH_{min}\) and \(DH_{M_{l}}\) values are noticeably smaller. There could be two reasons for this: either the thresholds used in method-I are too stringent (which is possible), or method-II is too tolerant, e.g. because method-II does not take into account differences in the orbital elements \(e\) and \(q\), which determine the shape and size of the orbit. It is almost certain that introducing these elements into conditions (1) would, at the least, reduce the number of MSS listed in Table 11. But other reasons are also possible, such as errors in the parameters of the showers supplied to the MDC database, or failure to meet the assumptions we mentioned in Section 3, namely that the individual averaging of meteoroid parameters introduces additional differences between the compared parameters. Due to the small differences between the \(DH_{min}\) and \(DH_{M_{l}}\) values, at this stage of our research we considered the MSS showers confirmed by either method to be fully confirmed.

## 6 Results and discussion. More difficult cases.

The inconsistencies in the results obtained by the two methods, discussed in the previous section, do not exhaust all the inconsistencies we encountered in our study. We present them in the following subsections.

### MSS sufficiently confirmed

The results given in the previous sections were selected 'automatically' using our software. Here we present results confirming the existence of a given MSS in the MDC database. However, some of them were not confirmed 'automatically' by both methods.
Sometimes the members of a given MSS were picked out of the more complex identified groups in a 'manual' manner. Table 12 lists seven MSS, identified 'automatically', for which the numbers of duplicates identified by both methods are more or less close to the number reported in the MDC database. Compared to what is given in the MDC database, the differences are quantitative, consisting of deleting or adding members to the group. Hence, in our opinion, all MSS listed in Table 12 can be considered sufficiently confirmed.

### MSS confirmed after manual action

However, in a dozen cases, MSS identification required manual intervention in the results obtained automatically. Here we are referring to MSS identified with a distinctly larger number of duplicates, which included other groups treated as separate MSS in the MDC database. An extreme case was 2/00/STA (Southern Taurids), in which method-I identified 62 duplicates. The actual meteoroid streams may have a different structure in the parameter space used in method-I and method-II. These parameters may occupy volumes that resemble hyper-spheres in five-dimensional space (Geminids), but equally well they may occupy more extended volumes (Taurids). The reason for the complexity of the results discussed in this section is related, among other things, to a property of the single-linkage algorithm used in the cluster analysis in both methods. With this algorithm, the so-called chain effect is possible. As a result of this, successively connected group members form a chain-like structure, which, if too high a threshold value is applied, leads to the identification of an unrealistic group of orbits. For these reasons, determining the correct threshold values of orbital similarity by statistical methods, taking into account only the abundance of the meteoroid stream, may lead to unrealistic results. This chain effect is clearly stronger in the case of method-I for the shower 2/00/STA -- the Southern Taurids. An ecliptic group of 62 duplicates was identified by method-I, including all the duplicates of the Northern and Southern Taurids showers, but also 41 duplicates belonging to a dozen other MSS. Another reason for the variation in results is the properties of the parameter similarity metrics used, as we mentioned in Section 3.3. Table 13 contains a list of 27 MSS obtained manually from the results identified automatically. All of the MSS in this table can be considered confirmed, but we believe it would be advisable to examine each of them in a separate detailed study.

## 7 Conclusion and discussion of future action

Two methods were used to search for MSS among the 1182 meteor shower solutions selected from the MDC database. The obtained results confirmed the effectiveness of the proposed approach to identifying duplicates. We have shown that, in order to detect and verify duplicate meteor showers, it is possible to apply an objective procedure instead of the subjective approach used so far. However, it appears that revealing duplicate showers is not a simple task. One must face a wide variety of properties of the various meteor showers. There are not only compact showers, with each parameter ranging over a relatively narrow interval, but also showers with largely dispersed parameters. Sometimes a shower possesses structural features, and it is only a matter of convention whether to regard these features as substructures of the given shower or as autonomous showers.
In our study a number of results of varying significance were obtained: (i) seven new MSS, each represented by two or more sets of parameters, were discovered; (ii) for 30 MSS there was full agreement between our results and those reported in the MDC database; (iii) for 20 MSS the same duplicates as given in the MDC were found, but only by one method; (iv) we found 34 MSS for which the number of the same duplicates found by both methods is close to the corresponding number in the MDC database; (v) for 56 MSS listed in the MDC, no duplicates were found by either of the applied methods. We consider the identification of 34 + 56 problematic cases in the MDC database, among which at least some duplicates were misclassified, to be a particularly important result. The correction of these cases will significantly improve the content of the MDC database. As shown in Section 5.3, such an adjustment is possible, but it always requires a meticulous approach, so we decided to pursue it in subsequent studies. Determining the correct duplicates of an MSS is important when it comes to giving a shower its established status and, consequently, its official shower name. In the Hajdukova et al. (2023) paper, one of the criteria that must be met for this purpose is that the shower must be represented by at least two sets of parameters (duplicates) determined by independent authors. In any case, we are convinced that our work helps to identify the problems related to the duplicate issue, and we expect that a wide discussion in the meteor-research community will follow.

###### Acknowledgements.

This work was supported, in part, by the VEGA - Slovak Grant Agency for Science, grant No. 2/0009/22. This research has made use of NASA's Astrophysics Data System Bibliographic Services.
2310.00143
Static solutions to symplectic curvature flow in dimension four
This article studies special solutions to symplectic curvature flow in dimension four. Firstly, we derive a local normal form for static solutions in terms of holomorphic data and use this normal form to show that every complete static solution to symplectic curvature flow in dimension four is Kahler-Einstein. Secondly, we perform an exterior differential systems analysis of the soliton equation for symplectic curvature flow and use the Cartan-Kahler theorem to prove a local existence and generality theorem for solitons.
Gavin Ball
2023-09-29T21:05:51Z
http://arxiv.org/abs/2310.00143v2
# Static solutions to symplectic curvature flow in dimension four

###### Abstract.

This article studies special solutions to symplectic curvature flow in dimension four. Firstly, we derive a local normal form for static solutions in terms of holomorphic data and use this normal form to show that every complete static solution to symplectic curvature flow in dimension four is Kahler-Einstein. Secondly, we perform an exterior differential systems analysis of the soliton equation for symplectic curvature flow and use the Cartan-Kahler theorem to prove a local existence and generality theorem for solitons.

###### Contents

* 1 Introduction
* 2 Structure equations
* 3 Static solutions in dimension four
* 4 Local existence of soliton solutions

## 1. Introduction

An almost-Kahler manifold \((X,\Omega,J)\) consists of an even-dimensional manifold \(X\) endowed with a symplectic form \(\Omega\) and a compatible almost complex structure \(J.\) Together, \(\Omega\) and \(J\) define a Riemannian metric \(g\) on \(X.\) One perspective on almost-Kahler geometry is to fix a symplectic form \(\Omega\) on \(X,\) choose a compatible \(J,\) and think of \(J\) and \(g\) as auxiliary tools used to study the symplectic geometry of \((X,\Omega)\). In this direction, symplectic curvature flow is a degenerate parabolic evolution equation for almost-Kahler structures introduced by Streets-Tian [10] given by \[\begin{split}\frac{\partial}{\partial t}\Omega&=-2\rho,\\ \frac{\partial}{\partial t}g&=-2\rho^{1,1}-2\,\mathrm{Ric}^{2,0+0,2},\end{split} \tag{1.1}\] where \(\rho\) is the Chern-Ricci form of \((\Omega,J)\) and Ric is the Ricci tensor of \(g.\) The flow (1.1) preserves the symplectic condition \(d\Omega=0\) and restricts to Kahler-Ricci flow in the case where \(J\) is integrable. The idea behind symplectic curvature flow is to evolve an initial almost-Kahler structure on \(X\) towards some canonical structure. In their initial paper, Streets & Tian prove short time existence for (1.1), but it is to be expected that symplectic curvature flow will encounter singularities in general. Analogy with other geometric flows, especially Ricci flow, suggests that singularity formation should be modeled on _soliton_ solutions to (1.1). **Definition 1.1**.: _An almost-Kahler manifold \((X,\Omega,J,g)\) is a symplectic curvature flow soliton if there exists a constant \(\lambda\in\mathbb{R}\) and a vector field \(V\) on \(X\) such that_ \[\begin{split}\lambda\Omega+\mathcal{L}_{V}\Omega&=-2\rho,\\ \lambda g+\mathcal{L}_{V}g&=-2\rho^{1,1}-2\,\mathrm{Ric}^{2,0+0,2}.\end{split} \tag{1.2}\] _If \(\lambda>0\), \(\lambda=0\), or \(\lambda<0\), we say the soliton is expanding, steady, or shrinking, respectively. If the vector field \(V\) vanishes identically, then we say \((X,\Omega,J,g)\) is a static solution to symplectic curvature flow._ The soliton solutions are precisely the almost-Kahler structures which evolve by rescaling and diffeomorphisms along (1.1). The static solutions are the structures which evolve purely by rescaling. For Ricci flow, the static solutions are the Einstein metrics, and so static solutions to symplectic curvature flow can be thought of as the almost-Kahler analogues of Einstein metrics. ### Results In this paper, we study soliton and static solutions to symplectic curvature flow in dimension four using the techniques of exterior differential systems and the moving frame. These techniques are well-suited to the non-linear, overdetermined PDE system (1.2).
Our first main result is Theorem 3.3, which gives a local normal form for static solutions. We summarize it here as: **Theorem 1.2**.: _Every four-dimensional static solution to symplectic curvature flow is steady (\(\lambda=0\)). If \((X^{4},\Omega,J,g)\) is a static solution and \(p\in X\) is a point where the Nijenhuis tensor of \(J\) is non-vanishing, then there is a neighbourhood of \(p\) with complex coordinates \(z_{1},z_{2}\) and a holomorphic function \(h(z_{1})\) such that the almost-Kahler structure on \(X\) is given by Equation (3.6)._ The geometry of the local normal form (3.6) is constrained enough to preclude the existence of non-trivial complete static solutions. **Corollary 1.3**.: _A complete static solution to symplectic curvature flow in dimension four is Kahler-Einstein._ This corollary can be compared to results by Streets-Tian [10] and Kelleher [6] on static solutions. By contrast to their techniques, our calculations are primarily local in nature. We also note that Pook [9] has constructed compact static solutions to symplectic curvature flow in dimensions \(n(n+1)\). The key ingredient in the proof of Theorem 3.3 is an integrability condition that must be satisfied by any static solution. The existence of this integrability condition implies that the overdetermined PDE system describing static solutions is not involutive in the sense of exterior differential systems. It is then natural to ask if the more general soliton system is involutive. Our second main result, Theorem 4.1, shows that the soliton system is involutive. Hence, the Cartan-Kahler Theorem may be applied to prove the existence of solutions. We summarize this result here as: **Theorem 1.4**.: _The overdetermined PDE system for four-dimensional symplectic curvature flow solitons is involutive and symplectic curvature flow solitons exist locally._ In §4.1 we provide an explicit example of a homogeneous symplectic curvature flow soliton. Further examples have appeared in [8] and explicit solutions to symplectic curvature flow have been analyzed in [5, 7, 9].

## 2. Structure equations

Let \(X\) be a \(4\)-manifold endowed with an almost Kahler structure \(\left(\Omega,J\right),\) that is to say \(\Omega\) is a symplectic form on \(X\) and \(J\) is an \(\Omega\)-compatible almost complex structure on \(X\). Together, \(\Omega\) and \(J\) determine a Riemannian metric \(g\) on \(X.\) Let \(\mathcal{P}\) denote the \(\mathrm{U}(2)\)-structure on \(X\) determined by the almost Kahler structure \(\left(\Omega,J\right)\).
Explicitly, \(\pi:\mathcal{P}\to X\) is the principal \(\mathrm{U}(2)\)-bundle defined by \[\mathcal{P}=\left\{u:T_{p}X\to\mathbb{C}^{2}\mid u\text{ is a complex linear isomorphism},\ \ u^{*}\Omega_{\mathrm{Std}}=\Omega_{p}\right\},\] where \(\Omega_{\mathrm{Std}}\) denotes the standard Kahler form on \(\mathbb{C}^{2},\) \[\Omega_{\mathrm{Std}}=\tfrac{i}{2}\left(e^{1}\wedge\overline{e^{1}}+e^{2} \wedge\overline{e^{2}}\right).\] Let \(\eta\) denote the \(\mathbb{C}^{2}\)-valued tautological \(1\)-form on \(\mathcal{P},\) defined by \(\eta(v)=u(\pi_{*}(v)),\) for \(v\in T_{u}\mathcal{P}.\) Denote the components of \(\eta\) with respect to the standard basis \(e_{1},\)\(e_{2}\) of \(\mathbb{C}^{2}\) by \(\eta_{1},\eta_{2}.\) The forms \(\eta_{1},\eta_{2}\) are a basis for the semi-basic forms on \(\mathcal{P},\) and they encode the almost complex structure \(J\) in the sense that a \(1\)-form \(\theta\) on \(X\) is a \((1,0)\)-form if and only if the pullback \(\pi^{*}\theta\) lies in \(\mathrm{span}(\eta_{1},\eta_{2}).\) We also have that, on \(\mathcal{P},\) \[\Omega =\tfrac{i}{2}\eta_{i}\wedge\overline{\eta_{i}},\] \[g =\eta_{i}\cdot\overline{\eta_{i}},\] where \(1\leq i\leq 2\) and the unitary summation convention is employed (as it will be in the remainder of this article). ### The first structure equation On \(\mathcal{P},\) Cartan's first structure equation reads \[d\eta_{i}=-\kappa_{i\overline{j}}\wedge\eta_{j}-\xi_{ij}\wedge\overline{\eta_{j}}, \tag{2.1}\] where \(\kappa_{i\overline{j}}=-\overline{\kappa_{j\overline{i}}}\) and \(\xi_{ij}=-\xi_{ji}.\) Here, \(\kappa\) is the \(\mathfrak{u}(2)\)-valued connection form for the _Chern connection_ on \(X.\) The \(\Lambda^{2}_{\mathbb{C}}\)-valued \(1\)-form \(\xi_{ij}\) is \(\pi\)-semibasic, so that there exist functions \(A_{ij\overline{k}}\) and \(N_{ijk}\) on \(X\) such that \[\xi_{ij}=A_{ij\overline{k}}\eta_{k}+N_{ijk}\overline{\eta_{k}}.\] It is easily checked that the symplectic condition \(d\Omega=0\) implies \(A_{ij\overline{k}}=0.\) Cartan's first structure equation (2.1) may therefore be rewritten as \[d\eta_{i}=-\kappa_{i\overline{j}}\wedge\eta_{j}+N_{ijk}\,\overline{\eta_{j}} \wedge\overline{\eta_{k}}. \tag{2.2}\] The tensor \(N=N_{ijk}\,(\overline{\eta_{i}\wedge\eta_{j}})\otimes\overline{\eta_{k}}\) descends to \(X\) to give a well-defined section of \(\Lambda^{0,2}\otimes\Lambda^{0,1}.\) In fact, \(N\) is simply the _Nijenhuis tensor_ of the almost complex structure \(J.\) The structure equations (2.2) are valid in any even dimension. However, they may be written in a simpler form in complex dimension \(2,\) by exploiting the fact that \(\Lambda^{2}_{\mathbb{C}}\) is \(1\)-dimensional, and this simplification is worthwhile because it leads to a simplification of the second structure equation. Let \(\varepsilon_{ij}\) denote the totally skew-symmetric symbol with \(\varepsilon_{12}=1/2.\) Let \(N_{i}\) denote the functions on \(\mathcal{P}\) defined by \[N_{ijk}=\varepsilon_{ij}N_{k}.\] The first structure equation (2.2) may be rewritten as \[d\eta_{i}=-\kappa_{i\overline{j}}\wedge\eta_{j}+\varepsilon_{ij}N_{k}\, \overline{\eta_{j}}\wedge\overline{\eta_{k}}. 
\tag{2.3}\] ### The second structure equations The identity \(d^{2}=0\) applied to equation (2.3) implies the following equations: \[\begin{split} dN_{i}=&\overline{\varepsilon_{jk}}A_{ ij}\eta_{k}+B\,\eta_{i}+(F_{ij}+\varepsilon_{ij}H)\,\overline{\eta_{j}}- \kappa_{i\overline{j}}N_{j}-\kappa_{j\overline{j}}N_{i},\\ d\kappa_{i\overline{j}}=&-\kappa_{i\overline{k}} \wedge\kappa_{k\overline{j}}+K_{i\overline{j}k\overline{l}}\overline{\eta_{k} }\wedge\eta_{l}+\left(R+N_{l}\overline{N_{l}}\right)\left(-\frac{1}{3}\eta_{i} \wedge\overline{\eta_{j}}-\frac{1}{3}\delta_{i\overline{j}}\eta_{k}\wedge \overline{\eta_{k}}\right)\\ &+N_{i}\overline{N_{j}}\eta_{k}\wedge\overline{\eta_{k}}+Q_{i \overline{k}}\eta_{k}\wedge\overline{\eta_{j}}+Q_{k\overline{j}}\eta_{i}\wedge \overline{\eta_{k}}-\frac{1}{2}A_{ik}\overline{\eta_{j}}\wedge\overline{\eta_ {k}}+\frac{1}{2}\overline{A_{jk}}\eta_{i}\wedge\eta_{k}\\ &+2B\,\varepsilon_{ik}\overline{\eta_{j}}\wedge\overline{\eta_{k} }-2\overline{B}\overline{\varepsilon_{jk}}\eta_{i}\wedge\eta_{k},\end{split} \tag{2.4}\] for functions \(A_{ij},B,F_{ij},H,K_{ij\overline{k}l},Q_{i\overline{j}},R\) on \(\mathcal{P}\) having the following symmetries: \[\begin{split} A_{ij}=A_{ji},& K_{i\overline{j}k \overline{l}}=K_{k\overline{j}i\bar{l}}\quad Q_{i\overline{j}}=-\overline{Q_{ j\overline{i}}}\\ F_{ij}=F_{ji},& K_{i\overline{j}k\overline{l}}=K_{i \overline{l}k\overline{j}}\quad Q_{i\overline{i}}=0\\ & K_{i\overline{i}k\overline{l}}=0\\ & K_{i\overline{j}k\overline{l}}=\overline{K_{j\overline{i}k }}\end{split}\] Each of these functions takes values in an irreducible \(\mathrm{U}(2)\)-representation, and the right-hand-side of the equation for \(d\kappa\) in (2.4) represents the irreducible decomposition of the curvature of the Chern connection of \(X.\) The equations (2.4) are called the second structure equations of the almost Kahler structure \((g,\Omega).\) By a classical theorem of Cartan, the functions \(A_{ij},B,F_{ij},K_{i\overline{j}k\overline{l}},Q_{i\overline{j}},R\) form a complete set of second-order invariants of \((g,\Omega).\) The second equation of (2.4) gives the curvature of the Chern connection on \(X.\) The Chern-Ricci form is the trace of this curvature: \[d\kappa_{i\overline{i}}=-R\eta_{i}\wedge\overline{\eta_{i}}-4i\,\mathrm{Im} \left(\overline{B}\eta_{1}\wedge\eta_{2}\right)-2Q_{i\overline{j}}\overline{ \eta_{i}}\wedge\eta_{j}.\] #### 2.2.1. The curvature of \(g\) The Riemann curvature tensor of a \(4\)-dimensional manifold is a section of a vector bundle modeled on the \(\mathrm{SO}(4)\) representation \[\mathrm{Sym}^{2}\left(\Lambda^{2}\mathbb{R}^{4}\right)\cong\mathbb{R}\oplus \mathrm{Sym}^{2}_{0}\mathbb{R}^{4}\oplus\mathrm{Sym}^{2}_{0}\left(\Lambda^{2}_ {+}\mathbb{R}^{4}\right)\oplus\mathrm{Sym}^{2}_{0}\left(\Lambda^{2}_{-} \mathbb{R}^{4}\right).\] Corresponding to each irreducible component, we have the scalar curvature \(r,\) the traceless Ricci curvature \(\mathrm{Ric}^{0},\) and the self-dual \(W^{+}\) and anti-self dual \(W^{-}\) Weyl tensors respectively. The second structure equations (2.4) can be compared with the structure equations of a Riemannian manifold to write the components of the Riemann curvature tensor of \(g\) in terms of the first and second order invariants of the almost Kahler structure. 
The result of this calculation is: \[\begin{split}\operatorname{Scal}(g)&=-8N_{i}\overline {N_{i}}-8R,\\ \operatorname{Ric}(g)&=A_{ij}\overline{\eta_{i}}\cdot \overline{\eta_{j}}+\overline{A_{ij}}\eta_{i}\cdot\eta_{j}+\left(Q_{i\bar{j}} +N_{i}\overline{N_{j}}\right)\overline{\eta_{i}}\cdot\eta_{j}-\left(\tfrac{1} {2}R+N_{k}\overline{N_{k}}\right)\eta_{i}\cdot\overline{\eta_{i}},\\ W^{+}(g)&=\left(4N_{i}\overline{N_{i}}-4R\right) \Omega^{2}-8iB\,\Omega\cdot\left(\eta_{1}\wedge\eta_{2}\right)+8i\overline{B} \,\Omega\cdot\left(\overline{\eta_{1}\wedge\eta_{2}}\right)\\ &+2H\,\left(\eta_{1}\wedge\eta_{2}\right)^{2}+2\overline{H}\, \left(\overline{\eta_{1}\wedge\eta_{2}}\right)^{2}-4N_{i}\overline{N_{i}}\left( \eta_{1}\wedge\eta_{2}\right)\cdot\left(\overline{\eta_{1}\wedge\eta_{2}} \right),\\ W^{-}(g)&=-2K_{i\bar{j}k\bar{l}}\left(\overline{ \eta}_{i}\wedge\eta_{j}\right)\cdot\left(\overline{\eta_{k}}\wedge\eta_{l} \right),\end{split}\] where here we are viewing the Ricci tensor as a symmetric 2-tensor, and the self-dual and anti-self-dual Weyl tensors as sections of \(\operatorname{Sym}_{0}^{2}\left(\Lambda_{\pm}^{2}TX\right).\) #### 2.2.2. Symplectic curvature flow The right hand side of the symplectic curvature flow equation (1.1) may also be written in terms of second and first order invariants of \(\left(g,\Omega\right).\) We have \[\begin{split}\frac{\partial}{\partial t}\Omega&=4R \,\Omega+4iQ_{i\bar{j}}\,\overline{\eta_{i}}\wedge\eta_{j}-8\,\operatorname{ Im}\left(\overline{B}\eta_{1}\wedge\eta_{2}\right),\\ \frac{\partial}{\partial t}g&=4R\,g-8Q_{i\bar{j}} \overline{\eta_{i}}\cdot\eta_{j}-4\operatorname{Re}(A_{ij}\overline{\eta_{i}} \cdot\overline{\eta_{j}}),\\ \frac{\partial}{\partial t}J&=\left(8\operatorname{ Im}\left(\overline{B}\eta_{1}\wedge\eta_{2}\right)-4\operatorname{Re} \left(A_{ij}\overline{\eta_{i}}\wedge\overline{\eta_{j}}\right)\right)g^{-1}, \end{split} \tag{2.5}\] ## 3. Static solutions in dimension four We now study static solutions to symplectic curvature flow in dimension 4. These are solutions which evolve strictly by rescaling under the flow, so for such a solution we must have \[\begin{split}\frac{\partial}{\partial t}\Omega&= \lambda\Omega,\\ \frac{\partial}{\partial t}g&=\lambda g,\\ \frac{\partial}{\partial t}J&=0.\end{split} \tag{3.1}\] Comparing with (2.5), we see that the static almost Kahler structures are characterized by the following conditions on their second order invariants: \[R=\frac{\lambda}{4},\ \ B=0,\ \ Q_{i\bar{j}}=0,\ \ A_{ij}=0. 
\tag{3.2}\] Therefore, the first and second structure equations for a static solution reduce to \[\begin{split} d\eta_{i}&=-\kappa_{i\overline{j}}\wedge\eta_{j}+\varepsilon_{ij}N_{k}\,\overline{\eta_{j}}\wedge\overline{\eta_{k}},\\ dN_{i}&=(F_{ij}+\varepsilon_{ij}H)\,\overline{\eta_{j}}-\kappa_{i\overline{j}}N_{j}-\kappa_{j\overline{j}}N_{i},\\ d\kappa_{i\overline{j}}&=-\kappa_{i\overline{k}}\wedge\kappa_{k\overline{j}}+K_{i\bar{j}k\bar{l}}\overline{\eta_{k}}\wedge\eta_{l}+\left(\frac{\lambda}{4}+N_{l}\overline{N_{l}}\right)\left(-\tfrac{1}{3}\eta_{i}\wedge\overline{\eta_{j}}-\tfrac{1}{3}\delta_{i\bar{j}}\eta_{k}\wedge\overline{\eta_{k}}\right)\\ &\quad+N_{i}\overline{N_{j}}\eta_{k}\wedge\overline{\eta_{k}}+Q_{i\bar{k}}\eta_{k}\wedge\overline{\eta_{j}}.\end{split}\] Differentiating the second equation, we find \[0=d^{2}N_{i}\wedge\overline{\eta_{1}\wedge\eta_{2}}=\left(F_{ij}\overline{N_{j}}+\frac{1}{2}\varepsilon_{ij}H\overline{N_{j}}\right)\eta_{1}\wedge\eta_{2}\wedge\overline{\eta_{1}\wedge\eta_{2}}.\] The equations \[F_{ij}\overline{N_{j}}+\frac{1}{2}\varepsilon_{ij}H\overline{N_{j}}=0,\ \ i=1,2 \tag{3.3}\] must therefore be satisfied by any static almost Kahler structure. These equations give a restriction on the second-order invariants of a static solution which is not an algebraic consequence of (3.2), so the static equations (3.2) are not involutive in the sense of exterior differential systems. The equation (3.3) may be simplified by adapting coframes. Suppose \(N\) is non-zero at a point \(p\in X\). Say a coframe \(u\in\mathcal{P}_{p}\) is \(N\)-adapted if \(N_{2}=0\) at \(u\in\mathcal{P}.\) The group \(\operatorname{U}(2)\) acts transitively on \(\Lambda^{0,2}\otimes\Lambda^{0,1}\) with stabilizer \(\operatorname{U}(1)\times\operatorname{U}(1)\), so the collection of all \(N\)-adapted coframes is a \(\operatorname{U}(1)\times\operatorname{U}(1)\)-bundle over the locus in \(X\) where \(N\neq 0.\) For simplicity, we shall assume from now on that \(N\) is nowhere vanishing on \(X\) (otherwise restrict to the open dense locus where \(N\neq 0\)).
Denote the bundle of \(N\)-adapted coframes by \(\mathscr{P}^{\prime}\to X.\) Equations (3.3) imply that, on \(\mathscr{P}^{\prime},\) \[F_{11}=0,\ \ F_{12}=\tfrac{1}{2}H.\] Differentiating the identity \(N_{2}=0,\) we find that on \(\mathscr{P}^{\prime}\) \[0=F_{22}\overline{\eta_{2}}-N_{1}\kappa_{2\bar{1}}.\] Define \(G=N_{1}F_{22},\) so we have \(\kappa_{2\bar{1}}=G\,\overline{\eta_{2}}.\) Let us also define \(\mathbb{R}\)-valued \(1\)-forms \(\alpha=-i\kappa_{1\bar{1}}\) and \(\beta=-i\kappa_{2\bar{2}}.\) The forms \(\alpha\) and \(\beta\) together define a connection on the \(\operatorname{U}(1)\times\operatorname{U}(1)\)-bundle \(\mathscr{P}^{\prime}.\) The structure equations restricted to \(\mathscr{P}^{\prime}\) now read \[d\eta_{1}= i\alpha\wedge\eta_{1}+N_{1}\overline{\eta_{1}\wedge\eta_{2}},\] \[d\eta_{2}= i\beta\wedge\eta_{2}+G\eta_{1}\wedge\overline{\eta_{2}},\] \[dN_{1}= H\overline{\eta_{2}}+2iN_{1}\alpha+iN_{1}\beta,\] \[d\alpha= i\left(\tfrac{1}{3}|N_{1}|^{2}-\tfrac{1}{6}\lambda-K_{1\bar{1}1 \bar{1}}\right)\eta_{1}\wedge\overline{\eta_{1}}+iK_{22\bar{2}1}\eta_{1} \wedge\overline{\eta_{2}}-iK_{1\bar{1}1\bar{2}}\eta_{2}\wedge\overline{\eta_{ 1}}\] \[+i\left(\tfrac{2}{3}|N_{1}|^{2}-\tfrac{1}{12}\lambda+|G|^{2}+K_{1 \bar{1}1\bar{1}}\right)\eta_{2}\wedge\overline{\eta_{2}},\] \[d\beta= i\left(-\tfrac{1}{3}|N_{1}|^{2}-\tfrac{1}{12}\lambda+K_{1\bar{1} 1\bar{1}}\right)\eta_{1}\wedge\overline{\eta_{1}}-iK_{2\bar{2}2\bar{1}}\eta_{ 1}\wedge\overline{\eta_{2}}+iK_{1\bar{1}1\bar{2}}\eta_{2}\wedge\overline{\eta_ {1}}\] \[+i\left(-\tfrac{2}{3}|N_{1}|^{2}-\tfrac{1}{6}\lambda-|G|^{2}-K_{1 \bar{1}1\bar{1}}\right)\eta_{2}\wedge\overline{\eta_{2}}\] The identities \(d^{2}\eta_{i}=0\) imply \[K_{1\bar{1}1\bar{2}} =0,\ \ K_{2\bar{2}2\bar{1}}=0,\] \[K_{1\bar{1}1\bar{1}} =\tfrac{1}{3}|N_{1}|^{2}+\tfrac{1}{12}\lambda-|G|^{2},\] \[d\,G =G_{1}\eta_{1}+G_{2}\overline{\eta_{2}}-iG\alpha+2iG\beta,\] for some \(\mathbb{C}\)-valued functions \(G_{\bar{1}}\) and \(G_{2}\) on \(\mathscr{P}^{\prime}.\) The identity \(d^{2}\alpha=0\) implies \(GG_{2}=0,\) so \(G_{2}\) must vanish identically on \(\mathscr{P}^{\prime}\) (since the vanishing of \(G\) implies the vanishing of \(G_{2}\)). Next, differentiating the identity \(\kappa_{2\bar{1}}=G\,\overline{\eta_{2}}\) implies \[K_{121\bar{2}}=-\overline{G_{\bar{1}}},\ \ \ K_{2\bar{1}2\bar{1}}=-G_{\bar{1}}.\] The identity \(d^{2}G=0\) yields \[N_{1}G_{\bar{1}}\eta_{1}\wedge\overline{\eta_{1}\wedge\eta_{2}}+G\left(3|N_{1} |^{2}-\tfrac{1}{2}\lambda\right)\overline{\eta_{2}}\wedge\eta_{1}\wedge\eta_{ 2}=0,\] leading to two possibilities: 1. \(G\) vanishes identically on \(\mathscr{P};\) or 2. \(|N_{1}|^{2}=\tfrac{1}{6}\lambda\) and \(G_{\bar{1}}=0\) on \(\mathscr{P}^{\prime}.\) Let us suppose case (2) holds. Differentiating \(|N_{1}|^{2}=\tfrac{1}{6}\lambda\) implies \(H=0\) on \(\mathscr{P},\) so we have \[dN_{1}=2iN_{1}\alpha+iN_{1}\beta.\] The identity \(d^{2}N_{1}\) then implies \(|N_{1}|^{2}=0,\) contradicting our assumption that \(N\) is non-zero. Therefore case (2) is not possible and we must instead have case (1) holding: \(G=0\) on \(\mathscr{P}^{\prime}.\) Finally, the identity \(d^{2}N_{1}\wedge\overline{\eta_{2}}=0\) implies that \(\lambda N_{1}=0,\) so we must have \(\lambda=0\) for any non-trivial static solution to symplectic curvature flow. 
At this stage, the structure equations have simplified to \[d\eta_{1}= i\alpha\wedge\eta_{1}+N_{1}\overline{\eta_{1}\wedge\eta_{2}},\] \[d\eta_{2}= i\beta\wedge\eta_{2}, \tag{3.4}\] \[dN_{1}= H\overline{\eta_{2}}+2iN_{1}\alpha+iN_{1}\beta,\] \[d\alpha= i|N_{1}|^{2}\eta_{2}\wedge\overline{\eta_{2}},\] \[d\beta= -i|N_{1}|^{2}\eta_{2}\wedge\overline{\eta_{2}}.\] It is easy to check that equations (3.4) represent an involutive prescribed coframing problem in the sense of [2]. The primary invariants are the real and imaginary parts of the function \(N_{1},\) the free derivatives are the real and imaginary parts of \(H,\) and the tableau of free derivatives is equivalent to the standard Cauchy-Riemann tableau \[\begin{bmatrix}x&y\\ -y&x\end{bmatrix}.\] Therefore, Theorem 3 of [2] may be applied to show that non-trivial static solutions to symplectic curvature flow exist locally and depend on two functions of one variable. We summarize the conclusions of this section in the following theorem. **Theorem 3.1**.: _A non-trivial static solution to symplectic curvature flow in dimension 4 must have \(\lambda=0.\) Real analytic non-trivial static solutions exist locally and depend on two functions of one variable in the sense of exterior differential systems._ _Remark 3.2_.: We will show in the following subsection that any static solution must be real analytic.

#### 3.0.1. Integrating the structure equations

In this subsection we shall integrate the structure equations (3.4) to obtain a local normal form for four-dimensional static solutions to symplectic curvature flow. This local normal form will allow us to draw conclusions on the global structure of such solutions. We begin by noting that \(d\left(\alpha+\beta\right)=0,\) so we have locally \(\beta=-\alpha+dg\) for some function \(g.\) The function \(g\) may be integrated away in the structure equations, so we may assume we are working on the \(\mathrm{U}(1)\)-subbundle \(\mathcal{P}^{\prime\prime}\to X\) where \(\beta=-\alpha.\) The distribution \(\eta_{1}=0\) descends from \(\mathcal{P}^{\prime\prime}\) to \(X\) to give a well-defined real codimension-two distribution on \(X.\) Each leaf of the resulting foliation has a metric given by the restriction of \(\left|N_{1}\right|^{2}\,\overline{\eta_{2}}\cdot\eta_{2},\) and the equations \[d\left(N_{1}\overline{\eta_{2}}\right) =2i\alpha\wedge\left(N_{1}\overline{\eta_{2}}\right),\] \[d\alpha =-i\left(N_{1}\overline{\eta_{2}}\right)\wedge\overline{\left(N_{1}\overline{\eta_{2}}\right)}\] imply that this metric has constant curvature \(-4.\) It follows that if \(U\subset X\) is simply-connected, then there exists a \(\mathbb{C}\)-valued function \(z_{1}\) and an \(\mathbb{R}\)-valued function \(s\) on \(\mathcal{P}^{\prime\prime}|_{U}\) such that \[N_{1}\,\overline{\eta_{2}}=\frac{e^{is}\,dz_{1}}{1-\left|z_{1}\right|^{2}},\quad\alpha=-\frac{i}{2}\frac{\overline{z_{1}}dz_{1}-z_{1}d\overline{z_{1}}}{1-\left|z_{1}\right|^{2}}+ds.\] We restrict to the locus \(s=0.\) This amounts to restricting the \(\mathrm{U}(1)\)-structure \(\mathcal{P}^{\prime\prime}\) to an \(\{e\}\)-structure over \(U\). The equation \[d\left(N_{1}dz_{1}\right)=\frac{1}{2}\frac{N_{1}z_{1}}{1-\left|z_{1}\right|^{2}}d\overline{z_{1}}\wedge dz_{1}\] implies \[N_{1}=\frac{h(z_{1})}{\sqrt{1-\left|z_{1}\right|^{2}}} \tag{3.5}\] for some holomorphic function \(h\) of a single complex variable.
The \(d\eta_{1}\) equation in (3.4) implies \[d\begin{bmatrix}\eta_{1}\\ \overline{\eta_{1}}\end{bmatrix}=-\frac{1}{2}\frac{1}{1-\left|z_{1}\right|^{2}}\begin{bmatrix}\overline{z_{1}}dz_{1}-z_{1}d\overline{z_{1}}&2\,dz_{1}\\ 2\,d\overline{z_{1}}&z_{1}d\overline{z_{1}}-\overline{z_{1}}dz_{1}\end{bmatrix}\wedge\begin{bmatrix}\eta_{1}\\ \overline{\eta_{1}}\end{bmatrix}.\] It follows that \[d\left(\frac{\eta_{1}+z_{1}\overline{\eta_{1}}}{\sqrt{1-\left|z_{1}\right|^{2}}}\right)=0,\] so there exists a coordinate \(z_{2}\) on \(U\) with \[\eta_{1}=\frac{dz_{2}-z_{1}d\overline{z_{2}}}{\sqrt{1-\left|z_{1}\right|^{2}}}.\] The coordinate \(z_{2}\) is unique up to addition of a constant. We have now proven the first part of the following theorem. The second part follows by reversing the steps above. **Theorem 3.3**.: _Let \((X,\Omega,g)\) be a non-trivial 4-dimensional static solution to symplectic curvature flow and suppose \(p\in X\) is a point where the Nijenhuis tensor is non-vanishing. Then there is a neighbourhood of \(p\) with complex coordinates \(z_{1}\) and \(z_{2}\) and a holomorphic function \(h(z_{1})\) such that the symplectic form \(\Omega\) and metric \(g\) on \(X\) are given by_ \[\begin{split}\Omega&=\frac{i}{2}\left(\frac{dz_{1}\wedge d\overline{z_{1}}}{|h(z_{1})|^{2}\left(1-|z_{1}|^{2}\right)}+dz_{2}\wedge d\overline{z_{2}}\right),\\ g&=\frac{dz_{1}\cdot d\overline{z_{1}}}{|h(z_{1})|^{2}\left(1-|z_{1}|^{2}\right)}+\frac{1+|z_{1}|^{2}}{1-|z_{1}|^{2}}dz_{2}\cdot d\overline{z_{2}}-\operatorname{Re}\left(\frac{\overline{z_{1}}dz_{2}^{2}}{1-|z_{1}|^{2}}\right).\end{split} \tag{3.6}\] _Conversely, let \(h(z_{1})\) be a meromorphic function on the unit disk and let \(\Sigma\subset\mathbb{C}\) be the subset of the unit disk where \(h(z_{1})\) has no zeros or poles. Then equation (3.6) defines a static solution to symplectic curvature flow on the 4-manifold \(\Sigma\times\mathbb{C}.\)_ The local normal form of Theorem 3.3 is strong enough to derive global consequences. **Corollary 3.4**.: _There are no non-trivial complete static solutions to symplectic curvature flow in dimension 4._ Proof.: Let \(L\) be a leaf of the foliation \(\eta_{1}=0\) on \(X\) and let \(\widehat{L}\) denote the universal cover of \(L.\) The function \(z_{1}:\widehat{L}\to\mathbb{C}\) and holomorphic function \(h(z_{1})\) exist globally on \(\widehat{L},\) and \(z_{1}\) identifies \(\widehat{L}\) with the unit disk \(\mathbb{D}\subset\mathbb{C}.\) Formula (3.6) implies that the boundary \(|z_{1}|=1\) is at finite distance, and the function \(|N_{1}|^{2}\) blows up there. Therefore the metric induced on \(\widehat{L}\) is incomplete, hence the metric \(g|_{L}\) is incomplete, hence \(g\) is incomplete.

## 4. Local existence of soliton solutions

In the previous section we showed that the static system was non-involutive in the sense of exterior differential systems. This non-involutivity led to several compatibility conditions which placed severe restrictions on the local geometry of a static solution. By contrast, the soliton equation is involutive, as shall be explained in this section. The upshot is a local existence and generality theorem for soliton solutions.
The soliton equations are \[\begin{split}\frac{\partial}{\partial t}\Omega&=\lambda\Omega+\mathcal{L}_{V}\Omega,\\ \frac{\partial}{\partial t}g&=\lambda g+\mathcal{L}_{V}g,\end{split} \tag{4.1}\] where \(\lambda\in\mathbb{R}\) is a constant and \(V\) is a vector field on \(X.\) Let \(V\) be a vector field on an almost-Kahler 4-manifold \(X,\) thought of as a \(\mathrm{U}(2)\)-equivariant map \(\mathcal{B}\to\mathbb{C}^{2},\) and let \(V_{i}\) be the components of this map with respect to the standard basis of \(\mathbb{C}^{2}.\) There exist functions \(U,\)\(S_{ij},\)\(W_{i\bar{j}},\) and \(Y\) with symmetries \(W_{i\bar{i}}=0,\)\(S_{ij}=S_{ji}\) such that, on \(\mathcal{B},\) \[dV_{i}+\kappa_{i\overline{j}}V_{j}=\left(W_{i\bar{j}}+Y\delta_{i\bar{j}}\right)\eta_{j}+\left(S_{ij}+U\epsilon_{ij}\right)\overline{\eta_{j}}. \tag{4.2}\] These functions appear in the Lie derivatives of \(\Omega\) and \(g\) with respect to \(V\): \[\begin{split}\mathcal{L}_{V}\Omega&=-\frac{1}{2}\left(W_{i\bar{j}}+\overline{W_{j\bar{i}}}\right)\overline{\eta_{i}}\wedge\eta_{j}+2\,\operatorname{Re}\left(U\right)\Omega+\operatorname{Im}\left(\left(Y-V_{i}\overline{N_{i}}\right)\eta_{1}\wedge\eta_{2}\right),\\ \mathcal{L}_{V}g&=2\,\operatorname{Re}\left(\overline{S_{ij}}\eta_{i}\cdot\eta_{j}\right)+4\operatorname{Re}\left(\overline{\epsilon_{kj}}V_{k}\overline{N}_{i}\eta_{i}\cdot\eta_{j}\right)+\left(W_{i\bar{j}}+\overline{W}_{j\bar{i}}\right)\overline{\eta}_{i}\cdot\eta_{j}+2\,\operatorname{Re}(U)\,g.\end{split}\] Comparing with the symplectic curvature flow equations (2.5), we see that the second-order invariants of a soliton solution satisfy \[\begin{split} Q_{i\bar{j}}&=-\frac{1}{4}\operatorname{Re}(W_{i\bar{j}}),&\quad R=\frac{1}{2}\operatorname{Re}(Y)+\frac{1}{4}\lambda,\\ A_{ij}&=-\frac{1}{2}S_{ij}+\frac{1}{2}\varepsilon_{ik}\overline{V_{k}}N_{j}+\frac{1}{2}\varepsilon_{jk}\overline{V_{k}}N_{i},&\quad B=\frac{1}{8}U+\frac{1}{8}\overline{V_{i}}N_{i}.\end{split} \tag{4.3}\] Conversely, if \((X,\Omega,J)\) is an almost-Kahler 4-manifold with a vector field \(V\) so that equations (2.3), (2.4), (4.2), (4.3) are satisfied on the \(\mathrm{U}(2)\)-bundle \(\mathcal{B},\) then \((X,\Omega,J)\) is a soliton for symplectic curvature flow. **Theorem 4.1**.: _Solitons for the symplectic curvature flow exist locally and depend on 10 functions of 3 variables in the sense of exterior differential systems._ Proof.: We first reduce the problem of constructing symplectic curvature flow solitons to a prescribed coframing problem in the style of [2]. By the above paragraph, the U(2)-coframe bundle \(\mathcal{B}\to X\) of a symplectic curvature flow soliton carries 1-forms \(\eta\) and \(\kappa\) and functions \(N_{i}\), \(V_{i}\), \(W_{i\bar{j}}\), \(Y\), \(S_{ij}\), \(U\), \(F_{ij}\) and \(K_{i\bar{j}k\bar{l}}\) satisfying equations (2.3), (2.4), (4.2), (4.3). Conversely, if \(M\) is an 8-manifold together with 1-forms \(\eta_{i}\) and \(\kappa_{i\bar{j}}\) and functions \(N_{i}\), \(V_{i}\), \(W_{i\bar{j}}\), \(Y\), \(S_{ij}\), \(U\), \(F_{ij}\) and \(K_{i\bar{j}k\bar{l}}\) satisfying equations (2.3), (2.4), (4.2), (4.3), then an argument similar to Theorem 1 of [3] implies that (after possibly shrinking \(M\)) the 4-dimensional leaf space \(X\) of the integrable plane field \(\eta=0\) carries an almost-Kahler structure which is a soliton for the symplectic curvature flow, and \(M\) may be identified with an open set in the U(2)-bundle \(\mathcal{B}\to X\) associated to this almost-Kahler structure.
The prescribed coframing problem (2.3), (2.4), (4.2), (4.3) is written in a form where it is natural to attempt to apply Theorem 3 of [2]. The 'primary invariants' are the functions \(N_{i}\), \(V_{i}\), \(\operatorname{Re}(Y)\), \(\operatorname{Re}(W_{i\bar{j}})\), \(U\), \(S_{ij}\), and \(K_{i\bar{j}k\bar{l}}\), while the derived invariants consist of the covariant derivatives of these functions with respect to \(\kappa_{i\bar{j}}\), taking the identity \(d^{2}\eta=0\) into account. However, Cartan's existence theorem cannot be applied directly because the tableau of free derivatives is not involutive. One may compute that it has Cartan characters \((24,22,13,1)\), while the dimension of the prolongation is 106, so Cartan's test fails. Nevertheless, the system does become involutive after one prolongation. We omit the details here due to length, but after prolongation the tableau of free derivatives has Cartan characters \((56,31,10,0)\) and the dimension of the prolongation is \(148=(56)+2(31)+3(10)+4(0)\), so Cartan's test is passed and the system is involutive. Existence and generality in the real analytic category then follows from Theorem 3 of [2]. _Remark 4.2_ (Real analyticity).: Streets-Tian [10] prove that symplectic curvature flow is parabolic modulo diffeomorphism. Similarly, it can be shown that the soliton system is elliptic modulo diffeomorphism. Therefore, symplectic curvature flow solitons which are \(C^{1,\alpha}\) in harmonic coordinates for some \(\alpha>0\) must be real analytic in those coordinates. _Remark 4.3_ (The gradient case).: The equation \(dV^{\flat}=0\) implies \[U=\overline{V_{i}}N_{i},\ \ \operatorname{Im}(Y)=0,\ \ \operatorname{Im}(W_{i\bar{j}})=0.\] Adjoining these equations to (2.3), (2.4), (4.2), (4.3) gives a prescribed coframing problem whose local solutions correspond to gradient solitons. However, in contrast to the general case, this system is not involutive even after prolongation, because the equation \(d^{2}Y=0\) yields a restriction on the 3-jet of a solution. In the language of exterior differential systems, this problem has _intrinsic torsion_. Unfortunately, the restriction on the 3-jet is more algebraically complicated than the equations encountered in §3, and the existence and local generality of gradient solitons remain unknown. _Remark 4.4_ (Comparison to Laplacian flow).: Symplectic curvature flow has a formal similarity to the Laplacian flow of closed G\({}_{2}\)-structures in that both are flows of geometric structures with torsion defined by closed differential forms. This formal similarity extends to the local existence theory for static and soliton solutions. The static solutions to Laplacian flow are the _eigenforms_, analyzed in [1], where it was shown that the relevant EDS is not involutive. By contrast, the general Laplace soliton system is well-behaved. In a recent paper, Bryant [4] has shown that the EDS describing Laplace solitons is involutive.
### An example

Let \(\mathrm{G}=\mathrm{SL}_{2}\mathbb{R}\ltimes\mathbb{R}^{2}\) denote the group of volume preserving affine transformations of \(\mathbb{R}^{2}\) and write the left-invariant Maurer-Cartan form \(\mu\) of \(\mathrm{G}\) as \[\mu=\begin{bmatrix}0&0&0\\ \alpha_{1}&\alpha_{3}&\beta-\alpha_{4}\\ \alpha_{2}&-\beta-\alpha_{4}&-\alpha_{3}\end{bmatrix}.\] Let \(\mathrm{S}^{1}\) denote the circle subgroup of \(\mathrm{G}\) generated by the action of the vector field dual to \(\beta.\) For each pair of non-zero numbers \((a,b)\in\mathbb{R}^{2},\) the 2-form \(\Omega_{a,b}\) and metric \(g_{a,b}\) defined by \[\Omega_{a,b} =a^{2}\,\alpha_{1}\wedge\alpha_{2}+b^{2}\,\alpha_{3}\wedge\alpha_{4},\] \[g_{a,b} =a^{2}\left(\alpha_{1}^{2}+\alpha_{2}^{2}\right)+b^{2}\left(\alpha_{3}^{2}+\alpha_{4}^{2}\right),\] descend to the 4-dimensional quotient \(X=\mathrm{G}/\mathrm{S}^{1}\) to define an almost Kahler structure on \(X.\) The \(\mathrm{U}(2)\)-invariants of this structure may be computed via the Maurer-Cartan equation \(d\mu=-\mu\wedge\mu.\) For this structure, we find that the right hand side of the symplectic curvature flow equations is given by \[4R\,\Omega+4iQ_{i\bar{j}}\,\overline{\eta_{i}}\wedge\eta_{j}-8\,\operatorname{Im}\left(\overline{B}\eta_{1}\wedge\eta_{2}\right) =4\,\alpha_{3}\wedge\alpha_{4},\] \[4R\,g-8Q_{i\overline{j}}\overline{\eta_{i}}\cdot\eta_{j}-4\operatorname{Re}(A_{ij}\overline{\eta_{i}}\cdot\overline{\eta_{j}}) =4\left(\alpha_{3}^{2}+\alpha_{4}^{2}\right).\] Therefore, the 1-parameter family of almost-Kahler structures on \(X\) defined by \[\Omega(t) =a^{2}\,\alpha_{1}\wedge\alpha_{2}+\sqrt{4t+b^{4}}\,\alpha_{3}\wedge\alpha_{4},\] \[g(t) =a^{2}\left(\alpha_{1}^{2}+\alpha_{2}^{2}\right)+\sqrt{4t+b^{4}}\left(\alpha_{3}^{2}+\alpha_{4}^{2}\right),\] gives a solution to symplectic curvature flow with initial condition \(\Omega(0)=\Omega_{a,b},\)\(g(0)=g_{a,b}.\) For any \(a,\)\(a^{\prime}\) the structures \((\Omega_{a,b},g_{a,b})\) and \((\Omega_{a^{\prime},b},g_{a^{\prime},b})\) are diffeomorphism equivalent, so in fact each \((\Omega(t),g(t))\) defines a soliton solution to symplectic curvature flow. _Remark 4.5_.: The manifold \(X\) defined above is non-compact and \(\mathrm{G}\) does not admit a uniform lattice, so it has no compact quotients. However, \(\mathrm{G}\) does admit lattices \(\Gamma\) (for example \(\Gamma=\mathrm{SL}_{2}\mathbb{Z}\ltimes\mathbb{Z}^{2}\)) and these give quotients \(\Gamma\backslash X\) with finite volume. The 1-parameter family of almost-Kahler structures \((\Omega(t),g(t))\) descends to each \(\Gamma\backslash X\) to give a solution of symplectic curvature flow; however, the diffeomorphism between the structures \((\Omega_{a,b},g_{a,b})\) and \((\Omega_{a^{\prime},b},g_{a^{\prime},b})\) does not descend to \(\Gamma\backslash X,\) so \((\Omega(t),g(t))\) does not give a soliton solution on \(\Gamma\backslash X.\)
2309.16642
A stable-compact method for qualitative properties of semilinear elliptic equations
We study the uniqueness of reaction-diffusion steady states in general domains with Dirichlet boundary data. Here we consider "positive" (monostable) reactions. We describe geometric conditions on the domain that ensure uniqueness and we provide complementary examples of nonuniqueness. Along the way, we formulate a number of open problems and conjectures. To derive our results, we develop a general framework, the stable-compact method, to study qualitative properties of nonlinear elliptic equations.
Henri Berestycki, Cole Graham
2023-09-28T17:50:00Z
http://arxiv.org/abs/2309.16642v2
# The steady states of positive reaction-diffusion equations with Dirichlet conditions ###### Abstract. We study the uniqueness of reaction-diffusion steady states in general domains with Dirichlet boundary data. Here we consider "positive" (monostable) reactions. We describe geometric conditions on the domain that ensure uniqueness and we provide complementary examples of nonuniqueness. Along the way, we formulate a number of open problems and conjectures. To derive our results, we develop a general framework, the _stable-compact method_, to study qualitative properties of nonlinear elliptic equations. ## 1. Overview and main results We study the uniqueness of steady states of reaction-diffusion equations in general domains with Dirichlet boundary. These steady states solve semilinear elliptic equations of the form \[\begin{cases}-\Delta u=f(u)&\text{in }\Omega,\\ u=0&\text{on }\partial\Omega\end{cases} \tag{1.1}\] in domains \(\Omega\subset\mathbb{R}^{d}\), which need not be bounded. The classification of solutions of (1.1) is a fundamental question in semilinear elliptic theory. It is, moreover, prerequisite to understanding the dynamics of the parabolic form of (1.1), which models a host of systems in the natural sciences. In [11], we considered (1.1) when the reaction \(f\) is of strong-KPP type. There, we found that positive bounded solutions are unique under quite general conditions on \(\Omega\). In contrast, we showed that slightly weaker assumptions on the reaction can easily lead to multiple solutions. Thus, the classification of solutions of (1.1) with general reactions is more complex. Here, we take up this question. We assume that the nonlinearity \(f\colon\,[0,\infty)\to\mathbb{R}\) is \(\mathcal{C}^{1,\gamma}\) for some \(\gamma\in(0,1]\) and \(f(0)=f(1)=0\). As a consequence, \(0\) solves (1.1) and \(1\) is a supersolution, in the sense that it satisfies (1.1) with \(\geq\) in place of equalities. We also assume that \(f|_{(1,\infty)}<0\) and \(f^{\prime}(1)<0\), so that the reaction drives large values down toward its stable root \(1\). Then the maximum principle implies that all positive bounded solutions of (1.1) take values between \(0\) and \(1\). We are thus primarily interested in the behavior of \(f\) on \((0,1)\). We say that a reaction \(f\) is _positive_ if (P) \(f|_{(0,1)}>0\) and \(f^{\prime}(0^{+})>0\). This is sometimes termed "monostability." We use distinct terminology to emphasize the positive derivative at \(u=0\), which plays a significant role in our analysis. In many sources, monostability only denotes the first condition in (P). For the domain, we assume that \(\Omega\) is open, nonempty, connected, and uniformly \(\mathcal{C}^{2,Y}\) smooth. For a precise definition of the final notion, see Definition A.1 in [11]. Here, we merely note that this hypothesis includes a uniform interior ball condition. We now present our main contributions. We begin by establishing uniqueness on several classes of structured domains. **Definition 1.1**.: A domain \(\Omega\) is _exterior-star_ if \(\Omega^{c}\) is star-shaped. It is _strongly exterior-star_ if \(\Omega^{c}\) is star-shaped about a nonempty open set of points. **Theorem 1.1**.: _Suppose \(\Omega\) is strongly exterior-star and \(\Omega^{c}\) is compact. Then (1.1) has a unique positive bounded solution._ This is a simplified form of Theorem 2.1 below, which also covers some domains with unbounded complements. The uniqueness in Theorem 1.1 is somewhat surprising, as it is not the norm. 
Indeed, on every bounded domain, there exists a positive reaction (depending on the domain) such that (1.1) supports multiple positive solutions [11, Proposition 1.4]. On the other hand, if we hold the reaction fixed, solutions on the one-dimensional interval \((0,L)\) are unique provided \(L\) is sufficiently large [10, Lemma 2.5]. Here, we extend this result to dilation in multiple dimensions. **Theorem 1.2**.: _Fix a positive reaction \(f\) and a domain \(\Omega\). There exists \(\underline{\kappa}(f,\Omega)>0\) such that for all \(\kappa\geq\underline{\kappa}\), (1.1) has a unique bounded positive solution on the dilated domain \(\kappa\Omega\)._ We next consider uniqueness on epigraphs: domains bounded by the graph of a function. Given \(\phi\colon\mathbb{R}^{d-1}\to\mathbb{R}\), its epigraph is the open set \[\Omega\coloneqq\big{\{}(x^{\prime},y)\in\mathbb{R}^{d-1}\times\mathbb{R}\ \mid y>\phi(x^{\prime})\big{\}}.\] In Theorem 1.2(d) of [9], Caffarelli, Nirenberg, and the first author showed that (1.1) has a unique positive bounded solution on \(\Omega\) provided \(\phi\) is _uniformly Lipschitz_: \(\operatorname{Lip}\phi<\infty\). We extend this result to a much broader class of epigraphs. Our strongest result is somewhat technical, so we defer it to Section 4. Here, we illustrate the general result through a particularly evocative example. **Definition 1.2**.: We say the epigraph \(\Omega\) is _asymptotically flat_ if for all \(1\leq i,j\leq d-1\), \[\partial_{i}\left(\frac{\partial_{j}\phi}{\sqrt{\left|\nabla\phi\right|^{2}+1 }}\right)\to 0\quad\text{as }|x^{\prime}|\to\infty.\] That is, the curvature of the boundary \(\partial\Omega\) vanishes at infinity. Many epigraphs of interest are asymptotically flat but not uniformly Lipschitz; the parabola \(\{y>x^{2}\}\) is a natural example. This distinction arises whenever \(\phi\) grows superlinearly at infinity in a consistent fashion. We show that such superlinear growth does not impede uniqueness. **Theorem 1.3**.: _If \(\Omega\) is an asymptotically flat epigraph, then (1.1) admits a unique positive bounded solution \(u\)._ This is a consequence of the more general Theorem 4.2 presented below. In the plane, convex epigraphs are automatically asymptotically flat. **Corollary 1.4**.: _If \(d=2\) and \(\Omega\) is a convex epigraph, then (1.1) has a unique positive bounded solution._ The exterior-star and epigraph properties are the central structural assumptions in Theorems 1.1 and 1.3. However, both include additional technical conditions: the former assumes that \(\Omega^{\mathsf{c}}\) is compact, while the latter assumes that \(\partial\Omega\) flattens at infinity. It is not clear that these technical conditions are necessary. We are led to ask: does uniqueness hold on any exterior-star domain? On any epigraph? We collect a number of such open problems in Section 8. On the other hand, the fundamental structural assumptions in Theorems 1.1-1.3_are_ essential. If one relaxes these assumptions in a bounded region, nonuniqueness can arise. We state an informal result here; for a rigorous version, see Theorem 5.2. **Theorem 1.5**.: _Given a domain \(\Omega_{0}\), if we attach a "pocket" to \(\Omega_{0}\) via a sufficiently narrow bridge, then there exists a positive reaction \(f\) such that (1.1) admits multiple positive bounded solutions on the composite domain._ We depict this operation in Figure 1. This demonstrates the importance of the structural assumptions in our prior results. 
For example, we can attach a pocket to an asymptotically flat epigraph to produce multiple solutions. Thus even a compact violation of the epigraph structure suffices for multiplicity. In our proof of Theorem 1.5, we construct two distinct solutions. Taking a different approach, we show that in some cases (1.1) can admit uncountably many solutions. **Proposition 1.6**.: _Let \(f\) be a \(\mathcal{C}^{2}\) positive reaction such that \(f^{\prime\prime}(0^{+})>0\). Then there exists \(L>0\) such that (1.1) admits a positive bounded solution on the strip \(\mathbb{R}\times(0,L)\) that varies in the first coordinate. As a consequence, (1.1) admits a continuum of distinct positive bounded solutions._ The strip enjoys a translation invariance, so we can view this proposition as a form of symmetry breaking: a symmetric domain supports asymmetric solutions. Figure 1. A domain \(\Omega_{0}\) augmented by a pocket \(\Pi\). This shows that positive reactions behave quite differently on Dirichlet strips than on the line. After all, on \(\mathbb{R}\), the only positive bounded solution of (1.1) is the constant \(1\). In contrast, we show that boundary absorption on the strip can cause a positive reaction to exhibit _bistable_ behavior. And indeed, bistable reactions admit oscillatory solutions on \(\mathbb{R}\) that are not translation-invariant. For further discussion in this direction, see Section 6. Technically, we approach Proposition 1.6 through the lens of "spatial dynamics" [25, 26]. We think of the first coordinate as time and view (1.1) as a second-order dynamical system. We then seek nontrivial limit cycles. This effort is complicated by the fact that the phase space is infinite-dimensional--it consists of functions on \((0,L)\). The theory of spatial dynamics is well-equipped to treat this difficulty, and we are able to prove Proposition 1.6 via standard methods. The above results are linked by a common perspective that we term the "stable-compact" method. This is a general approach to the qualitative properties of elliptic equations. We focus on uniqueness, but other properties naturally arise in both our arguments and conclusions, including symmetry, monotonicity, and stability. The method rests on the decomposition of the domain \(\Omega\) into a "stable" part and "compact" part. In the former, solutions of (1.1) are linearly stable and thus obey a maximum principle. This is conducive to uniqueness, so we can focus on the complementary compact part. There, solutions enjoy some form of compactness relative to a context-dependent deformation. (The compact part may be unbounded; compactness refers to solutions, not to the domain.) Using this deformation, compactness and the strong maximum principle yield uniqueness. In Section 7, we examine the proofs of our main results through this stable-compact lens. Given its generality, the stable-compact method naturally encompasses earlier work. For example, it can be discerned in the moving plane method as interpreted by the first author and Nirenberg [4]. We anticipate that the method will prove of use in a variety of contexts in elliptic and parabolic theory. Naturally, our efforts to classify solutions of certain semilinear elliptic problems intersect an enormous body of literature. Here, we highlight a handful of connections that strike us as particularly germane. For a more complete view of the subject, we direct the reader to the references therein. 
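As a concrete, low-dimensional illustration of the steady states under discussion, the following sketch (ours, not from the paper) computes a positive solution of (1.1) on an interval \((0,L)\) by shooting in the initial slope; the logistic reaction \(f(u)=u(1-u)\) and the length \(L=5\) are choices made only for this example.

```python
# Illustrative only: a shooting computation of a positive Dirichlet steady
# state of -u'' = f(u) on (0, L) for the model positive reaction
# f(u) = u(1 - u).  The reaction and the length L = 5 > pi are our choices
# for this sketch; they are not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def f(u):
    return u * (1.0 - u)

L = 5.0  # any L above the critical length pi / sqrt(f'(0)) = pi works here

def endpoint(s):
    """u(L) for the solution of u'' = -f(u) with u(0) = 0, u'(0) = s."""
    sol = solve_ivp(lambda x, y: [y[1], -f(y[0])], (0.0, L), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# For small s the solution is roughly s*sin(x), hence negative at L = 5 > pi,
# while for s near sqrt(2*F(1)) = sqrt(1/3) the orbit lingers near u = 1 and is
# still positive at x = L.  The sign change brackets the steady state's slope.
s_star = brentq(endpoint, 1e-3, 0.55)
sol = solve_ivp(lambda x, y: [y[1], -f(y[0])], (0.0, L), [0.0, s_star],
                dense_output=True, rtol=1e-10, atol=1e-12)
xs = np.linspace(0.0, L, 201)
u = sol.sol(xs)[0]
print(f"u'(0) = {s_star:.6f},  max u = {u.max():.6f},  u(L) = {u[-1]:.2e}")
```

For this particular model reaction, the bracketing step fails (and no positive steady state exists) once \(L\) drops below \(\pi/\sqrt{f^{\prime}(0)}=\pi\), in line with the one-dimensional discussion above.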
#### Structured domains
In this paper, we tackle a rather broad class of reactions by imposing various structural conditions on the domain. We owe much to the rich literature on the moving plane and sliding methods starting with the seminal works of Alexandrov [1], Serrin [33] and Gidas, Ni, and Nirenberg [21]. In particular, we draw inspiration from the versions of the moving plane and sliding methods of the first author and Nirenberg [4]. Further results more closely related to the present work include those of Esteban and Lions on monotonicity in coercive epigraphs [17] and the extensive collaboration of the first author with Caffarelli and Nirenberg, who studied qualitative elliptic properties in half-spaces [6, 8], cylinders [7], and epigraphs [9]. In particular, our Theorem 4.2 is a direct generalization of the results of [9] on uniformly Lipschitz epigraphs.
#### Structured reactions
Alternatively, one can consider structured _reactions_ on general domains. Rabinowitz took this approach in [30], which established uniqueness for "strong-KPP" reactions on all smooth bounded domains. We introduced the terminology "strong-KPP" in [11] to distinguish a certain concavity property of \(f\) from the weaker "KPP condition"; see Definition 7.1 for details. In [11], we already used the stable-compact method (without calling it such) to study strong-KPP uniqueness in general _unbounded_ domains.
#### Other boundary conditions
Here we treat Dirichlet boundary conditions, but the Neumann problem is also well-motivated by applications. Neumann steady states are much simpler: Rossi showed that \(1\) is the unique positive bounded steady state on general domains [31, Corollary 2.8]. His approach built on that of the first author, Hamel, and Nadirashvili, who established the same for (weak) KPP reactions in domains satisfying a mild geometric condition [12, Theorem 1.7]. One can also consider the intermediate Robin boundary condition. In [11], we were able to treat Dirichlet and Robin conditions in a unified framework for strong-KPP reactions. However, many of the methods we employ in the present work do not readily extend to the Robin problem. As a matter of fact, most of the results we derive here are open under Robin conditions; we discuss this further in Section 8.
#### Other reactions
While we focus here on positive reactions, there are other important reaction classes. Bistable reactions are a notable example arising in biology, materials science, and physics. Even on the line, bistable reactions support infinitely many solutions, which inspires Proposition 1.6 above. To manage this menagerie, one can consider solutions satisfying additional properties such as stability or monotonicity. The latter is the subject of de Giorgi's celebrated conjecture that monotone bistable solutions on the whole space are one-dimensional. This deep conjecture has spurred a great deal of work in both the positive [2, 20, 32] and negative directions [29]. The last reference exploits a remarkable relationship with the theory of minimal surfaces. In a different vein, if \(f(s)=s^{\alpha}\) for some \(\alpha>0\), then (1.1) becomes the Lane-Emden equation from mathematical physics. Its behavior in the whole- and half-space has garnered significant attention [15, 16, 18, 22].
**Organization**. We devote the first three sections of the body of the paper to uniqueness on exterior-star domains (Section 2), large dilations (Section 3), and epigraphs (Section 4).
We then exhibit nonuniqueness on domains with pockets (Section 5) and on cylinders (Section 6). In Section 7, we discuss these results in the context of the stable-compact decomposition. We present a variety of open problems in Section 8. Finally, Appendix A contains supporting ODE arguments. ## Acknowledgments We warmly thank Bjorn Sandstede for guiding us through the literature on spatial dynamics, which plays a central role in Section 6. CG was supported by the NSF Mathematical Sciences Postdoctoral Research Fellowship program under grant DMS-2103383. ## 2. Exterior-star domains We open our investigation of (1.1) on the exterior-star domains introduced in Definition 1.1. We are primarily motivated by "exterior domains" for which \(\Omega^{c}\) is compact, as in Theorem 1.1. However, our proof applies to complements of some unbounded star-shaped domains, so we state a more general form here. **Theorem 2.1**.: _Suppose \(\Omega\) is strongly exterior-star and \(\Omega^{c}\) coincides with a convex set outside a bounded region. Then (1.1) has a unique positive bounded solution \(u\). Moreover, if \(x_{*}\) lies in the interior of the star centers of \(\Omega^{c}\), then \(u\) is strictly increasing in the radial coordinate centered at \(x_{*}\)._ In particular, this implies Theorem 1.1 as well as the following. **Corollary 2.2**.: _If \(\Omega^{c}\) is convex, then (1.1) has a unique positive bounded solution._ Thanks to this corollary, uniqueness holds on the complements of balls, cylinders, and convex paraboloids. ### Star properties Before proving Theorem 2.1, we explore Definition 1.1 in greater detail. To facilitate the discussion, we define some complementary terms. **Definition 2.1**.: A closed set \(K\) is _star-shaped_ about a point \(x_{*}\in\mathbb{R}^{d}\) if for all \(y\in K\), the line segment from \(y\) to \(x_{*}\) lies within \(K\). If \(K\) has \(\mathcal{C}^{1}\) boundary, we say it is _strictly_ star-shaped if \(K\) is star-shaped about some \(x_{*}\in\operatorname{int}K\) and no ray from \(x_{*}\) is tangent to \(\partial K\) at their intersection. Finally, \(K\) is _strongly_ star-shaped if it is star-shaped about a nonempty open set. We are naturally interested in the relationships between these three definitions. First, one can readily construct a smooth compact \(K\) that is star-shaped but neither strictly nor strongly star-shaped. For example, the boundary of the hourglass in Figure 2 contains open line segments on each signed coordinate axis. It follows that \(K\) is star-shaped precisely about the origin. This immediately implies that \(K\) is not strongly star-shaped. Moreover, segments of the boundary are tangent to rays through the star-center \(0\), so \(K\) is not strictly star-shaped either. In fact, if \(K\) is compact, the strict and strong notions of star-shapedness are equivalent. **Lemma 2.3**.: _Suppose \(\partial K\) is \(\mathcal{C}^{1}\). If \(K\) is strongly star-shaped, then it is strictly star-shaped. If \(K\) is compact, then the converse holds._ Proof.: First suppose \(K\) is strongly star-shaped and (without loss of generality) the set of star-centers of \(K\) contains an open ball \(B\) about \(0\). If \(y\in\partial K\), then \(K\) contains the convex hull of \(B\cup\{y\}\), which includes a truncated open cone with axis through \(0\) and \(y\). Hence this axis is not tangent to \(\partial K\) at \(y\), and \(K\) is strictly star-shaped. Now suppose \(K\) is compact and strictly star-shaped about a point \(x_{*}\in\operatorname{int}K\). 
Because \(\partial K\) is uniformly \(\mathcal{C}^{1}\), one can check that there is a continuous family \((\Gamma_{y})_{y\in\partial K}\) of truncated open cones, each contained in \(K\), such that \(\Gamma_{y}\) has vertex \(y\) and \[x_{*}\in U\coloneqq\bigcap_{y\in\partial K}\Gamma_{y}.\] Moreover, the continuity of the family \(\Gamma\) and the compactness of \(\partial K\) imply that \(U\) is open. Each point in \(U\) is a star-center of \(K\), so \(K\) is strongly star-shaped. If \(K\) is permitted to be unbounded, the strict and strong notions differ.
Figure 3. A hyperbolic set that is strictly but not strongly star-shaped.
Figure 2. A star-shaped hourglass that is neither strictly nor strongly star-shaped.
_Example 1_.: The hyperbolic set \(K=\{xy\leq 1\}\subset\mathbb{R}^{2}\) in Figure 3 is strictly but not strongly star-shaped. Indeed, like the hourglass in Figure 2, \(K\) is star-shaped precisely about the origin. We will make essential use of an important geometric property of strongly exterior-star domains. **Lemma 2.4**.: _Suppose \(\Omega\) is strongly exterior-star about \(0\). Then_ \[\operatorname{dist}(\kappa\Omega,\Omega^{c})>0 \tag{2.1}\] _for all \(\kappa>1\) and_ \[\lim_{\kappa\to\infty}\operatorname{dist}(\kappa\Omega,\Omega^{c})=\infty. \tag{2.2}\] Proof.: Because \(\Omega^{c}\) is star-shaped about \(0\), (2.1) is equivalent to the statement that \[\operatorname{dist}(\kappa\partial\Omega,\partial\Omega)>0.\] Towards a contradiction, suppose there exists \(\kappa>1\) and a sequence of pairs \((y_{n},z_{n})_{n\in\mathbb{N}}\subset\partial\Omega\times\partial\Omega\) such that \[\lim_{n\to\infty}|\kappa z_{n}-y_{n}|=0. \tag{2.3}\] By Lemma 2.3, \(\Omega^{c}\) is strictly star-shaped about \(0\). It follows that \(\kappa\partial\Omega\cap\partial\Omega=\emptyset\). By compactness (otherwise a subsequential limit would produce a common point of \(\kappa\partial\Omega\) and \(\partial\Omega\)), we must have \(|y_{n}|,|z_{n}|\to\infty\) as \(n\to\infty\). Because \(\Omega\) is strongly exterior-star about \(0\), \(\Omega^{c}\) is star-shaped about a ball \(B\) of radius \(r>0\) centered at \(0\). Let \(C_{n}\) denote the cone between \(y_{n}\) and \(B\), with axis \(\ell_{n}\coloneqq\mathbb{R}y_{n}\). In the vicinity of \(\kappa^{-1}y_{n}\in\ell_{n}\), the cone's radius is close to \((1-\kappa^{-1})r>0\). It follows that \(C_{n}\) contains the open ball \(B_{n}\) with center \(\kappa^{-1}y_{n}\) and radius \((1-\kappa^{-1})r/2>0\) once \(n\) is sufficiently large. We emphasize that the radius of \(B_{n}\) is independent of \(n\). Now, (2.3) implies that \(\left|z_{n}-\kappa^{-1}y_{n}\right|\to 0\) as \(n\to\infty\). Therefore \(z_{n}\in B_{n}\) for \(n\gg 1\). But then \(z_{n}\in C_{n}\subset\operatorname{int}\Omega^{c}\), which contradicts \(z_{n}\in\partial\Omega\). This implies (2.1). To see (2.2), let \(\delta\coloneqq\operatorname{dist}(2\Omega,\Omega^{c})>0\). Given \(n>\delta\), we dilate by \(\delta^{-1}n>1\). Because \(\Omega^{c}\) is star-shaped about \(0\), \(\Omega^{c}\subset\delta^{-1}n\Omega^{c}\). It follows that \[\operatorname{dist}(2\delta^{-1}n\Omega,\Omega^{c})\geq\operatorname{dist}(2\delta^{-1}n\Omega,\delta^{-1}n\Omega^{c})=\delta^{-1}n\operatorname{dist}(2\Omega,\Omega^{c})=n.\] Since \(n\) was arbitrary, (2.2) follows. ### Uniqueness on exterior-star domains We now return to the study of (1.1) on exterior-star domains. We begin with an auxiliary result that we will use throughout the paper.
**Lemma 2.5**.: _Given a positive reaction \(f\) and \(s\in(0,1)\), there exists \(R(f,s)\in\mathbb{R}_{+}\) such that if \(u\) is a positive solution of (1.1) on a domain \(\Omega\), then \(u(x)\geq s\) if \(\operatorname{dist}(x,\Omega^{c})\geq R\)._ Proof.: Fix \(s\in(0,1)\) and let \[\rho\coloneqq\inf_{r\in(0,s]}\frac{f(r)}{r}.\] By (P), \(\rho>0\). Therefore, there exists \(R>0\) such that \(\rho\) is the principal Dirichlet eigenvalue of \(-\Delta\) in the ball \(B_{R}\) of radius \(R\) centered at the origin. Let \(v\) denote the corresponding positive principal eigenfunction, normalized by \(\|v\|_{\infty}=s\). We note that \(v\) is radially decreasing, so \(v(0)=s\). We extend \(v\) by \(0\) outside \(B_{R}\); then \(\kappa v\) is a subsolution of (1.1) for all \(\kappa\in[0,1]\). Now let \(u\) be a positive solution of (1.1) on a domain \(\Omega\). Fix \(x\in\Omega\) such that \(\operatorname{dist}(x,\partial\Omega)\geq R\). Let \(v_{x}\coloneqq v(\ \cdot\ -x)\) denote the translate of \(v\) to \(x\). Because \(u>0\) on the compact set \(B_{R}(x)\), there exists \(\underline{\kappa}>0\) such that \(u\geq\underline{\kappa}v_{x}\). If we continuously raise \(\kappa\) from \(\underline{\kappa}\) to \(1\), the strong maximum principle prevents \(\kappa v_{x}\) from touching \(u\). It follows that \(u>v_{x}\), and in particular \(u(x)\geq v_{x}(x)=v(0)=s\). Motivated by Lemma 2.5, we introduce the notation \[\Omega[R]\coloneqq\{x\in\Omega\ |\ \operatorname{dist}(x,\partial\Omega)>R\} \quad\text{for }R>0. \tag{2.4}\] Once \(R\) is sufficiently large, Lemma 2.5 implies that solutions of (1.1) are close to \(1\) on \(\Omega[R]\). Recalling that \(f^{\prime}(1)<0\), these solutions are _stable_ on \(\Omega[R]\) and obey a maximum principle. The following relies on the generalized principal eigenvalue introduced in [13]: \[\lambda(-\mathcal{L},\Omega)\coloneqq\sup\big{\{}\lambda\ |\ \exists\psi\in W^{2,d}_{ \operatorname{loc}}(\Omega)\text{ s.t. }\psi>0,\ (\mathcal{L}+\lambda)\psi\leq 0\big{\}}. \tag{2.5}\] **Lemma 2.6**.: _There exists \(R(f)>0\) such that the following holds. Let \(u\) be a positive bounded solution of (1.1) on \(\Omega\) and let \(v\colon\Omega[R]\to[0,1]\) be a subsolution of (1.1). If \(u\geq v\) on \(\partial\Omega[R]\), then \(u\geq v\) in \(\Omega[R]\)._ Proof.: Because \(f^{\prime}(1)<0\) and \(f\in\mathcal{C}^{1}\), there exists \(\theta(f)\in(0,1)\) such that \(f^{\prime}|_{[\theta,1]}\leq-|f^{\prime}(1)|/2<0\). Let \(R(f,\theta)\in\mathbb{R}_{+}\) be the radius provided by Lemma 2.5, which depends only on \(f\). By Lemma 2.5, \(u\geq\theta\) in \(\Omega[R]\). We now define \(w\coloneqq v-u\) and \(P\coloneqq\{w>0\}\subset\Omega[R]\). Within \(P\) we have \(\theta\leq u<v\leq 1\). By the mean value theorem and the definition of \(\theta\), we find \[0=-\Delta w-\left(\frac{f(v)-f(u)}{v-u}\right)w=-\Delta w+qw\] for some \(q\geq|f^{\prime}(1)|/2>0\) in \(P\). Set \(q\equiv|f^{\prime}(1)|/2\) in \(P^{\mathrm{c}}\) and consider the operator \(\mathcal{L}\coloneqq\Delta-q\) on \(\mathbb{R}^{d}\). If we extend the positive part \(w_{+}\) by \(0\) to \(\Omega[R]^{\mathrm{c}}\), it remains continuous because \(w_{+}=0\) on \(\partial\Omega[R]\) by hypothesis. Moreover, as the maximum of two subsolutions, \(-\mathcal{L}w_{+}\leq 0\) in the sense of distributions. So \(w_{+}\) is a generalized subsolution of \(\mathcal{L}\) on \(\mathbb{R}^{d}\). Now, Proposition 2.1(ii) of [11] yields \(\lambda(-\mathcal{L},\mathbb{R}^{d})\geq\inf q>0\). 
Because this eigenvalue is positive, Proposition 3.1 of [11] states that \(\mathcal{L}\) satisfies the maximum principle on \(\mathbb{R}^{d}\) (see also Theorem 1 in [28]). Therefore \(w_{+}\leq 0\), i.e., \(w\leq 0\). (Technically, we have not satisfied the hypotheses of the proposition because the potential \(q\) is merely in \(L^{\infty}\) and \(w_{+}\) is a subsolution in the sense of distributions. Neither poses a challenge; the proof goes through as written.) Thus \(v\leq u\) in \(\Omega[R]\), as desired. We emphasize that Lemmas 2.5 and 2.6 are independent of the domain, and in particular do not require the exterior-star condition. Using these auxiliary results, we can now prove uniqueness on suitable exterior-star domains. Proof of Theorem 2.1.: Assume without loss of generality that \(\Omega\) is strongly exterior-star about \(0\). Define \[\Omega_{+}\coloneqq\Omega[R]\quad\text{and}\quad\Omega_{-}\coloneqq\Omega \setminus\overline{\Omega}_{+},\] where \(R(f)>0\) is provided by Lemma 2.6. By (2.2) in Lemma 2.4, there exists \(\overline{\kappa}>1\) such that \(\overline{\kappa}\Omega\subset\Omega_{+}\). Now fix two positive bounded solutions \(u,v\) of (1.1). Given \(\kappa>1\), define the dilation \[v_{\kappa}(x)\coloneqq v\left(\frac{x}{\kappa}\right),\] which satisfies \[\begin{cases}-\Delta v_{\kappa}=\kappa^{-2}f(v_{\kappa})&\text{in }\kappa\Omega, \\ v_{\kappa}=0&\text{on }\kappa\partial\Omega.\end{cases}\] Because \(\kappa>1\) and \(f>0\), we see that \(v_{\kappa}\) is a subsolution of our original equation on the dilated domain \(\kappa\Omega\). Extending \(v_{\kappa}\) by \(0\), it is a generalized subsolution on \(\Omega\). Now define \[\kappa_{*}\coloneqq\inf\{\kappa>1\mid v_{\kappa}\leq u\text{ in }\Omega_{-}\}.\] Because \(\overline{\kappa}\Omega\subset\Omega_{+}\), \(v_{\overline{\kappa}}\equiv 0\) on \(\Omega_{-}\), so \(\kappa_{*}\leq\overline{\kappa}\). We wish to show that \(\kappa_{*}=1\), so towards a contradiction suppose \(\kappa_{*}>1\). By continuity, \(v_{\kappa_{*}}\leq u\) in \(\Omega_{-}\). Applying Lemma 2.6, we see that in fact \(v_{\kappa_{*}}\leq u\) in \(\Omega\). Moreover, \(v_{\kappa_{*}}\neq u\) because \(\kappa_{*}>1\), so the strong maximum principle implies \(v_{\kappa_{*}}<u\) in \(\Omega\). We wish to upgrade this strict inequality to a form of uniform inequality. Take \(\kappa\in(1,\kappa_{*})\). We claim that \[\inf_{\Omega_{-}\cap\kappa\Omega}(u-v_{\kappa_{*}})>0. \tag{2.6}\] To see this, suppose to the contrary that there exists a sequence of points \(x_{n}\) in \(\Omega_{-}\cap\kappa\Omega\) such that \[v_{\kappa_{*}}(x_{n})\geq u(x_{n})-\tfrac{1}{n}\quad\text{for all }n\in \mathbb{N}.\] By (2.1) in Lemma 2.4, \(\kappa\overline{\Omega}\subset\Omega\). It follows that \((x_{n})_{n\in\mathbb{N}}\) cannot have an accumulation point because \(v_{\kappa_{*}}<u\) in \(\kappa\overline{\Omega}\). So \(x_{n}\to\infty\) as \(n\to\infty\). We next claim that \[\liminf_{n\to\infty}u(x_{n})>0. \tag{2.7}\] We first observe that (2.1) in Lemma 2.4 implies that \(\liminf_{n\to\infty}\operatorname{dist}(x_{n},\partial\Omega)>0\). We next use the hypothesis that \(\Omega^{c}\) coincides with a convex set \(\Gamma\) outside a bounded region. Hence for \(n\gg 1\), in the vicinity of \(x_{n}\), \(\Omega\) coincides with \(\Gamma^{c}\). In particular, by the convexity of \(\Gamma\), \(\operatorname{dist}_{\Omega}(x_{n},\Omega[R])=\operatorname{dist}(x_{n}, \Omega[R])<R\).
(Here \(\operatorname{dist}_{\Omega}\) denotes the intrinsic distance in the metric space \(\Omega\).) So once \(n\) is large, \(x_{n}\) is uniformly bounded away from \(\partial\Omega\) and uniformly close to \(\Omega[R]\). By the construction of \(R\) in Lemma 2.6, \(\inf_{\Omega[R]}u>0\). Therefore (2.7) follows from the Harnack inequality. Finally, we center around \(x_{n}\) and extract subsequential limits \(u^{\infty},v^{\infty}_{\kappa_{*}},\Omega^{\infty}\), and \((\kappa\Omega)^{\infty}\) of \(u(\cdot\cdot+x_{n}),v_{\kappa_{*}}(\cdot+x_{n}),\Omega-x_{n}\), and \(\kappa\Omega-x_{n}\), respectively. Because \(x_{n}\in\Omega_{-}\), \(\operatorname{dist}(x_{n},\partial\Omega)\leq R\). It follows that \(\Omega_{\infty}\) is not the whole space: \(\operatorname{dist}(0,\partial\Omega^{\infty})\leq R\). Moreover, (2.1) of Lemma 2.4 implies that \(0\in\overline{(\kappa\Omega)}^{\infty}\subset\Omega^{\infty}\). Now, Schauder estimates imply that \(u^{\infty}\) solves (1.1) on \(\Omega^{\infty}\) and \(v^{\infty}_{\kappa_{*}}\leq u^{\infty}\) is a subsolution. By (2.7), \(u^{\infty}(0)>0\), so by the strong maximum principle \(u^{\infty}>0\) in \(\Omega^{\infty}\). On the other hand, because \(v^{\infty}_{\kappa_{*}}\equiv 0\) outside \((\kappa\Omega)^{\infty}\), \(v^{\infty}_{\kappa_{*}}\neq u^{\infty}\). Thus the strong maximum principle further implies that \(v^{\infty}_{\kappa_{*}}<u^{\infty}\) in \(\Omega^{\infty}\). This contradicts the definition of \(x_{n}\), which forces \(v^{\infty}_{\kappa_{*}}(0)=u^{\infty}(0)\). The claim (2.6) follows. Now, Schauder estimates imply that \(\nabla v\) is uniformly bounded. It follows that the family \((v_{\kappa})_{\kappa\geq 1}\) is continuous in \(L^{\infty}\). In particular, (2.6) implies that there exists \(\kappa<\kappa_{*}\) such that \(v_{\kappa}\leq u\) in \(\Omega_{-}\). This contradicts the definition of \(\kappa_{*}\) and shows that in fact \(\kappa_{*}=1\). That is, \(v\leq u\). By symmetry, \(u=v\) as desired. Moreover, this argument shows that \(u_{\kappa}\leq u\) for all \(\kappa\geq 1\), so \(u\) is increasing in the radial coordinate centered at zero. Applying the strong maximum principle to \(\partial_{r}u\), it is _strictly_ increasing. Shifting, this holds about any point \(x_{*}\) in the interior of the star centers of \(\Omega\). In the above proof, we used the hypothesis of far-field convexity to show that solutions cannot vanish locally uniformly at infinity in \(\Omega\). In the absence of this hypothesis, one can construct strongly exterior-star domains in which solutions _do_ vanish in this manner; see Figure 4. Thus to remove this hypothesis from Theorem 2.1, one would need to develop a proof that permits such decay, perhaps by including additional regions in \(\Omega_{+}\). This seems rather delicate. Because Theorem 2.1 already covers a host of interesting bounded and unbounded complements, we are content to leave this question to future work. If \(\Omega^{c}\) is bounded, one may naturally wonder whether the exterior-star hypothesis is necessary. Perhaps all "exterior domains" (complements of compact sets) enjoy uniqueness? This is not the case; for an example, see Figure 9 below. Figure 4. A “slanted comb” \(\Omega\) whose complement \(K\) is strongly star-shaped about the magenta disk. If the channels enclosing the sequence \((x_{n})_{n\in\mathbb{N}}\) are sufficiently narrow, all solutions of (1.1) vanish locally uniformly in the vicinity of \(x_{n}\) as \(n\to\infty\). ## 3. 
Dilated domains In [10], we studied the behavior of (1.1) on one-dimensional bounded intervals as a stepping stone toward results on the half-space. As noted in the introduction, uniqueness cannot hold in general on a bounded interval. Indeed, to every interval \([0,L]\) one can associate a positive reaction \(f_{L}\) such that (1.1) admits multiple positive solutions on \([0,L]\) with reaction \(f_{L}\). However, in Lemma 2.5 of [10], we show that for a _fixed_ reaction, uniqueness does hold on sufficiently large intervals. That is, given a positive reaction \(f\), there exists a length \(L_{f}>0\) such that (1.1) has a unique positive solution on \([0,L]\) whenever \(L>L_{f}\). In this section, we investigate this phenomenon in higher dimensions. Throughout the remainder of the section, let us fix a positive reaction \(f\) and a domain \(\Omega\subset\mathbb{R}^{d}\) satisfying our standing assumptions. Given a dilation factor \(\kappa\in\mathbb{R}_{+}\), we investigate uniqueness for (1.1) on the dilation \(\kappa\Omega\). That is, we consider \[\begin{cases}-\Delta u=f(u)&\text{in }\kappa\Omega,\\ u=0&\text{on }\kappa\partial\Omega.\end{cases} \tag{3.1}\] The aim of the section is Theorem 1.2, which ensures uniqueness provided \(\kappa\) exceeds some threshold depending on \(f\) and \(\Omega\). In contrast to other uniqueness results in this paper, we make no structural assumptions on \(\Omega\). This is due to the fact that in the stable-compact framework, the problem (3.1) becomes _purely stable_ once \(\kappa\) is sufficiently large. There is no need for a compact part with its attendant deformation. The pure stability of (3.1) (at large \(\kappa\)) is thus responsible for the generality of Theorem 1.2. We note that \(\Omega\) need not be bounded, although the bounded case seems of particular interest. Our approach to Theorem 1.2 rests on the following observation. As \(\kappa\to\infty\), the domain \(\kappa\Omega\) locally looks like the whole space or a half-space modulo isometry. Precisely, using the notion of locally uniform limit introduced in [11, Definition A.2], the family \((\kappa\Omega)_{\kappa\geq 1}\) has exactly two limits as \(\kappa\to\infty\): the whole space \(\mathbb{R}^{d}\) and the half-space \(\mathbb{H}^{d}\coloneqq\mathbb{R}^{d-1}\times\mathbb{R}_{+}\) (up to isometry). The constant \(1\) is the unique positive bounded solution of (1.1) on \(\mathbb{R}^{d}\). Likewise, (1.1) has a unique positive bounded solution \(\varphi\) on \(\mathbb{H}^{d}\), which is a function of the distance to \(\partial\mathbb{H}^{d}\) [10, Theorem 1.1(A)]. We will argue below that both \(1\) and \(\varphi\) are strictly _linearly stable_. This is the source of the claimed stability of the problem (3.1) when \(\kappa\gg 1\). To exploit this stability, we must relate solutions on \(\kappa\Omega\) to those on \(\mathbb{R}^{d}\) and \(\mathbb{H}^{d}\). To this end, define \[\Phi_{\kappa}(x)\coloneqq\varphi\big{(}\operatorname{dist}(x,\kappa\partial \Omega)\big{)}. \tag{3.2}\] Here we have used the fact that \(\varphi\) is essentially one-dimensional: writing coordinates as \(x=(x^{\prime},y)\in\mathbb{R}^{d-1}\times\mathbb{R}_{+}=\mathbb{H}^{d}\), \(\varphi\) depends on \(y\) alone. The profile \(\varphi\) satisfies \(\varphi(+\infty)=1\), so \(\Phi_{\kappa}\) is close to \(1\) deep in the interior of \(\kappa\Omega\). Moreover, when \(\kappa\gg 1\), the curvature of \(\kappa\partial\Omega\) is very slight, so \(\Phi_{\kappa}\) locally resembles an isometric copy of \(\varphi\) itself.
Thus when \(\kappa\gg 1,\Phi_{\kappa}\) approximately unifies the limiting solutions \(1\) and \(\varphi\) in one object. We show that solutions of (3.1) coalesce around \(\Phi_{\kappa}\). **Lemma 3.1**.: _Let \(\mathcal{U}_{\kappa}\) denote the set of positive bounded solutions of (3.1). Then_ \[\lim_{\kappa\to\infty}\sup_{u\in\mathcal{U}_{\kappa}}\|u-\Phi_{\kappa}\|_{ \mathcal{C}(\kappa\Omega)}=0.\] Proof.: Let \(R\) denote the radius corresponding to \(f\) and \(s=\frac{1}{2}\) in Lemma 2.5. Because \(\Omega\) is uniformly smooth, there exists \(\kappa_{0}(f,\Omega)\in\mathbb{R}_{+}\) such that \(\kappa\Omega\) contains a ball of radius \(R\) for every \(\kappa\geq\kappa_{0}\). Recalling the notation (2.4), define \[\Omega_{\kappa}^{\ell}\coloneqq(\kappa\Omega)[\ell]=\{x\in\kappa\Omega\ |\ \mathrm{dist}(x,\kappa\partial\Omega)>\ell\}\quad\text{for }\ell>0. \tag{3.3}\] Then \(\Omega_{\kappa}^{R}\) is nonempty when \(\kappa>\kappa_{0}\). By Lemma 2.5, \[\inf_{(u,x)\in\mathcal{U}_{\kappa}\times\Omega_{\kappa}^{R}}u(x)\geq\frac{1} {2}. \tag{3.4}\] Recall that \(\kappa\Omega\) locally (and smoothly) converges to the half-space near its boundary. It follows that the intrinsic distance from any point in \(\kappa\Omega\) to \(\Omega_{\kappa}^{R}\) is uniformly bounded provided \(\kappa\gg 1\). Precisely, there exists \(\kappa_{1}(f,\Omega)\in\mathbb{R}_{+}\) such that for all \(\kappa\geq\kappa_{1}\), \[\sup_{x\in\kappa\Omega}\mathrm{dist}_{\kappa\Omega}(x,\Omega_{\kappa}^{R}) \leq 2R.\] Thus (3.4), the Harnack inequality, and uniform smoothness imply that solutions of (3.1) are uniformly positive away from \(\kappa\partial\Omega\). That is, for all \(\delta>0\) and \(\kappa\geq\kappa_{1}\), \[\inf_{(u,x)\in\mathcal{U}_{\kappa}\times\Omega_{\kappa}^{R}}u(x)>0. \tag{3.5}\] Towards a contradiction, suppose there exists \(\varepsilon>0\) and \((u_{\kappa},x_{\kappa})\in\mathcal{U}_{\kappa}\times\kappa\Omega\) such that \(|u_{\kappa}(x_{\kappa})-\Phi_{\kappa}(x_{\kappa})|>\varepsilon\) along some sequence of \(\kappa\) tending to infinity. We restrict to this sequence. If \(\mathrm{dist}(x_{\kappa},\kappa\partial\Omega)\to\infty\) along a subsequence, then we can center around \(x_{\kappa}\) and extract a locally-uniform subsequential limit \(u^{*}\) that solves (1.1) on \(\mathbb{R}^{d}\). Along the same sequence, \(\Phi_{\kappa}\) tends to \(\varphi(+\infty)=1\), so \(u^{*}(0)\leq 1-\varepsilon\). Moreover, (3.4) implies that \(u^{*}(0)\geq\frac{1}{2}\). However, the only bounded solutions of (1.1) on \(\mathbb{R}^{d}\) are \(0\) and \(1\), a contradiction. It therefore follows that the sequence \((x_{\kappa})\) remains a bounded distance from \(\kappa\partial\Omega\). We again center around \(x_{\kappa}\) and extract a locally-uniform subsequential limit \((u^{*},\Omega^{*})\) of \((u_{\kappa},\kappa\Omega)\). Note first that (3.5) implies that \(u^{*}>0\); that is, \(u^{*}\) is a positive bounded solution of (1.1) on \(\Omega^{*}\). Moreover, as \(\kappa\to\infty\), the boundary of \(\kappa\Omega\) flattens, so \(\Omega^{*}=g^{-1}\mathbb{H}^{d}\) for some isometry \(g\) of \(\mathbb{R}^{d}\). Thus by Theorem 1.1(A) of [10], \(u^{*}=\varphi\circ g\). Moreover, \(\Phi_{\kappa}\) also converges to the limit \(\Phi^{*}\coloneqq\varphi\circ g\). But by the definition of \((u_{\kappa},x_{\kappa})\), \(|u^{*}(0)-\Phi^{*}(0)|\geq\varepsilon>0\), a contradiction. The lemma follows. Due to this lemma, (3.1) becomes perturbative around \(\Phi_{\kappa}\) when \(\kappa\to\infty\). 
We are therefore interested in the linearization of (3.1) about \(\Phi_{\kappa}\). We begin by showing that the half-space solution \(\varphi\) underlying \(\Phi_{\kappa}\) is linearly stable. We tackle this in two steps. We first show that the principal eigenvalue on the half-space \(\mathbb{H}^{d}\) coincides with that on the half-line. We state the result in a rather general form, as it may be of independent interest. **Lemma 3.2**.: _Let \(\mathcal{L}\) be a self-adjoint elliptic operator on a uniformly smooth domain \(\omega\). For any \(d\geq 1\),_ \[\lambda(-(\mathcal{L}+\Delta_{\mathbb{R}^{d}}),\omega\times\mathbb{R}^{d})= \lambda(-\mathcal{L},\omega).\] _In particular, \(\lambda(-\Delta-f^{\prime}(\varphi),\mathbb{H}^{d})=\lambda(-\partial_{x}^{2} -f^{\prime}(\varphi),\mathbb{R}_{+})\)._ Proof.: We draw on the generalized principal eigenvalues \(\lambda\) and \(\lambda^{\prime}\) defined in [13]. We defined the former in (2.5); the latter is given by \[\lambda^{\prime}(-\mathcal{L},\Omega)\coloneqq\inf\big{\{}\lambda\mid\exists \psi\in W^{2,d}_{\mathrm{loc}}(\Omega)\cap L^{\infty}(\Omega)\text{ s.t. }\psi>0,\psi|_{\partial\Omega}=0,(\mathcal{L}+\lambda)\psi\geq 0 \big{\}}.\] We can view any function \(\psi\colon\omega\to\mathbb{R}\) as a function on \(\omega\times\mathbb{R}^{d}\) that is constant in the second factor. It follows that any supersolution \(\psi\) satisfying the definition (2.5) of \(\lambda(-\mathcal{L},\omega)\) yields a supersolution for \(\lambda(-(\mathcal{L}+\Delta_{\mathbb{R}^{d}}),\omega\times\mathbb{R}^{d})\). The same is true for the subsolutions defining \(\lambda^{\prime}\). Therefore \[\lambda(-(\mathcal{L}+\Delta_{\mathbb{R}^{d}}),\omega\times\mathbb{R}^{d}) \geq\lambda(-\mathcal{L},\omega)\quad\text{and}\quad\lambda^{\prime}(-( \mathcal{L}+\Delta_{\mathbb{R}^{d}}),\omega\times\mathbb{R}^{d})\leq\lambda^{ \prime}(-\mathcal{L},\omega). \tag{3.6}\] On the other hand, \(\mathcal{L}\) and \(\mathcal{L}+\Delta_{\mathbb{R}^{d}}\) are self-adjoint, so [13, Theorem 1.7(i)] implies that \[\lambda(\mathcal{L},\omega)=\lambda^{\prime}(\mathcal{L},\omega)\quad\text{ and}\quad\lambda(-(\mathcal{L}+\Delta_{\mathbb{R}^{d}}),\omega\times\mathbb{R}^{d})= \lambda^{\prime}(-(\mathcal{L}+\Delta_{\mathbb{R}^{d}}),\omega\times\mathbb{ R}^{d}). \tag{3.7}\] The lemma follows from (3.6) and (3.7). This short argument demonstrates the utility of expressing \(\lambda\) as both a supremum and an infimum. We next show stability on the half-line. **Lemma 3.3**.: _We have \(0<\lambda(-\partial_{x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+})\leq|f^{\prime} (1)|\)._ Proof.: Let \(\mathcal{H}\coloneqq-\partial_{x}^{2}-f^{\prime}(\varphi)\), which we view as a Schrodinger operator on the Dirichlet half-line. For the sake of brevity, let \(\lambda_{\mathcal{H}}\coloneqq\lambda(\mathcal{H},\mathbb{R}_{+})\) denote the generalized principal eigenvalue. By Proposition 2.3(vi) of [13], \[\lambda_{\mathcal{H}}=\inf_{\psi\in H^{1}_{0}(\mathbb{R}_{+})\setminus\{0\}} \frac{\int_{\mathbb{R}_{+}}\psi\mathcal{H}\psi}{\int_{\mathbb{R}_{+}}\psi^{2}}.\] This is the classical Rayleigh quotient formula for the principal eigenvalue. By standard functional analysis for symmetric operators, \[\lambda_{\mathcal{H}}=\min\operatorname{Spec}(\mathcal{H},\mathbb{R}_{+}),\] where \(\operatorname{Spec}(\mathcal{H},\mathbb{R}_{+})\) denotes the Dirichlet spectrum of \(\mathcal{H}\) on \(L^{2}(\mathbb{R}_{+})\). We now recall that \(\varphi\) satisfies the ODE \[-\varphi^{\prime\prime}=f(\varphi),\quad\varphi(0)=0. 
\tag{3.8}\] Moreover, \(\varphi^{\prime}>0\) and \(\varphi(+\infty)=1\). Hence \(-f^{\prime}(\varphi)\to|f^{\prime}(1)|>0\) at infinity. An elementary calculation then implies that \(\varphi\to 1\) exponentially quickly at infinity. Because \(f^{\prime}\in\mathcal{C}^{\prime}\), the same holds for the limit \(-f^{\prime}(\varphi)\to|f^{\prime}(1)|\). In particular, \(f^{\prime}(\varphi)+|f^{\prime}(1)|\in L^{1}(\mathbb{R}_{+})\). It follows from Theorem 9.38 of [34] that \(\mathcal{H}\) has no singular continuous spectrum and its absolutely continuous spectrum is \([|f^{\prime}(1)|,\infty)\). (While that theorem is stated for operators on \(\mathbb{R}\), it also holds on \(\mathbb{R}_{+}\), as the author notes below the proof.) Hence \(\lambda_{\mathcal{H}}\leq|f^{\prime}(1)|\). If \(\lambda_{\mathcal{H}}=|f^{\prime}(1)|>0\), we are done. Suppose otherwise, so that \(\mathcal{H}\) has spectrum below \(|f^{\prime}(1)|\). By the above results, \(\mathcal{H}\) has only pure point spectrum below \(|f^{\prime}(1)|\). Thus there exists a principal eigenfunction \(\psi>0\) in \(H^{1}(\mathbb{R}_{+})\) solving \[\begin{cases}-\psi^{\prime\prime}-f^{\prime}(\varphi)\psi=\lambda_{\mathcal{H }}\psi&\text{on $\mathbb{R}_{+}$},\\ \psi(0)=0.\end{cases}\] Also, differentiating (3.8), we find \[\begin{cases}-(\varphi^{\prime})^{\prime\prime}-f^{\prime}(\varphi)\varphi^{ \prime}=0&\text{on $\mathbb{R}_{+}$},\\ \varphi^{\prime\prime}(0)=0.\end{cases}\] So \(\mathcal{H}\varphi^{\prime}=0\). It is straightforward to show that both \(\psi\) and \(\varphi^{\prime}\) decay exponentially at infinity. Hence \(\psi\) and \(\varphi^{\prime}\) are exponentially-localized positive eigenfunctions of \(\mathcal{H}\), albeit with different boundary data. The Hopf lemma (or elementary ODE uniqueness theory) yields \(\psi^{\prime}(0),\varphi^{\prime}(0)>0\). Integrating over \(\mathbb{R}_{+}\), we find: \[\lambda_{\mathcal{H}}\int_{\mathbb{R}_{+}}\psi\varphi^{\prime} =-\int_{\mathbb{R}_{+}}[\psi^{\prime\prime}+f^{\prime}(\varphi) \psi]\varphi^{\prime}\] \[=-\int_{\mathbb{R}_{+}}[(\varphi^{\prime})^{\prime\prime}+f^{ \prime}(\varphi)\varphi^{\prime}]\psi+(\psi\varphi^{\prime\prime}-\psi^{ \prime}\varphi^{\prime})\big{|}_{0}^{\infty}\] \[=\psi^{\prime}(0)\varphi^{\prime}(0)>0.\] Now \(\psi,\varphi^{\prime}>0\) in \(\mathbb{R}_{+}\), so we must have \(\lambda_{\mathcal{H}}>0\), as desired. We now turn to the stability of the function \(\Phi_{\kappa}\) defined in (3.2). **Proposition 3.4**.: _We have_ \[\lim_{\kappa\to\infty}\lambda(-\Delta-f^{\prime}(\Phi_{\kappa}),\kappa\Omega) =\lambda(-\partial_{x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+})>0.\] As a general principle, the principal eigenvalue is upper semicontinuous in the operator and domain; see Lemma 2.2 of [11] for an example. One can thus conclude from "soft" analysis (and Lemma 3.2) that \[\limsup_{\kappa\to\infty}\lambda(-\Delta-f^{\prime}(\Phi_{\kappa}),\kappa \Omega)\leq\lambda(-\Delta-f^{\prime}(\varphi),\mathbb{H}^{d})=\lambda(- \partial_{x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+}).\] It is more challenging, however, to obtain a matching lower bound as \(\kappa\to\infty\). For this purpose, we make essential use of a beautiful result of Lieb [27]. **Theorem 3.5**.: _Let \(V\) be a uniformly \(C^{Y}(\Omega)\) potential. Then for all \(R>0\),_ \[\inf_{x\in\mathbb{R}^{d}}\lambda\big{(}-\Delta+V,\Omega\cap B_{R}(x)\big{)} \leq\lambda(-\Delta+V,\Omega)+\lambda(-\Delta,B_{1})R^{-2}. 
\tag{3.9}\] In a certain sense, this bound states that the principal eigenvalue is "local:" \(\lambda\) can be approximated to accuracy \(\varepsilon\) by examining the eigenvalue problem at spatial scale \(\varepsilon^{-1/2}\). This is crucial for our purposes, as \(\Omega\) only resembles the half-space locally. In [27], Lieb stated Lemma 3.5 for \(V\equiv 0\). However, his argument readily extends to other potentials. For the sake of completeness, we include a proof here. Proof.: Fix \(\varepsilon>0\). By the Rayleigh quotient formula for the principal eigenvalue of self-adjoint operators, there exist \(f\in\mathcal{C}_{0}^{\infty}(\Omega)\) and \(g\in\mathcal{C}_{0}^{\infty}(B_{R})\) with unit \(L^{2}\) norms such that \[\int_{\Omega}\big{(}|\nabla f|^{2}+Vf^{2}\big{)}<\lambda(-\Delta+V,\Omega)+ \frac{\varepsilon}{2}\quad\text{and}\quad\int_{B_{R}}|\nabla g|^{2}<\lambda(- \Delta,B_{R})+\frac{\varepsilon}{2}. \tag{3.10}\] We extend \(g\) by \(0\) to \(\mathbb{R}^{d}\) and define \[h_{x}(y)\coloneqq f(y)g(y-x)\] for \(x\in\mathbb{R}^{d}\) and \(y\in\Omega\). Next, we define, \[T(x)\coloneqq\int_{\Omega}\big{(}|\nabla h_{x}|^{2}+Vh_{x}^{2}\big{)}\quad \text{and}\quad D(x)\coloneqq\int_{\Omega}h_{x}^{2}.\] Fubini and our \(L^{2}\)-normalization of \(f\) and \(g\) imply that \(\int_{\mathbb{R}^{d}}D=1\) and \[\int_{\mathbb{R}^{d}\times\Omega}V(y)h_{x}(y)^{2}\,\operatorname{dx} \operatorname{dy}=\int_{\mathbb{R}^{d}\times\Omega}V(y)f(y)^{2}g(y-x)^{2}\, \operatorname{dx}\operatorname{d}y=\int_{\Omega}Vf^{2}. \tag{3.11}\] For the gradient term of \(T\), we compute \[|\nabla h_{x}(y)|^{2}=|\nabla f(y)|^{2}g(y-x)^{2}+f(y)^{2}|\nabla g(y-x)|^{2}+ \frac{1}{2}\nabla(f^{2})(y)\cdot\nabla(g^{2})(y-x).\] Writing \(\nabla(g^{2})(y-x)\) as \(-\nabla_{x}[g(y-x)^{2}]\), we see that the final term vanishes when we integrate in \(x\). Thus by Fubini, \[\begin{split}\int_{\mathbb{R}^{d}\times\Omega}|\nabla h_{x}|^{2}& =\int_{\mathbb{R}^{d}\times\Omega}\big{[}|\nabla f(y)|^{2}g(y-x)^{ 2}+f(y)^{2}|\nabla g(y-x)|^{2}\big{]}\\ &=\int_{\Omega}|\nabla f|^{2}+\int_{B_{R}}|\nabla g|^{2}.\end{split} \tag{3.12}\] Combining (3.11) and (3.10) with (3.12), we see that \[\int_{\mathbb{R}^{d}}T(x)<\lambda(-\Delta+V,\Omega)+\lambda(-\Delta,B_{R})+\varepsilon.\] Since \(\int D=1\), we have \[\int_{\mathbb{R}^{d}}\big{[}T(x)-[\lambda(-\Delta+V,\Omega)+\lambda(-\Delta,B _{R})+\varepsilon]D(x)\big{]}<0.\] Therefore \[[\lambda(-\Delta+V,\Omega)+\lambda(-\Delta,B_{R})+\varepsilon]D(x)>T(x) \tag{3.13}\] on a set of positive measure. In particular, there exists \(x\in\mathbb{R}^{d}\) satisfying (3.13). Now, if we substitute \(h_{x}\) into the Rayleigh quotient for \(\lambda\big{(}-\Delta+V,\Omega\cap B_{R}(x)\big{)}\), we obtain \(T(x)/D(x)\). Thus by our choice of \(x\), we see that \[\lambda\big{(}-\Delta+V,\Omega\cap B_{R}(x)\big{)}\leq\lambda(-\Delta+V, \Omega)+\lambda(-\Delta,B_{R})+\varepsilon.\] Since \(\varepsilon>0\) was arbitrary, (3.9) follows from the identity \(\lambda(-\Delta,B_{R})=\lambda(-\Delta,B_{1})R^{-2}\). With this tool in hand, we can tackle Proposition 3.4. Proof of Proposition 3.4.: The result is trivial when \(\Omega=\mathbb{R}^{d}\), so suppose otherwise. We first tackle the hard direction: we use our extension of Lieb's theorem to show the lower bound \[\liminf_{\kappa\to\infty}\lambda(-\Delta-f^{\prime}(\Phi_{\kappa}),\kappa\Omega )\geq\lambda(-\partial_{x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+}). 
\tag{3.14}\] Fix \(\varepsilon>0\) and choose \(R>0\) such that \(\lambda(-\Delta,B_{1})R^{-2}<\varepsilon/2\). Then Theorem 3.5 yields \[\lambda(-\Delta-f^{\prime}(\Phi_{\kappa}),\kappa\Omega)\geq\inf_{x\in\mathbb{R }^{d}}\lambda\big{(}-\Delta+f^{\prime}(\Phi_{\kappa}),\kappa\Omega\cap B_{R}( x)\big{)}-\frac{\varepsilon}{2}. \tag{3.15}\] We are free to assume that \(x\in S_{\kappa}\coloneqq\{x\in\mathbb{R}^{d}\mid\operatorname{dist}(x,\kappa \Omega)<R\}\), for otherwise \(\kappa\Omega\cap B_{R}(x)=\emptyset\). Since \(\varphi(\infty)=1\) and \(f^{\prime}\) is continuous, there exists \(Y>R\) such that \[-f^{\prime}\circ\varphi(y)\geq|f^{\prime}(1)|-\tfrac{\varepsilon}{2}\quad \text{for all }y\geq Y-R.\] Recalling the set \(\Omega_{\kappa}^{\ell}\) from (3.3), Lemma 3.3 yields \[\inf_{x\in\Omega_{\kappa}^{Y}}\lambda\big{(}-\Delta-f^{\prime}(\Phi_{\kappa}), \kappa\Omega\cap B_{R}(x)\big{)}\geq|f^{\prime}(1)|-\frac{\varepsilon}{2}\geq \lambda(-\partial_{x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+})-\frac{ \varepsilon}{2}. \tag{3.16}\] Now consider centers \(x\) in the "collar" \(\Sigma_{\kappa}\coloneqq S_{\kappa}\setminus\Omega_{\kappa}^{Y}\), which lies a bounded distance from \(\kappa\partial\Omega\). As \(\kappa\to\infty\), the principal radii of curvature of \(\kappa\partial\Omega\) grow without bound. Thus for sufficiently large \(\kappa\), every point \(x\in\Sigma_{\kappa}\) has a unique nearest point \(p(x)\in\kappa\partial\Omega\) and \(\Phi_{\kappa}\) is \(\mathcal{C}^{2,Y}\) on \(\Sigma_{\kappa}\). Let \(n(x)\) denote the inward unit normal vector to \(\kappa\partial\Omega\) at \(p(x)\). As the boundary flattens, the distance function \(z\mapsto\operatorname{dist}(z,\kappa\partial\Omega)\) comes to resemble the linear coordinate \[y_{x}(z)\coloneqq n(x)\cdot[z-p(x)]\] on \(\kappa\Omega\cap B_{R}(x)\). That is, \[\lim_{\kappa\to\infty}\sup_{x\in\Sigma_{\kappa}}\|\operatorname{dist}(\, \cdot\,,\kappa\partial\Omega)-y_{x}\|_{\mathcal{C}^{2,Y}(\kappa\Omega\cap B_{ R}(x))}=0.\] Hence \(f^{\prime}(\Phi_{\kappa})\to f^{\prime}(\varphi\circ y_{x})\) on \(\kappa\Omega\cap B_{R}(x)\) uniformly in \(x\in\Sigma_{\kappa}\) as \(\kappa\to\infty\). In the same manner, the domain \(\Omega\cap B_{R}(x)\) converges to \(\mathbb{H}_{x}\cap B_{R}(x)\), where \(\mathbb{H}_{x}\) denotes the half-space defined by \(y_{x}>0\). Since \(B_{R}\) has uniformly bounded radius, the principal eigenvalue is continuous in the potential and the domain within \(B_{R}\); see, for instance, [14]. It follows that \[\begin{split}\lim_{\kappa\to\infty}\sup_{x\in\Sigma_{\kappa}}\big{|} \lambda\big{(}-\Delta-f^{\prime}(\Phi_{\kappa}),&\kappa\Omega \cap B_{R}(x)\big{)}\\ &-\lambda\big{(}-\Delta-f^{\prime}(\varphi\circ y_{x}),\mathbb{H}_ {x}\cap B_{R}(x)\big{)}\big{|}=0.\end{split} \tag{3.17}\] Using monotonicity in the domain (see [10, Proposition 2.1(i)]), \[\lambda\big{(}-\Delta-f^{\prime}(\varphi\circ y_{x}),\mathbb{H}_{x}\cap B_{R} (x)\big{)}\geq\lambda\big{(}-\Delta-f^{\prime}(\varphi\circ y_{x}),\mathbb{H}_ {x}\big{)}.\] Now, the principal eigenvalue is invariant under isometry. 
Rotating, translating, and using Lemma 3.2, we find \[\lambda\big{(}-\Delta-f^{\prime}(\varphi\circ y_{x}),\mathbb{H}_{x}\cap B_{R} (x)\big{)}\geq\lambda\big{(}-\Delta-f^{\prime}(\varphi),\mathbb{H}^{d}\big{)} =\lambda(-\partial_{x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+}).\] In light of (3.17), we have \[\liminf_{\kappa\to\infty}\inf_{x\in\Sigma_{\kappa}}\lambda\big{(}-\Delta-f^{ \prime}(\Phi_{\kappa}),\kappa\Omega\cap B_{R}(x)\big{)}\geq\lambda(-\partial_{x }^{2}-f^{\prime}(\varphi),\mathbb{R}_{+}).\] Combining this with (3.16), we see that \[\liminf_{\kappa\to\infty}\inf_{x\in S_{\kappa}}\lambda\big{(}-\Delta-f^{ \prime}(\Phi_{\kappa}),\kappa\Omega\cap B_{R}(x)\big{)}\geq\lambda(-\partial_ {x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+})-\frac{\varepsilon}{2}.\] Then (3.15) yields \[\liminf_{\kappa\to\infty}\lambda(-\Delta-f^{\prime}(\Phi_{\kappa}),\kappa \Omega)\geq\lambda(-\partial_{x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+})-\varepsilon.\] Since \(\varepsilon>0\) was arbitrary, (3.14) follows. We now show a matching upper bound. For each \(\kappa>0\), let \(g_{\kappa}\) be an isometry of \(\mathbb{R}^{d}\) such that if \(\Omega_{\kappa}\coloneqq g_{\kappa}^{-1}(\kappa\Omega)\), then \(0\in\partial\Omega_{\kappa}\) and \(e_{n}\) is the inward unit normal vector to \(\partial\Omega_{\kappa}\) at \(0\). We showed above that \(\Omega_{\kappa}\to\mathbb{H}^{d}\) locally uniformly in a sense made precise in Definition A.2 in [10]. Similarly, we showed that \(\Phi_{\kappa}\circ g_{\kappa}\to\varphi\) locally uniformly in \(\mathcal{C}^{2,\gamma}\). Hence by isometry-invariance, Lemma 2.2 of [10], and Lemma 3.2, \[\limsup_{\kappa\to\infty}\lambda(-\Delta-f^{\prime}(\Phi_{\kappa}),\kappa \Omega)\leq\lambda(-\Delta-f^{\prime}(\varphi),\mathbb{H}^{d})=\lambda(- \partial_{x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+}). \tag{3.18}\] Strictly speaking, [10, Lemma 2.2] only treats the operator \(-\Delta\). However, the inclusion of a locally convergent potential like \(-f^{\prime}(\Phi_{\kappa}\circ g_{\kappa})\) does not change the proof; we do not repeat it here. The proposition now follows from (3.14), (3.18), and Lemma 3.3. With this spectral estimate in place, we can prove uniqueness in (3.1). Proof of Theorem 1.2.: Let \(\mu\coloneqq\lambda(-\partial_{x}^{2}-f^{\prime}(\varphi),\mathbb{R}_{+})\), which is positive by Lemma 3.3. Because \(f^{\prime}\) is uniformly continuous, there exists \(\delta>0\) such that \[|f^{\prime}(s_{1})-f^{\prime}(s_{2})|\leq\frac{\mu}{3}\quad\text{when }|s_{1}-s_{2}|\leq\delta. \tag{3.19}\] Recall that \(\mathcal{U}_{\kappa}\) denotes the set of positive bounded solutions of (3.1). By Lemma 3.1 and Proposition 3.4, there exists \(\underline{\kappa}(\Omega,f)>0\) such that for all \(\kappa\geq\underline{\kappa}\), we have \[\sup_{u\in\mathcal{U}_{\kappa}}\|u-\Phi_{\kappa}\|_{\mathcal{C}(\kappa\Omega)} \leq\delta \tag{3.20}\] and \[\lambda(-\Delta-f^{\prime}(\Phi_{\kappa}),\kappa\Omega)\geq\frac{2\mu}{3}. \tag{3.21}\] Now fix \(\kappa\geq\underline{\kappa}\) and two solutions \(u,v\in\mathcal{U}_{\kappa}\). By the mean value theorem and (3.20), there exists \(r\colon\Omega\to[0,1]\) such that \(|r-\Phi_{\kappa}|\leq\delta\) and \[f(u)-f(v)=f^{\prime}(r)(u-v).\] Let \(w\coloneqq u-v\) and \(\mathcal{L}\coloneqq\Delta+f^{\prime}(r)\). 
Then (3.1) yields \[\begin{cases}\mathcal{L}w=0&\text{in }\kappa\Omega,\\ w=0&\text{on }\kappa\partial\Omega.\end{cases} \tag{3.22}\] Because \(|r-\Phi_{\kappa}|<\delta\), (3.19) implies that \(f^{\prime}(r)=f^{\prime}(\Phi_{\kappa})+q\) for some remainder \(q\) satisfying \(|q|\leq\mu/3\). So \(-\mathcal{L}\geq-\Delta-f^{\prime}(\Phi_{\kappa})-\mu/3\). Using (3.21), we find \[\lambda(-\mathcal{L},\kappa\Omega)\geq\lambda(-\Delta-f^{\prime}(\Phi_{\kappa} ),\kappa\Omega)-\frac{\mu}{3}\geq\frac{\mu}{3}>0.\] By Theorem 1 of [28] (a form of the maximum principle), (3.22) implies that \(w=0\). That is, \(u=v\) on \(\kappa\Omega\). Having proved uniqueness on "strongly dilated" domains, we return to a point raised at the beginning of the section. Fix a bounded domain \(\Omega\subset\mathbb{R}^{d}\). By Proposition 1.4 of [11] there exists a positive reaction \(f\) such that (1.1) admits multiple positive bounded solutions. Qualitatively, the reaction we construct is "double-humped" as in Figure 5. It admits a solution \(u^{-}\) whose range lies in the first hump and a larger solution \(u^{+}\) whose range spans both humps. We now consider how these solutions vary as we dilate \(\Omega\) by a factor \(\kappa\geq 1\). The proof of [11, Proposition 1.4] shows that we can arrange \(f^{\prime}(0)>\lambda(-\Delta,\Omega)\), which implies \(f^{\prime}(0)>\lambda(-\Delta,\kappa\Omega)\) for \(\kappa\geq 1\). Then Proposition 1.8 of [11] implies that (3.1) admits a minimal solution \(u_{\kappa}^{-}\). Moreover, the proof of the same proposition shows the existence of a maximal solution \(u_{\kappa}^{+}\). A comparison argument readily shows that the family \((u_{\kappa}^{\pm})_{\kappa\geq 1}\) is increasing in \(\kappa\). We consider the behavior of the pair \(u_{\kappa}^{\pm}\) as \(\kappa\) grows from \(1\). When \(\kappa=1\), we have \(u_{1}^{-}<u_{1}^{+}\). On the other hand, Theorem 1.2 provides \(\kappa(\Omega,f)>1\) such that \(u_{\kappa}^{-}=u_{\kappa}^{+}\) once \(\kappa\geq\underline{\kappa}\). We informally describe the manner in which the branch \(u_{\kappa}^{-}\) might merge with \(u_{\kappa}^{+}\). Initially, we have constructed our pair of solutions so that \(\sup u_{1}^{-}<\frac{1}{2}\). That is, \(u_{1}^{-}\) is confined to the first hump of \(f\). However, Lemma 3.1 implies that \(u_{\kappa}^{\pm}\) must coalesce around \(\Phi_{\kappa}\) as \(\kappa\to\infty\). Since \(\sup\Phi_{\kappa}\to\varphi(\infty)=1\) in this limit, \(\sup u_{\kappa}^{-}\) must eventually cross the threshold \(\frac{1}{2}\). That is, as \(\kappa\) grows, the minimal solution \(u_{\kappa}^{-}\) must eventually grow into the second hump of the reaction \(f\). Once this occurs, our proof of nonuniqueness in [11] breaks down, and there is nothing preventing \(u_{\kappa}^{-}\) from merging with \(u_{\kappa}^{+}\). Of course, the branches \(u_{\kappa}^{\pm}\) may exhibit more complicated behavior between \(\kappa=1\) and \(\kappa=\underline{\kappa}\). Nonetheless, the invasion of the second hump by \(u_{\kappa}^{-}\) is one pathway by which multiple solution branches might merge as \(\kappa\to\infty\). It would be interesting to rigorously confirm this picture on a simple domain like \(\Omega=(0,1)\). Figure 5. A “double-humped” reaction exhibiting nonuniqueness. ## 4. Epigraphs We now turn to uniqueness on epigraphs: domains bounded by the graph of a function. 
Given a \(\mathcal{C}^{2,\gamma}_{\mathrm{loc}}\) function \(\phi\colon\mathbb{R}^{d-1}\to\mathbb{R}\), we study its epigraph \[\Omega\coloneqq\big{\{}(x^{\prime},y)\in\mathbb{R}^{d-1}\times\mathbb{R}\mid y>\phi(x^{\prime})\big{\}}.\] We assume that \(\Omega\) is uniformly \(\mathcal{C}^{2,\gamma}\) as a subset of \(\mathbb{R}^{d}\). Notably, this is much weaker than assuming that \(\phi\) is a uniformly \(\mathcal{C}^{2,\gamma}\) _function_. This discrepancy is due to the fact that the smoothness of a domain can be measured with respect to different coordinate frames. If the gradient \(\nabla\phi\) is very large but slowly-varying, the epigraph \(\Omega\) can still be quite smooth, because its local \(\mathcal{C}^{2,\gamma}\) norm can be measured with respect to a frame oriented normal to the graph of \(\phi\). _Example 2_.: The parabolic epigraph \(\Omega\) of the quadratic \(\phi(x)=x^{2}\) is uniformly \(\mathcal{C}^{2,\gamma}\), but \(\phi\) and \(\phi^{\prime}\) diverge at infinity, so \(\phi\) is not uniformly \(\mathcal{C}^{2,\gamma}\). We are interested in the uniqueness of positive bounded solutions of (1.1) on the epigraph \(\Omega\). ### Epigraphs of uniformly Lipschitz functions This question has already been resolved for an important subclass of epigraphs: those for which \(\mathrm{Lip}\,\phi<\infty\), where \(\mathrm{Lip}\,\phi\) denotes the global Lipschitz constant of \(\phi\). For convenience, we refer to these as uniformly Lipschitz epigraphs. In [9], Caffarelli, Nirenberg, and the first author studied the qualitative properties of (1.1) in this uniformly Lipschitz setting. For convenience, we restate a form of their main result here. **Theorem 4.1** ([9]).: _Let \(\Omega\) be the epigraph of a function \(\phi\) such that \(\mathrm{Lip}\,\phi<\infty\). Then (1.1) admits a unique positive bounded solution \(u\), and \(\partial_{y}u>0\)._ In this section, we expand this result to a much broader class of epigraphs. The condition \(\mathrm{Lip}\,\phi<\infty\) ensures that the graph of \(\phi\) (the boundary \(\partial\Omega\)) lies between two cones oriented along the \(y\)-axis. Many epigraphs of interest, such as the parabola in Example 2, do not satisfy this condition--they correspond to functions \(\phi\) that grow superlinearly at infinity. We are thus led to a natural question: does the conclusion of Theorem 4.1 hold for _all_ (uniformly \(\mathcal{C}^{2,\gamma}\)) epigraphs? While we do not fully resolve this question, we make significant progress. ### Asymptotically uniformly Lipschitz epigraphs We study epigraphs that locally resemble uniformly Lipschitz epigraphs at infinity, perhaps after rotation. **Definition 4.1**.: The epigraph \(\Omega\) is _asymptotically uniformly-Lipschitz_ (AUL) if there exists \(M\in\mathbb{R}_{+}\) such that every locally uniform limit of \(\Omega\) at infinity is either \(\mathbb{R}^{d}\) or a rotation of an epigraph with global Lipschitz constant at most \(M\). That is, if we examine \(\Omega\) near a sequence of points tending to \(\infty\), it locally resembles either the whole space or a rotation of a uniformly Lipschitz epigraph of the form discussed above. In particular, if the curvature of \(\partial\Omega\) vanishes at infinity, the only limit domains at infinity are \(\mathbb{R}^{d}\) and an isometry of the half-space \(\mathbb{H}^{d}\), which is evidently uniformly Lipschitz. It follows that asymptotically flat epigraphs in the sense of Definition 1.2 are AUL in the sense of Definition 4.1.
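To make Example 2 concrete, we record a short curvature computation (an illustrative aside). For \(\phi(x)=x^{2}\), the curvature of the boundary curve \(\partial\Omega\) at the point \((x,x^{2})\) is \[\kappa(x)=\frac{\phi^{\prime\prime}(x)}{\big{(}1+\phi^{\prime}(x)^{2}\big{)}^{3/2}}=\frac{2}{(1+4x^{2})^{3/2}}\to 0\quad\text{as }|x|\to\infty.\] Thus the curvature of \(\partial\Omega\) vanishes at infinity, so by the observation above the only limit domains of the parabolic epigraph at infinity are \(\mathbb{R}^{2}\) and isometries of the half-plane \(\mathbb{H}^{2}\). In particular, this epigraph is AUL even though \(\phi\) itself is not uniformly \(\mathcal{C}^{2,\gamma}\).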
On the other hand, the parabola with "steps" in Figure 6(a) is an AUL epigraph that is neither uniformly Lipschitz nor asymptotically flat. We show uniqueness on AUL epigraphs. **Theorem 4.2**.: _If \(\Omega\) is AUL, then (1.1) has a unique positive bounded solution \(u\). Moreover, \(\partial_{y}u>0\)._ This extends and significantly strengthens the main result of [9]. Given that bounded domains do not exhibit uniqueness in this generality, Theorem 4.2 is quite striking. The epigraph structure seems conducive to uniqueness in (1.1). Definition 4.1 encompasses a great variety of "natural" epigraphs. It is, however, not comprehensive: there are uniformly \(\mathcal{C}^{2,\gamma}\) epigraphs that are not AUL. For example, an epigraph with a sequence of ever deeper "wells" of bounded width is not AUL; see Figure 6(b) for an example. Such domains present obstacles to our uniqueness argument; we discuss these difficulties in greater detail in Section 4.5. Before proceeding, we use our main Theorem 4.2 to show the simpler results in the introduction. Proof of Theorem 1.3.: As noted above, asymptotically flat epigraphs are AUL. So Theorem 1.3 is a special case of Theorem 4.2. Proof of Corollary 1.4.: If \(\phi\colon\mathbb{R}\to\mathbb{R}\) in \(\mathcal{C}^{2,\gamma}_{\mathrm{loc}}\) is convex, then the limits \(\phi^{\prime}(\pm\infty)\) exist (though they may be infinite). It follows that the only limits of \(\Omega\) at infinity are either \(\mathbb{R}^{2}\) or isometries of \(\mathbb{H}^{2}\). Hence \(\Omega\) is AUL, and the corollary follows from Theorem 4.2. ### Stability on uniformly Lipschitz epigraphs To prove Theorem 4.2, we use uniformly Lipschitz epigraphs as the "building blocks" of AUL domains. The centerpiece of the argument is a new qualitative property of uniformly Lipschitz epigraphs: the unique solution \(u\) in Theorem 4.1 is _strictly stable_. This stability neatly complements the other qualitative properties shown in [9]. Figure 6. (a) An AUL epigraph that is neither uniformly Lipschitz nor asymptotically flat. (b) A uniformly \(\mathcal{C}^{2,\gamma}\) epigraph with arbitrarily deep "wells" that is not AUL. The well walls are not quite vertical, so \(\Omega\) is still an epigraph of a continuous function. In fact, we will require this strict stability to be uniform over a broad class of epigraphs. To state the result, we recall the notion of a \((\gamma,r,K)\)-smooth domain from Definition A.1 of [11]. We do not restate the full (rather technical) definition, but informally, the boundary of a \((\gamma,r,K)\)-smooth domain has \(C^{2,\gamma}\) norm no larger than \(K\) at spatial scale \(r\). Given \(\gamma\in(0,1)\) and \(r,K,M\in\mathbb{R}_{+}\), let \(\mathcal{G}(\gamma,r,K,M)\) denote the set of all \((\gamma,r,K)\)-smooth epigraphs with global Lipschitz constant at most \(M\). Given \(G\in\mathcal{G}(\gamma,r,K,M)\), let \(u^{G}\) denote the unique positive bounded solution of (1.1) on \(G\) provided by Theorem 4.1. **Proposition 4.3**.: _For all \(\gamma\in(0,1)\) and \(r,K,M\in\mathbb{R}_{+}\),_ \[|f^{\prime}(1)|\geq\inf_{G\in\mathcal{G}(\gamma,r,K,M)}\lambda(-\Delta-f^{\prime}(u^{G}),G)>0. \tag{4.1}\] The half-space is the simplest epigraph, and we have already shown strict stability in that setting in Lemmas 3.2 and 3.3. So Proposition 4.3 can be viewed as a vast generalization of those lemmas. Its proof, naturally, is significantly more complex. The uniformity in Proposition 4.3 leads to a curious question: which epigraph is "most unstable"?
It seems possible that the smoothness parameters in Proposition 4.3 are superfluous--the infimum in (4.1) may depend on \(M\) alone. If this is the case, the only free parameter is the Lipschitz bound \(M\), and we are left with a family of clean geometric optimization problems. For example, which 1-Lipschitz epigraph \(G\) minimizes the eigenvalue in (4.1)? The quarter-plane \(\{y>|x|\}\) is a natural guess, but we leave this line of inquiry to future investigation. Proof of Proposition 4.3.: Fix an epigraph \(G\in\mathcal{G}(\gamma,r,K,M)\) corresponding to a function \(\phi\) with \(\operatorname{Lip}\phi\leq M\) and let \(u\coloneqq u^{G}\). Let \(y^{\prime}\coloneqq y-\phi(x^{\prime})\), so \(G=\{y^{\prime}>0\}\). Because \(\operatorname{Lip}\phi\leq M\), \[\operatorname{dist}(x,\partial G)\geq\frac{y^{\prime}}{\sqrt{M^{2}+1}}. \tag{4.2}\] Now \(f^{\prime}\) is continuous, so there exists \(\eta\in(0,1)\) such that \(f^{\prime}\leq\frac{2}{3}f^{\prime}(1)<0\) on \((\eta,1)\). Combining Lemma 2.5 with (4.2), we see that there exists \(H>1\) such that \(u>\eta\) where \(y^{\prime}>H\). In particular, \[-f^{\prime}(u)\geq\frac{2}{3}|f^{\prime}(1)|\quad\text{where }y^{\prime}>H. \tag{4.3}\] Let \(\mathcal{L}\coloneqq\Delta+f^{\prime}(u)\) and \(u^{\prime}\coloneqq\partial_{y}u>0\), noting that \(\mathcal{L}u^{\prime}=0\). Given \(R>H+1\), let \(B_{R}^{d-1}\) denote the \(R\)-ball in \(\mathbb{R}^{d-1}\) and let \(\Gamma_{R}\coloneqq B_{R}^{d-1}\times\mathbb{R}\) denote the cylinder of radius \(R\). We work on the truncation \(G_{R}\coloneqq\{0<y^{\prime}<R\}\cap\Gamma_{R}\) of \(G\). Let \[\lambda_{R}\coloneqq\lambda(-\mathcal{L},G_{R}).\] Note that \(G_{R}\) is not smooth: it has corners. This will not pose difficulties. Because \(G_{R}\) is bounded, \(\lambda_{R}\) corresponds to a positive principal eigenfunction \(\psi_{R}\in H_{0}^{1}(G_{R})\). The family \((G_{R})_{R>0}\) exhausts \(G\), so by [13, Proposition 2.3(iv)], \[\lambda(-\mathcal{L},G)=\lim_{R\to\infty}\lambda_{R}. \tag{4.4}\] We wish to show that the eigenvalues \((\lambda_{R})_{R>0}\) are uniformly positive. We note that they are non-increasing in \(R\) because the domains \(G_{R}\) are nested. If \(\lambda_{R}\geq\frac{1}{3}|f^{\prime}(1)|\) for all \(R>0\), we are done. Suppose otherwise, so there exists \(\underline{R}>H+1\) such that \(\lambda_{R}<\frac{1}{3}|f^{\prime}(1)|\) for all \(R>\underline{R}\). In the remainder of the proof, we assume \(R>\underline{R}\). In the following, we let \(\nu\) denote the outward unit normal vector field on the relevant domain of integration. Using \(-\mathcal{L}\psi_{R}=\lambda_{R}\psi_{R}\) and \(\mathcal{L}u^{\prime}=0\) and integrating by parts, we find \[\lambda_{R}\int_{G_{R}}u^{\prime}\psi_{R}=-\int_{\partial G_{R}}u^{\prime}\partial_{\nu}\psi_{R}=\int_{\partial G_{R}}u^{\prime}|\partial_{\nu}\psi_{R}|.\] In the final equality we have used the fact that \(\psi_{R}\) is positive, so \(\partial_{\nu}\psi_{R}<0\) on \(\partial G_{R}\). Let \(\partial_{-}G_{R}\coloneqq\partial G\cap\Gamma_{R}\) denote the bottom boundary of \(G_{R}\). Then we have \[\lambda_{R}>\frac{\int_{\partial_{-}G_{R}}u^{\prime}|\partial_{\nu}\psi_{R}|}{\int_{G_{R}}u^{\prime}\psi_{R}}. \tag{4.5}\] We show that this ratio is uniformly positive. We will argue that a significant fraction of the mass in \(\int_{G_{R}}u^{\prime}\psi_{R}\) is concentrated near the bottom of the domain.
To this end, let \(G_{R}^{+}\coloneqq\{H<y^{\prime}<R\}\cap\Gamma_{R}\) denote the portion of \(G_{R}\) that is at least height \(H\) above \(\partial G\). Using (4.3) and our assumption that \(\lambda_{R}<\frac{1}{3}|f^{\prime}(1)|\), we compute \[\begin{split}\int_{G_{R}^{+}}\psi_{R}\leq\frac{3}{|f^{\prime}(1)|}\int_{G_{R}^{+}}[-f^{\prime}(u)-\lambda_{R}]\psi_{R}\\ =\frac{3}{|f^{\prime}(1)|}\int_{G_{R}^{+}}\Delta\psi_{R}=\frac{3}{|f^{\prime}(1)|}\int_{\partial G_{R}^{+}}\partial_{\nu}\psi_{R}.\end{split} \tag{4.6}\] Now \(\partial_{\nu}\psi_{R}<0\) on the lateral and top boundaries \(\partial G_{R}\cap\{y^{\prime}>H\}\), so \[\int_{\partial G_{R}^{+}}\partial_{\nu}\psi_{R}\leq\int_{\partial_{-}G_{R}^{+}}|\partial_{\nu}\psi_{R}|, \tag{4.7}\] where \(\partial_{-}G_{R}^{+}\coloneqq\{y^{\prime}=H\}\cap\Gamma_{R}\) denotes the bottom boundary of \(G_{R}^{+}\). Combining (4.6) and (4.7), we see that \[\int_{G_{R}^{+}}\psi_{R}\leq\frac{3}{|f^{\prime}(1)|}\int_{\partial_{-}G_{R}^{+}}|\partial_{\nu}\psi_{R}|. \tag{4.8}\] Now let \(G_{R}^{-}\coloneqq\{0<y^{\prime}<H\}\cap\Gamma_{R}\) denote the portion of \(G_{R}\) within height \(H\) of \(\partial G\). We claim that \[\int_{\partial_{-}G_{R}^{+}}|\partial_{\nu}\psi_{R}|\lesssim\int_{G_{R}^{-}}\psi_{R}, \tag{4.9}\] where \(a\lesssim b\) indicates that \(a\leq Cb\) for some constant \(C\) that does not depend on \(R\) but may depend on \(r,\gamma,K,M,d\), and \(f\). In the following, we denote coordinates on \(G\) by \((x^{\prime},y^{\prime})\) for \(x^{\prime}\in\mathbb{R}^{d-1}\) and \(y^{\prime}>0\) defined above. We first focus on the "interior" portion \(\partial_{-}G_{R-1}^{+}\) lying at least distance \(1\) from \(\partial G_{R}\). There, interior Schauder estimates imply that \[|\nabla\psi_{R}(x^{\prime},H)|\leq C_{\mathrm{S}}\sup_{B_{1/2}(x^{\prime},H)}\psi_{R},\] where \(C_{\mathrm{S}}\) depends only on \(d\) and the Hölder norm of \(f^{\prime}(u)\). Moreover, Harnack's inequality yields \[\sup_{B_{1/2}(x^{\prime},H)}\psi_{R}\leq C_{\mathrm{H}}\inf_{B_{1/2}(x^{\prime},H)}\psi_{R}\] for a constant \(C_{\mathrm{H}}\) depending on the same. Combining these bounds and integrating in \(y^{\prime}\), we obtain \[|\nabla\psi_{R}(x^{\prime},H)|\lesssim\int_{H-1/2}^{H}\psi_{R}(x^{\prime},y^{\prime})\ \mathrm{d}y^{\prime}\quad\text{for all }x^{\prime}\in B_{R-1}^{d-1}.\] Extending the region of integration and integrating over \(B_{R-1}^{d-1}\), we have \[\int_{B_{R-1}^{d-1}}\lvert\nabla\psi_{R}(x^{\prime},H)\rvert\ \mathrm{d}x^{\prime}\lesssim\int_{B_{R-1}^{d-1}}\int_{0}^{H}\psi_{R}(x^{\prime},y^{\prime})\ \mathrm{d}y^{\prime}\ \mathrm{d}x^{\prime}. \tag{4.10}\] Now consider the portion near the lateral boundary of \(G_{R}\). Schauder estimates up to the boundary yield \[|\nabla\psi_{R}(x^{\prime},H)|\leq C_{\mathrm{S}}^{\prime}\sup_{B_{1/2}(x^{\prime},H)\cap G_{R}}\psi_{R}. \tag{4.11}\] Now define the contraction \[\iota x^{\prime}\coloneqq\frac{|x^{\prime}|-1}{|x^{\prime}|}x^{\prime},\] which moves points in \(B_{R}^{d-1}\) toward the origin by distance \(1\). Then Carleson's inequality implies that \[\psi_{R}(x^{\prime},y^{\prime})\leq C_{\mathrm{C}}\psi_{R}(\iota x^{\prime},y^{\prime})\quad\text{for all }x^{\prime}\in B_{R}^{d-1}\backslash B_{R-1}^{d-1},\ y^{\prime}\in(H{-}1/2,H{+}1/2). \tag{4.12}\] That is, the value of \(\psi_{R}\) near the boundary is controlled by its values deeper in the interior.
Combining (4.11) and (4.12) with the interior Harnack inequality, we can write \[|\nabla\psi_{R}(x^{\prime},H)|\lesssim\int_{H-1/2}^{H}\psi_{R}(x^{\prime},y^{ \prime})\ \mathrm{d}y^{\prime}\quad\text{for all }x^{\prime}\in B_{R}^{d-1} \setminus B_{R-1}^{d-1}.\] Now \(\iota\) has bounded Jacobian once \(R\geq 2\). It follows that \[\int_{B_{R}^{d-1}\backslash B_{R-1}^{d-1}}\lvert\nabla\psi_{R}(x^{\prime},H) \rvert\ \mathrm{d}x^{\prime}\lesssim\int_{B_{R-1}^{d-1}}\int_{0}^{H}\psi_{R}(x^{ \prime},y^{\prime})\ \mathrm{d}y^{\prime}\ \mathrm{d}x^{\prime}.\] In conjunction with (4.10), we find \[\int_{B_{R}^{d-1}}\lvert\nabla\psi_{R}(x^{\prime},H)\rvert\ \mathrm{d}x^{\prime} \lesssim\int_{B_{R}^{d-1}}\int_{0}^{H}\psi_{R}(x^{\prime},y^{\prime})\ \mathrm{d}y^{\prime}\ \mathrm{d}x^{\prime},\] which is (4.9). We next observe that \(u^{\prime}\) is uniformly positive on \(G_{R}^{-}\) and uniformly bounded on \(G_{R}\). Hence (4.8) and (4.9) yield \[\int_{G_{R}^{+}}u^{\prime}\psi_{R}\lesssim\int_{G_{R}^{+}}\psi_{R}\lesssim\int _{G_{R}^{-}}\psi_{R}\lesssim\int_{G_{R}^{-}}u^{\prime}\psi_{R}.\] These constants are uniform in \(G\), for otherwise we could extract a subsequential limit domain \(G^{*}\in\mathscr{G}\) such that \(\partial_{y}u^{G^{*}}\) violates the strong maximum principle or Hopf lemma. Therefore \[\int_{G_{R}}u^{\prime}\psi_{R}\lesssim\int_{G_{R}^{-}}u^{\prime}\psi_{R}, \tag{4.13}\] as claimed. Moreover, similar reasoning based on (4.12) yields \[\int_{G_{R}^{-}}u^{\prime}\psi_{R}\lesssim\int_{G_{R-1}^{-}}u^{\prime}\psi_{R}. \tag{4.14}\] Here we apply Carleson's inequality in the vicinity of the corner of \(G_{R}\); this is possible because the inequality holds on Lipschitz domains [23, 24]. For the remainder of the analysis, we work on \(\Gamma_{R-1}\) and thus avoid the corners of \(G_{R}\). We claim that \[\psi_{R}(x^{\prime},y^{\prime})\lesssim\partial_{y}\psi_{R}(x^{\prime},0)\,y^ {\prime}\quad\text{in }G_{R-1}^{-}. \tag{4.15}\] To see this, suppose to the contrary that there exists a sequence of graphs \(G^{n}\in\mathscr{G}\), radii \(R_{n}\nearrow\infty\), and points \(x_{n}=(x^{\prime}_{n},y^{\prime}_{n})\in(G_{R_{n}-1}^{n})^{-}\) such that \[\partial_{y}\psi_{R_{n}}(x^{\prime}_{n},0)\,\leq\,\frac{1}{n\,y^{\prime}_{n}} \psi_{R_{n}}(x^{\prime}_{n},y^{\prime}_{n})\quad\text{for all }n\in\mathbb{N}.\] Let \(A_{n}\) denote the affine transformation \[A_{n}x\coloneqq y^{\prime}_{n}x+x_{n}\] and let \[\Psi_{n}\coloneqq\frac{\psi_{R_{n}}\circ A_{n}}{\psi_{R_{n}}(x_{n})}.\] This satisfies \[-\Delta\Psi_{n}=(y^{\prime}_{n})^{2}\big{(}f^{\prime}\circ u\circ A_{n}+ \lambda_{R})\Psi_{n}\quad\text{on }A_{n}^{-1}G_{R_{n}}^{n} \tag{4.16}\] and \[\partial_{y}\Psi_{n}(0,-1)\,\leq\,\frac{1}{n},\] noting that \((0,-1)\in\partial A_{n}^{-1}G_{R_{n}}^{n}\). Because \(\operatorname{Lip}G^{n}\leq M\), \(A_{n}^{-1}G_{R_{n}}^{n}\) contains a ball \(B_{\rho}\) for some \(\rho>0\) independent of \(n\). Also, \(\Psi_{n}(0)=1\) and \(\mathcal{L}_{n}\Psi_{n}=0\) for the linear operator \(\mathcal{L}_{n}\) appearing in (4.16), whose coefficients are uniformly bounded. Thus Schauder estimates allow us to extract a subsequential limit of \((\Psi_{n})\) as \(n\to\infty\). We obtain a nonnegative Dirichlet solution \(\Psi_{\infty}\) of \(-\mathcal{L}_{\infty}\Psi_{\infty}=0\) on a uniformly Lipschitz epigraph \(G_{\infty}\supset B_{\rho}\) with \((0,-1)\in\partial G_{\infty}\) and \[\partial_{y}\Psi_{\infty}(0,-1)=0. 
\tag{4.17}\] The limit operator \(\mathcal{L}_{\infty}\) is either the Laplacian or a rescaling of \(\Delta+f^{\prime}(u^{G_{\infty}})-\lambda_{R}\), depending on whether \((y^{\prime}_{n})\) tends to \(0\). In either case, it satisfies the strong maximum principle and the Hopf lemma. However, \(\Psi_{\infty}(0)=1\), so by the strong maximum principle \(\Psi_{\infty}>0\). Then (4.17) contradicts the Hopf lemma; this contradiction proves (4.15). Integrating (4.15), we see that \[\int_{G_{R-1}^{-}}u^{\prime}\psi_{R}\lesssim\int_{\partial_{-}G_{R-1}}u^{\prime}\partial_{y}\psi_{R}.\] In light of (4.14), we find \[\int_{G_{R}^{-}}u^{\prime}\psi_{R}\lesssim\int_{\partial_{-}G_{R-1}}u^{\prime}\partial_{y}\psi_{R}.\] Finally, we observe that the tangential derivatives of \(\psi_{R}\) vanish on \(\partial G\), so \[\partial_{y}\psi_{R}=(\mathbf{e}_{y}\cdot\nu)\partial_{\nu}\psi_{R}.\] Because \(G\) is uniformly Lipschitz, \(-\mathbf{e}_{y}\cdot\nu\) is uniformly positive. So \[\int_{G_{R}^{-}}u^{\prime}\psi_{R}\lesssim\int_{\partial_{-}G_{R-1}}u^{\prime}\partial_{y}\psi_{R}\lesssim-\int_{\partial_{-}G_{R-1}}u^{\prime}\partial_{\nu}\psi_{R}=\int_{\partial_{-}G_{R-1}}u^{\prime}|\partial_{\nu}\psi_{R}|. \tag{4.18}\] In light of (4.13) and (4.18), (4.5) implies that \(\lambda_{R}\gtrsim 1\). We emphasize that the implied constant is independent of \(R\), so \(\inf_{R>\underline{R}}\lambda_{R}>0\). The proposition follows from (4.4). ### Uniqueness on AUL epigraphs Throughout this subsection, fix an AUL epigraph \(\Omega\). Then \(\Omega\) is \((\gamma,r,K)\)-smooth for some constants \(r,K\in\mathbb{R}_{+}\). It follows from Definition 4.1 and [11, Proposition A.1] that there exists \(M\in\mathbb{R}_{+}\) such that the local limits of \(\Omega\) at infinity are either \(\mathbb{R}^{d}\) or rotations of epigraphs in \(\mathcal{G}(\gamma,r,K,M)\). Define \[\underline{\lambda}\coloneqq\inf_{G\in\mathcal{G}(\gamma,r,K,M)}\lambda(-\Delta-f^{\prime}(u^{G}),G)>0, \tag{4.19}\] which is positive by Proposition 4.3. Using this uniform stability, we show that \(\Omega\) is "stable at infinity." Given \(R\geq 1\), let \(\Omega_{R}\) be a uniformly smooth domain satisfying \[\Omega\setminus\overline{B}_{R+1}\subset\Omega_{R}\subset\Omega\setminus\overline{B}_{R}. \tag{4.20}\] We think of \(\Omega_{R}\) as a smooth approximation of \(\Omega\setminus\overline{B}_{R}\). **Proposition 4.4**.: _Let \(u\) be a positive bounded solution of (1.1). Then_ \[\lim_{R\to\infty}\lambda(-\Delta-f^{\prime}(u),\Omega_{R})\geq\underline{\lambda}>0. \tag{4.21}\] The proof is similar to that of Proposition 3.4. Proof.: We argue by contradiction. Suppose there exists \(\delta>0\) such that \[\lambda(-\Delta-f^{\prime}(u),\Omega_{R})\leq\underline{\lambda}-3\delta \tag{4.22}\] for all \(R\geq 1\). Fix \(\rho>0\) such that \(\lambda(-\Delta,B_{1})\rho^{-2}\leq\delta\). Theorem 3.5 (due to Lieb) implies that \[\lambda(-\Delta-f^{\prime}(u),\Omega_{R})\geq\inf_{x\in\mathbb{R}^{d}}\lambda\big{(}-\Delta-f^{\prime}(u),\Omega_{R}\cap B_{\rho}(x)\big{)}-\delta. \tag{4.23}\] Taking \(R=n\in\mathbb{N}\) in (4.22) and using (4.23), we see that there exists a sequence \((x_{n})_{n\in\mathbb{N}}\) such that \[\lambda_{n}\coloneqq\lambda\big{(}-\Delta-f^{\prime}(u),\Omega_{n}\cap B_{\rho}(x_{n})\big{)}\leq\underline{\lambda}-\delta. \tag{4.24}\] The domain \(\Omega_{n}\cap B_{\rho}(x_{n})\) must be nonempty (otherwise the eigenvalue is infinite by convention), so by (4.20) we have \(|x_{n}|\geq n-\rho\).
In particular, \(|x_{n}|\to\infty\) as \(n\to\infty.\) By (4.20), \(\Omega_{n}\subset\Omega.\) Since increasing the domain decreases the eigenvalue, (4.24) yields \[\lambda_{n}\coloneqq\lambda\big{(}-\Delta-f^{\prime}(u),\Omega\cap B_{\rho}(x_ {n})\big{)}\leq\underline{\lambda}-\delta. \tag{4.25}\] Again noting that \(\operatorname{dist}(x_{n},\Omega)\leq\rho\), we can extract a nonempty limit \(\Omega^{*}\) of \(\Omega\) along a subsequence that we rename \((x_{n})_{n\in\mathbb{N}}.\) By Definition 4.1, \(\Omega^{*}=\mathbb{R}^{d}\) or there exists a rotation \(\sigma\) such that \(\Omega^{*}=\sigma G\) for some \(G\in\mathcal{G}(y,r,K,M).\) First suppose \(\Omega^{*}=\mathbb{R}^{d}\). This implies \(\operatorname{dist}(x_{n},\Omega^{c})\to\infty\), and by Lemma 2.5 we have \(u\to 1\) uniformly on \(B_{\rho}(x_{n}).\) Then by the continuity of \(\lambda\) on bounded domains (see, e.g., [14]), we obtain \[|f^{\prime}(1)|\leq\lim_{n\to\infty}\lambda_{n}\leq\underline{\lambda}-\delta.\] This contradicts (4.1), so we must have \(\Omega^{*}=\sigma G\). In the vicinity of \(x_{n}\), \(u\) must converge to the unique positive bounded solution \(u^{G}\) on \(G\) (after rotation), for Lemma 2.5 prevents \(u\) from degenerating to \(0\). Again, the continuity of \(\lambda\) on bounded domains yields \[\lim_{n\to\infty}\lambda_{n}=\lambda(-\Delta-f^{\prime}(u^{G}),G\cap B_{\rho}) \geq\lambda(-\Delta-f^{\prime}(u^{G}),G)\geq\underline{\lambda}\] by the definition (4.19) of \(\underline{\lambda}\). This contradicts (4.25). Thus (4.22) is false and (4.21) follows because \(\delta>0\) is arbitrary. Using this asymptotic stability, we prove a maximum principle outside a large ball. Given \(h\geq 0\), define the shift \(v_{h}\coloneqq v(\,\cdot\,-h\mathbf{e}_{y})\) on \(\Omega+h\mathbf{e}_{y}\), which we extend by \(0\) to \(\Omega\). **Proposition 4.5**.: _There exists \(R>0\) such that for all \(h\geq 0\), if \(u\geq v_{h}\) on \(\Omega\cap\partial\Omega_{R}\), then \(u\geq v_{h}\) in \(\Omega_{R}\)._ Proof.: Because \(f^{\prime}\) is uniformly continuous on \((0,1)\), there exists \(\delta>0\) such that \[|f^{\prime}(r)-f^{\prime}(s)|\leq\underline{\lambda}/3\quad\text{for all }r,s\in(0,1)\text{ such that }|r-s|<\delta. \tag{4.26}\] We claim that there exists \(R>0\) such that \(\lambda(-\Delta-f^{\prime}(u),\Omega_{R})\geq 2\underline{\lambda}/3\) and \[\sup_{h\geq 0}\sup_{(\Omega+h\mathbf{e}_{y})\setminus\overline{B}_{R}}(v_{h}- u)\leq\delta. \tag{4.27}\] By Proposition 4.4, it suffices to show (4.27) for sufficiently large \(R\). Suppose for the sake of contradiction that there exists a sequence \((h_{n},x_{n})_{n\in\mathbb{N}}\) such that \(h_{n}\geq 0\), \(x_{n}\in(\Omega+h_{n}\mathbf{e}_{y})\setminus\overline{B}_{n}\), and \[v_{h_{n}}(x_{n})>u(x_{n})+\delta. \tag{4.28}\] By Lemma 2.5, we must have \(\sup_{n\in\mathbb{N}}\operatorname{dist}(x_{n},\partial\Omega)<\infty\). Thus we can extract a subsequence that we rename \((x_{n})\) along which \(\Omega-x_{n}\), \(u(\,\cdot\,+x_{n})\), and \(v_{h_{n}}(\,\cdot\,+x_{n})\) have limits \(\Omega^{*},u^{*}\), and \(v^{*}\). By hypothesis, \(\Omega^{*}=\sigma G\) for some rotation \(\sigma\) and an epigraph \(G\in\mathcal{G}(\gamma,r,K,M)\). By uniqueness on \(G\), \(u^{*}\circ\sigma=u^{G}\), for Lemma 2.5 prevents \(u^{*}=0\). Also, \(v^{*}\circ\sigma\) is a nonnegative subsolution on \(G\). 
Let \(\mathcal{P}\) denote the parabolic semigroup on \(G\), so that \(w(t,x)\coloneqq(\mathcal{P}_{t}q)(x)\) solves \[\begin{cases}\partial_{t}w=\Delta w+f(w)&\text{in $G$},\\ w=0&\text{on $\partial G$},\\ w(t=0,\,\cdot\,)=q.\end{cases} \tag{4.29}\] Because \(v^{*}\circ\sigma\) is a subsolution, \(\mathcal{P}_{t}(v^{*}\circ\sigma)\) is increasing in \(t\) and thus has a limit \(\mathcal{P}_{\infty}(v^{*}\circ\sigma)\geq v^{*}\circ\sigma\) solving (1.1) on \(G\). By Theorem 4.1, the only two solutions are \(0\) and \(u^{G}\), so \(v^{*}\circ\sigma\leq u^{G}\). Thus \[\liminf_{n\to\infty}v_{h_{n}}(x_{n})\,\leq\limsup_{n\to\infty}u(x_{n}),\] contradicting (4.28). This proves (4.27). We fix the corresponding value of \(R\) in the remainder of the proof. Now fix \(h\geq 0\) and suppose \(w\coloneqq v_{h}-u\leq 0\) on \(\Omega\cap\partial\Omega_{R}\). Because \(u=v_{h}=0\) on \(\partial\Omega\), we also have \(w=0\) on \(\partial\Omega\cap\partial\Omega_{R}\). Hence \(w\leq 0\) on \(\partial\Omega_{R}\). Next, using the mean value theorem, we write \[-\Delta w-f^{\prime}(r)w=0\] for some \(r\) between \(u\) and \(v_{h}\). Let \(P\coloneqq\{w>0\}\cap\Omega_{R}\). Then (4.27) and the definition (4.26) of \(\delta\) imply that \[-\Delta w-f^{\prime}(u)w-qw=0\quad\text{on }P\] for some \(q\in\mathcal{C}^{\gamma}(P)\) satisfying \(|q|\leq\underline{\lambda}/3\). Set \(q=0\) on \(P^{c}\) and let \(\mathcal{L}\coloneqq\Delta+f^{\prime}(u)+q\). Then our choice of \(R\) implies that \[\lambda(-\mathcal{L},\Omega_{R})\,\geq\lambda(-\Delta-f^{\prime}(u),\Omega_{R})-\underline{\lambda}/3\geq\underline{\lambda}/3>0.\] By Theorem 1 of [28] or Proposition 3.1 of [11], \(\mathcal{L}\) satisfies the maximum principle on \(\Omega_{R}\). Now \(\mathcal{L}w|_{P}=0\), \(\mathcal{L}0=0\), and \(w\leq 0\) on \(P^{c}\). It follows that \(-\mathcal{L}w_{+}\leq 0\) in the sense of distributions. Since we have \(w\leq 0\) on \(\partial\Omega_{R}\) by hypothesis, the maximum principle implies that \(w_{+}\leq 0\), and hence \(w_{+}\equiv 0\). That is, \(w\leq 0\) and \(v_{h}\leq u\) in \(\Omega_{R}\), as desired. As in the proof of Lemma 2.6, we have not quite satisfied the hypotheses of Proposition 3.1 of [11], but the proof goes through nonetheless. **Corollary 4.6**.: _There exists \(H\geq 0\) such that \(u\geq v_{H}\)._ Proof.: Take \(R>0\) as in Proposition 4.5 and let \[h_{R}\coloneqq\sup_{B_{R}^{d-1}}(-\phi).\] Then if \(H\coloneqq h_{R+1}+R+1\), \((\Omega+H\mathbf{e}_{y})\cap B_{R+1}=\emptyset\). By (4.20), \(u\geq 0=v_{H}\) on \(\Omega\cap\partial\Omega_{R}\). Then the corollary follows from Proposition 4.5. We can finally apply a sliding argument to prove uniqueness on \(\Omega\). Proof of Theorem 4.2.: Take \(R>0\) as in Proposition 4.5 and define \[h_{*}\coloneqq\inf\{h\geq 0\mid u\geq v_{h}\}.\] By Corollary 4.6, \(0\leq h_{*}\leq H\). We wish to show that \(h_{*}=0\). Suppose for the sake of contradiction that \(h_{*}>0\). By continuity, we have \(u\geq v_{h_{*}}\). Since \(h_{*}>0\), \(u\neq v_{h_{*}}\). Thus by the strong maximum principle and compactness, \[\inf_{\overline{\Omega+\frac{h_{*}}{2}\mathbf{e}_{y}}\cap\overline{B}_{R+1}}(u-v_{h_{*}})>0.\] Hence by continuity, there exists \(\varepsilon\in(0,h_{*}/2)\) such that \(u\geq v_{h_{*}-\varepsilon}\) on \(\Omega\cap\overline{B}_{R+1}\). Then by (4.20) and Proposition 4.5, we have \(u\geq v_{h_{*}-\varepsilon}\) in \(\Omega_{R}\), and hence in \(\Omega\). This contradicts the definition of \(h_{*}\), so in fact \(h_{*}=0\). That is, \(u\geq v\). By symmetry, \(u=v\).
Finally, since \(u\geq u_{h}\) for all \(h\geq 0\), we have \(\partial_{y}u\geq 0\). Then the strong maximum principle implies that \(\partial_{y}u>0\). ### A marginally stable epigraph The previous proof points to a broad principle: an epigraph \(\Omega\) enjoys uniqueness whenever its far-field limits are (isometries of) epigraphs with unique, strictly stable solutions. This suggests a possible induction on ever larger classes of epigraphs. At step \(n\in\mathbb{N}\), we might prove the uniqueness and strict stability of positive bounded solutions of (1.1) on epigraphs in some class \(\mathcal{C}_{n}\). By the reasoning in the preceding subsection, this would imply uniqueness on the class \(\mathcal{C}_{n+1}\) of all epigraphs whose far-field limits are isometries of epigraphs in \(\mathcal{C}_{n}\). For instance, we could take the set of uniformly Lipschitz epigraphs as the base case \(\mathcal{C}_{1}\); then \(\mathcal{C}_{2}\) would be the set of AUL epigraphs. We might hope that such a procedure could lead to a proof of uniqueness on _all_ epigraphs. We record two difficulties with this program. First, we have only demonstrated half of the inductive step: we have shown uniqueness but _not_ strict stability for solutions on AUL epigraphs (though stability seems plausible). Second, there exist epigraphs whose solutions are at best marginally stable: **Proposition 4.7**.: _There exist a positive reaction \(f\) and an epigraph \(\Omega\subset\mathbb{R}^{2}\) such that for any positive bounded solution \(u\) of (1.1) on \(\Omega\),_ \[\lambda(-\Delta-f^{\prime}(u),\Omega)\leq 0.\] This proposition asserts nothing about uniqueness on \(\Omega\), and uniqueness may well hold on all epigraphs. Nonetheless, this marginally stable example shows that the putative induction described above would break down once \(\Omega\in\mathcal{C}_{n}\). The epigraph we construct in the proof of Proposition 4.7 has an infinite sequence of ever deeper "wells" of some width \(L\), as in Figure 6(b). As a result, \(\Omega\) has the infinite strip \((0,L)\times\mathbb{R}\) as a local limit. By choosing \(f\) and \(L\) carefully, we can ensure that any solution \(u\) has vanishing eigenvalue on this limit strip. We emphasize that \(\Omega\) is not asymptotically uniformly Lipschitz, so Proposition 4.7 does not contradict Theorem 4.2. The proof of Proposition 4.7 has a distinct character from the rest of the paper, so we relegate it to Appendix A. ## 5. Non-uniqueness in domains with pockets Thus far, we have largely focused on proving uniqueness in domains satisfying various structural assumptions. In contrast, in this section we describe a robust method to produce examples of nonuniqueness. The idea is to attach a bounded "pocket" \(\Pi\) to a given domain \(\Omega_{0}\) via a narrow "bridge" \(\Gamma\) to form a "composite domain" \(\Omega=\Pi\cup\Gamma\cup\Omega_{0}\); see Figure 7 for an illustration. Figure 7. A pocket \(\Pi\) attached to \(\Omega_{0}\) via a cylindrical bridge \(\Gamma\) of length \(2L\) and diameter \(2\delta\). Given a pocket \(\Pi\), we choose a positive reaction \(f\) such that (1.1) admits multiple positive solutions on \(\Pi\). We show that this nonuniqueness extends to the entire composite domain \(\Omega\) provided the bridge \(\Gamma\) is sufficiently narrow. With this motivation, we study the behavior of solutions on the narrow bridge \(\Gamma=B_{\delta}^{d-1}\times(-L,L)\), where \(\delta,L>0\) and \(B_{\delta}^{d-1}\) denotes the \(\delta\)-ball in \(\mathbb{R}^{d-1}\). Technically, we must augment \(\Gamma\) slightly at either end to smoothly join \(\Pi\) and \(\Omega_{0}\).
We elide this point, as it poses no problems for our argument. We denote coordinates on \(\Gamma\) by \((x^{\prime},y)\). Given \(\mu>0\), let \(\mathcal{F}_{\mu}\) denote the set of Lipschitz functions \(f\colon[0,1]\to\mathbb{R}\) such that \(f(0)=f(1)=0\) and \(\operatorname{Lip}f\leq\mu\). In an exception to our standing assumptions, the reactions in \(\mathcal{F}_{\mu}\) need not be smoother than Lipschitz. **Lemma 5.1**.: _Suppose \(\Gamma\subset\Omega\) and \(\partial B_{\delta}^{d-1}\times(-L,L)\subset\partial\Omega\) for some \(L>0\). For every \(\mu>0\), there exists \(\delta(d,\mu,L)>0\) such that for all \(f\in\mathcal{F}_{\mu}\) and every solution \(0\leq u\leq 1\) of (1.1),_ \[u(\,\cdot\,,0)\,\leq\,\frac{1}{4}\phi,\] _where \(\phi\) denotes the positive Dirichlet principal eigenfunction of \(-\Delta\) on \(B_{\delta}^{d-1}\) such that \(\left\|\phi\right\|_{\infty}=1\)._ Thus solutions of (1.1) on \(\Omega\) are small at the midpoint of a narrow bridge. Proof.: Let \(\phi\) denote the positive principal eigenfunction of \(-\Delta\) on \(B_{\delta}^{d-1}\) normalized by \(\left\|\phi\right\|_{\infty}=1\). Let \(\alpha\coloneqq\lambda(-\Delta,B_{1}^{d-1})\), so that \(\lambda(-\Delta,B_{\delta}^{d-1})=\alpha\delta^{-2}\). We assume \(\alpha\delta^{-2}>\mu\). Take \(f\in\mathcal{F}_{\mu}\) and a solution \(0\leq u\leq 1\) of (1.1). Because \(f\) is \(\mathcal{C}^{0,1}\), Schauder estimates imply that \(\left|\nabla u\right|\lesssim_{d,\mu}\delta^{-1}\), where the subscript \(\lesssim_{d,\mu}\) indicates that the implied constant can depend on \(d\) and \(\mu\). After all, if we dilate \(\Omega\) by the factor \(\delta^{-1}\), we obtain a uniformly smooth domain on which the gradient is order \(1\). Shrinking to the original size, the gradient grows by a factor of \(\delta^{-1}\). Integrating from the boundary, we find \(u\lesssim_{d,\mu}\delta^{-1}\operatorname{dist}(x,\partial\Omega)\). On the other hand, \(\phi(x^{\prime})\gtrsim_{d}\delta^{-1}\operatorname{dist}(x^{\prime},\partial B_{\delta}^{d-1})\). It follows that \[u|_{\Gamma}\leq A\phi \tag{5.1}\] for some \(A(d,\mu)>0\) independent of \(\delta\). By (5.1) and the parabolic comparison principle, \(u|_{\Gamma}\) is bounded above by the solution \(z\) of the linear evolution equation \[\begin{cases}\partial_{t}z=(\Delta+\mu)z&\text{in }\Gamma,\\ z=0&\text{on }\partial B_{\delta}^{d-1}\times(-L,L),\\ z=A\phi&\text{on }B_{\delta}^{d-1}\times\{\pm L\},\\ z(t=0,\,\cdot\,)=A\phi&\text{in }\Gamma.\end{cases}\] That is, \(z\geq u\) in \(\Gamma\).
Decomposing \(z\) in the cross-sectional eigenbasis of the Laplacian, we see that \(z=Z(t,y)\phi(x^{\prime})\) for a function \(Z\) solving the one-dimensional equation \[\begin{cases}\partial_{t}Z=(\partial_{y}^{2}+\mu-\alpha\delta^{-2})Z&\text{ in }(-L,L),\\ z(t,\pm L)=A,\\ z(0,\,\cdot\,)=A&\text{in }(-L,L).\end{cases}\] Because \(\alpha\delta^{-2}>\mu\), \(Z\) converges exponentially in time to the unique steady state \[Z(\infty,y)=A\frac{\cosh\Bigl{(}y\sqrt{\alpha\delta^{-2}-\mu}\Bigr{)}}{\cosh \Bigl{(}L\sqrt{\alpha\delta^{-2}-\mu}\Bigr{)}}.\] Because \(\cosh\xi\geq\frac{1}{2}\mathrm{e}^{\xi}\), we obtain \[u(x^{\prime},0)\leq z(\infty,x^{\prime},0)=Z(\infty,0)\phi(x^{\prime})\leq 2A (d,\mu)\mathrm{e}^{-L\sqrt{\alpha\delta^{-2}-\mu}}\phi(x^{\prime}).\] We choose \(\delta(d,\mu,L)>0\) sufficiently small that \(2A(d,\mu)\mathrm{e}^{-L\sqrt{\alpha\delta^{-2}-\mu}}\leq 1/4\). We now state our main nonuniqueness result. **Theorem 5.2**.: _Given a bounded domain \(\Pi\) and \(L>0\), there exist a positive reaction \(f\) and \(\delta>0\) such that (1.1) admits multiple positive bounded solutions on \(\Omega=\Pi\cup\Gamma\cup\Omega_{0}\)._ Proof.: In [11, Proposition 1.4], we showed that there exists \(f_{0}\geq 0\) of the form in Figure 8(a) such that (1.1) admits multiple positive solutions on \(\Pi\) with reaction \(f_{0}\). In particular, there exist positive solutions \(\underline{u}_{0},\overline{u}_{0}\) such that \[\underline{u}_{0}<\frac{1}{2}\quad\text{and}\quad\sup\overline{u}_{0}>\frac{ 1}{2}. \tag{5.2}\] Let \(\mu\coloneqq\operatorname{Lip}f_{0}\) and take \(\delta(d,\mu,L)>0\) as in Lemma 5.1. Let \(\phi\) denote the positive Dirichlet principal eigenfunction of \(-\Delta\) on \(B_{\delta}^{d-1}\) such that \(\|\phi\|_{\infty}=1\). Let \(\Pi_{+}\coloneqq\Pi\cup\bigl{(}B_{\delta}^{d-1}\times[-L,0)\bigr{)}\) denote the union of the pocket \(\Pi\) and the bottom half of the bridge \(\Gamma\). Then \(\partial\Pi_{+}\cap\Omega=B_{\delta}^{d-1}\times\{0\}\). By Lemma 5.1, every solution \(0\leq u\leq 1\) of (1.1) on \(\Omega\) with a reaction \(f\in\mathcal{F}_{\mu}\) satisfies \[u\leq\frac{1}{4}\phi\quad\text{on }B_{\delta}^{d-1}\times\{0\}. \tag{5.3}\] Recall the solution \(\underline{u}_{0}<1/2\) of (1.1) on \(\Pi\) with reaction \(f_{0}\) from (5.2). Extend \(\underline{u}_{0}\) by \(0\) to \(\Omega\) and let \(v\) solve the parabolic equation \[\begin{cases}\partial_{t}v=\Delta v+f_{0}(v)&\text{in }\Pi_{+},\\ v=0&\text{on }\partial\Pi_{+}\cap\partial\Omega,\\ v=\frac{1}{4}\phi&\text{on }B_{\delta}^{d-1}\times\{0\},\\ v(t=0,\;\cdot\;)=\underline{u}_{0}&\text{in }\Pi_{+}.\end{cases} \tag{5.4}\] Then the initial condition \(\underline{u}_{0}\) is a subsolution of (5.4), so \(v\) is increasing in \(t\) and has a long-time limit \(v(\infty,\;\cdot\;)\). On the other hand, \(1/2\) is a supersolution of (5.4) because \(f_{0}(1/2)=0\), \(\phi\leq 1\), and \(\underline{u}_{0}<1/2\). Thus by the strong maximum principle, \(v(\infty,\;\cdot\;)<1/2\) on \(\overline{\Pi}_{+}\). Because \(\Pi_{+}\) is bounded, we have \[\varepsilon\coloneqq\frac{1}{2}-\sup_{\Pi_{+}}v(\infty,\;\cdot\;)>0.\] We now increase \(f_{0}\) on the region \((1/2-\varepsilon,1/2+\varepsilon)\) to be positive and smooth while remaining in \(\mathcal{F}_{\mu}\). Let \(f\) denote the new reaction, as depicted in Figure 8(b). Recall the parabolic semigroup \(\mathcal{P}\) from (4.29), defined now on the composite domain \(\Omega\). 
Because \(\underline{u}_{0}\) is a subsolution on \(\Omega\), \(\mathcal{P}_{t}\underline{u}_{0}\) is increasing in \(t\) and has a long-time limit \(\underline{u}\coloneqq\mathcal{P}_{\infty}\underline{u}_{0}\) solving (1.1). By (5.3), \(\mathcal{P}_{t}\underline{u}_{0}\leq\underline{u}\leq\phi/4\) on \(\partial\Pi_{+}\cap\Omega\). It follows that \(v\) is a supersolution for \(\mathcal{P}|_{\Pi_{+}}\). After all, \(v\leq 1/2-\varepsilon\) for all time, so \(f_{0}(v)=f(v)\). Therefore \((\mathcal{P}\underline{u}_{0})|_{\Pi_{+}}\leq v\). In the long-time limit, we have \(\underline{u}|_{\Pi_{+}}\leq v(\infty,\;\cdot\;)\). In particular, \[\sup_{\Pi}\underline{u}<\frac{1}{2}. \tag{5.5}\] Finally, recall the larger solution \(\overline{u}_{0}>\underline{u}_{0}\) of (1.1) on \(\Pi\) with reaction \(f_{0}\), which satisfies \(\sup_{\Pi}\overline{u}_{0}>1/2\). We extend \(\overline{u}_{0}\) by \(0\) to \(\Omega\). This is a subsolution for (1.1) (because \(f\geq f_{0}\)), so the limit \(\overline{u}\coloneqq\mathcal{P}_{\infty}\overline{u}_{0}\geq\overline{u}_{0}\) exists and solves (1.1). It follows from (5.2) that \[\sup_{\Pi}\overline{u}\geq\sup_{\Pi}\overline{u}_{0}>\frac{1}{2}.\] Comparing with (5.5), we see that \(\overline{u}\neq\underline{u}\). That is, (1.1) admits multiple bounded positive solutions on \(\Omega\) with the positive reaction \(f\). Figure 8. Reactions used in the proof of nonuniqueness. (a) A preliminary reaction that is positive save for a zero at \(s=1/2\). (b) A true positive reaction. In Sections 2-4, we described three classes of structured domains that enjoy uniqueness (under certain conditions): exterior-star, large dilations, and epigraphs. The nonuniqueness constructed above demonstrates the importance of these structural assumptions: if they are violated in a suitable compact set, uniqueness is lost. We present this phenomenon in Figure 9. Domains in the first row satisfy our structural hypotheses and have unique positive bounded solutions, while domains in the second have pockets that support multiple positive bounded solutions. Figure 9. Theorems 1.1–1.3 ensure that (1.1) admits a unique positive bounded solution on the following domains \(\Omega_{0}\): (a) The exterior of a ball. (b) A large ball \(B_{R}\) for \(R\gg 1\) depending on the reaction. (c) A parabola. However, Theorem 1.5 provides a pocket \(\Pi\) and a reaction \(f\) such that (1.1) admits multiple positive bounded solutions on the augmented domains \(\Omega=\Pi\cup\Gamma\cup\Omega_{0}\) in (d)–(f). ## 6. Multiple solutions in a cylinder In this section, we make a detailed study of (1.1) on cylinders with bounded smooth cross-section \(\omega\subset\mathbb{R}^{d-1}\). Our motivation is twofold. First, as Figure 6(b) demonstrates, cylinders can arise as limits of epigraphs. Indeed, we can loosely think of a cylinder as an infinitely deep and infinitely steep epigraph. Thus any form of nonuniqueness on cylinders contrasts strikingly with uniqueness on AUL epigraphs (Theorem 4.2). Second, in the previous section we demonstrated how to robustly construct domains with multiple solutions (Theorem 5.2). There, we are only guaranteed the existence of two solutions. In this section, we show that a cylinder can support uncountably many solutions. As noted in the introduction, positive reactions on the line \(\mathbb{R}\) admit only one positive bounded solution: their stable root \(1\).
We contrast this with bistable reactions, namely \(f\) for which \(f<0\) on \((0,\theta)\) and \(f>0\) on \((\theta,1)\) for some \(\theta\in(0,1)\). On \(\mathbb{R}\), one can perturb around the unstable intermediate root \(\theta\) to produce (unstable) oscillatory solutions on \(\mathbb{R}\). These vary in \(x\) and are thus not translation-invariant. We use this phenomenon as a guide to Proposition 1.6. In a precise sense, positive reactions can appear "bistable" on the interval: they may admit multiple stable solutions separated by intermediate unstable solutions. Moving up in dimension from the interval to the strip, we can perturb around the unstable cross-section to produce oscillatory solutions in the strip. ### Instability on the interval As a first step, we show that reactions with \(f^{\prime\prime}(0)>0\) admit unstable solutions with particularly simple spectra. **Lemma 6.1**.: _Let \(f\) be a \(\mathcal{C}^{2}\) positive reaction with \(f^{\prime\prime}(0)>0\). Then there exist a length \(L>0\) and a solution \(\phi\) of (1.1) on \((0,L)\) such that the Dirichlet spectrum of the operator \(-\partial_{x}^{2}-f^{\prime}(\phi)\) consists of one negative (principal) eigenvalue and infinitely many positive eigenvalues._ Informally, \(\phi\) is strictly unstable in one direction but strictly stable in all others. To prove Lemma 6.1, we employ the shooting method. Given \(\alpha\in\mathbb{R}_{+}\), let \(\phi_{\alpha}\) solve the initial-value ODE \[-\phi_{\alpha}^{\prime\prime}=f(\phi_{\alpha}),\quad\phi_{\alpha}(0)=0,\ \ \phi_{\alpha}^{\prime}(0)=\alpha. \tag{6.1}\] We are interested in values of \(\alpha\) for which \(\phi_{\alpha}\) bends back down to zero at some positive location \(L_{\alpha}\). Recalling the \(\mathbb{R}_{+}\)-steady state \(\varphi\) from [10, Theorem 1.1(A)], let \(\alpha^{*}\coloneqq\varphi^{\prime}(0)\). In the proof of [10, Lemma 2.4], we showed that \(\phi_{\alpha}\) bends back to zero if and only if \(\alpha\in(0,\alpha^{*})\). (The lemma is stated for other reaction classes, but this fact applies to positive reactions, as noted in the proof of Lemma 2.5 in the same paper.) So for all \(\alpha\in(0,\alpha^{*})\), \(\phi_{\alpha}\) has a first positive zero \(L_{\alpha}\) and is thus a positive solution of (1.1) on the interval \((0,L_{\alpha})\). Let \(s_{\alpha}\coloneqq\phi_{\alpha}(L_{\alpha}/2)\) denote the maximum value of \(\phi_{\alpha}\) and let \(F(s)\coloneqq\int_{0}^{s}f(r)\ \mathrm{d}r\) denote the antiderivative of \(f\). Multiplying (6.1) by \(\phi_{\alpha}^{\prime}\) and integrating, we find \[(\phi_{\alpha}^{\prime})^{2}-\alpha^{2}=-2F(\phi_{\alpha}). \tag{6.2}\] Since \(\phi_{\alpha}^{\prime}\) vanishes at the maximum of \(\phi_{\alpha}\), this yields \[\alpha^{2}=2F(s_{\alpha}). \tag{6.3}\] On the other hand, we can rearrange (6.2) and integrate again to obtain \[L_{\alpha}=2\int_{0}^{s_{\alpha}}\frac{\mathrm{d}s}{\sqrt{\alpha^{2}-2F(s)}}. \tag{6.4}\] (These calculations are presented in greater detail in the proof of [10, Lemma 2.4].) If a function \(g\) depends on \(\alpha\), we use the notation \(\dot{g}\) to denote the derivative \(\partial_{\alpha}g\). To prove Lemma 6.1, we first show that \(\dot{L}<0\) implies instability. **Lemma 6.2**.: _If \(\dot{L}_{\beta}<0\) for some \(\beta\in(0,\alpha^{*})\), then \(\phi_{\beta}\) is strictly unstable:_ \[\lambda\big{(}-\partial_{x}^{2}-f^{\prime}(\phi_{\beta}),(0,L_{\beta})\big{)}<0.\] In [10, Lemma 2.4], we showed that \(L_{\alpha}\to\infty\) as \(\alpha\nearrow\alpha^{*}\).
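Although the arguments below are entirely analytical, the map \(\alpha\mapsto L_{\alpha}\) determined by (6.3) and (6.4) is easy to explore numerically, and doing so illustrates the picture we are about to exploit. The following sketch is an optional aside: the sample reaction \(f(s)=s(1-s)(1+4s)\) (so that \(F(s)=s^{2}/2+s^{3}-s^{4}\), \(f^{\prime}(0)=1\), \(f^{\prime\prime}(0)=6>0\), and \(\alpha^{*}=\sqrt{2F(1)}=1\)), the endpoint substitution, and the quadrature routine are our own illustrative choices and play no role in the proofs.

```python
# Tabulate L_alpha from (6.3)-(6.4) for the sample reaction f(s) = s(1-s)(1+4s).
# Here F(s) = s^2/2 + s^3 - s^4, f'(0) = 1, f''(0) = 6 > 0, and alpha* = 1.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

F = lambda s: s**2 / 2 + s**3 - s**4  # antiderivative of f with F(0) = 0

def L(alpha):
    # (6.3): s_alpha is the root of 2 F(s) = alpha^2 in (0, 1).
    s_a = brentq(lambda s: 2 * F(s) - alpha**2, 1e-12, 1 - 1e-9)
    # (6.4) after substituting s = s_a (1 - t^2), which removes the
    # inverse-square-root singularity at the upper endpoint of the integral.
    integrand = lambda t: 2 * s_a * t / np.sqrt(2 * (F(s_a) - F(s_a * (1 - t**2))))
    return 2 * quad(integrand, 0.0, 1.0)[0]

L_star = np.pi  # small-alpha limit: pi * f'(0)^(-1/2) with f'(0) = 1
for alpha in [0.05, 0.2, 0.4, 0.6, 0.8, 0.95, 0.99]:
    print(f"alpha = {alpha:4.2f}   L_alpha = {L(alpha):8.4f}   (limit L_* = {L_star:.4f})")
```

The printed values let one check that \(L_{\alpha}\) grows without bound as \(\alpha\nearrow\alpha^{*}\), and that for small \(\alpha\) it lies below the value \(\pi f^{\prime}(0)^{-1/2}\) to which it tends as \(\alpha\searrow 0\); both features reappear in the proof of Lemma 6.1 below.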
As a result, if \(\dot{L}_{\beta}<0\), then there exists \(\alpha>\beta\) such that \(L_{\alpha}=L_{\beta}\). That is, we have nonuniqueness. This demonstrates a link between instability and nonuniqueness. For a partial converse, see Lemma A.2 below. Proof of Lemma 6.2.: Assume \(\dot{L}_{\beta}<0\) for some \(\beta\in(0,\alpha^{*})\). By symmetry, \(\phi^{\prime}_{\beta}(L_{\beta})=-\phi^{\prime}_{\beta}(0)=-\beta\). It follows that \[0=\frac{\mathrm{d}}{\mathrm{d}\alpha}\big{[}\phi_{\alpha}(L_{\alpha})\big{]}\Big{|}_{\alpha=\beta}=\dot{\phi}_{\beta}(L_{\beta})+\phi^{\prime}_{\beta}(L_{\beta})\dot{L}_{\beta}=\dot{\phi}_{\beta}(L_{\beta})-\beta\dot{L}_{\beta}.\] Rearranging, we see that \(\dot{\phi}_{\beta}(L_{\beta})=\beta\dot{L}_{\beta}<0\). On the other hand, \(\dot{\phi}_{\beta}(0)=0\) and \[\dot{\phi}^{\prime}_{\beta}(0)=\frac{\partial^{2}}{\partial x\partial\alpha}\phi_{\alpha}\Big{|}_{\alpha=\beta,x=0}=\frac{\partial^{2}}{\partial\alpha\partial x}\phi_{\alpha}\Big{|}_{\alpha=\beta,x=0}=\frac{\mathrm{d}}{\mathrm{d}\alpha}\big{[}\phi^{\prime}_{\alpha}(0)\big{]}\Big{|}_{\alpha=\beta}=\frac{\mathrm{d}}{\mathrm{d}\alpha}\alpha\Big{|}_{\alpha=\beta}=1.\] So \(\psi\coloneqq\dot{\phi}_{\beta}\) satisfies \(\psi(0)=0\), \(\psi^{\prime}(0)=1\), and \(\psi(L_{\beta})<0\). It follows that \(\psi\) vanishes somewhere between \(0\) and \(L_{\beta}\). Let \(\ell\) denote its earliest intermediate zero: \[\ell\coloneqq\min\{x\in(0,L_{\beta})\mid\psi(x)=0\}\in(0,L_{\beta}).\] Then \(\psi>0\) on \((0,\ell)\) and \(\psi(0)=\psi(\ell)=0\). Moreover, if we let \(\mathcal{L}\coloneqq\partial_{x}^{2}+f^{\prime}(\phi_{\beta})\) and differentiate (6.1) with respect to \(\alpha\) at \(\alpha=\beta\), we find \(\mathcal{L}\psi=0\). It follows that \(\psi\) is the principal eigenfunction of \(\mathcal{L}\) on \((0,\ell)\) and \[\lambda\big{(}-\mathcal{L},(0,\ell)\big{)}=0.\] The eigenvalue can only fall when we increase the domain, so \(\lambda\big{(}-\mathcal{L},(0,L_{\beta})\big{)}\leq 0\). Moreover, this eigenvalue cannot be \(0\), for otherwise its principal eigenfunction would coincide with \(\psi\) (by ODE uniqueness), and \(\psi\) is not positive on the entirety of \((0,L_{\beta})\). So in fact \(\lambda\big{(}-\mathcal{L},(0,L_{\beta})\big{)}<0\), as desired. We use this lemma to construct the simply unstable solution \(\phi\) in Lemma 6.1. Proof of Lemma 6.1.: Let \(m\coloneqq f^{\prime}(0)>0\). The function \(\alpha\mapsto L_{\alpha}\) is continuous on \((0,\alpha^{*})\) by ODE well-posedness. We consider the behavior of \(L_{\alpha}\) when \(\alpha\ll 1\). This is an almost-linear regime in which \(f(s)\) is well-approximated by \(ms\). Hence the maximal value \(s_{\alpha}\) tends to \(0\) and the length \(L_{\alpha}\) tends to the value \(L_{*}\) for which \(\lambda\big{(}-\partial_{x}^{2}-m,(0,L_{*})\big{)}=0\). The principal eigenfunction is sinusoidal, and one can explicitly compute \(L_{*}=\pi m^{-1/2}\). So \(s_{\alpha}\to 0\) and \(L_{\alpha}\to L_{*}\) as \(\alpha\searrow 0\). Using the substitution \(s=s_{\alpha}r\) in the integral in (6.4) as well as (6.3), we find \[L_{\alpha}=\sqrt{2}\int_{0}^{1}\big{[}s_{\alpha}^{-2}F(s_{\alpha})-s_{\alpha}^{-2}F(s_{\alpha}r)\big{]}^{-1/2}\ \mathrm{d}r.\] Let \[D(\alpha,r)\coloneqq s_{\alpha}^{-2}F(s_{\alpha})-s_{\alpha}^{-2}F(s_{\alpha}r)\] denote the denominator. Using the hypothesis \(f^{\prime\prime}(0)>0\), we show that \(\dot{D}>0\) for all \(r\in(0,1)\) when \(\alpha\ll 1\), which implies that \(\dot{L}_{\alpha}<0\).
Differentiating, we can write \(\dot{D}=-\dot{s}_{\alpha}s_{\alpha}^{-3}E\) for \[E(\alpha,r)\coloneqq 2F(s_{\alpha})-2F(s_{\alpha}r)-[s_{\alpha}f(s_{\alpha})-s _{\alpha}rf(s_{\alpha}r)].\] Using (6.3), we have \[\dot{s}_{\alpha}=\frac{\alpha}{f(s_{\alpha})}>0. \tag{6.5}\] It thus suffices to show that \(E<0\) when \(r\in(0,1)\) and \(\alpha\ll 1\). Using the definition of \(F\), we can write \[E=2\int_{rs_{\alpha}}^{s_{\alpha}}f(t)\ \mathrm{d}t-tf(t)\Big{|}_{rs_{\alpha}}^{s_ {\alpha}}. \tag{6.6}\] Integrating by parts, we have \[\int_{rs_{\alpha}}^{s_{\alpha}}f(t)\ \mathrm{d}t=tf(t)\Big{|}_{rs_{\alpha}}^{s_ {\alpha}}-\int_{rs_{\alpha}}^{s_{\alpha}}tf^{\prime}(t)\ \mathrm{d}t.\] Using this in (6.6), we find \[E=\int_{rs_{\alpha}}^{s_{\alpha}}[f(t)-tf^{\prime}(t)]\ \mathrm{d}t. \tag{6.7}\] Because \(f^{\prime\prime}(0)>0\) and \(f\in\mathcal{C}^{2}\), \(f\) is strictly convex on some interval \([0,\varepsilon]\) with \(\varepsilon\in(0,1)\). By the mean value theorem and strict convexity, \[\frac{f(t)}{t}=\frac{f(t)-f(0)}{t-0}<f^{\prime}(t)\quad\text{for all }t\in(0,\varepsilon].\] That is, the integrand in (6.7) is negative. Fix \(\underline{\alpha}\in(0,\alpha^{*})\) such that \[\sup_{\alpha\in(0,\underline{\alpha}]}s_{\alpha}\leq\varepsilon.\] For all \(\alpha\in(0,\underline{\alpha}]\) and \(r\in[0,1)\), we have shown that \(E<0\). As shown above, this implies that \(\dot{L}_{\alpha}<0\) for all \(\alpha\in(0,\underline{\alpha}]\). By Proposition 2.3(vii) of [13], the principal eigenvalue is continuous in the potential. By scaling, it is also continuous in the length \(L\). Therefore \[\lambda\big{(}-\partial_{x}^{2}-f^{\prime}(\phi_{\alpha}),(0,L_{\alpha})\big{)} \to\lambda\big{(}-\partial_{x}^{2}-m,(0,L_{*})\big{)}=0\quad\text{as }\alpha\searrow 0.\] The principal eigenvalue is simple and thus has a spectral gap. That is, there exists \(\delta>0\) such that all non-principal eigenvalues of \(-\partial_{x}^{2}-m\) on \((0,L_{*})\) exceed \(2\delta\). Using the minimax formula for eigenvalues, one can readily check that the second-smallest Dirichlet eigenvalue is continuous in \(\alpha\) as well. Thus there exists \(\alpha\in(0,\underline{\alpha}]\) such that all non-principal eigenvalues of \(-\partial_{x}^{2}-f^{\prime}(\phi_{\alpha})\) on \((0,L_{\alpha})\) exceed \(\delta\). On the other hand, because \(\dot{L}_{\alpha}<0\), Lemma 6.2 shows that \[\lambda\big{(}-\partial_{x}^{2}-f^{\prime}(\phi_{\alpha}),(0,L_{\alpha})\big{)} <0.\] This completes the proof with \(\phi=\phi_{\alpha}\) and \(L=L_{\alpha}\). ### Spatial dynamics We now deploy the theory of spatial dynamics to prove Proposition 1.6. Throughout this subsection, we fix the length \(L\) and solution \(\phi\) from Lemma 6.1. Let \(\mathcal{L}\coloneqq\partial_{y}^{2}+f^{\prime}(\phi)\) denote the linearization of (1.1) about \(\phi\) on \((0,L)\). We are interested in solutions near \(\phi\) of (1.1) on the strip \(\mathbb{R}\times(0,L)\). We will think of the first coordinate as a time variable, so we write coordinates as \(x=(\tau,y)\) on \(\mathbb{R}\times(0,L)\). Defining \(v=u-\phi\) in (1.1), we can write \[0=-\Delta v-[f(v+\phi)-f(\phi)]=-\partial_{\tau}^{2}v-\mathcal{L}v+g(y,v) \tag{6.8}\] for a nonlinear part \(g(y,v)\) satisfying \(g(y,0)=\partial_{v}g(y,0)=0.\) We view (6.8) as a second-order ODE in an infinite dimensional Banach space. 
To make it first order, let \(w\coloneqq\partial_{\tau}v\), so that \[\partial_{\tau}\begin{pmatrix}v\\ w\end{pmatrix}=\begin{pmatrix}0&1\\ -\mathcal{L}&0\end{pmatrix}\begin{pmatrix}v\\ w\end{pmatrix}+\begin{pmatrix}0\\ g(y,v)\end{pmatrix}. \tag{6.9}\] Let \(z\coloneqq(v,w)^{\top}\), let \(B\) denote the square linear operator above, and define the nonlinear operator \(\mathcal{N}(z)\coloneqq\big{(}0,g(y,v)\big{)}^{\top}.\) Then (6.9) becomes \[\partial_{\tau}z=Bz+\mathcal{N}(z). \tag{6.10}\] By Lemma 6.1, \(\operatorname{Spec}(-\mathcal{L})=\{-\lambda\}\cup\mathcal{P}\) for some \(\lambda>0\) and \(\mathcal{P}\subset\mathbb{R}_{+}\) satisfying \(\inf\mathcal{P}>0\). A short computation shows that the spectrum of \(B\) consists of the "square-root" of \(\operatorname{Spec}(-\mathcal{L})\) in \(\mathbb{C}\): \[\operatorname{Spec}B=\{\pm\mathrm{i}\sqrt{\lambda}\}\cup(\pm\sqrt{\mathcal{P}}).\] Thus \(B\) has precisely two (conjugate) imaginary eigenvalues and the remainder of its spectrum lies in the complement of a strip about the imaginary axis. Fischer studied dynamical systems satisfying these hypotheses in [19]; his results more or less immediately imply Proposition 1.6. Proof of Proposition 1.6.: By Theorem 5.1 of [19], there exists \(\delta_{0}>0\) such that the solutions of (6.10) satisfying \(\|z\|_{L^{\infty}_{\tau}H^{1}_{y}}\leq\delta_{0}\) constitute a two-dimensional manifold. (The dimension equals the number of imaginary eigenvalues of \(B\) with multiplicity.) Moreover, by Theorem 6.1 of [19], there exists \(\delta_{1}\in(0,\delta_{0}]\) such that every solution satisfying \(\|z\|_{L^{\infty}_{\tau}H^{1}_{y}}\leq\delta_{1}\) is periodic in \(\tau\). It only remains to show that such solutions are not, in fact, constant in \(\tau\). To see this, recall that in the proof of Lemma 6.1, there exists \(\underline{\alpha}\in(0,\alpha^{*})\) such that \(\dot{L}_{\alpha}<0\) when \(\alpha\in(0,\underline{\alpha}]\). Because \(s_{\alpha}\) is increasing in \(\alpha\), it follows that \(\phi\) is the unique positive solution of (1.1) on \((0,L)\) satisfying \(\phi\leq s_{\underline{\alpha}}\). That is, _small_ solutions are unique. Thus for \(\delta_{1}\ll 1\), the only constant-in-\(\tau\) solution \(z\) of (6.10) satisfying \(\|z\|_{L^{\infty}_{\tau}H^{1}_{y}}\leq\delta_{1}\) is \(0.\) Because there is an entire two-dimensional manifold of other small, periodic solutions, we see that (1.1) admits a positive solution \(u=\phi+v\) on the strip \(\mathbb{R}\times(0,L)\) that is periodic but not constant in \(\tau\). ## 7. The stable-compact method As noted in the introduction, many of the above proofs follow what we term the "stable-compact method." Here, we clarify this approach by systematically examining our arguments through this lens. The method relies on a decomposition of the domain into two parts, stable and compact, where solutions enjoy some form of stability and compactness, respectively. We note that the decomposition is far from unique; for example, one can modify the division in a region compactly contained in \(\Omega\) without disrupting the argument. The precise forms of stability and compactness vary with the application. Generally, on the stable part we show that solutions are sufficiently close to a linearly stable function to obey a maximum principle. On the compact part, we deform one solution via a one-parameter family of transformations and compare it with another.
Compactness, which depends on the deformation, allows us to contradict the strong maximum principle unless uniqueness holds. ### Examples To apply the method to a particular problem, we identify a deformation, an associated notion of compactness, and a source of stability in regions where compactness fails. We describe this process in each of our main uniqueness arguments. #### Exterior-star domains An exterior-star domain \(\Omega\) is monotone with respect to dilation about the star center. Moreover, if we compose a solution with a dilation, it becomes a subsolution. This monotonicity allows us to use dilation as the deformation. Dilation by a bounded factor provides a corresponding notion of compactness. Given a large constant \(\kappa\), the compact region becomes \(\Omega\setminus(\kappa\Omega)\). The corresponding stable part is \(\kappa\Omega\); we choose \(\kappa\gg 1\) to ensure stability. Indeed, when \(\kappa\) is large, the stable part is far from the original boundary \(\partial\Omega\). As a consequence, all positive bounded solutions are close to \(1\) on \(\kappa\Omega\). Because \(1\) is the stable root of the reaction \(f\), solutions are then stable on \(\kappa\Omega\) and obey a maximum principle. #### Large dilations Our argument on large dilations is a "degenerate" application of the stable-compact method, as we make no use of a deformation or compactness. Rather, for a fixed reaction and \(\kappa\gg 1\), solutions on \(\kappa\Omega\) locally resemble the unique half-space solution. This solution is stable, so we can linearize around it to prove a maximum principle (and thus uniqueness) for solutions of (1.1) on \(\kappa\Omega\). #### AUL epigraphs Epigraphs are monotone with respect to vertical translation, so we use sliding as our deformation. Then, points within a bounded height of \(\partial\Omega\) can serve as the compact region. (In fact, we are able to use a smaller set in the proof, which simplifies the argument.) We are left searching for stability. By design, asymptotically uniformly Lipschitz epigraphs resemble (rotations of) uniformly Lipschitz (UL) epigraphs at infinity. We show that the unique solution on a UL epigraph is strictly stable. Linearizing about these profiles, we can conclude that solutions on AUL epigraphs satisfy a maximum principle outside a bounded set. In short, the far-field is stable and the near-field is compact and equipped with the sliding deformation. We highlight one subtlety: our proof of strict stability on UL epigraphs (Proposition 4.3) is itself an example of the stable-compact method. We decompose a UL epigraph into regions far from and near to the boundary, and we use stability on the former and compactness on the latter. There is no need to employ a deformation when proving stability (as opposed to uniqueness). This demonstrates the flexibility of the stable-compact framework--it sheds light on multiple qualitative properties of elliptic equations. #### Pockets In the opposite direction, Theorem 1.5 states that we can disrupt uniqueness by attaching a suitable "pocket" to a domain. This demonstrates the importance of the deformation. After all, one can view the pocket as a bounded modification of the compact set, so that the "stable-compact" structure remains intact. However, the pocket does disrupt the continuous deformation (dilation or sliding in the examples above), so we cannot prove uniqueness. ### Strong-KPP reactions To further illustrate the stable-compact method, we next reinterpret our earlier work [11] through this new perspective.
Here, we have shown that various structural conditions on the domain ensure uniqueness in (1.1) for all positive reactions. In a different direction, one can instead assume a structural condition on the _reaction_ so that uniqueness holds in (almost) every domain. We took this approach in [11]. In particular, we studied reactions that are strong-KPP in the following sense: **Definition 7.1**.: A positive reaction \(f\) is _weak-KPP_ if \(f(s)\leq f^{\prime}(0)s\) for all \(s\in[0,1]\). It is _strong-KPP_ if \(f(s)/s\) is strictly decreasing in \(s\in(0,1]\). Rabinowitz showed that if \(\Omega\) is smooth and bounded and \(f\) is strong-KPP, then (1.1) admits at most one positive solution [30]. In [11], we extended this result to almost all unbounded domains. (The question of uniqueness in certain spectrally degenerate domains remains unresolved; see [11, Theorem 1.1] for details.) Although we did not use the terminology, our proof is a clear example of the stable-compact method. We decomposed the domain \(\Omega\) into a _stable_ part \(\Omega_{+}\) obeying a maximum principle and a _compact_ part \(\Omega_{-}\) on which positive solutions remain comparable to one another (a form of compactness). See Figure 2 in [11] for an illustration of this decomposition. In fact, Rabinowitz' original proof fits within this framework because when \(\Omega\) is smooth and _bounded_, one can take \(\Omega_{-}=\Omega\). Here, we deploy the same decomposition to extend Rabinowitz' result in a different direction. The proof is a very simple illustration of the stable-compact method. **Theorem 7.1**.: _Let \(\Omega\) be a bounded domain with Lipschitz boundary. If \(f\) is strong-KPP, then (1.1) admits at most one solution._ Because Lipschitz boundaries satisfy the interior and exterior cone condition, there is no trouble interpreting (1.1): the solution \(u\) must indeed vanish everywhere on the boundary. This condition could prove too strong if \(\partial\Omega\) were more irregular. Proof.: Let \(\Lambda\coloneqq\operatorname{Lip}f\). By Proposition 1.1 of [4] and Theorem 1.1 of [5], there exists \(\delta>0\) depending on \(d,\Lambda\), and \(\operatorname{diam}\Omega\) such that \[\lambda(-\Delta,\Omega_{+})>\Lambda\] on any open set \(\Omega_{+}\) satisfying \(|\Omega_{+}|\leq\delta\). (This is a simple consequence of the Alexandrov-Bakelman-Pucci inequality, and was first observed by Bakelman [3].) Let \(\Omega_{+}\subset\Omega\) be a set satisfying this condition such that \(\Omega_{-}\coloneqq\Omega\setminus\overline{\Omega}_{+}\) is smooth and \(\Omega_{-}\Subset\Omega\). For example, \(\Omega_{+}\) can be a thin collar around \(\partial\Omega\) whose "inner boundary" is smooth. Now suppose \(u_{1}\) and \(u_{2}\) are two positive solutions of (1.1). Because \(\Omega_{-}\Subset\Omega\), the strong maximum principle implies that \(\inf_{\Omega_{-}}u_{i}>0\) for each \(i\in\{1,2\}\). Thus \[\underline{\mu}\coloneqq\inf\{\mu>0\ |\ u_{1}\leq\mu u_{2}\text{ in }\Omega_{-}\}\] is finite. Suppose for the sake of contradiction that \(\underline{\mu}>1\). Let \(w\coloneqq\underline{\mu}u_{2}-u_{1}\), which is nonnegative in \(\Omega_{-}\) by continuity. Using the strong-KPP property (and the fact that \(u_{i}>0\)), we have \[-\Delta w=\underline{\mu}f(u_{2})-f(u_{1})>f(\underline{\mu}u_{2})-f(u_{1})\eqqcolon qw\] for some difference quotient \(q\in L^{\infty}(\Omega)\) satisfying \(|q|\leq\Lambda\).
It follows from the construction of \(\Omega_{+}\) that \[\lambda(-\Delta-q,\Omega_{+})\geq\lambda(-\Delta-\Lambda,\Omega_{+})>0.\] Now \(w\geq 0\) on \(\partial\Omega_{-}\) and \(w=0\) on \(\partial\Omega\), so \(w\geq 0\) on \(\partial\Omega_{+}=\partial\Omega\cup\partial\Omega_{-}\). Since \(-(\Delta+q)w>0\) in \(\Omega_{+}\), Theorem 1.1 of [5] (a maximum principle on irregular bounded domains) implies that \(w\geq 0\) in \(\Omega_{+}\). Hence \(w\geq 0\) in the entire domain \(\Omega\). Because \(-\Delta w>qw\), the strong maximum principle implies that in fact \(w>0\) in \(\Omega\). Because \(\Omega_{-}\Subset\Omega\), \(\inf_{\Omega_{-}}w>0\). It follows that \(w\geq\eta u_{2}\) in \(\Omega_{-}\) for some \(\eta>0\). That is, \(u_{1}\leq(\underline{\mu}-\eta)u_{2}\) in \(\Omega_{-}\), which contradicts the definition of \(\underline{\mu}\). We conclude that in fact \(\underline{\mu}\leq 1\) and \(u_{1}\leq u_{2}\) in \(\Omega_{-}\). Applying the maximum principle in \(\Omega_{+}\) as above, we see that \(u_{1}\leq u_{2}\) in \(\Omega_{+}\) as well, and hence in the entire domain \(\Omega\). By symmetry, \(u_{1}=u_{2}\). In [4], the first author and Nirenberg deployed a very similar argument to deepen the method of the moving plane. In retrospect, that work is an early exemplar of the stable-compact method. ## 8. Open problems To close the paper, we briefly dwell on a number of lingering open problems. Some are natural extensions of the results presented here, while others explore quite different directions. It is our hope that the following problems will inspire future work in this rich subject. ### Extensions Several of our main results include conditions that may not be necessary. First, Theorem 1.1 assumes that \(\Omega^{\mathrm{c}}\) is compact. This is a more restrictive form of the hypothesis in Theorem 2.1 that \(\Omega^{\mathrm{c}}\) is convex at infinity. We likewise always assume that \(\Omega\) is _strongly_ exterior star. _Open Question 1_.: If \(f\) is positive, does (1.1) have exactly one solution on _all_ exterior-star domains? In a similar vein, we state Theorem 1.3 for asymptotically flat epigraphs. More broadly, Theorem 4.2 treats all asymptotically uniformly Lipschitz epigraphs, but is a condition of this form necessary? _Open Question 2_.: If \(f\) is positive, does (1.1) have exactly one solution on _all_ epigraphs? To these, we add one question posed in our prior work [11] on strong-KPP reactions. _Open Question 3_.: If \(f\) is strong-KPP in the sense of Definition 7.1, does (1.1) have at most one solution on _all_ domains? It seems likely to us that the answer to Questions 1-3 is _yes_. We anticipate that the stable-compact method could make further headway on these problems, but new ideas may be required to completely resolve them. ### Robin boundary In our study [11] of strong-KPP reactions, we were able to treat Dirichlet and Robin boundary conditions in a unified manner. (Neumann conditions are much simpler, for the unique solution is the constant \(1\); see [12, 31].) This is due to the fact that the deformation we employed on the compact part scaled the solution, and thus did not alter the domain. In contrast, several of the main results of the present paper _slide_ the domain to treat the compact part. This sliding leads to natural monotonicity when the boundary condition is Dirichlet. However, this is not the case under Robin conditions. For example, suppose \(u\) and \(v\) solve (1.1) with Robin conditions on a uniformly Lipschitz epigraph.
There is no _a priori_ reason for \(u\) and \(v\) to be ordered on the boundary. This issue persists in the interior. If we slide \(v\) vertically by a distance \(y\), there is no reason that the translated solution \(v_{y}\) should lie below \(u\) on \(\partial\Omega+y\mathbf{e}_{d}\). As a result, our proofs of uniqueness on epigraphs and exterior-star domains break down under Robin boundary conditions. Indeed, the same challenge afflicts the proof of uniqueness on uniformly Lipschitz epigraphs in [9]. _Open Question 4_.: Consider (1.1) with Robin rather than Dirichlet boundary conditions: \(u+\alpha\partial_{\nu}u=0\) on \(\partial\Omega\), where \(\alpha\in\mathbb{R}_{+}\) and \(\nu\) denotes the outward unit normal vector. Does uniqueness hold on the epigraphs of uniformly Lipschitz functions? On the complements of compact, convex sets? We pose this question in simple, approachable settings. We are naturally interested in generalizations to all epigraphs and all exterior-star domains. Curiously, our proof of Theorem 1.2 (for large dilations) is largely unaffected by Robin data. This is because the problem is "purely stable": as discussed in Section 7, it is an application of the stable-compact method with empty compact part. Thus no sliding is required, and the proof in Section 3 goes through with minor modification. For example, we require a version of Lieb's eigenvalue inequality adapted to Robin conditions; this is Theorem 2.3 in [11]. As a consequence, we simply assert: **Proposition 8.1**.: _Let \(f\) be a positive reaction and let \(\Omega\) be a uniformly smooth domain with Robin boundary parameter \(\alpha\in\mathbb{R}_{+}\). Then there exists \(\underline{\kappa}(f,\Omega,\alpha)>0\) such that for all \(\kappa>\underline{\kappa}\), (1.1) with \(\alpha\)-Robin boundary has a unique bounded positive solution on the dilated domain \(\kappa\Omega\)._ In fact, it seems likely that \(\underline{\kappa}\) can be taken independent of \(\alpha\); this would require a somewhat more sophisticated argument. ### Drift In both [11] and the present paper, we exclusively study self-adjoint problems. This reflects a desire for simplicity, but also the fact that the theory of the generalized principal eigenvalue is rather less developed in the non-self-adjoint setting. That said, [13] establishes a link between the maximum principle and a different formulation \(\lambda^{\prime}\) of the principal eigenvalue, which differs from \(\lambda\) when the operator is not self-adjoint. It thus seems possible that a stable-compact method based on \(\lambda^{\prime}\) may pay dividends in non-self-adjoint problems. In particular, we are interested in the validity of our main results in the presence of drift. _Open Question 5_.: Given a vector-field \(q\colon\Omega\to\mathbb{R}^{d}\), suppose we replace \(-\Delta\) by \(-\Delta+q\cdot\nabla\) in (1.1). Do variations on Theorems 1.1-1.3 hold? Are additional conditions like \(\nabla\cdot q=0\) (divergence-free), \((q\cdot\nu)|_{\partial\Omega}=0\) (non-penetrating), or \(|q|\ll 1\) (small) helpful? We note that in general, drift can lead to quite different behavior. For example, on the line \(\mathbb{R}\), positive reactions admit traveling wave solutions of all speeds \(c\geq c_{*}\) for some \(c_{*}>0\) depending on the reaction. Thus if \(q\geq c_{*}\) is constant, (1.1) admits multiple positive bounded solutions: 1 and the traveling wave. In this case, \(q\) is divergence-free and non-penetrating (the boundary is empty), so these conditions alone do not preserve uniqueness.
On the other hand, uniqueness does hold if \(|q|<c_{*}\) (still assuming constant \(q\)). For this reason, the condition \(|q|\ll 1\) in Question 5 seems particularly promising. ### Linearly degenerate reactions Throughout the paper, we have made essential use of the hypothesis that \(f^{\prime}(0)>0\) (and, to a lesser extent, that \(f^{\prime}(1)<0\)). Indeed, this nondegeneracy ensures that \(0\) is strictly unstable on sufficiently large domains. However, a number of applications call for reactions that satisfy \(f|_{(0,1)}>0\) but vanish to higher order at zero. We are naturally interested in the validity of our results in this more relaxed setting. On the whole space \(\mathbb{R}^{d}\), \(1\) is the unique positive bounded solution of (1.1) in the low-dimensional case \(d=1,2\). Indeed, solutions of (1.1) are superharmonic; in low dimensions, the only bounded, superharmonic functions are constants. In contrast, when \(d\geq 3\) and \(f(s)=s^{\beta}\), there exist bounded positive solutions of (1.1) when \(\beta\geq\frac{d+2}{d-2}\); these are extremizers of the Sobolev inequality. While this reaction does not satisfy the hypothesis \(f(1)=0\), we can scale the bounded solution to lie below \(\frac{1}{2}\) and then modify \(f\) above \(\frac{1}{2}\) to vanish at \(1\). Thus uniqueness in (1.1) on the whole space depends on both the dimension and the order of vanishing of \(f\) near \(0\). Curiously, the situation is somewhat different in the half-space. Suppose only that \(f|_{(0,1)}>0\) and \(f(0)=f(1)=0\). In collaboration with Caffarelli and Nirenberg [8, Theorem 1.5], the first author showed that bounded solutions of (1.1) in \(\mathbb{H}^{d}\) are one-dimensional (functions of \(x_{d}\) alone) when \(d=2,3\). Then by Lemma 6.1 of [10], there is a unique positive bounded solution. In particular, uniqueness holds in general in \(\mathbb{H}^{3}\) but not \(\mathbb{R}^{3}\). Combining the results of [17] with the methods of [9], one could likewise show uniqueness in coercive, uniformly Lipschitz epigraphs in dimensions 2 and 3. To our knowledge, little is known when \(d\geq 4\). We note that \(0\) is the only bounded nonnegative solution of (1.1) on the half-space (of any dimension) in the pure-power case \(f(s)=s^{\beta}\) (for any \(\beta\geq 1\)) [15]. However, it is not clear that this forbids multiple positive bounded solutions when \(f\) is modified to vanish at \(1\). These observations are by no means comprehensive, but they do indicate the complexity of the problem. We record some of its facets in the following question. Open Question 6.: Suppose \(f(s)\sim As^{\beta}\) as \(s\to 0\) for some \(A>0\) and \(\beta>1\). Is there a critical exponent \(\beta_{d}>1\) such that Theorems 1.1-1.3 hold when \(\beta\in[1,\beta_{d})\) and fail when \(\beta>\beta_{d}\)? Does the existence of \(\beta_{d}\) depend on the dimension or domain? ### Specific domains Finally, we are interested in uniqueness on several simple domains that fall outside our main results. Open Question 7.: 1. Given a smooth bounded cross-section \(\omega\subset\mathbb{R}^{d-1}\), what can we say about uniqueness on the half-cylinder \(\Omega=\mathbb{R}_{+}\times\omega\) or suitable smoothings thereof? Can we relate uniqueness on \(\Omega\) to uniqueness on \(\omega\)? 2. Suppose \(\Omega\) is the complement of two balls. Does (1.1) admit a unique positive bounded solution on \(\Omega\)? Does the answer depend on the balls' radii or separation? We are grateful to Bassam Fayad for raising this question. 
The stable-compact method provides partial results. On half-cylinders, if (1.1) has a unique positive solution on \(\omega\) and it is _strictly stable_, then one can prove stability at infinity in \(\Omega\) and use sliding on a compact part to prove uniqueness. This method breaks down, however, if the cross-sectional solution is marginally stable. In the other direction, it seems possible that (1.1) will admit multiple solutions on \(\Omega\) if \(\omega\) itself supports multiple positive solutions. For the "two-body problem," we recall that uniqueness holds outside a _single_ ball by Theorem 1.1. Moreover, one can adapt the proof of strict stability in Proposition 4.3 to show that this unique solution is strictly stable. Using this observation, one can then show that the two-body problem is purely stable (much like the dilation problem in Section 3) provided the two balls are sufficiently far apart; uniqueness follows. However, uniqueness is far from clear when the two balls are relatively near one another. ## Appendix A Construction of a marginally stable epigraph In this appendix, we use ODE arguments to prove Proposition 4.7. The precise structure of the reaction will play a major role. As noted in Section 7.2, Rabinowitz showed that strong-KPP reactions (in the sense of Definition 7.1) admit at most one positive solution on bounded domains [30]. In contrast, [11, Proposition 1.4] states that weak-KPP reactions can admit multiple positive solutions. To construct the reaction in Proposition 4.7, we examine a one-parameter family of reactions interpolating between strong- and weak-KPP endpoints. The first appearance of nonuniqueness will correspond to marginal stability. The heart of the matter is the study of (1.1) on the one-dimensional interval. **Lemma A.1**.: _Given \(m\in\mathbb{R}_{+}\), let \(\mathcal{F}\) be a \(C^{2}\)-compact set of weak-KPP reactions satisfying \(f^{\prime}(0)=m\) and \(f^{\prime\prime}(0)<0\) for all \(f\in\mathcal{F}\). Then there exist \(\underline{L},\overline{L}\in\mathbb{R}_{+}\) satisfying \(\pi m^{-1/2}<\underline{L}\leq\overline{L}\) such that (1.1) admits a unique positive solution on \((0,L)\) for all \(L\in(\pi m^{-1/2},\underline{L}]\cup[\overline{L},\infty)\) and all reactions \(f\in\mathcal{F}\)._ As in Section 6.1, we employ the shooting method. Recall the ODE (6.1) for the solution \(\phi_{\alpha}\): \[-\phi_{\alpha}^{\prime\prime}=f(\phi_{\alpha}),\quad\phi_{\alpha}(0)=0,\ \ \phi_{\alpha}^{\prime}(0)=\alpha.\] As noted in Section 6, if \(\varphi\) denotes the unique positive bounded solution of (1.1) on \(\mathbb{R}_{+}\) and \(\alpha^{*}\coloneqq\varphi^{\prime}(0)\), then \(\phi_{\alpha}\) has a first zero \(L_{\alpha}\in\mathbb{R}_{+}\) for all \(\alpha\in(0,\alpha^{*})\). Then \(\phi_{\alpha}\) solves (1.1) on \((0,L_{\alpha})\). Proof.: Recall the notation \(s_{\alpha}=\phi_{\alpha}(L_{\alpha}/2)\) from Section 6.1, which satisfies \[2F(s_{\alpha})=\alpha^{2}.\] (A.1) Using the substitution \(s\mapsto s_{\alpha}-s\) in (6.4), we obtain \[L_{\alpha}=2\int_{0}^{s_{\alpha}}\frac{\mathrm{d}z}{\sqrt{\alpha^{2}-2F(s_{ \alpha}-z)}}\,.\] (A.2) In (6.4), (A.1) implies that the integrand is singular at the moving upper endpoint. We have changed variables in (A.2) so that the moving endpoint does not coincide with a singularity of the integrand. Because \(f^{\prime\prime}(0)<0\), \(f(s)<ms\) near \(0\). Since \(f(s)\leq ms\) everywhere by hypothesis, \(F(s)<\frac{1}{2}ms^{2}\). 
Using this in (A.2), we see that \(L_{\alpha}>L_{*}\) for all \(\alpha>0\). Now, ODE stability implies that \(\phi_{\alpha}\to\varphi\) locally uniformly as \(\alpha\nearrow\alpha^{*}\). So \(s_{\alpha}\to 1\) and \(L_{\alpha}\to\infty\) in this limit. We wish to show that there exist thresholds \(\underline{\alpha}\) and \(\overline{\alpha}\) independent of \(f\) such that \(0<\underline{\alpha}\leq\overline{\alpha}<\alpha^{*}\) and \(L_{\alpha}\) is strictly increasing on \((0,\underline{\alpha}]\) and \([\overline{\alpha},\alpha^{*})\). Letting \[\underline{L}\coloneqq\inf_{\alpha\in[\underline{\alpha},\overline{\alpha}]}L_{\alpha}\quad\text{and}\quad\overline{L}\coloneqq\sup_{\alpha\in[\underline{\alpha},\overline{\alpha}]}L_{\alpha},\] the lemma will follow. Indeed, any length \(L\not\in[\underline{L},\overline{L}]\) will correspond to a region in which \(\alpha\mapsto L_{\alpha}\) is injective, so exactly one \(\phi_{\alpha}\) solves (1.1) on \((0,L)\). In the following, we use the notation \(\dot{g}\) to denote \(\frac{\mathrm{d}g}{\mathrm{d}\alpha}\). Differentiating (A.1) in \(\alpha\), we find \[\dot{s}_{\alpha}=\frac{\alpha}{f(s_{\alpha})}>0,\] (A.3) so the maximum \(s_{\alpha}\) is strictly increasing in \(\alpha\). Differentiating the second integral in (A.2) and using (A.3), we find \[\dot{L}_{\alpha}=\frac{2}{f(s_{\alpha})}+2\alpha\int_{0}^{s_{\alpha}}\frac{f(s_{\alpha}-z)/f(s_{\alpha})-1}{[\alpha^{2}-2F(s_{\alpha}-z)]^{3/2}}\ \mathrm{d}z.\] (A.4) Recall that each reaction \(f\) is decreasing near \(1\). By compactness, there exists a single \(\delta\in(0,1)\) such that \(f^{\prime}|_{(1-\delta,1]}<0\) for all \(f\in\mathcal{F}\). As \(\alpha\nearrow\alpha^{*}\), \(s_{\alpha}\to 1\), so \(s_{\alpha}>1-\delta/2\) for \(\alpha\) near \(\alpha^{*}\). Hence \[\int_{0}^{\delta/2}\frac{f(s_{\alpha}-z)/f(s_{\alpha})-1}{[\alpha^{2}-2F(s_{\alpha}-z)]^{3/2}}\;\mathrm{d}z>0\] because the numerator is positive. Thus (A.4) yields \[\dot{L}_{\alpha}>\frac{2}{f(s_{\alpha})}-2\alpha\int_{\delta/2}^{s_{\alpha}}\frac{\mathrm{d}z}{[\alpha^{2}-2F(s_{\alpha}-z)]^{3/2}}=\frac{1}{f(s_{\alpha})}-\mathcal{O}(1).\] Since \(f(s_{\alpha})\to f(1)=0\), we see that \(\dot{L}_{\alpha}\to\infty\) as \(\alpha\nearrow\alpha^{*}\). In particular, there exists \(\overline{\alpha}(f)\in(0,\alpha^{*})\) such that \(\dot{L}_{\alpha}>0\) on \((\overline{\alpha}(f),\alpha^{*})\). Using the compactness of \(\mathcal{F}\), one can make \(\overline{\alpha}\) uniform over (and independent of) \(f\). We next consider \(\alpha\searrow 0\). In the proof of Lemma 6.1, we showed that the assumption \(f^{\prime\prime}(0)>0\) implies that \(\dot{L}<0\) near \(\alpha=0\). Here, we assume the opposite sign: \(f^{\prime\prime}(0)<0\). Thus the same calculations imply that \(\dot{L}>0\) when \(\alpha\in(0,\underline{\alpha}(f)]\) for some \(\underline{\alpha}(f)\in(0,\overline{\alpha}]\). Using the compactness of \(\mathcal{F}\), we can readily show that \(\underline{\alpha}\) can be taken independent of \(f\). This completes the proof. We next show that uniqueness implies (weak) stability: **Lemma A.2**.: _If (1.1) admits a unique positive solution \(u\) on a domain \(\Omega\), then_ \[\lambda(-\Delta-f^{\prime}(u),\Omega)\geq 0.\] (A.5) **Corollary A.3**.: _If \(\dot{L}_{\beta}<0\) for some \(\beta\in(0,\alpha^{*})\), then (1.1) admits multiple positive solutions on \((0,L_{\beta})\)._ Proof.: This follows from Lemmas 6.2 and A.2. Proof of Lemma A.2.: Recall the parabolic semigroup \(\mathcal{P}\) from (4.29).
Because \(1\) is a supersolution of (1.1), \(\mathcal{P}_{t}1\) is nonincreasing in \(t\). It follows that the limit \(\mathcal{P}_{\infty}1\) exists and solves (1.1). Since \(u<1\) on \(\Omega\), comparison implies that \(\mathcal{P}_{\infty}1\geq u\). By hypothesis, \(u\) is the only positive solution of (1.1), so \(\mathcal{P}_{\infty}1=u\). Now suppose \(u\leq v\leq 1\). By comparison, \(\mathcal{P}_{\infty}v=u\). That is, \(u\) is dynamically stable from above; this implies (A.5). We require one further technical lemma. **Lemma A.4**.: _If \(f\) is analytic and the map \(\alpha\mapsto L_{\alpha}\) is not injective, then there exists \(\beta\in(0,\alpha^{*})\) such that \(\dot{L}_{\beta}<0\)._ Proof.: Suppose \(f\) is analytic and \(L_{\alpha}\) is not injective. Because \(L_{\alpha}\) is differentiable, there is either a slope \(\beta\) of the desired form or there is a nonempty open interval \(A\Subset(0,\alpha^{*})\) on which \(L_{\alpha}\) is constant. Suppose for the sake of contradiction that the latter holds. By (6.5), \(s_{\alpha}\) is strictly increasing in \(\alpha\). Combining (6.3) and (6.4), we thus see that the function \[\mathcal{I}(s)\coloneqq\int_{0}^{s}\frac{\mathrm{d}z}{\sqrt{F(s)-F(s-z)}}\] (A.6) is likewise constant on some nonempty open interval \(Y\Subset(0,1)\). Because \(f\) is analytic, so is \(F\). It follows that \(\mathcal{I}\) is itself analytic on \((0,1)\). After all, analyticity allows us to write \[\frac{F(s)-F(s-z)}{z}=f(s)+zg(s,z)\] for some analytic function \(g\). By the compactness of \(Y\), there exists \(\delta>0\) such that \(|zg(s,z)|\leq\frac{1}{2}f(s)\) for all \(s\in Y\) and \(z\in(0,\delta)\). Hence we can invert and take a square root: \[\left(\frac{F(s)-F(s-z)}{z}\right)^{-1/2}=\frac{1}{\sqrt{f(s)}}+zh(s,z)\] for some analytic \(h\). Then (A.6) becomes \[\mathcal{I}(s)=\frac{1}{\sqrt{f(s)}}\int_{0}^{\delta}\frac{\mathrm{d}z}{\sqrt{z}}+\int_{0}^{\delta}\sqrt{z}h(s,z)\,\,\mathrm{d}z+\int_{\delta}^{s}\frac{\mathrm{d}z}{\sqrt{F(s)-F(s-z)}}.\] Each term is analytic in \(s\), as desired. Since \(\mathcal{I}(s)\to\infty\) as \(s\to 1\), \(\mathcal{I}\) is not constant. Thus \(\mathcal{I}\) cannot coincide with a constant on any nonempty open set, which contradicts its constancy on \(Y\). We now construct the reaction in Proposition 4.7. **Proposition A.5**.: _There exists a weak-KPP reaction \(f\) and a length \(L>0\) such that \(f^{\prime}(0)>\pi^{2}L^{-2}\), (1.1) admits exactly one positive solution \(\phi\) on \((0,L)\), and_ \[\lambda\big{(}-\partial_{x}^{2}-f^{\prime}(\phi),(0,L)\big{)}=0.\] By scaling \(f\), one can in fact arrange \(L=1\); we will not use this freedom. We also note that this proposition implies that Lemma A.2 cannot be improved to _strict_ stability: there exist domains with uniqueness but merely marginal stability. Proof.: By Proposition 1.4 of [11], there exists a weak-KPP reaction \(\tilde{f_{1}}\) such that (1.1) has multiple positive solutions on the unit interval \((0,1)\). A brief examination of the proof of [11, Proposition 1.4] shows that one can arrange \(\tilde{f_{1}^{\prime}}(0)>\pi^{2}\), \(\tilde{f_{1}}\in\mathcal{C}^{2}\), and \(\tilde{f_{1}^{\prime\prime}}(0)<0\). Approximating \(\tilde{f_{1}^{\prime\prime}}\) in \(L^{\infty}\) by a polynomial and integrating twice, we can find a nearby weak-KPP polynomial \(f_{1}\) with \(f_{1}^{\prime}(0)>\pi^{2}\) and \(f_{1}^{\prime\prime}(0)<0\) such that (1.1) has multiple positive solutions on the unit interval \((0,1)\) with reaction \(f_{1}\).
Let \(m\coloneqq f_{1}^{\prime}(0)>\pi^{2}\) and define \(f_{0}(s)\coloneqq ms(1-s)\). Then \(f_{0}\) is a strong-KPP reaction and by Theorem 1.5 of [11], (1.1) admits a positive solution on \((0,1)\) with reaction \(f_{0}\). Rabinowitz showed that this solution is unique [30]. Next, given \(\tau\in[0,1]\), let \(f_{\tau}\coloneqq(1-\tau)f_{0}+\tau f_{1}\), so that \(f_{\tau}\) interpolates between the reactions \(f_{0}\) and \(f_{1}\). The weak-KPP condition is convex, so \(f_{\tau}\) is weak-KPP for all \(\tau\). Moreover, because \(f_{i}^{\prime}(0)=m\) and \(f_{i}^{\prime\prime}(0)<0\) for each \(i\in\{0,1\}\), we have \(f_{\tau}^{\prime}(0)=m\) and \(f_{\tau}^{\prime\prime}(0)<0\) for all \(\tau\in[0,1]\). The family \(\mathcal{F}\coloneqq\{f_{\tau}\}_{\tau\in[0,1]}\) is clearly compact in \(\mathcal{C}^{2}\), so it satisfies the hypotheses on \(\mathcal{F}\) in Lemma A.1. Let \(L_{*}\coloneqq\pi m^{-1/2}<1\). Then Lemma A.1 provides \(L_{*}<\underline{L}\leq\overline{L}<\infty\) such that (1.1) has a unique positive solution on \((0,L)\) whenever \(L\in(L_{*},\underline{L}]\cup[\overline{L},\infty)\) and \(f\in\mathcal{F}\). Note that by the choice of \(f_{1}\), \(\underline{L}<1<\overline{L}\). Let \(\mathcal{T}\subset[0,1]\) denote the set of \(\tau\) for which there exists \(L(\tau)\) such that (1.1) admits multiple positive solutions on \(\big{(}0,L(\tau)\big{)}\) with reaction \(f_{\tau}\). We claim that \(\mathcal{T}\) is open. To see this, take \(\tau\in\mathcal{T}\) and note that \(L_{\alpha}\) is not injective at the value \(L(\tau)\). As the convex combination of two polynomials, \(f_{\tau}\) is a polynomial and hence analytic. It follows from Lemma A.4 that \[\inf\dot{L}_{\alpha}<0\quad\text{for all }\tau\in\mathcal{T}.\] (A.7) Since the family \(f_{\tau}\) is smooth in \(\tau\), there is an open neighborhood \(U\ni\tau\) such that for all \(\sigma\in U\), \(\dot{L}_{\beta}(f_{\sigma})<0\) (where we make the dependence on \(f\) explicit for clarity). By Corollary A.3, \(U\subset\mathcal{T}\). That is, \(\mathcal{T}\) is open. Now define \[\tau_{*}\coloneqq\inf\mathcal{T}\] and \(f_{*}\coloneqq f_{\tau_{*}}\). We claim that \(f_{*}\) is the desired reaction. Noting that \(0\notin\mathcal{T}\) by construction, \(\tau_{*}\) lies on the boundary of \(\mathcal{T}\), and in particular \(\tau_{*}\notin\mathcal{T}\). It follows that (1.1) admits precisely one solution with reaction \(f_{*}\) on every interval \((0,L)\) with \(L\in(L_{*},\infty)\). By Lemma A.1, for all \(\tau\in[0,1]\) we have uniqueness on lengths outside \([\underline{L},\overline{L}]\). Let \(A\subset(0,\alpha^{*})\) be a compact interval whose image under \(\alpha\mapsto L_{\alpha}\) contains \([\underline{L},\overline{L}]\). Then \[\inf_{A^{c}}\dot{L}_{\alpha}\geq 0\quad\text{for all }\tau\in[0,1].\] Hence (A.7) implies that \[\inf_{A}\dot{L}_{\alpha}<0\quad\text{for all }\tau\in\mathcal{T}.\] By Lemma 6.2, \[\inf_{A}\lambda\big{(}-\partial_{x}^{2}-f_{\tau}^{\prime}(\phi_{\alpha}),(0,L_{\alpha})\big{)}<0\quad\text{for all }\tau\in\mathcal{T}.\] As noted in the proof of Lemma 6.1, [13, Proposition 2.3(vii)] implies that \(\lambda\) is continuous in the potential and the length.
Approaching \(\tau_{*}\) from within \(\mathcal{T}\), it follows that there exists \(\alpha\in A\) such that \[\lambda\big{(}-\partial_{x}^{2}-f_{*}^{\prime}(\phi_{\alpha}),(0,L_{\alpha})\big{)}\leq 0.\] On the other hand, we have uniqueness on \((0,L_{\alpha})\), so by Lemma A.2, \[\lambda\big{(}-\partial_{x}^{2}-f_{*}^{\prime}(\phi_{\alpha}),(0,L_{\alpha})\big{)}\geq 0.\] Therefore \[\lambda\big{(}-\partial_{x}^{2}-f_{*}^{\prime}(\phi_{\alpha}),(0,L_{\alpha})\big{)}=0,\] as desired. Additionally, \(f^{\prime}(0)=m=\pi^{2}L_{*}^{-2}>\pi^{2}L_{\alpha}^{-2}\) because \(L_{\alpha}\geq\underline{L}>L_{*}\). We can finally construct an _epigraph_ with at best marginal stability. Proof of Proposition 4.7.: Let \(f\) and \(L\) be as in Proposition A.5, and let \(\phi\) denote the unique positive solution of (1.1) on \((0,L)\), which is marginally stable. Let \(\Omega\subset\mathbb{R}^{2}\) have the form in Figure 6(b), so that \(\Omega\) includes a sequence of ever deeper wells of limiting width \(L\). Then the cylinder \(\Gamma\coloneqq(0,L)\times\mathbb{R}\) is a local limit of \(\Omega\). Let \(u\) be a positive bounded solution of (1.1) on \(\Omega\). By Lemma 3.2, \[\lambda(-\Delta,\Gamma)=\lambda\big{(}-\Delta,(0,L)\big{)}=\frac{\pi^{2}}{L^{2}}<f^{\prime}(0).\] (A.8) Hence by Lemma 4.5 of [11], \(u\) does not vanish locally uniformly in the limit to \(\Gamma\). Let \(u^{*}>0\) be a subsequential limit of \(u\) on the limit domain \(\Gamma\), so \(u^{*}\) solves (1.1) on \(\Gamma\). Write coordinates on \(\Gamma\) as \((x^{\prime},y)\in(0,L)\times\mathbb{R}\). We claim that \(u^{*}(x^{\prime},y)=\phi(x^{\prime})\). To see this, observe that \(u^{*}\leq 1\), so recalling the parabolic semigroup \(\mathcal{P}\) from (4.29), the comparison principle yields \(u^{*}\leq\mathcal{P}_{\infty}1\). Because 1 is independent of \(y\), so is \(\mathcal{P}_{\infty}1\). Thus \(\mathcal{P}_{\infty}1\) solves (1.1) on \((0,L)\), and hence is \(\phi\) by the reasoning from the proof of Lemma A.2. For a lower bound, we observe that by (A.8) and Proposition 2.3(iv), there exist \(H\in\mathbb{R}_{+}\) and \(\delta\in(0,L/2)\) such that \[\lambda\big{(}-\Delta,(\delta,L-\delta)\times(0,H)\big{)}<f^{\prime}(0).\] Let \(\psi\) be the principal eigenfunction on the rectangle \(R\coloneqq(\delta,L-\delta)\times(0,H)\). Because \(f^{\prime}\) is continuous, there exists \(\overline{\varepsilon}\in(0,1)\) such that \(\varepsilon\psi\) is a subsolution of (1.1) on \(\Gamma\) for all \(\varepsilon\in[0,\overline{\varepsilon}]\). Because \(u^{*}>0\), there exists \(\varepsilon\in(0,\overline{\varepsilon}]\) such that \(u^{*}\geq\varepsilon\psi\). Raising \(\varepsilon\), the strong maximum principle implies that \(u^{*}\geq\overline{\varepsilon}\psi\) (because \(u^{*}\) cannot touch any \(\varepsilon\psi\)). Sliding \(R\) in \(y\), the strong maximum principle further implies that \[u^{*}(x^{\prime},y)\geq\theta(x^{\prime})\coloneqq\sup_{y\in(0,H)}\psi(x^{\prime},y).\] Note that \(\theta\geq 0\) and \(\theta\not\equiv 0\). As the supremum of subsolutions, \(\theta\) is itself a subsolution of (1.1). Hence the parabolic limit \(\mathcal{P}_{\infty}\theta\) exists and is a positive solution of (1.1) on \((0,L)\). By uniqueness, \(\mathcal{P}_{\infty}\theta=\phi\). Then comparison yields \(u^{*}\geq\mathcal{P}_{\infty}\theta=\phi\). So indeed \(u^{*}=\phi\).
By Lemma 2.2 of [11], the principal eigenvalue can only increase along limits: \[\lambda(-\Delta-f^{\prime}(u),\Omega)\leq\lambda(-\Delta-f^{\prime}(u^{*}), \Gamma)=\lambda(-\Delta-f^{\prime}(\phi),\Gamma).\] (The lemma is stated only for the operator \(-\Delta\), but the proof applies to sequences of operators with potentials as well.) By Lemma 3.2 and Proposition A.5, we find \[\lambda(-\Delta-f^{\prime}(u),\Omega)\leq\lambda(-\Delta-f^{\prime}(\phi), \Gamma)=\lambda\big{(}-\Delta-f^{\prime}(\phi),(0,L)\big{)}=0.\qed\]
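As a numerical companion to the shooting analysis in Appendix A, the following minimal sketch (our own illustration, with an assumed logistic reaction \(f(s)=s(1-s)\) rather than the reaction of Proposition A.5) tabulates the map \(\alpha\mapsto L_{\alpha}\): it integrates \(-\phi_{\alpha}^{\prime\prime}=f(\phi_{\alpha})\) with \(\phi_{\alpha}(0)=0\), \(\phi_{\alpha}^{\prime}(0)=\alpha\) and records the first return of \(\phi_{\alpha}\) to zero.

```python
from scipy.integrate import solve_ivp

def f(s):
    # Assumed illustrative reaction with f'(0) = 1; NOT the reaction constructed
    # in Proposition A.5, only a stand-in to show the mechanics of the shooting map.
    return s * (1.0 - s)

def first_zero_length(alpha, x_max=200.0):
    """Return L_alpha, the first positive zero of phi_alpha, or None if none is found."""
    def rhs(x, y):                  # y = (phi, phi')
        return [y[1], -f(y[0])]
    def hit_zero(x, y):             # event: phi crosses zero from above
        return y[0]
    hit_zero.terminal = True
    hit_zero.direction = -1
    sol = solve_ivp(rhs, (0.0, x_max), [0.0, alpha], events=hit_zero,
                    max_step=1e-2, rtol=1e-9, atol=1e-12)
    return float(sol.t_events[0][0]) if sol.t_events[0].size else None

# For this f, alpha* = sqrt(2 F(1)) = sqrt(1/3) ~ 0.577, and L_alpha is finite for alpha < alpha*.
for alpha in (0.10, 0.30, 0.50, 0.55, 0.57):
    L = first_zero_length(alpha)
    print(f"alpha = {alpha:.2f}  ->  L_alpha = {L:.3f}" if L else f"alpha = {alpha:.2f}: no first zero")
```

One can vary the reaction in this sketch to explore how the monotonicity of \(\alpha\mapsto L_{\alpha}\) (and hence uniqueness versus multiplicity on a given interval) depends on the sign of \(f^{\prime\prime}(0)\), in the spirit of Lemmas 6.1 and A.1.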
2309.07997
First principle prediction of structural distortions in the cuprates and their impact on the electronic structure
Materials-realistic microscopic theoretical descriptions of copper-based superconductors are challenging due to their complex crystal structures combined with strong electron interactions. Here, we demonstrate how density functional theory can accurately describe key structural, electronic, and magnetic properties of the normal state of the prototypical cuprate Bi$_2$Sr$_2$CaCu$_2$O$_{8+x}$ (Bi-2212). We emphasize the importance of accounting for energy-lowering structural distortions, which then allows us to: (a) accurately describe the insulating antiferromagnetic (AFM) ground state of the undoped parent compound (in contrast to the metallic state predicted by previous {\it ab initio} studies); (b) identify numerous low-energy competing spin and charge stripe orders in the hole-overdoped material nearly degenerate in energy with the AFM ordered state, indicating strong spin fluctuations; (c) predict the lowest-energy hole-doped crystal structure including its long-range structural distortions and oxygen dopant positions that match high-resolution scanning microscopy measurements; and (d) describe electronic bands near the Fermi energy with flat antinodal dispersions and Fermi surfaces that are in agreement with angle-resolved photoemission spectroscopy (ARPES) measurements and provide a clear explanation for the structural origins of the so-called ``shadow bands''. We also show how one must go beyond band theory and include fully dynamic spin fluctuations via a many-body approach when aiming to make quantitative predictions of the measured ARPES spectra in the overdoped material.
Zheting Jin, Sohrab Ismail-Beigi
2023-09-14T19:15:27Z
http://arxiv.org/abs/2309.07997v2
# AFM insulating state and normal state of cuprates from first principle ###### Abstract Materials-realistic microscopic theoretical descriptions of copper-based superconductors are challenging due to their complex crystal structures combined with strong electron interactions. Here, we demonstrate how _ab initio_ calculations can accurately describe key structural, electronic, and magnetic properties of the normal state of the prototypical cuprate Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+x}\) (Bi-2212). We emphasize the importance of accounting for energy-lowering structural distortions, which then allows us to: (a) accurately describe the insulating antiferromagnetic (AFM) ground state of the undoped parent compound (in contrast to the metallic state predicted by previous _ab initio_ studies); (b) identify numerous low-energy competing spin and charge stripe orders in the hole-overdoped material that are nearly degenerate in energy with the G-AFM ordered state, indicating strong spin fluctuations; (c) predict the lowest-energy hole-doped crystal structure which exhibits long-range structural distortions and oxygen dopant positions matching high-resolution scanning transmission electron microscopy (STEM) measurements; and (d) describe electronic bands near the Fermi energy with flat antinodal dispersions and Fermi surfaces that are in agreement with angle-resolved photoemission spectroscopy (ARPES) measurements and provide a clear explanation for the structural origin of the so-called "shadow bands". ## I Introduction The cuprate superconductors continue to be a fascinating and actively researched class of materials. In addition to their superconducting phases, their normal state has attracted broad research interest due to a wide range of unusual properties. Understanding the physical origin of the AFM insulating phase [1; 2], the pseudogap [3; 4; 5; 6], the flat antinodal dispersion [7; 8; 9; 10], the strange metallicity [11; 12; 13; 14; 15], and the presence of quantum critical fluctuations [16; 17; 18; 19] in the normal state can provide important insights into the underlying mechanisms that can give rise to the superconductivity. Despite many proposed mechanisms, including competing orders [20; 21; 22] and preformed pairs [23; 24; 25; 26], the physical origin of this complicated normal state is still unclear. A comprehensive description of the normal state is challenging due to the strong electronic interactions combined with the structural complexity of typical doped cuprates. While the electronic correlations can be captured by effective model Hamiltonian approaches using accurate methods such as density matrix renormalization group (DMRG) [27; 28] or quantum Monte Carlo (QMC) [29; 30], the effects of the complicated lattice distortions introduced by dopants and impurities necessitate a realistic and detailed understanding of the materials. These structural distortions can result in significant changes to the materials such as additional symmetry breaking [31; 32] and modified superconducting temperatures [33]. Density functional theory (DFT) [34; 35] offers a potent foundational method for investigating the ground-state properties of materials from first principles. For cuprates, DFT has played a pivotal role in constructing effective model Hamiltonians [36; 37; 38; 39; 40; 41]. To ensure the accuracy of these models, it is essential that the DFT calculations capture correctly both the structural properties and the predominant electronic properties of the ground state.
Indeed, recent DFT studies on transition metal oxides have highlighted the significance of allowing energy-lowering structural distortions to achieve high-quality predictions of materials properties [42; 43; 44]. Bismuth strontium calcium copper oxide Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+x}\) (BSCCO or Bi-2212) [45] is one of the most intensively studied cuprates and is the focus of this work. However, prior DFT studies faced challenges due to its intricate structural distortions and superlattice modulations [46; 47; 48; 49]. We demonstrate that by employing modern exchange-correlation functionals in DFT and providing an accurate description of the crystalline structure including energy-lowering lattice distortions, we can directly describe the antiferromagnetic insulating ground state of BSCCO, its correct crystal structure upon doping, the presence of competing magnetic and charge stripe orders, as well as crucial details of photoemission spectra such as "shadow bands". This means that we can, from first principles, correctly ascribe certain experimental observations to specific structural motifs (e.g., shadow bands) while simultaneously helping build microscopically well-justified model Hamiltonians that can be used to compute the effects of strong electron correlations. As a first example, we note that to date, the ground state properties of Bi-2212 have not been described well by DFT, which hampers further theoretical studies of the effects of doping and other perturbations on the system at a microscopic level. For instance, an important problem involves the undoped parent material, which serves as the starting point for any systematic analysis. Prior works were unable to reproduce the AFM insulating behavior of undoped Bi-2212 [50; 51; 52], even when considering the crystal modulation correction [49; 53]. Here, we provide a realistic description of Bi-2212 by capturing the dominant lattice and electronic properties of the normal state using DFT. For the undoped system, we demonstrate insulating bismuth-derived bands as a direct result of strong Bi-O bonding in the optimized and distorted BiO layers, which explains the absence of a bismuth-derived Fermi surface in all experiments regardless of doping level. In contrast, previous DFT studies [51; 52; 54] had to resort to additional "manual" hole doping to eliminate the finite bismuth density of states at the Fermi level that plagued the predictions for undoped Bi-2212. Second, armed with a proper description of the undoped material, our doped Bi-2212 results predict correct structural modulations and realistic oxygen dopant positions that match STEM measurements [55]. Third, using this optimized crystal structure, we discover many stripe-ordered electronic states that are almost degenerate with the G-type (checkerboard) AFM-ordered state. This indicates the presence of strong spin and charge fluctuations and competing orders in the normal state and might be a possible physical origin of the pseudogap phase in cuprates [56]. Fourth, we compute the electronic band structure near the Fermi energy and reproduce the flat band dispersions around the antinodal regions of the Brillouin zone, as well as details of the Fermi surface including the shadow bands observed via ARPES [9]. Our analysis identifies two distinct structural origins for different types of shadow bands.
By capturing the dominant properties of the crystal and electronic structure in Bi-2212 under a first principle framework, our work can serve as a useful platform for future theoretical modeling and possible engineering of these superconductors. ## II Undoped system The undoped Bi-2212 possesses a bilayer crystal structure as depicted in Fig. 1(a)(b), where each bilayer comprises two CuO layers separated by one Ca layer and sandwiched between SrO and BiO layers. The smallest unit cell contains 30 atoms and crystallizes in the tetragonal I4/mmm space group, with only one Cu atom in each CuO layer [57; 58; 59]. However, this small unit cell leads to a false non-magnetic ground state due to the artificial assumption of translational invariance of the Cu local moments [60]. Here, we focus on a larger supercell (60 atoms/cell) with two Cu atoms in each CuO layer, which allows for the spontaneous symmetry breaking of the magnetic local moments. First, we perform a relaxation of the database crystal structure [58; 59; 61] to identify the nearest local minimum with the same symmetry, referred to as the "high-symmetry" structure. As a result, we find a G-AFM ordered ground state with local Cu magnetic moments of \(\pm 0.45\mu_{B}\), while the non-magnetic state is about 0.25 eV/Cu higher in energy. These local moments agree with the experimental measurements, typically falling within the range of 0.4-0.6\(\mu_{B}\) in cuprates without chlorine [62]. Fig. 1(c) shows the projected band structure and density of states of this AFM ground state. The AFM order opens a gap of about 0.6 eV for Cu \(d\)-orbitals between \(M^{\prime}\) and \(X^{\prime}\). However, this calculation still results in a false metallic ground state due to the widely dispersive Bi \(p\)-bands located within the AFM gap. Figure 1: Crystal and electronic structure of high-symmetry undoped Bi-2212. (a) Side view of the crystal structure along the \(b\)-axis. Sr, Ca, Cu, Bi, and O atoms are marked by green, gray, blue, purple, and red balls, respectively. The black square in the crystal structure marks the G-AFM unit cell. (b) Top view of the BiO layer from the \(c\)-axis. (c) Projected band structures (left) and density of states (DOS) (right) from DFT calculation, where the Fermi energy is set to be the reference energy. The inset of the band structures shows the first Brillouin zones (BZ) of the 60-atom supercell. The unit of DOS is the number of states per unit cell (UC) per eV. Red circles and solid lines show Bi \(p\)-orbitals; blue squares and dash lines show Cu \(d\)-orbitals; black dash-dotted line shows O \(p\)-orbitals in BiO layers. To improve the theoretical description, we find that allowing crystal structural distortions in the BiO layers can further lower the total energy of the calculation. Using the conjugate gradient algorithm for structural relaxation, we optimize the structures by initially lowering the symmetry in the BiO layers manually. Fig. 2(a)(b) depicts two typical stable BiO layer patterns whose ground states are both G-AFM ordered on the CuO\({}_{2}\) planes. Fig. 2(a) shows a zigzag pattern, where all the oxygen atoms on BiO layers are moved along the diagonal direction of the in-plane unit cell by the same amount. With all BiO layers displaying this zigzag distortion pattern, the AFM ground state energy is about 0.29 eV/Cu lower than the one with the high-symmetry structure in Fig. 1(b), and the local Cu magnetic moments are \(\pm 0.48\mu_{B}\), slightly larger than the high-symmetry case. Fig.
2(b) shows an orthorhombic pattern, characterized by the lowest symmetry among all three structures presented in Fig. 1 and 2. This pattern further breaks the mirror symmetry of the crystal, resulting in an orthorhombic lattice consistent with observations in experiments [63]. This orthorhombic pattern exhibits the lowest AFM ground state energy, 0.5 eV/Cu lower than the high-symmetry case. It also features the largest local Cu magnetic moments of \(\pm 0.53\mu_{B}\) among all three Bi-O structural motifs mentioned above. Consequently, this structure displays the largest AFM gap of about 1 eV, consistent with the gap size observed in a recent scanning tunneling microscopy (STM) experiment [64]. The electronic structures of the distorted crystals in Figs. 2(a) and (b) are presented in Figs. 2(c) and (d), respectively. Compared to the electronic structure of the high-symmetry case in Fig. 1(c), most of the changes are observed within the Bi bands due to the BiO distortion patterns. Although the zigzag distortion pattern helps reduce the size of the Bi electron pocket, it's only the lowest-energy orthorhombic distortion pattern that elevates the entire Bi bands above the Fermi level, resulting in an insulating AFM ground state. A straightforward microscopic picture helps elucidate how the distortions contribute to elevating the Bi bands to higher energy levels. The densities of states (DOS) plots reveal that the in-plane BiO system possesses filled low-energy bonding states dominated by oxygen and antibonding states dominated by bismuth, and the bonding/anti-bonding gaps are centered at about an energy 1 eV below the Fermi energy for all three structures. However, the size of the bonding/anti-bonding gap varies with the distortion pattern. The DOS shows that this gap is smallest for the high-symmetry structure in Fig. 1(c) (\(\sim\)1 eV) and largest for the orthorhombic structure in Fig. 2(d) (\(\sim\)2 eV). The gap size difference arises from different Bi-O hybridization strengths among the three structures. The coupling is strong enough only in the orthorhombic structure to lift the anti-bonding state above the Fermi level, while the couplings in high-symmetry and zigzag structures are too weak, resulting in metallic states. This microscopic picture is consistent with a structural analysis: the Bi-O bond lengths in the high-symmetry, zigzag, and orthorhombic structures are 2.65, 2.28, and 2.17A, respectively. In general, a shorter bond length implies a larger hopping element between two atoms. We substantiate this picture quantitatively by computing the tight-binding Kohn-Sham Hamiltonian on the maximally localized Wannier basis [65] extracted from our DFT calculations using Wannier90 [66]. The projected Wannier orbitals encompass the \(p\)-orbitals of Bi and O, as well as the \(d\)-orbitals of Cu, which are sufficient to describe the bands near the Fermi level as shown in supplementary Fig. S 8 [67]. Our calculations reveal that the Bi-O hopping (tunneling) matrix element for nearest-neighbor in-plane Bi-O pairs is about 1.0, 1.8, and 2.3 eV for the high-symmetry, zigzag, and orthorhombic structures, respectively, which is consistent with the behavior of the gaps. Notably, the hopping strength in the orthorhombic structure is the only one larger than 2 eV, a value sufficient to open a gap and raise the Bi electron pockets (originating from the anti-bonding BiO bands) above the Fermi level, which leads to the insulating ground state. 
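To make the bonding/antibonding picture above concrete, the following minimal two-level sketch (an illustration we add here; the on-site energies and Fermi level are assumed purely for display and are not the Wannier parameters extracted with Wannier90 [66]) shows the generic trend that a larger Bi-O hopping widens the bonding/antibonding splitting and can push the antibonding level above a fixed Fermi level:

```python
import numpy as np

# Toy two-site Bi-O model: H = [[eps_O, t], [t, eps_Bi]] (energies in eV).
# eps_O, eps_Bi, and E_F are illustrative assumptions chosen only to show the trend.
eps_O, eps_Bi, E_F = -3.2, -1.2, 0.0

for t in (1.0, 1.8, 2.3):   # Bi-O hopping magnitudes quoted in the text for the three structures
    H = np.array([[eps_O, t], [t, eps_Bi]])
    bonding, antibonding = np.linalg.eigvalsh(H)   # eigenvalues in ascending order
    print(f"t = {t:.1f} eV: splitting = {antibonding - bonding:.2f} eV, "
          f"antibonding level = {antibonding:+.2f} eV "
          f"({'above' if antibonding > E_F else 'below'} E_F)")
```

The qualitative message, not the specific numbers, is the point: within this caricature only the largest hopping lifts the antibonding (bismuth-dominated) level above the Fermi level, mirroring the behavior of the full calculation for the orthorhombic BiO distortion.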
Figure 2: Crystal and electronic structure of two stable low-symmetry crystals of undoped Bi-2212. (a)(b) The top views of the "zigzag" and "orthorhombic" distortion patterns of the BiO layer, where the orthorhombic distortion pattern in (b) is the most energetically favorable structure. Large purple and small red balls represent Bi and O atoms. The black squares in the crystal structure illustrate the G-AFM unit cells. (c)(d) Projected band structures (left) and the density of states (right) of the crystal structures in (a) and (b), separately. Red circles and solid lines show Bi \(p\)-orbitals; blue squares and dash lines show Cu \(d\)-orbitals; black dash-dotted line shows O \(p\)-orbitals in BiO layers. ## III Oxygen-Doped System Commencing with undoped Bi-2212, hole doping is incorporated through interstitial oxygen dopants within the material. These additional hole dopants give rise to a diverse range of physical properties including superconductivity, the pseudogap phenomenon, and the presence of shadow bands within the Fermi surface. The subsections below present our findings on various aspects: 1. The crystal structure including the long-range superlattice modulation and associated placement of the oxygen dopants. 2. The low-energy magnetic states including spin- and charge-stripes. 3. Detailed ARPES spectra around the Fermi level, which allows us to understand the role of structural distortions in creating the shadow bands. ### Crystal structure For the oxygen-doped Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+x}\) system with \(x=0.25\), we account for the superlattice modulation [46; 47; 48; 49; 31] and explore various possible positions for the oxygen dopants. These structural properties are essential for describing the BiO layers properly and have a significant impact on the superconducting gap [68; 33]. Fig. 3 shows our optimized lowest-energy crystal structure for Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8.25}\), where four oxygen dopants are added into a stoichiometric undoped Bi-2212 unit cell of 240 atoms which is a \(4\times 1\times 1\) enlargement of the 60-atom unit cell of Sec. II. Several metastable crystals are listed in the supplementary material [67], showing an energy cost of at least 0.9 eV/dopant to move oxygen dopants between two BiO layers, and 168 meV/dopant to move dopants between BiO and SrO layers. The optimized super modulation periodicity is 8 atom sites as enforced by the \(x=0.25\) doping level. The most stable positions for the oxygen dopants are located at the necking region between BiO and SrO layers, all in good agreement with a recent high-resolution STEM study [55] on a similar doping level. Figure 3: Crystal structure of Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8.25}\), where the hole doping level is \(x=0.25\). Sr, Ca, Cu, Bi, O, and dopant O atoms are marked by green, gray, blue, purple, red, and black balls, respectively. Red arrows further highlight the O dopants. The black solid square in the crystal structure marks the 244-atom unit cell of the doped crystal. The black dashed rectangle highlights the CuO layers in one of the bilayers in the unit cell. ### Magnetic and charge ordering With the lattice structure of our doped BSCCO confirmed, we turn our attention to the spin structure within this system. We begin with short-period magnetic orderings of the Cu magnetic moments. Not surprisingly, the most energetically favorable magnetic order among them is the G-AFM order with antiparallel nearest-neighbor spins on Cu atoms as illustrated in Fig. 4(a).
Other short-period meta-stable magnetic orders exhibit aligned nearest-neighbor spins, either in an intralayer (\(\sim 40\) meV/Cu higher in energy) or interlayer (\(\sim 2\) meV/Cu higher in energy) fashion [67]. Separately, the energy cost associated with changing the interbilayer spin alignment from AFM to FM is negligible [67]. Hence, in the subsequent discussions, we focus on the magnetic structure within a single bilayer. Figure 4: Competing G-AFM and stripe order phases in a CuO bilayer. (a,b) Illustrations of G-AFM (a) and a typical stripe order phase (b) for the bilayer in the black dashed square of Fig. 3. The black arrows represent the local moments on Cu atoms, with exaggerated length and thickness highlighting their relative magnitudes. Yellow dashed lines show the magnetic domain walls in the stripe order phase. (c) The magnitude of local moments in \(\mu_{B}\) on the Cu sites \(i\) along \(a\)-axis defined as \(|m_{i}|\equiv|n_{i\uparrow}-n_{i\downarrow}|\mu_{B}\), where \(n_{i\sigma}\) is the occupancy of the \(d_{x^{2}-y^{2}}\) Wannier orbital at site \(i\) with spin \(\sigma\). (d) The \(d_{x^{2}-y^{2}}\) occupancy on Cu atomic sites defined as \(|n_{i}|\equiv|n_{i\uparrow}+n_{i\downarrow}|\). Red dashed lines and blue solid lines represent the G-AFM and stripe-order states, respectively. Circles and diamonds show the results of the upper and lower layers. Below, we will delve into longer-period spin and charge orders, making it crucial to quantify the local Cu moment and electron count in a precise manner. A widely adopted and chemically intuitive approach involves assessing the oxidation state of Cu. For example, an undoped Cu ion would have the formal oxidation state of Cu\({}^{2+}\), and each added hole on that Cu would raise its oxidation state by one unit. Within band theory, this corresponds directly to band occupancy: we count the occupancy of bands that are labeled to "belong" to particular atomic orbitals. In our case, the band structures reveal that the bands at the Fermi energy primarily exhibit Cu \(d_{x^{2}-y^{2}}\) character, as demonstrated in supplement Fig.S 9 and 10 [67]. Consequently, we construct a Wannier basis using only orbitals of Cu \(d_{x^{2}-y^{2}}\) character via the maximally localized Wannier function method. As shown in supplemental material Sec. V [67], this Wannierized Hamiltonian reproduces the DFT band structure of Cu \(d_{x^{2}-y^{2}}\) derived bands about the Fermi level. In essence, we have established a localized basis that replicates the _ab initio_ band structure near the Fermi energy, constructed with one orbital per Cu of \(d_{x^{2}-y^{2}}\) character. Subsequently, we employ this tight-binding representation to compute the band structure, band occupancies, and local occupancies of the Wannier orbitals. For a thermal Fermi-Dirac smearing of 100 K, Figure 4(c) presents the local moment magnitudes for the G-AFM state. The moments are around \(0.47\mu_{B}\) with small modulations of about \(\pm 0.04\mu_{B}\) due to the superlattice modulation. A similar modulation of \(\pm 0.03e\) also manifests in the \(d_{x^{2}-y^{2}}\) occupancies, as shown in Fig. 4(d). Beyond short-period ordering, we have discovered numerous longer-ranged stripe-ordered states that exhibit nearly degenerate energies with the G-AFM state. While DFT-based stripe-ordered states were previously reported in LSCO [69; 70] and YBCO [56], they have not been observed in BSCCO up to now.
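As a concrete illustration of the site-resolved quantities used above (the local moment \(|m_{i}|=|n_{i\uparrow}-n_{i\downarrow}|\mu_{B}\) and the \(d_{x^{2}-y^{2}}\) occupancy \(n_{i}=n_{i\uparrow}+n_{i\downarrow}\)), the short sketch below evaluates them from spin-resolved Wannier occupation numbers; the arrays are synthetic placeholders for illustration only, not our computed values:

```python
import numpy as np

# Placeholder spin-resolved occupancies of the Cu d_{x^2-y^2} Wannier orbital along the a-axis.
n_up = np.array([0.85, 0.20, 0.83, 0.22, 0.70, 0.35, 0.84, 0.21])
n_dn = np.array([0.20, 0.85, 0.22, 0.83, 0.35, 0.70, 0.21, 0.84])

local_moment = np.abs(n_up - n_dn)   # |m_i| in units of mu_B
occupancy = n_up + n_dn              # n_i, electrons per orbital

for i, (m, n) in enumerate(zip(local_moment, occupancy)):
    print(f"Cu site {i}: |m| = {m:.2f} mu_B, n(d_x2-y2) = {n:.2f}")
print(f"moment modulation: +/-{(local_moment.max() - local_moment.min()) / 2:.2f} mu_B")
```

In this toy data, the site pair with reduced spin polarization mimics Cu atoms near a stripe domain wall, where both the moment and the occupancy dip together, as in the discussion that follows.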
For Bi-2212, we present a typical bond-centered stripe-ordered state in Fig. 4(b), where the nearest neighbor spins crossing the dashed domain walls align in parallel, in contrast to the antiparallel alignment in the G-AFM state. Remarkably, the total energy of this stripe-order state is only 1.9 meV/Cu higher than the G-AFM state. In the supplementary materials [67], we tabulate eight distinct stripe order patterns with energy costs below 3 meV/Cu. The existence of all these low-energy orders suggests the presence of strong spin fluctuations in the normal state, which can play an important role in superconducting pairing [71; 72]. Fig. 4(c) and (d) show the local moment and \(d_{x^{2}-y^{2}}\) occupancy of this stripe-order state. Notably, the sites closer to the domain walls exhibit lower occupancies and smaller local moments than the other sites. These striped spin and charge orders present a modulation of \(\pm 0.2\mu_{B}\) for local moments and \(\pm 0.15\) for occupancies, significantly larger than the modulation caused by structural supermodulation in the G-AFM state. Consequently, the formation of the stripe order has an electronic origin, but the precise location of the domain boundaries can be influenced by the superlattice modulation effect. Notice that the modulations of local moments and the electron occupancies are almost the same numerically. This is because we have one orbital per site, and within band theory, the mechanism to form a local moment is to have an exchange splitting with an occupied low-energy spin-majority orbital and empty high-energy minority-spin orbital: the doped holes go into the spin-majority orbital and reduce both local occupancy and magnetic moment simultaneously. Hence, the redistribution of doped holes strictly follows the change of spin structures. In addition to the Wannier basis described above, we have also analyzed the electronic structure of the stripe orders with the standard atomic projections output by the Vienna ab initio simulation package (VASP) software [73; 74] (Fig.S 14 of the supplementary material [67]). The results are difficult to interpret due to the non-orthogonality of the standard VASP projections as explained in the supplement: the Cu magnetic moments show spatial modulations similar to those in Fig. 4 but the total occupancies hardly vary from site to site; additionally, there is a large oxygen contribution to the bands crossing the Fermi level. In short, in contrast to our \(d_{x^{2}-y^{2}}\) Wannier basis and the intuitive picture it provides, the Cu VASP projections are insufficient to explain the stripe order and we do not discuss them further. ### ARPES spectra The topology of the Fermi surface and the associated low-energy electronic spectrum, usually measured by ARPES [75], provides important insights into the electronic properties of solids. In particular, many materials exhibit a so-called "shadow band" (SB) Fermi surface, resembling a weak-intensity copy of their main band (MB) Fermi surface with certain shifted vectors in momentum space. Depending on the system, these SB Fermi surfaces can, in principle, originate from any type of symmetry breaking such as electronic [76], magnetic [77], or structural [32] origins. The physical origin of the SB Fermi surface is crucial for understanding the physical properties of the material, but it is difficult to distinguish between the different possible origins from ARPES alone. 
The Fermi surface of Bi-2212, as revealed by intensive ARPES studies, includes weaker intensity SB in addition to the main bands (see Fig. 5(f), adapted from a previous ARPES experiment [9]). These SB can be de scribed by two types of symmetry-breaking vectors. One of them always aligns with the superstructural modulation direction, while the other is along \(\pm(\pi,\pm\pi)\), coincident with the AFM ordering vectors. This has led to a continued debate regarding the magnetic [78; 79; 80] versus the structural origin [81; 82; 83; 32] of the \(\pm(\pi,\pm\pi)\) folding vector. Theoretical interpretation of these SB Fermi surfaces from first principles has been lacking. Below, we meticulously predict the ARPES Fermi surface using our first-principle calculations and delve into the underlying physical mechanisms behind the emergence of SB. We will demonstrate that the emergence of these two distinct types of SB is attributed to two distinct _structural_ symmetry-breaking mechanisms. Due to the strong spin fluctuations indicated by the competing strip orders and G-AFM states, the normal state of the hole-doped system cannot be described by a single magnetic-ordered configuration (i.e., a single Slater determinant). In principle, an accurate account of quantum spin fluctuations is needed for a comprehensive theoretical description including the magnetic susceptibility [84] and pseudogap [85] properties in cuprates. In practice, however, computed spectra in a non-magnetic state of the cuprates are usually (and surprisingly) comparable to ARPES measurements [75; 86]. In this context, we solely consider the electronic band structure of the non-magnetic state, while details about the G-AFM and stripe-ordered states are provided in the supplementary materials [67]. As an added benefit, by removing the spin degree of freedom allows us to focus exclusively on symmetry breaking through structural perturbations. As demonstrated below, the non-magnetic state successfully accounts for many normal state spectroscopic properties, including the SB. To facilitate comparison to experimental ARPES spectra of the Fermi surface, we employ a standard "band-unfolding" method [87; 88] to project the band structure onto the primitive unit cell Brillouin zone. This approach is known to reproduce qualitative spectrum intensities observed in various ARPES experiments on different materials [89; 90; 91; 92]. The band structures and unfoldings discussed below are computed using the Cu \(d_{x^{2}-y^{2}}\) Wannier basis described in Sec. III.2. The panels of Fig. 5 display the unfolded band structures. Fig. 5(a) shows the case of the 244-atom 25% hole-doped system for the non-magnetic phase. Around the antinodal region near M/X' in Fig. 5(a), we find two branches of flat bands below the Fermi level, which are split by the interlayer coupling in the bilayers, consistent with the ARPES experiments [9; 93]. Fig. 5(e) exhibits the unfolded Fermi surface of this 25% hole-doped system. The curves with the highest intensity contain two easily visible curves corresponding to the solid and dashed black curves of Fig. 5(f) from the ARPES measurement. Aside from these main curves, the ARPES Fermi surface exhibits a complicated set of shadow bands with lower intensities which are also reproduced in Fig. 5(e). We will now conduct an analysis to Figure 5: Electronic structures of the 25% hole-doped Bi-2212 system with a non-magnetic state. (a) The unfolded band structure of Cu \(d_{x^{2}-y^{2}}\) orbitals for the 244-atom unit cell. 
The opacity represents the spectral weight. (b) Schematic Fermi surfaces are black curves. The black, yellow, blue, and red rectangles represent the first BZ of the 1, 2, 4, and 8 Cu per layer unit cell, respectively. (c-e) Unfolded Fermi surfaces for the high-symmetry crystal of Fig. 1(b), the Bi-O distorted crystal of Fig. 2(b), and the hole-doped crystal of Fig. 3, respectively. The Fermi level in (c) and (d) is shifted by \(-0.1\)eV to allow a fair comparison to the hole-doped Fermi surface in (e). Yellow dashed (\(q_{1}\)) and red solid (\(q_{2}\)) arrows show two different coupling wave vectors. (f) Measured ARPES Fermi surface at \(T=104\) K, adapted from experiments [9]. Solid and dashed black curves highlight two distinct kinds of Fermi surface curves. demonstrate that the different sets of shadow bands on the Fermi surface have two distinct physical origins. Our analysis begins with the examination of the high-symmetry undoped crystal shown in Fig. 1(b). Although the crystal contains 2 Cu atoms in each CuO layer, the non-magnetic state of this crystal can be described in a smaller primitive cell with one Cu atom in each CuO layer, due to full translational symmetry. Consequently, the unfolded Fermi surface of this crystal shown in Fig. 5(c) only contains the main Fermi curves without any shadow bands, as expected. (The Fermi levels of the undoped crystals are intentionally lowered by \(-0.1\) eV to simulate hole doping.) Starting from this clean Fermi surface, we successively add complexity to the crystal to see the emergence of the shadow bands. The first type of shadow band exhibits an approximately circular shape as shown in Fig. 5(d). This circular shape arises from structural distortions in the BiO layers of the undoped crystal, as depicted in Fig. 2(b). Previous experiments have observed these shadow bands and suggested their likely structural origin [82, 81, 32] without providing a specific microscopic picture. Here, we microscopically observe that the distortions in BiO layers break the translational symmetry and introduce an inter-band coupling at wave vectors \(q_{1}=\pm(\pi,\pm\pi)\). This coupling leads to folding from the main bright curves to the shadow bands as indicated by the yellow dashed arrows in Fig. 5(d) and Fig. 5(e). On top of the circle-like shadow band, there is another type of shadow band involving \(q_{2}=\pm(\pi/4,-\pi/4)\) folding vectors shown in Fig. 5(e). This type of shadow band, often referred to as a "superstructure" in previous studies [94], has long been attributed to a post-emission modulation effect from BiO layer buckling [83]. Prior experiments have also shown that this type of shadow band fades away when the crystal modulation is gradually reduced by Pb doping [94]. Our calculation presents a consistent microscopic picture: the oxygen dopants and associated superlattice modulations further break the translational symmetry and introduce an additional coupling at \(q_{2}\). This coupling creates a further folding (illustrated by the red arrow) from the main bright curves to another set of shadow bands. It is important to note that this coupling only occurs in one crystalline direction due to the superlattice modulations being solely along the \(a\)-axis as shown in Fig. 3. ## IV Outlook In conclusion, within a first principle framework, we have provided a microscopic understanding of the AFM insulating phase of the undoped Bi-2212 system. 
Additionally, we have uncovered competing stripe orders in the hole-overdoped system, which offers a paradigmatic approach (specifically for BSCCO) to describe stripe orders using DFT without performing complex many-body calculations. Spectroscopically, our non-magnetic DFT band structure calculations remarkably reproduce the observed normal state spectral properties. Furthermore, we have elucidated the structural origin of the ARPES \(\pm(\pi,\pm\pi)\) and \(\pm(\pi/4,-\pi/4)\) shadow bands in the hole-overdoped system. Our work underscores the importance of considering the crystal degrees of freedom, including structural distortions, modulations, and realistic oxygen dopant positions, for an accurate description of various material properties. We believe that our study establishes a robust first-principle foundation for more "surgical" structural engineering of cuprates, particularly with regard to manipulating broken translational and rotational symmetries. Moreover, as our DFT ground state captures the dominant low-energy properties in the normal state of Bi-2212, the Wannierized Hamiltonians extracted from our DFT calculations can provide appropriate theoretical model Hamiltonians that can serve as platforms for future applications and research. An intriguing direction for future work involves a deeper exploration of why naive non-magnetic band structure calculations describe the observed ARPES spectra so well when the actual materials exhibit numerous nearly degenerate competing, but clearly magnetic, low-energy configurations. ## V Computational details We use the Vienna ab initio simulation package (VASP) with the projector-augmented wave method [95]. A relatively high plane-wave cutoff energy of 500 eV is used. The generalized-gradient-approximation (GGA) with the semilocal Perdew-Burke-Ernzerhof (PBE) functional [96, 97] is used in all of our calculations. All calculations are done with collinear spins. The recent finding of non-collinear spin texture in cuprates [98] is also an interesting topic to study but is beyond the scope of this work. To avoid the known failure of the local-spin-density approximation (LSDA) and the GGA in reproducing the copper magnetic moment in cuprates [51, 52, 53, 55, 99, 100] due to self-interaction errors (SIE) in the approximate exchange-correlation functionals [101], we add \(U=4\) eV for the Cu \(3d\) manifold in all our PBE+\(U\) calculations following previous theoretical works [102, 103, 60]. Although we treat \(U\) as an adjustable parameter, changing its value only shifts the energy of the unoccupied high-energy Cu-derived bands [67] which does not affect our main findings around the Fermi energy. On a related note, a recent study [104] based on the strongly-constrained-and-appropriately-normed (SCAN) meta-GGA exchange-correlation functional [105] predicted the correct AFM magnetic ground state for undoped Bi-2212 but failed to reproduce its insulating nature. This is not surprising because the SCAN functional still displays underestimated band gaps and fails to reproduce the insulating ground state in a wide range of transition metal oxides [106] including CuO [107]. In the supplement [67], we show that the SCAN+U method [106; 107] provides very similar results to the PBE+U results we show below. Details on optimized structures are found in the supplementary materials [67]. 
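For orientation, the sketch below assembles the stated settings (PBE+U with \(U=4\) eV on Cu \(3d\), a 500 eV plane-wave cutoff, collinear spins, and Fermi-Dirac smearing at 100 K) into a single-point VASP calculation via the ASE interface. This is a hedged illustration, not the production setup of this work: the use of ASE, the Dudarev form of +\(U\), the \(J\) values, the k-point mesh, and the structure file name are all assumptions.

```python
# Hedged sketch: the stated DFT settings expressed through ASE's VASP
# interface.  ASE itself, the k-point mesh, the Dudarev (+U) scheme and the
# file name are assumptions for illustration; they are not stated in the text.
from ase.io import read
from ase.calculators.vasp import Vasp

structure = read("Bi2212_244atom_doped.vasp")  # hypothetical structure file

kB = 8.617333e-5           # Boltzmann constant in eV/K
calc = Vasp(
    xc="pbe",              # GGA-PBE functional
    encut=500,             # plane-wave cutoff (eV)
    ispin=2,               # collinear spin polarization
    ismear=-1,             # Fermi-Dirac occupation smearing
    sigma=kB * 100.0,      # smearing width corresponding to T = 100 K
    ldau=True,             # DFT+U on the Cu 3d manifold
    ldautype=2,            # Dudarev scheme (assumed)
    ldau_luj={
        "Cu": {"L": 2, "U": 4.0, "J": 0.0},
        "O":  {"L": -1, "U": 0.0, "J": 0.0},
        "Bi": {"L": -1, "U": 0.0, "J": 0.0},
        "Sr": {"L": -1, "U": 0.0, "J": 0.0},
        "Ca": {"L": -1, "U": 0.0, "J": 0.0},
    },
    kpts=(2, 2, 1),        # assumed mesh for the large 244-atom cell
)
structure.calc = calc
# Would launch VASP (requires a working installation):
print(structure.get_potential_energy())
```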
To accelerate the structural relaxation of the large 244-atom oxygen-doped unit cell, we use the Spanish Initiative for Electronic Simulations with Thousands of Atoms (SIESTA) package [108] to approximately relax the structure before doing the final relaxations using VASP. A DZP basis with EnergyShift of 100 meV and SplitNorm of 0.25 is used in all our SIESTA calculations. ## VI Acknowledgements We thank Byungmin Sohn and Yu He for helpful discussions and comments on the manuscript. This work was supported by grant NSF DMR 2237469 and NSF ACCESS supercomputing resources via allocation TG-MCA08X007.
2309.09762
An examination of the hierarchy problem beyond the Standard Model
As the Higgs field is a weak isospin doublet of the SU(2) symmetry, the Standard Model requires any symmetry solution to the Higgs hierarchy problem to be SU(2) invariant, a constraint on the type of the symmetry. However, the hierarchy problem is about the size. The size of SU(2) for the Higgs boson can be calculated by $|$SU$_2(\ell)|=\ell^3-\ell$, having the Higgs mass $M_H=1/\ell_{H}\approx125$ GeV. To find the origin of the relative smallness of the Higgs mass in Planck units, alternatively, we search for the origin of such a large order assuming that it stems from an unknown field theory X beyond the Standard Model. Accordingly, this order, which corresponds to the quantum of the Higgs field, should determine the order of quantum/core of X symmetry, its automorphism group. We calculate $|$Aut(X)$|\approx8.2\times 10^{53}$, close to the order of the Monster sporadic group, $|\mathbb M|\approx 8.1\times 10^{53}$, the automorphism group of the Monster CFT, which we therefore conjecture to be X. To examine this conjecture, we calculate the mass of a scalar boson whose SU(2) order is determined by $|\mathbb M|$, observing a 125.4 GeV boson mass and a 245.7 GeV VEV. The Monster CFT does not have any spin-1 operators and Kac-Moody symmetry. Therefore, based on the CFT/(A)dS correspondences, it only describes pure gravity without the gauge fields. In search of a gauge theory candidate, we promote SU(2) (double cover of SO(3)), to SO($d$), and show that the same $\mathbb M$-symmetric vacuum configuration reaches the Planck mass of quantum gravity precisely at $d=32$ (with 99\% accuracy). Then, the spin-1 boson mass of the eligible gauge candidates, SO(32) and $E_8\times E_8$, is calculated to be 80.9 GeV. Further, several pieces of evidence are provided supporting the conjecture.
Seyed Khaki
2023-09-18T13:43:27Z
http://arxiv.org/abs/2309.09762v4
# An examination of the de Sitter space dual to ###### Abstract In 2007, Witten proposed that the Monster conformal field theory (CFT) is very likely the dual CFT of 3D pure gravity in Anti-de Sitter space. Based on the fact that the Hilbert space of quantum gravity in asymptotically de Sitter space is finite, in this article, an attempt is made to study the de Sitter space dual to the Monster CFT. As the Monster CFT is an orbifold theory, it has two distinct sectors, an untwisted and a twisted one. It is observed that the boson of the underlying scalar field in the ground state of the untwisted sector is as heavy as the Higgs boson and the vacuum energy of the twisted sector approximately coincides with the Cosmological constant. Although further studies are required to shed more light on these observations, a couple of proposals are suggested to the best of our knowledge aiming to explain them. ## 1 Introduction In 2007, Witten proposed that the Monster conformal field theory (CFT) is very likely the dual CFT of 3D pure gravity in Anti-de Sitter (AdS) space [1]. Unfortunately, he did not examine the dual de Sitter (dS) space as he noted "not knowing how to define any mathematically precise observables, we do not know what to try to calculate". However, it is known that the Hilbert space of quantum gravity in asymptotically dS space is finite [2][3][4](for a review see [5]), which we found it as a reliable ground to calculate the vacuum energies in the dual dS space. The plan of the article is the following. In Section 1, we shortly introduce the orbifold theory with an emphasis on the coexistence of two distinct untwisted and twisted sectors. Then, with a brief introduction to the Monster CFT in Section 2, we calculate the vacuum energies of the corresponding untwisted and twisted sectors and report that they may address the Hierarchy and the Cosmological constant problems, respectively. In Section 3, we try to illustrate these observations offering a couple of proposals to the best of our knowledge, where we stress the roles of the sporadic groups and the Leech lattice in mass generation and the spacetime symmetries, respectively. In the last part, we assess aspects related to the grand unified theory (GUT) including the calculation of the GUT scale for the HE superstring theory. ### Orbifold Theory In mathematics, an orbifold generalizes the concept of the manifold to a quotient space of a Euclidean space by a finite group. In string theory, the orbifold CFTs are interesting as they provide rich spaces for string compactification where strings can propagate smoothly in a consistent way even though their classical geometry can be singular [6][7]. Starting from a particular manifold, the orbifold can be regarded as gauging some discrete worldsheet symmetries. Consider a unitary CFT with discrete symmetry group \(G\). One can mod out \(G\) by projecting the Hilbert space to a subspace invariant under the action of \(G\)[8]: \[\mathcal{H}\rightarrow\mathcal{H}_{G}\;:=\;\frac{1}{|G|}\sum_{g\in G}g\, \mathcal{H} \tag{1}\] The partition function of this theory constructed by inserting operators of this Hermitian projection in the trace is not modular invariant. To ensure modular invariance, one should apply the boundary conditions for local operators introducing a new space called _twisted sector_, where the fields defined on the closed strings are now periodic up to an action from \(G\). The full modular invariant partition function then consists of the twisted and untwisted sectors. 
\[Z=\;\frac{1}{|G|}\sum_{g,h\in G|[g,h]=1}\mathrm{Tr}_{\mathcal{H}_{h}}\;g\;q^{(L_{0}-c/24)}\;\bar{q}^{(\bar{L}_{0}-\bar{c}/24)} \tag{2}\] where, in the h-twisted sector, we are restricted to symmetries \(g\) that do not change the sector (\(gh=hg\)) [8]. Pictorially, the same sum can be written in terms of torus amplitudes whose boundary conditions are twisted by \(h\) along the spatial cycle and by \(g\) along the time cycle, \[Z=\;\frac{1}{|G|}\sum_{gh=hg}\;g\;\square\;h \tag{3}\]

### Monster CFT

In the construction of the Monster CFT, the 26D target space of the bosonic string is compactified to 2D [46], where the space of 24 transverse dimensions is discretized by the Leech lattice, \(\Lambda_{24}\), in a way that the momenta of the left- and right-movers are components of \(\Lambda_{24}^{L}\otimes\Lambda_{24}^{R}\). The CFT is then obtained by gauging the \(\mathbb{Z}_{2}\) symmetry and calculating the partition function of the form [13][14] \[Z=\;\frac{1}{2}\sum_{g,h\in\mathbb{Z}_{2}}\;g\;\square\;h \tag{4}\] which results in the modular invariant function \(Z(\tau)=j(\tau)-744\), where \(j(\tau)\) is the celebrated \(j\)-function, \(j(\tau)=e^{-2\pi i\tau}+744+196884e^{2\pi i\tau}+21493760e^{4\pi i\tau}+...\).

### Untwisted Vacuum

Since \(\mathbb{M}\) is the symmetry group of the theory, for each \(g\in\mathbb{M}\), one has an automorphism of the CFT, \[g:\mathcal{H}\rightarrow\mathcal{H} \tag{5}\] The ground state of the theory (the first term in the partition function), which is a tachyonic scalar field, also respects \(\mathbb{M}\) symmetry, i.e., \[g\left|\Omega\right\rangle=\left|\Omega\right\rangle \tag{6}\] The ground state energy of a CFT with central charge \(c=24k\) is \(L_{0}=-c/24=-k\), which for the Monster CFT with \(c=24\) becomes \(L_{0}=-1\). This negative energy is due to the assumption of an infinite number of harmonic oscillators oscillating in 24 transverse dimensions, whose ground state energies add up to \(24\times(1+2+3+...)/2=24\times\zeta(-1)/2=-1\). Particularly, the theory is a vertex operator algebra (VOA), which, by definition, is a graded infinite-dimensional vector space \(\mathcal{V}=\oplus_{n=0}^{\infty}\mathcal{V}_{n}\). A dual AdS space, which accepts such an infinite-dimensional space, admits \(L_{0}=-1\) too. However, it is known that the Hilbert space of quantum gravity in asymptotically dS space is finite [2][3][4] (for a review see [5]). Therefore, a dual dS space of the Monster CFT does not admit \(L_{0}=-1\). It accepts only a finite number of oscillators in its ground state, which yields a finite and positive vacuum energy. As noted in (6), the ground state of the theory preserves \(\mathbb{M}\) symmetry, i.e., for each \(g\in\mathbb{M}\) we have a modular transformation that leaves the ground state unchanged. In other words, the microstates of the ground state are invariant under each modular transformation \(g\in\mathbb{M}\). Since there are \(\left|\mathbb{M}\right|\) such elements \(g\), we have \(\left|\mathbb{M}\right|\) modular transformations in the ground state. When the number of modular elements is finite, the field over which the transformation is defined must be finite. Over an underlying finite field \(\mathbb{F}_{q}\) (\(\mathbb{Z}/q\mathbb{Z}\)) with \(q\) microstates, the number of modular transformations is \(\left|PSL(2,q)\right|=q^{3}-q\) [15] (when the number of modular transformations is infinite, they form the modular group \(PSL(2,\mathbb{Z})\), which is defined over the infinite field \(\mathbb{Z}\)). 
The energy of each microstates oscillating in 24-DOF is equal to the energy of 24 microstates oscillating in 1-DOF. That is, \(q\) microstates oscillating in 24 DOF are equivalent (in terms of energy) to \(24q\) microstates oscillating in 1-DOF, which leads to \(\left|PSL(2,24q)\right|=(24q)^{3}-(24q)\) number of modular transformations. Thus, by \(\mathbb{F}_{q}\), the underlying scalar field in the ground state of the dual dS space, and considering the central charge, we counted the modular transformations that the Monster CFT suggests to be \(|\mathbb{M}|\approx 8\times 10^{53}\). Thus, \[(24q)^{3}-(24q)\approx 8.08\times 10^{53} \tag{7}\] Solving for \(q\) provides three roots with length \(|q|\approx 3.88\times 10^{16}\). This is the string length (\(\ell_{s}\)) of the scalar (boson of) \(\mathbb{F}_{q}\) in the Planck units. The mass of the closed string is then \[m_{s}=\frac{2}{\ell_{s}}\approx\frac{2}{3.88\times 10^{16}}\approx 5.15\times 10^{ -17} \tag{8}\] Accordingly, this is the dimensionless mass in the Planck units. Recovering the dimension by the reduced Planck mass \(M_{P}\approx 2.435\times 10^{18}\,\mathrm{GeV}\), yields \(M_{s}\approx 5.15\times 10^{-17}\times M_{P}\approx 125.4\,\mathrm{GeV}\). This value is almost identical to the experimentally measured mass of the Higgs boson (\(125.1\,\mathrm{GeV}\)) in the Large Hadron Collider (LHC) in 2012 [16]. Taking into account an antiholomorphic counterpart, the ground state energies add up to a total vacuum expectation value (VEV) of \(\approx 250\,\mathrm{GeV}\). Besides the string theoretic explanations for this observation, there is also a simple interpretation without string terminology. The quantum mechanical equivalent length of a particle with mass \(M\) is the reduced Compton wavelength \(\lambda=\frac{\hbar}{Mc}\) which in the Planck units becomes \(\lambda=\frac{1}{m}\) where \(m\) is now a dimensionless mass normalized by the reduced Planck mass. How many Planck wavelengths can be embedded in the Compton wavelength of the Higgs boson? It is \(\lambda_{Higgs}\approx 10^{16}\) (which is the length of the open string). How many Planck cubes (cubes of Planck length) can be embedded in a Higgs cube (cubes of \(\lambda_{Higgs}\) length)? It is approximately \(10^{48}\). How many Planck cubes can be embedded in a 24-Higgs cube (cubes of length \(24\lambda_{Higgs}\))? It is almost the order of Monster group \(\approx 8\times 10^{53}\)! Furthermore, We would like to point out that one can observe that \(|PSL(2,q)|=q^{3}-q\) appears (or is encoded) in the Virasoro algebra, \([L_{m},L_{n}]=(m-n)L_{m+n}+\frac{c}{12}(m^{3}-m)\delta_{m+n,0}\), too. In particular, over a field \(k\) of characteristic \(q\), the Witt algebra is defined to be the Lie algebra of derivations of the ring \(k[z]/z^{q}\), where the Witt algebra is spanned by \(L_{m}\) for \(-1\leq m\leq q-2\). Hence, the central extension term \(m^{3}-m\) reflects the cubic form \(q^{3}-q\) of the number of modular transformations over a finite field. ### Twisted Vacuum As noted in the introduction, in orbifold theories like the Monster CFT, the twisted sector introduces a new ground state and a distinct collection of oscillators. The group of finite modular transformations \(PSL(2,q)\) (a.k.a \(A_{n}(q)\)) is a Chevalley group of Lie type which can be viewed as a Lie group over \(F_{q}\). 
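Before turning to the twisted sector, the following minimal numerical check (not part of the original analysis) re-derives the untwisted-vacuum numbers of Eqs. (7) and (8): the counting rule \((24q)^{3}-(24q)=|\mathbb{M}|\) is inverted for \(q\), and the closed-string mass \(m_{s}=2/q\) is converted to GeV with the reduced Planck mass, as in the text.

```python
# Numerical check (not the paper's code) of the untwisted-vacuum numbers in
# Eqs. (7)-(8): solve (24 q)^3 - (24 q) = |M| for q, then convert the
# closed-string mass m_s = 2/q from Planck units to GeV.
M_order = 8.08e53          # |Monster| as quoted in Eq. (7)
M_P_reduced = 2.435e18     # reduced Planck mass in GeV, as used in the text

x = M_order ** (1.0 / 3.0)       # x = 24 q; the -x term shifts this by ~1 part in 1e35
q = x / 24.0
m_s = 2.0 / q                    # closed-string mass in Planck units

print(f"q    ~ {q:.3e}")                          # ~3.88e16
print(f"m_s  ~ {m_s:.3e} (Planck units)")         # ~5.15e-17
print(f"M_s  ~ {m_s * M_P_reduced:.1f} GeV")      # ~125 GeV, cf. 125.4 GeV above
print(f"VEV  ~ {2 * m_s * M_P_reduced:.1f} GeV")  # ~250 GeV
```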
If \(F_{q}\) is a (unique up to isomorphism) finite field of size \(q\), there is a unique quadratic separable extension of \(F_{q}\), such that the extension field is a finite field of order \(q^{2}\). Such a quadratic extension field is involved in the twisted Chevalley group of Lie type, \(PSU(2,q)\) (a.k.a. \({}^{2}A_{n}(q^{2})\)) [15]. While the orders of both the twisted and untwisted groups are \(q^{3}-q\), the difference is that, in the twisted group, two fields are involved: one of order \(q\) and one quadratic extension of order \(q^{2}\). In the previous observation, we calculated the vacuum energy of the fixed field in the untwisted sector. Now, we calculate the vacuum energy of the extended field of order \(q^{2}\) in the twisted sector. For the fixed field, we calculated \(|q|\approx 3.88\times 10^{16}\). Hence, for the extension field, we have \(q_{twist}=q^{2}=1.5\times 10^{33}\) and therefore \(m_{twist}=\frac{2}{1.5\times 10^{33}}\approx 1.33\times 10^{-33}\), which yields \(vev_{twist}=2m_{twist}=2.66\times 10^{-33}\). If we recover the dimension, as we did above by the reduced Planck mass, we will obtain a \(VEV_{twist}\) of order \(10^{-15}\,GeV\). Remarkably, the observational data report a vacuum energy density of \(\rho_{vac}\approx 2.6\times 10^{-47}\,GeV^{4}\) [17], which is equivalent to \(2.2\times 10^{-12}\,GeV\). This observation suggests that calculating \(VEV_{twist}\) is probably estimating \(\rho_{vac}\). Therefore, now, we know that we should not recover the mass by the reduced Planck mass, \(\sqrt{\frac{c\hbar}{8\pi G}}\), because it is not purely quantum mechanical as it is 'reduced' by the gravitational factor \(8\pi\) (coming from the Einstein field equations). Instead, to estimate the VEV from quantum mechanics alone, we should recover the mass by the Planck mass, \(\sqrt{\frac{c\hbar}{G}}\approx 1.22\times 10^{19}\,GeV\). Accordingly, \(VEV_{twist}=vev_{twist}\times 1.22\times 10^{19}\approx 3.2\times 10^{-14}\,GeV\). Noting that the Cosmological constant problem is known as the worst theoretical prediction in the history of physics [18] (for a review see [19]), this prediction is encouraging. Consequently, this observation may explain the reason for the discrepancy between the experimentally measured VEVs in particle physics and cosmology. Apparently, at the subatomic level of particle physics, it is the VEV of the untwisted sector that is measured, whereas in cosmology, it is the VEV of the twisted sector.

## 3 Proposals

### Sporadic Groups

We provided evidence that the Higgs field probably has the Monster group symmetry in its ground state. Here, we explain why the symmetry of sporadic groups is capable of mass generation and why they can be the only simple groups that have this property. In physics, and particularly in particle physics, the study of symmetry in the context of group theory plays an essential role. One can argue that it is natural to expect the elementary particles to be described by atoms of symmetry, the simple groups. The simple groups are either finite or infinite. Despite the fact that the absolute infinities (divergences) are not physical, most of the focus in the physics community has been on the infinite groups. One reason was perhaps the mathematical difficulty of dealing with finite groups. Nevertheless, due to the hard work of hundreds of mathematicians, the finite simple groups are classified into two general categories: infinite and finite [20]. The finite category contains the sporadic groups (Figure 1), in which the Monster group is the largest. 
The infinite category contains 3 classes, the cyclic groups \(\mathbb{Z}_{q}\), the alternating groups \(A_{n}\), and the groups of Lie type containing \(PSL(n+1,q)\), \(O(2n+1,q)\), \(PSp(2n,q)\), \(O^{+}(2n,q)\), \(PSU(n+1,q)\), \(O^{-}(2n,q)\), \(F_{4}(q)\), \(G_{2}(q)\), \(E_{6}(q)\), \(E_{7}(q)\), \(E_{8}(q)\), \({}^{2}D_{4}(q)\), and \({}^{2}E_{6}(q)\). These groups are determined by their input parameters \(q\), \(n\), or both. The adjective 'infinite' refers to the fact that there is an infinite number of these groups in each class. For instance, if we select \(n=2\) in \(PSU(n,q)\), we get an infinite number of groups \(PSU(2,2),PSU(2,3),PSU(2,2^{2}),...\), with the order spectrum of \(6,24,120,...\) However, on the other side, 26 sporadic groups in the finite category are _fully determined_. Meaning that there is no input parameter to determine. They are completely fixed by nature and their frozen order spectrum is \(|M_{11}|=7920,|M_{12}|=95040,...\), \(|\mathbb{M}|\approx 8\times 10^{53}\). These orders are the only _constants_ within the simple groups' orders. By constant, we mean a fixed specified number (without any arbitrary input) that when put on the RHS of (7), determines the field's order on the LHS and, therefore, a mass scale. In other words, the orders of sporadic groups are the only orders that are not a function of \(q\) and \(n\), leading to a fixed number of modular transformations and, therefore, a fixed mass. This property is evident in the Monster CFT partition function. There are no massless DOFs in the theory which hints that the theory is profoundly engaged with'mass' and all its parameters are totally fixed. In addition, noting the inverse relation of mass and length, the energy spectrum of the infinite category is UV convergent but IR divergent. However, only the spectrum of the sporadic groups is both UV and IR convergent. The UV and IR convergences in the sporadic groups correspond to its _natural cutoffs_, respectively the smallest and largest sporadic groups. By considering Monster CFT we assessed the IR of the sporadic groups which led to the electroweak scale of the Higgs mass. Now, let us probe the UV of the sporadic groups, namely the Mathieu groups \(|M_{12}|=95040\) and \(|M_{11}|=7920\). The initial moonshine development, the Monstrous moonshine (for a recent review of moonshines see [12]), is recently extended by the link between the largest Mathieu group \(M_{24}\) representation and the elliptic genus of \(K3\) surfaces [21]. Although there are many aspects of the Mathieu moonshine that are mysterious and yet unclear, it ultimately refers to the massive sector [22]. So, not wishing to look a gift horse in the mouth, noting the above observations, let us apply our method to Mathieu groups irrespective of the existence of VOAs/CFTs with the automorphism of such groups. We emphasize that, here, our aim is merely to obtain an approximation of the mass scales corresponding to these symmetries. In this way, for \((24q)^{3}-(24q)\approx|M_{12}|\) and \((24q)^{3}-(24q)\approx|M_{11}|\), we obtain \(q\approx 1.9\) and \(q\approx 0.8\), which yield the integer values of \(q=2\) and \(q=1\), respectively. These values then give rise to the mass scales of \(M_{P}\) and \(2M_{P}\) for closed string (with \(m_{s}=\frac{2}{l_{s}}=\frac{2}{q}\)). Also, \(q=1\) leads to \(M_{P}\), for open string with \(m_{s}=\frac{1}{l_{s}}=\frac{1}{q}\). 
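The same counting rule can be inverted numerically for the Mathieu orders quoted above; the sketch below is only a restatement of that arithmetic (the helper function and the rounding of \(q\) to the nearest integer are ours, following the text).

```python
# Sketch: invert the counting rule (24 q)^3 - (24 q) = |G| for the Mathieu
# orders quoted above and report the implied closed-string mass m_s = 2/q in
# Planck units.  A numerical restatement of the text, not its original code.
import numpy as np

def q_from_order(order: float) -> float:
    """Real positive root of (24 q)^3 - (24 q) = order."""
    roots = np.roots([24**3, 0.0, -24.0, -float(order)])
    return float(max(r.real for r in roots))   # the real root has the largest real part

for name, order in [("M11", 7920), ("M12", 95040), ("Monster", 8.08e53)]:
    q = q_from_order(order)
    print(f"{name:7s}: q ~ {q:9.3g}  ->  m_s = 2/q ~ {2.0 / q:9.3g} (Planck units)")
# M11 gives q ~ 0.83 (-> 1) and M12 gives q ~ 1.9 (-> 2), i.e. Planck-scale
# masses as in the text, while the Monster order reproduces the
# electroweak-scale value found above.
```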
Consequently, the [IR, UV] pair of the sporadic groups [\(\mathbb{M}\), \(M_{11}\)] coincide with the physical [IR, UV] pair of [Higgs, Planck] scales. Based on this observation, _we propose that the Higgs field was perhaps following the symmetries of the sporadic groups in its evolution_. Remarkably, at the end of the spectrum, there are two phase transition points at the pair [\(q\approx|\mathbb{M}|^{1/3}\), \(q=1\)]. Before discussing these points in the next part, note that the intermediate scales are covered by intermediate sporadic groups, where CFTs for 9 intermediate sporadic groups, for example, are studied in [23]. #### 3.1.1 Phase Transition **UV at \(q=1\)** Mathematically, based on the standard definition of the 'field', 2 is the smallest number of elements of a field. However, in mathematics, the field with a single element [24], \(\mathbb{F}_{1}\), is a known object that behaves like a finite field with one element, if such a field could exist. It is worth mentioning that it has connections to noncommutative geometry and the Riemann hypothesis [25]. Physically, such a collapse of the field concept at \(q=1\) matches with the current picture of modern physics where it is believed that the typical concept of spacetime collapses above the Planck scale. Moreover, at Planck scales, the Schwarzschild radius and the string length (Compton wavelength) are of the same order. If the mass is squeezed smaller than its Schwarzschild radius it becomes a black hole, a type of phase transition. **IR at \(q\approx|\mathbb{M}|^{1/3}\)** This phase transition (which gave rise to the electroweak scale of the Higgs boson) is the electroweak spontaneous symmetry-breaking. It is then natural to ask: How a phase transition can happen in a finite system? The well-known work of Lee and Yang [26][27] nicely describes the mechanism of such a phase transition. According to the Lee-Yang theory, in the infinite-size limit of a finite-size system, when the complex zeros of the partition function become numerous and dense along a certain arc, a phase transition can be triggered. In our system, the partition function of the Monster CFT is the modular invariant \(J\)-function, and such a certain arc is the circle of the discussed finite field \(\mathbb{F}_{q}\). In this way, it turns out that in the evolution of the system, through numerous symmetry breaking, the number of zeros of the \(J\)-function grows to \(q\approx 3.88\times 10^{16}\) and then at this critical value, a phase transition of the Lee-Young type occurs. Moreover, in the original model assessed by Lee and Yang, a system containing \(N\) Ising \(\frac{1}{2}\)-spins \(\sigma_{i}\) with the Hamiltonian \(H=\sum_{i<j}J_{ij}\sigma_{i}\sigma_{j}-\sum_{i}H_{i}\sigma_{i}\) is considered, where, \(J_{ij}\) is the coupling constants of pair ferromagnetic interactions, and \(H_{i}\) is the value of the magnetic field acting on the spin \(\sigma_{i}\). In a homogeneous magnetic field with \(H_{i}=H\), the partition function \(Z_{N}(z_{1},z_{2},...,z_{N})\) is a polynomial of degree \(N\) in the fugacity \(z=exp(-2H\beta)\). According to the Circle theorem, zeros of \(Z_{N}\) lie on the unit circle (\(|z|=1\)), for \(N\) imaginary values of \(H\)[28]. Furthermore, let us explain such a phase transition from another perspective. In the evolution of the system, through numerous symmetry breaking, the entropy increases accordingly. 
Eventually, at the final state, where the symmetries of the system are all broken, the second law of thermodynamics forces the system to continue breaking the symmetries (increasing entropy). However, the system is saturated and does not have more capacity. Therefore, it must change its phase spontaneously in order to increase entropy. In addition, this phase transition is not the first case related to the number theory. A phase transition in a number-theoretical system is first reported in the prominent work of Bost and Connes [29]. Particularly, the Bost-Connes system, which is a quantum dynamical system originating from the statistical theory of prime numbers, whose partition function is the Riemann zeta function, exhibits a phase transition with spontaneous symmetry breaking. One can naturally imagine such a phase transition by noting the well-known connection between the j-function, which encodes information about the elliptic curves, and the Riemann zeta function, which encodes information regarding prime numbers (generally between modular forms and L-functions) [30][31]. #### 3.1.2 Neutrinos The sporadic groups are divided into 2 separate families, namely the Monster family and the Pariah family which are 6 groups (Lyons, O'Nan group, and Rudvalis, and three Janko groups) that are not subquotients of the Monster group (see Figure 1). Therefore, the model of sporadic groups predicts, at least, two sources of mass. Since in the SM, it is expected to have another source of mass except the Higgs boson for the Neutrinos, we propose that _the source of mass for the Neutrinos is the Pariah family._ Although the Neutrino's masses are not in the spectrum of the Pariah family, the seesaw mechanism [32], for instance, can explain their small masses. There are a couple of supports for this proposal. First, the seesaw mechanism assumes the existence of a very large mass scale comparable to the GUT scale which can be found within the spectrum of the Pariah family (e.g., the scale of Rudvalis is \(\approx 2\times 10^{16}\,GeV\)). Second, the UV of the Pariah family and the Monster family are close to each other. Particularly, the UV of the Pariah family corresponds to the smallest Janko group \(|J_{1}|=175,560\) which is close to \(|M_{12}|=95,040\). ### Leech Lattice We provided evidence that the Monster CFT probably describes the 3D (2D space + 1D time) pure gravity in dS space which leads to the prediction of a flat space plus time. The observational data reported from different sources (e.g., WMAP [33], BOOMERanG [34], and Planck [17]) confirm that the universe is _globally_ flat. Although this is a verification for our proposal, we _locally_ observe a 3D space in the presence of matter. Recall that the resulting 2D space is produced by the compactification of 24D from the target space of 26 bosonic strings, where \(26-24=2\). Consequently, there are 2 ways to obtain 3D instead of 2D space. The first way is to start from the target space of 27D instead of 26D (the existence of such a bosonic string theory is conjectured in [35]). Because the target space is fixed, in this case, we should have observed a 3D space globally, which is not correct. The second way is that the 24 transverse dimensions locally decrease to 23D in the presence of matter, and conversely, by removing the matter, 23D transforms to the original 24D. In this way, inserting matter results in a local 3D space, and removing it yields a 2D space. 
The possibility of such a distortion depends on the nature of the compactified 24 dimensions. As mentioned, the 24 internal dimensions are discretized by the 24-dimensional Leech lattice \(\Lambda_{24}\), which is an even unimodular lattice with no roots. The local distortion of \(\Lambda_{24}\) to a 23D lattice requires the existence of a 23D lattice with the following properties. 1.It must be unimodular evidently. A simple interpretation is that the volume of each lattice cell must be one. In string theory, in the context of compactification, the ubiquitous unimodular property usually comes with the even property as it is in \(\Lambda_{24}\). This is not the case here because we are not searching for a lattice for compactification. The compactification is done in 24D with \(\Lambda_{24}\), and the aim is now to find a similar neighbor lattice in 23D which makes the distortion possible after perturbation by matter. 2.It must have the same number of roots as \(\Lambda_{24}\) (i.e., zero). There are two reasons for this requirement. First, it is the no-roots property that allows \(\Lambda_{24}\) to have the densest sphere packing among all lattices in 24D and become universally optimal [36]. Implying that between all point configurations of the same density, it provides the minimum possible Gaussian energy [37]. The no-roots property makes \(\Lambda_{24}\) unique such that between 24 unimodular lattices in 24D, \(\Lambda_{24}\) is the only one with this property. It is, therefore, straightforward that these exceptional features stem from the exceptional property of having no roots. The second reason is that when we expect a distortion between two neighbor lattices, then we should expect that they are similar in nature to make a smooth distortion possible. If they have dissimilar numbers of roots, then every time a matter is inserted and removed, the roots should be produced and removed which does not make sense. Moreover, the removal of roots is what we expect in physics. However, if the number of roots is dissimilar, then reciprocal distortion implies the coexistence of removal and adding roots. 3.It must be an unstable excitation mode of \(\Lambda_{24}\) because the local distortions (local 3D) caused by a matter must be recovered (2D) after the matter is removed, otherwise, we should not have observed a global flat universe. In terms of symmetries, it means that it should be less symmetrical than \(\Lambda_{24}\) as distortion breaks symmetries and leads to an excited state with a higher level of energy. Among 117 unimodular lattices in 23D, only one lattice has no roots which is the shorter Leech lattice \(O_{23}\) (with minimal norm 3). Looking at Table 1 of unimodular lattices gives an insightful picture [38][39]. It seems that the property of having no roots is extremely exceptional and meaningful (at least in lower dimensions; since in higher dimensions they rapidly grow, e.g., in 32D there are around \(10^{16}\) unimodular lattices without root). Under the critical 26D, there are only three lattices with no roots, \(\Lambda_{24}\) and the odd Leech lattice \(O_{24}\) in 24D, and \(O_{23}\) in 23D! Remarkably, \(O_{23}\) is odd which makes sense regarding the 3rd property (if it was even then it had all the required properties to be stable). On the other hand, the symmetry group of \(\Lambda_{24}\) and \(O_{23}\) are the double covers of the sporadic Conway groups \(Co_{1}\) and \(Co_{2}\), respectively. 
This explains the close relationship between \(\Lambda_{24}\) and \(O_{23}\), since \(Co_{2}\) is not only a subquotient of \(Co_{1}\) (see Figure 1) (in sporadic group wordings, \(Co_{2}\) is involved in \(Co_{1}\)) but also it is its largest maximal subgroup. In other words, \(O_{23}\) fits inside \(\Lambda_{24}\) and a distortion between them is possible. A descriptive example of such a conversion is the Binary Golay code (BGC). One way of constructing the Leech lattice is via the error-correcting BGC. There are two _closely related_ BGCs, namely the extended BGC, \(G_{24}\), and the perfect BGC, \(G_{23}\), whose symmetries are the sporadic groups of Mathieu \(M_{24}\) and \(M_{23}\), respectively. Similarly, \(M_{23}\) is a subquotient of \(M_{24}\). \(G_{24}\) can correct up to 3 errors in a codeword of 24 letters and detect up to 7. \(G_{23}\) has a length of 23 letters and can be obtained from \(G_{24}\) by deleting one letter. Conversely, \(G_{24}\) can transform to \(G_{23}\) by adding a parity letter. Also, in Figure 1, one sees that \(M_{24}\) is a subquotient of \(Co_{1}\), and \(M_{23}\) is a subquotient of \(Co_{2}\), suggesting that the relationship (transformation) between these Conway groups is similar to the relationship (transformation) between these Mathieu groups. On the other hand, just as the Monster VOA is associated with \(\Lambda_{24}\), a vertex operator superalgebra, called the shorter Moonshine module [40], whose automorphism group is \(2\times B\), is associated with \(O_{23}\), where \(B\) is the Baby monster group (see the upper bold lines in Figure 1). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Dimension & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\ \hline Lattices & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline Dimension & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & \\ \hline Lattices & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \(O_{23}\) & \(\Lambda_{24}\:,\,O_{24}\) & 0 \\ \hline \end{tabular} \end{table} Table 1: Unimodular lattices with no roots In sum, _we propose that the fabric of spacetime is very likely the Leech lattice which, in the presence of matter, is perturbed and locally distorts to the shorter Leech lattice._ Remarkably, the model explains why it is not possible to have more than 3 and less than 2 dimensions of space. It is because, as mentioned, there are no 22 and 25-dimensional lattices without roots to make it possible for \(\Lambda_{24}\) and \(O_{23}\) to further transform to other dimensions. We stress that, in the model, the compactification is accomplished only in 24D and \(\Lambda_{24}\) is the original candidate for the fabric of spacetime. Hence, \(O_{23}\) is only the consequence of the existence of matter and it does not show up in a universe without matter. #### 3.2.1 Validation Above, it is proposed that the presence of matter causes \(\Lambda_{24}\) to distort to \(O_{23}\). What is the smallest mass that can do such a distortion? If our proposal is correct, then the model has to predict the amount of mass at which the gravitational effects of the quantum realm become important. This mass scale is known to be the Planck scale, where the Compton wavelength (the equivalent quantum length) of a matter is equal to the Schwarzschild radius (the equivalent gravitational length). In string theory, it is the critical scale of a fundamental string. 
In the model, deforming \(\Lambda_{24}\) to \(O_{23}\) is equivalent to transforming from a more symmetrical state of \(2.Co_{1}\) to a less symmetrical state of \(2.Co_{2}\), where \(|Co_{1}|=4157776806543360000\) and \(|Co_{2}|=42305421312000\). In this process, symmetries break \(\frac{2|Co_{1}|}{2|Co_{2}|}=98280\) times, or in more accurate terminology, symmetries become hidden. As before, this is the number of modular transformations \(|PSL(2,24q)|=(24q)^{3}-(24q)\) needed to break \(\Lambda_{24}\) to \(O_{23}\). Therefore, we have \((24q)^{3}-(24q)=98280\), which yields the length \(|q|\approx 1.92\), where the closest integer is \(q=2\). This length yields the closed string mass of \(m_{s}=\frac{2}{2}=1\), which means \(M_{s}=M_{P}\).

Figure 1: Diagram of sporadic groups. A line implies that the lower group is a subquotient of the upper one. The colors indicate different generations, namely the Mathieu groups (khaki), Leech lattice groups (grey), other subgroups of the Monster (blue), and the Pariah groups (purple).

It is also worthwhile to further explore the significance of 98280 hidden symmetries. The minimal (norm four) vectors that span \(\Lambda_{24}\) split into 3 shapes of \((4^{2},0^{22})\), \((2^{8},0^{16})\), and \((\mp 3,\pm 1^{23})\), respectively with sizes 1104, 97152, and 98304, where 1104 + 97152 + 98304 = 196560. First, note that 98280 = \(\frac{196560}{2}\), i.e., the smallest faithful permutation representation of \(Co_{1}\) is on 98280 opposite pairs \((x,-x)\) of norm 4 vectors. When exactly half of the minimal vectors become hidden (break), it suggests the connection to the BPS-saturated states (where the BPS bound is saturated when half of the supersymmetry generators are unbroken) which we discuss in the next point. Second, the size of the last shape, \((98304=2^{12}.24)\), which originates from the BGC (i.e., a 12-dimensional subspace in a 24D space over \(\mathbb{F}_{2}\)), is \(98280+24\), which means \(98280=24(2^{12}-1)\). Here there are two observations. On one hand, one can see that in the Monster CFT, the 196884 states with \(h=2\) consist of 98280 opposite state pairs, plus \(2^{12}.24\) states coming from the twisted sector, plus \(1+2+...+24\) states of the form \(\alpha_{-1}^{i}\alpha_{-1}^{j}\ket{0}\). On the other hand, we noticed that the reported spectrum of the Monstrous CHL models in [41] also contains two kinds of irreducible representations, one short (BPS) 1-dimensional representation and a long representation of dimension \(2^{12}\), and the theory has 24 spacetime supersymmetries. Before mentioning their work, recall that the BPS states are those states that are annihilated by some nonzero odd element in the superalgebra, forming short representations of an extended supersymmetry algebra [42]. It sounds interesting since we also have a symmetry breaking from the even Leech lattice (with minimal norm 4) to the odd shorter Leech lattice (with minimal norm 3). Their work involves compactification (on the spatial circle) in the heterotic Monster theory \(\mathbb{M}\times Co_{0}\) from \((1+1)D\) to \((0+1)D\). The heterotic Monster contains a supersymmetric variant of the Monster CFT, which was first remarked upon in [14] and then constructed in [43]. In particular, the left-moving side has the same construction as the Monster CFT. The right-moving NS sector is an \(N=1\) superconformal field theory with \(c=12\) whose symmetry group is the familiar \(2Co_{1}\), built in terms of the \(E_{8}\) lattice. 
Although there is not a direct geometric interpretation for the internal CFT, it can be obtained as \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) orbifold of the product of the holomorphic Leech lattice CFT and the anti-holomorphic \(E_{8}\) lattice SCFT. The compactification of heterotic strings on torus \(\mathbb{T}^{8}\) yields a \((1+1)D\) theory with \((8,8)\) spacetime supersymmetry. While the \(\mathbb{Z}_{2}\) orbifold acting on the left-moving (bosonic) sector respects them, the \(\mathbb{Z}_{2}\) orbifold acting on the supersymmetric right-moving side breaks half of them. Then, incorporating 16 extra supersymmetries with the same spacetime chirality by the twisted sector, yields \((0,24)\) supersymmetry of the resulting theory. The full partition function of the Moster heterotic theory is \(Z_{NS}(q,\bar{q})=Z_{\mathbb{M}}(q).\bar{Z_{NS}^{-}}(\bar{q})\) and \(Z_{R}(q,\bar{q})=Z_{\mathbb{M}}(q).\bar{Z_{R}}(\bar{q})\) where, \(Z_{\mathbb{M}}(q)\) is the partition function of the Monster CFT, \(\bar{Z_{NS}^{-}}(\bar{q})=-\bar{Z_{R}}(\bar{q})=\frac{\Theta_{E_{8}}\theta_{2} ^{2}}{2\eta^{22}}=8+2048q+49152q^{2}+...\), and \(\Theta_{E_{8}}\) is the theta function of the \(E_{8}\) lattice. Note that the partition function in the bosonic left-movers is a worldsheet object whereas the _supersymmetric right-movers belong to spacetime_[41]. Thus, distortion (symmetry breaking) from \(\Lambda_{24}\) to \(O_{23}\) seemingly breaks the spacetime supersymmetry. ### Parallel Chains In attempts for the unification, it is understood that the notable gauge theory candidates, that can accommodate the fermion families, are the GUT groups of \(SU(5)\), \(SO(10)\), and \(E_{6}\); respectively with a chiral matter description of quark-leptons in the \(\bar{5}+10\), \(16\), \(27\)-dimensional representation. Even though \(SU(5)\) and \(SO(10)\) nicely match with the fermion quantum numbers, the gifts we received during the process of trying for unification (e.g., predictions of proton decay and neutrino masses) justify the desire for larger GUT groups like \(E_{6}\). In this context, the introduction of group embedding \(E_{6}\to SO(10)\to SU(5)\to SM\) was justified. The \(E_{6}\) group, which is an exceptional simple Lie group, also shows up in the embedding chain of the exceptional Lie groups \(E_{8}\to E_{7}\to E_{6}\to F_{4}\to G_{2}\). In this embedding, notably, \(E_{6}\) is the only exceptional group with complex representations, and therefore, the only eligible one to contain chiral fermions and be a candidate for GUT. In this regard, a natural question was raised as to why nature should not allow the chain of exceptional groups above \(E_{6}\)[44]? What is exceptional with this exceptional group? The desire to continue the chain of exceptional groups above \(E_{6}\) was mainly due to the emergence of the heterotic \(E_{8}\times E_{8}\) (HE) superstring theory [45][46]. In the development of the string theory, the HE superstring is a well-marked candidate. Therefore, let us calculate its corresponding energy scale in a similar manner. Although the simple exceptional group \(E_{8}\) is an infinite group and, therefore, leads to a zero mass, we exceptionally consider its Weyl group which is a finite group and describes the symmetries of the weight lattice of \(E_{8}\). 
As demonstrated in [47], in the case of finite-dimensional representations, there exists a canonical action of the Weyl groups on observables such that many physical relations are a consequence of the Weyl group action rather than the action of the original group of root systems on observables. Particularly, \(Weyl(E_{8})=2.O_{8}^{+}(2).2\), i.e., a stem extension by \(\mathbb{Z}_{2}\) of an extension of \(\mathbb{Z}_{2}\) by the orthogonal group \(O_{8}^{+}(2)\)[48]. The target space of the HE superstring is again the \(26\) bosonic strings where compactification of \(16\)D results in a \(10\)D spacetime. Thus, we have \(|PSL(2,16q)|=(16q)^{3}-(16q)=|Weyl(E_{8})|=696729600\). Solving for \(q\), yields \(\ |q|\approx 55.4\). The mass of the NS sector ground state is \(m_{s}=\frac{\sqrt{2}}{\ell s}\). Therefore, \(M_{s}\approx 6\times 10^{16}\,\text{GeV}\). This value is in agreement with the approximated GUT scale of \(10^{16}\ GeV\) with supersymmetry [49]. As mentioned, the heterotic version of the Monster CFT, \(\mathbb{M}\times Co_{0}\), is also constructed in [43]. Putting the symmetry groups of the left and right moving sides in a bracket as a pair, \([\mathbb{M}\,\ Co_{1}]\) (\(Co_{0}\) is the double cover of \(Co_{1}\)), we found it worthwhile to find other similar pairs (bold lines in Figure 1). In this manner, one level lower, we find the pair \([B\,\ Co_{2}]\) noting the discussed relationship between \(Co_{1}\) and \(Co_{2}\). As noted, a vertex operator superalgebra, called the shorter Moonshine module, whose symmetry group is \(2\times B\), where \(B\) is the Baby Monster group, is also constructed in [40]. According to McKay's observation, the exceptional Lie groups \(E_{8}\), \(E_{7}\), and \(E_{6}\) are in correspondence respectively with \(\mathbb{M}\), \(B\), and \(Fi_{24}\)[50][51][52] regarding the assignment of their extended Dynkin diagram to certain conjugacy classes in the sporadic groups. So, the next pair should start with \(Fi_{24}\), and according to the subquotient relationships, it becomes \([Fi_{24}\,\ M_{24}]\). The next pairs are then easy to find, \([Fi_{23}\,\ M_{23}]\) and \([Fi_{22}\,\ M_{22}]\). These pairs along the abovementioned group embedding, considering McKay's correspondence are collected in Table 2 Note that the table is divided into two different sections regarding the nature of the left element of the pairs (LEPs) and the right element of the pairs (REPs). First, the REPs in the upper section are the familiar Conway groups (from the 2nd generation of the sporadic groups). The REPs in the lower section are the Mathieu groups (from 1st generation of the sporadic groups). Second, the LEPs of the lower part are the Fischer groups, where \(Fi_{24}\) is not a subquotient of \(B\) and is not involved in it. Therefore, Table 2 may be elaborating and providing a hint for the above question about the \(E_{6}\) distinction. That is, not only \(B\) does not contain \(F_{24}\) (which cuts the chain) but also the REPs of the upper section are reserved for the spacetime symmetry of the Leech and shorter Leech lattices. Finally, there is another rationale pointing at \(E_{6}\). If we search for the largest maximal subgroup of \(\mathbb{M}\), we find \(B\) which is also the centralizer of an involution. Next, if we search for the largest maximal subgroup of \(B\), we do not find a sporadic group! It is the twisted Chevalley group \({}^{2}E_{6}(2)\) (over \(\mathbb{F}_{2}\)) [20] which is also the the centralizer of an involution. 
This latter embedding of \({}^{2}E_{6}(2)\) in \(B\) is "a match made in Heaven" according to Stroth [53]. Also, its order can be calculated by inserting \(q=2\) in \(|^{2}E_{6}(q)|=|q^{36}(q^{12}-1)(q^{9}+1)(q^{8}-1)(q^{6}-1)(q^{5}+1)(q^{2}-1)|\) which approximately is \(2^{78}\approx 10^{23}\). Over, e.g. \(q=|\mathbb{M}|^{1/3}\), it has then around \(10^{1326}\) points which seem enough to reflect a favorable smooth manifold. For comparison, there are about \(10^{80}\) atoms in the universe. As a result, we stress the importance of the embedding chain of \(\mathbb{M}\to B\rightarrow\ ^{2}E_{6}(2)\). In sum, in light of the presented observations, we found these pieces of evidence satisfactory to come to the conclusion that _a fundamental theory of nature is possibly discrete and finite_. ### Acknowledgment I especially thank my family for their support and appreciate the discussion with Masud Naseri. \begin{table} \begin{tabular}{|c|c|} \hline \(E_{8}\) & \([\mathbb{M}\,\ Co_{1}]\) \\ \(E_{7}\) & \([B\,\ Co_{2}]\) \\ \hline \(E_{6}\) & \([Fi_{24}\,\ M_{24}]\) \\ \(SO(10)\) & \([Fi_{23}\,\ M_{23}]\) \\ \(SU(5)\) & \([Fi_{22}\,\ M_{22}]\) \\ \hline \end{tabular} \end{table} Table 2: Parallel Chains
2309.10233
Quark phases in neutron stars consistent with implications of NICER
The analyses for the NICER data imply $R_{2.0M_\odot}=12.41^{+1.00}_{-1.10}$ km and $R_{1.4M_\odot}=12.56^{+1.00}_{-1.07}$ km, indicating the lack of significant variation of the radii from $1.4 M_\odot$ to $2.0 M_\odot$. This feature cannot be reproduced by the hadronic matter due to the softening of equation of state (EoS) by hyperon mixing, indicating the possible existence of quark phases in neutron-star interiors. % Two models are used for quark phases: In the quark-hadron transition (QHT) model, quark deconfinement phase transitions from a hadronic-matter EoS are taken into account so as to give reasonable mass-radius ($MR$) curves by adjusting the quark-quark repulsions and the density dependence of effective quark mass. % In the quarkyonic model, the degrees of freedom inside the Fermi sea are treated as quarks and neutrons exist at the surface of the Fermi sea, where $MR$ curves are controlled mainly by the thickness of neutron Fermi layer. % The QHT and quarkyonic EoSs can be adjusted so as to reproduce radii, tidal deformabilities, pressure and central densities inferred from the NICER analysis better than the nucleonic matter EoS, demonstrating the clear impacts of quark phases. Then, the maximum mass for the quakyonic-matter EoS is considerably larger than that for the QHT-matter EoS.
Y. Yamamoto, N. Yasutake, Th. A. Rijken
2023-09-19T01:15:41Z
http://arxiv.org/abs/2309.10233v1
# Quark phases in neutron stars consistent with implications of NICER ###### Abstract The analyses for the NICER data imply \(R_{2.0M_{\odot}}=12.41^{+1.00}_{-1.10}\) km and \(R_{1.4M_{\odot}}=12.56^{+1.00}_{-1.07}\) km, indicating the lack of significant variation of the radii from \(1.4M_{\odot}\) to \(2.0M_{\odot}\). This feature cannot be reproduced by the hadronic matter due to the softening of equation of state (EoS) by hyperon mixing, indicating the possible existence of quark phases in neutron-star interiors. Two models are used for quark phases: In the quark-hadron transition (QHT) model, quark deconfinement phase transitions from a hadronic-matter EoS are taken into account so as to give reasonable mass-radius (\(MR\)) curves by adjusting the quark-quark repulsions and the density dependence of effective quark mass. In the quarkyonic model, the degrees of freedom inside the Fermi sea are treated as quarks and neutrons exist at the surface of the Fermi sea, where \(MR\) curves are controlled mainly by the thickness of neutron Fermi layer. The QHT and quarkyonic EoSs can be adjusted so as to reproduce radii, tidal deformabilities, pressure and central densities inferred from the NICER analysis better than the nucleonic matter EoS, demonstrating the clear impacts of quark phases. Then, the maximum mass for the quarkyonic-matter EoS is considerably larger than that for the QHT-matter EoS. pacs: 21.30.Cb, 21.30.Fe, 21.65.+f, 21.80.+a, 12.39.Jh, 25.75.Nq, 26.60.+c ## I Introduction In studies of neutron stars (NS), the fundamental role is played by the equation of state (EoS) for neutron star matter. The massive neutron stars with masses over \(2M_{\odot}\) have been reliably established by the observations of NSs J1614\(-\)2230 [1], J0348+0432 [2], J0740+6620 [3] and J0952-0607 [4]. The radius information of NSs have been obtained for the massive NS PSR J0740+6620 with \(2M_{\odot}\) and \(1.4M_{\odot}\) NSs, shown as \(R_{2M_{\odot}}\) and \(R_{1.4M_{\odot}}\), from the analyses for the X-ray data taken by the _Neutron Star Interior Composition Explorer_ (NICER) and the X-ray Multi-Mirror (XMM-Newton) observatory. The analysis of Miller et al. gives \(R_{2.08M_{\odot}}=12.35\pm 0.75\) km and \(R_{1.4M_{\odot}}=12.45\pm 0.65\) km [5]. The analysis of Riley gives \(R_{2.08M_{\odot}}=12.39^{+1.30}_{-0.98}\) km and \(R_{1.4M_{\odot}}=12.33^{+0.76}_{-0.81}\) km [6]. Legred et al. investigate these measurement's implications for the EoS, employing a nonparametric EoS model based on Gaussian processes and combining information from other X-ray and gravitational wave observations [7]. The purpose of this paper is to demonstrate that the radius information of massive NSs give the important constraints for the neutron-star EoSs. In our EoS analysis, the following neutron-star radii are adopted as critical values to be reproduced: \[R_{2.0M_{\odot}} =12.41^{+1.00}_{-1.10}\ \ \mathrm{km}\] \[R_{1.4M_{\odot}} =12.56^{+1.00}_{-1.07}\ \ \mathrm{km} \tag{1}\] with maximum mass \(M_{max}/M_{\odot}=2.21^{+0.31}_{-0.21}\), being given by the analysis by Legred et al.[7]. The median values of \(R_{2M_{\odot}}\) and \(R_{1.4M_{\odot}}\) in the above three references [5][6][7] are only a few hundred meters apart from each other. We set the fitting accuracy to a few hundred meters in our analysis for \(R_{2M_{\odot}}\) and \(R_{1.4M_{\odot}}\). 
Then, the EoS obtained from our analysis are not changed, even if the set of \(R_{2M_{\odot}}\) and \(R_{1.4M_{\odot}}\) in [5] or [6] is used as the criterion instead of Eq.(1) or all three sets in [5][6][7] are used. The key feature found commonly in the three sets is the small variation of radii from \(1.4M_{\odot}\) to \(2M_{\odot}\), namely \(R_{2M_{\odot}}\approx R_{1.4M_{\odot}}\). The reason why the result in [7] is used in our present analysis is because they present the inferred values of maximum masses, radii, tidal deformabilities, pressure and central densities obtained from their analysis. These quantities can be compared with our corresponding results, by which the features of our EoSs are revealed in detail. The hyperon mixing in neutron-star matter brings about a remarkable softening of the EoS and a maximum mass is reduced to a value far less than \(2M_{\odot}\). The EoS softening is caused by changing of high-momentum neutrons at Fermi surfaces to low-momentum hyperons via strangeness non-conserving weak interactions overcoming rest masses of hyperons. In order to derive EoSs for massive NSs, it is necessary to solve this "hyperon puzzle in neutron stars". There have been proposed possible mechanisms: (i) more repulsive hyperon-hyperon interactions in relativistic mean field (RMF) models driven by vector mesons exchanges [8; 9; 10; 11], (ii) repulsive hyperonic three-body forces [12; 13; 14; 15; 16; 17; 18; 19], (iii) appearance of other hadronic degrees of freedom, such as \(\Delta\) isobars [20] or meson condensates [21; 22; 23; 24; 25], (iv) existence of quark phases in high-density regions [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. It should be noted that the criterion for NS radii Eq.(1) is stricter than the con dition of \(M_{max}>2M_{\odot}\) only to solve the "puzzle" and the above mechanisms are needed to be re-investigated under this stricter condition. One of the approaches belonging to (ii) is to assume that three-nucleon repulsions (TNR) [37] work universally among every kind of baryons as three-baryon repulsions (TBR) [12]. In [14; 15; 16], the multi-pomeron exchange potential (MPP) was introduced as a model of universal repulsions among three and four baryons on the basis of the extended soft core (ESC) baryon-baryon interaction model developed by two of the authors (T.R. and Y.Y.) and M.M. Nagels [39; 40; 41]. In the case of this special modeling for hyperonic three-body repulsions, the EoS softening by hyperon mixing is not completely recovered by the above universal repulsions, and the maximum masses become not so large even if universal many-body repulsions increase. As a result, the maximum masses for hyperonic-matter EoS cannot be over \(2M_{\odot}\), as found in [14; 15; 16]: It is difficult that criterion Eq.(1) is realized by this modeling of hadronic-matter EoSs. A simple way to avoid the strong softening of EoS by hyperon mixing is to assume \(\Lambda NN\) repulsions stronger than \(NNN\) repulsions with neglect of \(\Sigma^{-}\) mixing [17]. In this paper, we focus on the mechanism (iv). It is possible to solve the "hyperon puzzle" by taking account of quark deconfinement phase transitions from a hadronic-matter EoS to a sufficiently stiff quark-matter EoS in the neutron-star interiors, namely by studying hybrid stars having quark matter in their cores, where repulsive effects in quark phases are needed to result in massive stars over \(2M_{\odot}\). 
In the Nambu-Jona-Lasinio (NJL) model, for instance, repulsions to stiffen EoSs are given by vector interactions. Then, it is well known that, in order to obtain stiff EoSs, quark-hadron phase transitions should be crossover or at most weakly first-order, because strong first-order transitions soften EoSs remarkably. In [35], they derived the new EoS within the quark-hadron crossover (QHC) framework (3-windows model) so as to reproduce \(R_{2.1M_{\odot}}\approx R_{1.4M_{\odot}}\approx 12.4\) km. Here, the small variation of radii indicates that the pressure grows rapidly while changes in energy density are modest, producing a peak in the speed of sound [35]. In their QHC framework, the EoSs in the quark-hadron mixed region of \(1.5\rho_{0}\sim 3.5\rho_{0}\), playing a decisive role for the resulting \(MR\) curves, are given by the interpolating functions phenomenologically. Then, it is meaningful to study the other modeling for phase transitions in which the mixed regions are modeled explicitly. We investigate how this criterion Eq.(1) can be realized in the case of using the EoS derived from our quark-hadron transition (QHT) model for neutron-star matter in the Brueckner-Hartree-Fock (BHF) framework [36], being different from their 3-windows model. Here, the quark-matter EoS is derived from the two-body quark-quark (\(QQ\)) potentials, in which all parameters are on physical backgrounds with no room for arbitrary changes: They are composed of meson-exchange quark-quark potentials derived by unfolding of the baryon-baryon meson-exchanges, and instanton-exchange, one-gluon-exchange and multi-pomeron exchange potentials. Then, baryonic matter and quark matter are treated in the common BHF framework, where quark-hadron transitions are treated on the basis of the Maxwell condition. In this paper, it is shown that the criterion Eq.(1) can be realized by our QHT model for neutron-star matter, as well as by the QHC model [35], by adjusting the \(QQ\) repulsion to be strong enough and the quark-hadron transition density to be about \(2\rho_{0}\). In our QHT model the BHF framework is used for deriving the quark-matter EoS, which is not a common choice. Our treatment of quark-hadron phase transitions is the same as that in [33], where the NJL model is adopted for quark matter under the mean field approximation. In spite of the difference between quark-matter models, their obtained \(MR\) curves are similar to ours in [36]. Therefore, it is considered that the same conclusions can be derived also by using their QHT model instead of ours. Another type of quark phase in neutron-star interiors is given by the quarkyonic matter [43; 44; 45; 46; 47; 48; 49; 50], where the degrees of freedom inside the Fermi sea are treated as quarks and nucleons exist at the surface of the Fermi sea. The transition from the hadronic-matter phase to the quarkyonic-matter phase is considered to be in second-order. In the quarkyonic matter, the existence of free quarks inside the Fermi sea gives nucleons extra kinetic energy by pushing them to higher momenta, leading to increasing pressure. This mechanism to realize the criterion Eq.(1) is completely different from the QHT matter, in which the essential roles for EoS stiffening are played by the \(QQ\) repulsions. Then, it is valuable to study the characteristic differences between neutron-star mass-radius (\(MR\)) curves obtained from the QHT-matter EoS and the quarkyonic-matter EoS.
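As a purely illustrative aside on the Maxwell-condition matching mentioned above, the sketch below shows how a transition point \(P_{H}(\mu_{c})=P_{Q}(\mu_{c})\) would be located numerically once two pressure curves are known. The functional forms and numbers are invented toy inputs, not the hadronic or QHT EoSs of this paper.

```python
# Toy illustration of the Maxwell condition P_H(mu_c) = P_Q(mu_c).
# The two pressure curves below are made-up stand-ins (NOT the ESC/MPb hadronic
# EoS or the QHT quark EoS); they only show how a crossing point is located.

def p_hadron(mu):                 # toy hadronic pressure (softer growth with mu)
    return 0.8 * (mu - 950.0) ** 1.8 if mu > 950.0 else 0.0

def p_quark(mu):                  # toy quark pressure (stiffer, with an offset)
    return 1.6 * (mu - 1000.0) ** 1.8 if mu > 1000.0 else 0.0

def find_crossing(f, g, lo, hi, tol=1e-6):
    """Bisection on f(mu) - g(mu); assumes one sign change inside [lo, hi]."""
    a, b = lo, hi
    fa = f(a) - g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m) - g(m)
        if (fa < 0) == (fm < 0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

mu_c = find_crossing(p_hadron, p_quark, 1001.0, 2000.0)
print(f"toy transition chemical potential mu_c ~ {mu_c:.1f} MeV")
```

At chemical potentials above the crossing, the stiffer (toy) quark branch has the higher pressure and is therefore the favored phase, which is the logic behind the Maxwell construction used here.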
This paper is organized as follows: In Sect.II, the hadronic-matter EoS (II-A), the quark-matter EoS (II-B) and the quarkyonic-matter EoS (II-C) are formulated on the basis of our previous works, where the BHF frameworks with our \(QQ\) potentials are adopted both for baryonic matter and quark (quarkyonic) matter. Transitions from hadron phases to quark matter (quasyonic) phases are explained. In Sect.III-A, the calculated results are shown for pressures, energy densities and sound velocities. In III-B, the \(MR\) curves of hybrid stars are obtained by solving the Tolman-Oppenheimer-Volkoff (TOV) equation. In III-C, the obtained values of maximum masses, radii, tidal deformabilities, pressure and central densities are compared with those inferred from the NICER-data analysis. The conclusion of this paper is given in Sect.IV. Models of neutron-star matter ### hadronic matter The hadronic matter is defined as \(\beta\)-stable hyperonic nuclear matter including leptons, composed of \(n\), \(p^{+}\), \(\Lambda\), \(\Sigma^{-}\), \(e^{-}\), \(\mu^{-}\). We recapitulate here the hadronic-matter EoS. In the BHF framework, the EoS is derived with use of the ESC baryon-baryon (\(B\!B\)) interaction model [14; 15; 16]. As is well known, the nuclear-matter EoS is stiff enough to assure neutron-star masses over \(2M_{\odot}\), if the strong three-nucleon repulsion (TNR) is taken into account. However, there appears a remarkable softening of EoS by inclusion of exotic degrees of freedom such as hyperon mixing. One of the ideas to avoid this "hyperon puzzle" is to assume that the many-body repulsions work universally for every kind of baryons [12]. In [14; 15; 16], the multi-pomeron exchange potential MPP was introduced as a model of universal repulsions among three and four baryons. This was inspired by the multi-reggeon model to describe CERN-ISR pp-data [38]. The ESC work is mentioned in [39; 40; 41]. In [16] they proposed three versions of MPP (MPa, MPa\({}^{+}\), MPb), where MPa and MPa\({}^{+}\) (MPb) include the three- and four-body (only three-body) repulsions. Their strengths are determined by analyzing the nucleus-nucleus scattering using the G-matrix folding model under the conditions that the saturation parameters are reproduced reasonably. The EoSs for MPa and MPa\({}^{+}\) are stiffer than that for MPb, and maximum masses and radii of neutron stars obtained from MPa, MPa\({}^{+}\) are larger than those from MPb. The important criterion for repulsive parts is the resulting neutron-star radii \(R\) for masses of \(1.4M_{\odot}\): In the case of using MPb, we obtain \(R_{1.4M_{\odot}}\approx 12.4\) km similar to the value in the criterion Eq.(1.1). On the other hand, we have \(R_{1.4M_{\odot}}\approx 13.3\) (13.6) km in the case of MPa (MPa\({}^{+}\)). In this paper, we adopt MPb as three-baryon repulsion: Our nuclear interactions are composed of two-body part \(V_{BB}\) and three-body part \(V_{BBB}\), where \(V_{BB}\) and \(V_{BBB}\) are given by ESC and MPb, respectively. It is worthwhile to say that the three-nucleon repulsion in MPb is stronger than the corresponding one (UIX) in the standard model by APR [37] giving rise to \(R_{1.4M_{\odot}}\approx 11.6\) km [42]. \(BB\) G-matrix interactions \({\cal G}_{BB}\) are derived from \(BB\) bare interactions \(V_{BB}\) or \(V_{BB}+V_{BBB}\)[14]. They are given for each \((BB^{\prime},T,S,P)\) state, \(T\), \(S\) and \(P\) being isospin, spin and parity in a two-body state, respectively, and represented as \({\cal G}^{TSP}_{BB^{\prime}}\). 
The G-matrix interactions derived from \(V_{BB}\) and \(V_{BB}+V_{BBB}\) are called B1 and B2, respectively. In the quarkyonic model, we need only the neutron-neutron sectors, \({\cal G}^{SP}_{nn}\). A single baryon potential is given by \[U_{B}(k) = \sum_{B^{\prime}=n,p,\Lambda,\Sigma^{-}}U_{B}^{(B^{\prime})}(k)\] \[= \sum_{B^{\prime}=n,p,\Lambda,\Sigma^{-}}\sum_{k^{\prime}<k^{(B^{ \prime})}_{F}}\langle kk^{\prime}|{\cal G}_{BB^{\prime}}|kk^{\prime}\rangle\] with \(B=n,p,\Lambda,\Sigma^{-}\). Here, \(\langle kk^{\prime}|{\cal G}_{BB^{\prime}}|kk^{\prime}\rangle\) is a \(BB^{\prime}\) G-matrix element in momentum space, being derived from \(V_{BB}\) or (\(V_{BB}+V_{BBB}\)), and \(k^{(B)}_{F}\) is the Fermi momentum of baryon \(B\). In this expression, spin and isospin quantum numbers are implicit. The baryon energy density is given by \[\varepsilon_{B} = \tau_{B}+\upsilon_{B}\] \[= g_{s}\int_{0}^{k^{(B)}_{F}}\frac{d^{3}k}{(2\pi)^{3}}\left\{ \sqrt{h^{2}k^{2}+M_{B}^{2}}+\frac{1}{2}U_{B}(k)\right\}\,\] where \(\tau_{B}\) and \(\upsilon_{B}\) are kinetic and potential parts of the energy density. In \(\beta\)-stable hadronic matter composed of \(n\), \(p\), \(e^{-}\), \(\mu^{-}\), \(\Lambda\) and \(\Sigma^{-}\), equilibrium conditions are given as (1) chemical equilibrium conditions, \[\mu_{n}=\mu_{p}+\mu_{e} \tag{2.3}\] \[\mu_{\mu}=\mu_{e}\] (2.4) \[\mu_{\Lambda}=\mu_{n}\] (2.5) \[\mu_{\Sigma^{-}}=\mu_{n}+\mu_{e} \tag{2.6}\] (2) charge neutrality, \[\rho_{p}=\rho_{e}+\rho_{\mu}+\rho_{\Sigma^{-}} \tag{2.7}\] (3) baryon number conservation, \[\rho=\rho_{n}+\rho_{p}+\rho_{\Lambda}+\rho_{\Sigma^{-}}. \tag{2.8}\] Expressions for \(\beta\)-stable nucleonic matter composed of \(n\), \(p\), \(e^{-}\) and \(\mu^{-}\) are obtained by omitting hyperon sectors from the above expressions for \(\beta\)-stable baryonic matter. ### Quark-Hadron transition model In our treatment of quark matter, the BHF framework is adopted on the basis of two-body \(QQ\) potentials [36]. Here, correlations induced by bare \(QQ\) potentials are renormalized into coordinate-space G-matrix interactions, being considered as effective \(QQ\) interactions used in quark-matter calculations. Our bare \(QQ\) interaction is given by \[V_{QQ} = V_{EME}+V_{INS}+V_{OGE}+V_{MPP} \tag{2.9}\] where \(V_{EME}\), \(V_{INS}\), \(V_{OGE}\) and \(V_{MPP}\) are the extended meson-exchange potential, the instanton-exchange potential, the one-gluon exchange potential and the multi-pomeron exchange potential, respectively. Parameters in our \(QQ\) potential are chosen so as to be consistent with physical observables. The \(V_{EME}\)\(QQ\) potential is derived from the ESC \(BB\) potential so that the \(QQM\) couplings are related to the \(BBM\) couplings through folding procedures with Gaussian baryonic quark wave functions. In the construction of the relation between \(BBM\) and \(QQM\) couplings, the requirement that the coefficients of the \(1/M^{2}\) expansion should match is based on Lorentz invariance, which fixes the QQM couplings and also determines the (few) extra vertices at the quark level [39]. Then, the \(V_{EME}\)\(QQ\) potential is basically of the same functional expression as the ESC \(BB\) potential. Strongly repulsive components in ESC \(BB\) potentials are described mainly by vector-meson and pomeron exchanges between baryons. This feature persists in the \(V_{EME}\)\(QQ\) potential, which includes the strongly repulsive components originated from vector-meson and pomeron exchanges between quarks. 
Similarly the multi-pomeron exchange potentials among quarks, \(V_{MPP}\), are derived from the corresponding ones among baryons, giving repulsive contributions. Contributions from \(V_{INS}\) and \(V_{OGE}\) in average are attractive and repulsive, respectively. The strength of \(V_{OGE}\) is determined by the quark-gluon coupling constant \(\alpha_{S}\). In [36]\(\alpha_{S}\) is chosen as 0.25, that is \(V_{OGE}(\alpha_{S}=0.25)\), and the three sets are defined as follows: Q0 : \(V_{EME}\), Q1 : \(V_{EME}+V_{INS}+V_{OGE}(\alpha_{S}=0.25)\) Q2 : \(V_{EME}+V_{MPP}+V_{INS}+V_{OGE}(\alpha_{S}=0.25)\). In our QHT model for neutron-star matter, quark-hadron phase transitions occur at crossing points of hadron pressure \(P_{H}(\mu)\) and quark pressure \(P_{Q}(\mu)\) being a function of chemical potential \(\mu\). Positions of crossing points, giving quark-hadron transition densities, are controlled by parameters \(\rho_{c}\) and \(\gamma\) included in our density-dependent quark mass \[M_{Q}^{*}(\rho_{Q})=\frac{M_{0}}{1+\exp[\gamma(\rho_{Q}-\rho_{c})]}+m_{0}+C \tag{10}\] with \(C=M_{0}-M_{0}/[1+\exp(-\gamma\rho_{c})]\) assuring \(M_{Q}^{*}(0)=M_{0}+m_{0}\), where \(\rho_{Q}\) is number density of quark matter, and \(M_{0}\) and \(m_{0}\) are taken as 300 (360) MeV and 5 (140) MeV for \(u\) and \(d\) (\(s\)) quarks. Here, the effective quark mass \(M_{Q}^{*}(\rho_{Q})\) should be used together with \(B(\rho_{Q})=M_{Q}^{*}(0)-M_{Q}^{*}(\rho_{Q})+B_{0}\), meaning the energy-density difference between the perturbative vacuum and the true vacuum. A constant term \(B_{0}\) is added for fine tuning of an onset density. In [36], the values of (\(\rho_{c}\), \(\gamma\)) without \(B_{0}\) are given for each set of Q0, Q1 and Q2. Let us focus on the typical result for Q2+H1 in [36]. The \(QQ\) interaction Q2 is the most repulsive among Q0, Q1 and Q2. The \(BB\) interaction H1 consists of ESC and MPb, and results in the reasonable value of \(R_{1.4M_{\odot}}\). In this case of Q2+H1, we obtain the maximum mass of \(2.25M_{\odot}\) and the reasonable value of \(R_{1.4M_{\odot}}=12.5\) km, in which the quark-hadron transition occurs at density of \(3.5\rho_{0}\). Then, we have \(R_{2.0M_{\odot}}=12.0\) km, being rather smaller than 12.4 km in the criterion Eq.(1). In order to reproduce a larger value of \(R_{2.0M_{\odot}}\approx 12.4\) km, we make \(V_{OGE}\) more repulsive by taking larger values of \(\alpha_{S}=0.36\) and 0.49. It is not suitable for such a purpose to strengthen the \(V_{MPP}\) repulsion, because \(V_{MPP}\) is essentially of three-body interaction and the contributions in low-density region are small. On the other hand, \(V_{OGE}\) is of two-body interaction, and its repulsive contributions are not small even in low density region, being important for a large value of \(R_{2.0M_{\odot}}\). Another condition to make \(R_{2.0M_{\odot}}\) larger is to lower quark-hadron transition densities by adjusting the parameters (\(\rho_{c}\),\(\gamma\),\(B_{0}\)) included in the density-dependent quark mass Eq.(10). 
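For concreteness, a minimal sketch of the density dependence encoded in Eq.(10) is given below. The \(M_{0}\) and \(m_{0}\) values are those quoted above for \(u,d\) quarks; treating \(\rho_{Q}\) in units of \(\rho_{0}\) with \(\gamma=1.2\) and \(\rho_{c}=6.9\) is an assumption made purely for illustration, since the units of \(\gamma\) and \(\rho_{c}\) are not spelled out here, and the \(B_{0}\) fine-tuning term is omitted.

```python
import math

# Minimal sketch of the density-dependent effective quark mass of Eq.(10):
#   M_Q*(rho_Q) = M0 / (1 + exp[gamma (rho_Q - rho_c)]) + m0 + C,
#   C = M0 - M0 / (1 + exp(-gamma rho_c)),  so that M_Q*(0) = M0 + m0.
# M0 = 300 MeV and m0 = 5 MeV are the u,d values quoted above; the unit
# convention for rho_Q (units of rho_0) and the gamma, rho_c choices are
# illustrative assumptions only.

def effective_quark_mass(rho_q, M0=300.0, m0=5.0, gamma=1.2, rho_c=6.9):
    C = M0 - M0 / (1.0 + math.exp(-gamma * rho_c))
    return M0 / (1.0 + math.exp(gamma * (rho_q - rho_c))) + m0 + C

for rho in (0.0, 2.0, 6.9, 14.0):          # rho_Q in units of rho_0 (assumed)
    print(f"rho_Q = {rho:4.1f} rho_0  ->  M_u,d* = {effective_quark_mass(rho):6.1f} MeV")
# The mass falls from ~M0 + m0 (constituent-like) at zero density toward ~m0
# (current-mass-like) well above rho_c, i.e. the effective mass "melts" with density.
```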
We define newly the following three sets with the fixed value of \(\gamma\)=1.2 \[\mbox{Q2 : }V_{EME}+V_{MPP}+V_{INS}+V_{OGE}(\alpha_{S}=0.25)\] \[\mbox{with }\rho_{c}=6.9\rho_{0}\mbox{ and }B_{0}=\)8.5\] \[\mbox{Q3 : }V_{EME}+V_{MPP}+V_{INS}+V_{OGE}(\alpha_{S}=0.36)\] \[\mbox{with }\rho_{c}=6.9\rho_{0}\mbox{ and }B_{0}=\)7.5\] \[\mbox{Q4 : }V_{EME}+V_{MPP}+V_{INS}+V_{OGE}(\alpha_{S}=0.69)\] \[\mbox{with }\rho_{c}=7.5\rho_{0}\mbox{ and }B_{0}=\)10.0 where the values of \(\rho_{c}\) and \(B_{0}\) for each set are chosen so as to give quark-hadron transition densities of \(\sim 2\rho_{0}\). G-matrix interactions \({\cal G}_{qq^{\prime}}\) with \(q,q^{\prime}=u,d,s\) are derived from the above bare \(QQ\) interactions. They are given for each \((qq^{\prime},T,S,P)\) state, \(T\), \(S\) and \(P\) being isospin, spin and parity in a two-body state, respectively, and represented as \({\cal G}_{qq^{\prime}}^{TSP}\). Hereafter, Q2, Q3 and Q4 mean the naming of corresponding \(QQ\) G-matrix interactions, not only of bare \(QQ\) interactions. The \(QQ\) G-matrix interactions are used also in the quarkyonic matter calculations. A single quark potential is given by \[U_{q}(k) = \sum_{q^{\prime}=u,d,s}U_{q}^{(q^{\prime})}(k)=\sum_{q^{\prime}=u, d,s}\sum_{k^{\prime}<k^{\prime}_{F}}\langle kk^{\prime}|{\cal G}_{qq^{\prime}}|kk^{ \prime}\rangle\] with \(q=u,d,s\), where \(k^{q}_{F}\) is the Fermi momentum of quark \(q\). Spin and isospin quantum numbers are implicit. The quark energy density is given by \[\varepsilon_{q} = g_{s}N_{c}\sum_{q=u,d,s}\int_{0}^{k_{Fq}}\frac{d^{3}k}{(2\pi)^{3}}\] \[\left\{\sqrt{\hbar^{2}k^{2}+M_{q}^{2}}+\frac{1}{2}U_{q}(k)\right\}\.\] Fermion spin and quark color degeneracies give rise to \(g_{s}=2\) and \(N_{c}=3\). In order to demonstrate the features of our \(QQ\) interactions (Q2,Q3,Q4), we show the potential energy per particle \(U/A\) as a function of the baryon number density \(\rho_{B}=\frac{1}{3}\rho_{Q}\) in the case of taking \(\rho_{u}=\rho_{d}=\rho_{s}\). In Fig.1, the short-dashed, long-dashed and solid curves are obtained by using Q2, Q3 and Q4, respectively. The repulsions are found to be strong in the order of Q4, Q3, Q2. This difference of repulsions among Q4, Q3 and Q2 comes from the different values of \(\alpha_{S}\) included in \(V_{OGE}\). In the figure, it should be noted that the difference is not small even in the low-density region. In the EoS of \(\beta\)-stable quark matter composed of \(u\), \(d\), \(s\), \(e^{-}\), the equilibrium conditions are given as (1) chemical equilibrium conditions, \[\mu_{d}=\mu_{s}=\mu_{u}+\mu_{e} \tag{13}\] (2) charge neutrality, \[0=\frac{1}{3}(2\rho_{u}-\rho_{d}-\rho_{s})-\rho_{e} \tag{14}\] (3) baryon number conservation, \[\rho_{B}=\frac{1}{3}(\rho_{u}+\rho_{d}+\rho_{s})=\frac{1}{3}\rho_{Q}. \tag{15}\] In order to construct the hybrid EoS including a transition from hadronic phase to quark phase, we use the replacement interpolation method [33][36], being a simple modification of the Maxwell and the Glendenning (Gibbs) constructions [51]. The EoSs of hadronic and quark phases and that of mixed phase are described with the relations between pressures and chemical potentials \(P_{H}(\mu)\), \(P_{Q}(\mu)\) and \(P_{M}(\mu)\), respectively. The critical chemical potential \(\mu_{c}\) for the transition from the hadronic phase to the quark phase is obtained from the Maxwell condition \[P_{Q}(\mu_{c})=P_{H}(\mu_{c})=P_{c}. \tag{16}\] The pressure of the mixed phase is represented by a polynomial ansatz. 
The matching densities \(\rho_{H}\) and \(\rho_{Q}\) are obtained with use of \(\rho(\mu)=dP(\mu)/d\mu\). ### quarkyonic matter In the BHF framework, we derive the EoS of quarkyonic matter composed of neutrons and quarks with flavor \(q=u,d\) in the simplest form by McLerran and Reddy [45]. In the chargeless 2-flavor quarkyonic matter, strongly interacting quarks near the Fermi sea form interacting neutrons, and the remaining d and u quarks fill the lowest momenta up to \(k_{Fu}\) and \(k_{Fd}\), respectively. The quark mass is taken to be \(M_{q}=M_{n}/3\) constantly, \(M_{n}\) being the neutron mass. In calculations of quarkyonic matter, we use B1 (\(V_{nn}\)) and B2 (\(V_{nn}\)+\(V_{nnn}\)) for nuclear interactions, and Q0 for \(QQ\) interactions for simplicity. The total baryon number density is given by \[\rho_{B} = \rho_{n}+\frac{N_{c}}{3}(\rho_{u}+\rho_{d}) \tag{17}\] \[= \frac{g_{s}}{6\pi^{2}}\left[k_{Fn}^{3}-k_{0n}^{3}+\frac{N_{c}}{3 }(k_{Fu}^{3}+k_{Fd}^{3})\right]\,\] where \(k_{Fn}\), \(k_{Fu}\) and \(k_{Fd}\) are the Fermi momenta of neutrons and u and d quarks, respectively. Fermion spin and quark color degeneracies give rise to \(g_{s}=2\) and \(N_{c}=3\). Neutrons are restricted near the Fermi surface by \(k_{0n}\), being assumed as \[k_{0n}=k_{Fn}-\Delta_{qyc}\] \[\Delta_{qyc}=\frac{\Lambda^{3}}{\hbar c^{3}k_{Fn}^{2}}+\kappa \frac{\Lambda}{N_{c}^{2}\hbar c}\, \tag{18}\] where \(\Delta_{qyc}\) for the thickness of Fermi layer includes the two parameters \(\Lambda\) and \(\kappa\). In this work, we take the fixed value of \(\kappa=0.3\). Then, \(k_{Fd}\) and \(k_{Fu}\) are related to \(k_{0n}\) by \(k_{Fd}=\frac{1}{N_{c}}k_{0n}\) and \(k_{Fu}=2^{-1/3}k_{Fd}\). A single neutron potential is given by \[U_{n}(k) = \sum_{k_{0n}<k^{\prime}<k_{Fn}}\langle kk^{\prime}|{\cal G}_{nn}| kk^{\prime}\rangle \tag{19}\] with \(nn\) G-matrix interactions \({\cal G}_{nn}\). The neutron energy density is given by \[\varepsilon_{n} = \tau_{n}+\upsilon_{n}\] \[= g_{s}\int_{k_{0n}}^{k_{Fn}}\frac{d^{3}k}{(2\pi)^{3}}\left\{ \sqrt{\hbar^{2}k^{2}+M_{n}^{2}}+\frac{1}{2}U_{n}(k)\right\}\.\] Additionally, another form of the neutron potential energy density is defined as \[\bar{\upsilon}_{n}=g_{s}\int_{0}^{k_{n}}\frac{d^{3}k}{(2\pi)^{3}}\left\{\frac{ 1}{2}U_{n}(k)\right\}\, \tag{21}\] which is used in [45] instead of \(\upsilon_{n}\). Figure 1: (Color online) Potential energies per particle \(U/A\) as a function of the baryon number density \(\rho_{B}\) in the case of \(\rho_{u}=\rho_{d}=\rho_{s}\). The short-dashed, long-dashed and solid curves are obtained by using Q2, Q3 and Q4, respectively. Single quark potentials for \(q=u,d\) are given by \[U_{q}(k) = \sum_{q^{\prime}=u,d}U_{q}^{(q^{\prime})}(k) \tag{22}\] \[=\sum_{q^{\prime}=u,d}\sum_{k^{\prime}<k_{Fq}}\langle kk^{\prime}| \mathcal{G}_{qq^{\prime}}|kk^{\prime}\rangle\] \[U_{q}^{(n)}(k) = \sum_{k_{0n}<k^{\prime}<k_{Fn}}\langle kk^{\prime}|\mathcal{G}_{ qn}|kk^{\prime}\rangle \tag{23}\] with G-matrix interactions \(\mathcal{G}_{qq^{\prime}}\) and \(\mathcal{G}_{qn}\). Here, \(\mathcal{G}_{qn}\) is the quark-neutron (\(Qn\)) interactions: We assume the simple model in which the potentials \(\mathcal{G}_{qq^{\prime}}\) are folded into the potentials \(\mathcal{G}_{qn}\) with Gaussian baryonic quark wave functions. In Eqs.(19)(22)(23) spin quantum numbers are implicit. 
The quark energy density is given by \[\varepsilon_{q} = g_{s}N_{c}\sum_{q=u,d}\int_{0}^{k_{Fq}}\frac{d^{3}k}{(2\pi)^{3}}\] \[\left\{\sqrt{\hbar^{2}k^{2}+M_{q}^{2}}+\frac{1}{2}U_{q}(k)+U_{qn}(k)\right\}\,\] where values of \(k_{Fq}\) are determined by \[N_{c}k_{Fq}=k_{0n}. \tag{25}\] Thus, our total energy density is given by \[\varepsilon=\varepsilon_{n}+\varepsilon_{d}+\varepsilon_{u}. \tag{26}\] The chemical potential \(\mu_{i}\) (\(i=n,d,u\)) and pressure \(P\) are expressed as \[\mu_{i}=\frac{\partial\varepsilon_{i}}{\partial n_{i}}\, \tag{27}\] \[P=\sum_{i=n,d,u}\mu_{i}n_{i}-\varepsilon\, \tag{28}\] where \(\frac{\partial\varepsilon_{i}}{\partial n_{i}}=\frac{\partial\varepsilon_{i}}{\partial n_{B}}\frac{\partial n_{B}}{\partial n_{i}}\). In our model, the phase transition from \(\beta\)-stable nucleonic matter to the quarkyonic matter occurs in second-order, resulting in the hybrid EoS including hadronic and quarkyonic EoSs. Then, the transition densities are controlled mainly by the parameter \(\Lambda\): In this work, we choose the three values of \(\Lambda\)=380, 350 and 320 MeV with the fixed value of \(\kappa=0.3\). The transition densities for these values are \(0.28\sim 0.38\) fm\({}^{-3}\) (\(0.28\sim 0.36\) fm\({}^{-3}\)) in the case of using B1 (B2) for nuclear interactions. Hereafter, when a value of \(\Lambda\)=380 MeV is used, for instance, it is denoted as \(\Lambda\)380. ## III Results and discussion ### EoS In Fig.2, pressures \(P\) are drawn as a function of baryonic number density \(\rho_{B}\). The dot-dashed curve is for the \(\beta\)-stable nucleonic-matter EoS, and the dotted one is for the \(\beta\)-stable hadronic-matter EoS with hyperon mixing. The latter is substantially below the former, demonstrating the EoS softening by hyperon mixing. Thin (thick) solid curves in the upper side are pressures in the quarkyonic matter for \(\Lambda\)350 and \(\Lambda\)320 (\(\Lambda\)380) with use of B1 for nuclear interactions. At the crossing points with the dot-dashed curve in the low-density side, there occur second-order transitions from \(\beta\)-stable nucleonic to quarkyonic phases: The transition densities \(\rho_{t}\) are 0.38, 0.33, 0.28 fm\({}^{-3}\) (\(2.2\rho_{0}\), \(1.9\rho_{0}\), \(1.6\rho_{0}\)) in the cases of \(\Lambda\)380, \(\Lambda\)350 and \(\Lambda\)320, respectively. Figure 3: (Color online) Pressures \(P\) as a function of the energy density \(\varepsilon\). The dot-dashed (dotted) curves are for \(\beta\)-stable nucleonic (hadronic) matter. Thin (thick) solid curves show pressures in quarkyonic phases for \(\Lambda\)350 and \(\Lambda\)320 (\(\Lambda\)380) with B1. The short-dashed curve is for the QHT model with Q4. Figure 2: (Color online) Pressures \(P\) as a function of baryonic number density \(\rho_{B}\). The dot-dashed (dotted) curve is for \(\beta\)-stable nucleonic (hadronic) matter. Upper thin (thick) solid curves are pressures in the quarkyonic matter for \(\Lambda\)350 and \(\Lambda\)320 (\(\Lambda\)380) with B1. Lower thin (thick) short-dashed curves are for the QHT matter with Q2 and Q3 (Q4). Thin (thick) short-dashed curves are for the QHT models with Q2 and Q3 (Q4). It should be noted that pressures in the quarkyonic matter increase more rapidly with density than those in the QHT matter. As discussed later, the rapid growth of pressure with density in the range of \(2\rho_{0}\sim 4\rho_{0}\) is an important feature of the quarkyonic model.
This rapid increase of pressure at onset of the quarkyonic phase influences significantly on neutron-star \(MR\) curves. In Fig.3, pressures \(P\) are drawn as a function of the energy density \(\varepsilon\), which are related closely to neutron-star \(MR\) curves. The dot-dashed (dotted) curve shows pressures in \(\beta\)-stable nucleonic (hadronic) matter. Thin (thick) solid curves show pressures in quarkyonic matter for \(\Lambda\)350 and \(\Lambda\)320 (\(\Lambda\)380) with B1. The short-dashed curve is for the QHT matter with Q4. Though the curves for Q4 and \(\Lambda\)380 are rather similar to each other in comparison with the corresponding curves in Fig.2, the former is still less steep than the latter in the region of low energy density. As shown later, the EoSs for the QHT model Q4 and the quarkyonic model \(\Lambda\)380 lead to the neutron-star \(MR\) curves consistent with the criterion Eq.(1). In Fig.4, sound velocities are drawn as a function of \(\rho_{B}\). The dot-dashed curve is sound velocities in \(\beta\)-stable nucleonic matter. Solid curves are those in quarkyonic matter for \(\Lambda\)380, \(\Lambda\)350 and \(\Lambda\)320 with B1. There appear peak structures in the solid curves, being related to rapid increasing of pressures in the range of \(2\rho_{0}\sim 4\rho_{0}\). The dashed curve is sound velocities in the QHT matter with Q4 and the dotted one is those in \(\beta\)-stable hadronic matter with hyperon mixing, in which there appears no peak structure. The dashed curve becomes \(c_{s}>c\) in high-density region. Also, the peak regions of solid curves become \(c_{s}>c\), if B2 is used instead of B1 for nuclear parts. In such regions of \(c_{s}>c\), sound velocities are approximated to be \(c_{s}=c\). It is interesting to notice that the peak structures in our quarkyonic-matter results are somewhat similar to those for the QHC-matter EoS (QHC21) found in [35]. Our QHT-matter EoS gives no peak structure in sound velocities, being different from both of them. In the left panel of Fig.5, solid curves show pressures in quarkyonic matter for \(\Lambda\)380 in the cases of using B1 and B2 for nuclear interactions, and short-dashed (dashed) curves are partial pressures of neutrons (quarks) in respective cases. The dot-dashed curve is pressures in \(\beta\)-stable nucleonic matter. Pressures in quarkyonic matter are found to be completely dominated by neutron partial pressures. In order to reveal the reason why neutron pressures in quarkyonic matter are far higher than those in \(\beta\)-stable nucleonic matter, we show the neutron chemical potentials in the cases of using B1 and B2 for nuclear interactions: In the right panel of Fig.5, neutron chemical potentials \(\mu_{n}\) are drawn as a function of \(\rho_{B}\). Lower and upper solid curves give neutron chemical potentials in quarkyonic matter for \(\Lambda\)380 in the cases of using B1 and B2, respectively. The dot-dashed curve gives neutron chemical potential in \(\beta\)-stable nucleonic matter. The neutron chemical potentials in quarkyonic matter are far higher than those in the \(\beta\)-stable nucleonic matter, which makes neutron pressures in the former far higher than those in the latter. The reason of higher chemical potentials in the quarkyonic matter is because the existence of free quarks inside the Fermi sea gives nucleons extra kinetic energies by pushing them to higher momenta [45]. ### \(Mr\) diagrams We have the two types of hybrid EoSs, the QHT-matter EoS and the quarkyonic-matter EoS. 
They are combined with the \(\beta\)-stable nucleonic-matter EoS connected smoothly to the crust EoS [52; 53] in the low-density side. The \(MR\) relations of hybrid stars can be obtained by solving the TOV equations with these hybrid EoSs. In Fig.6, star masses are given as a function of radius \(R\). The dot-dashed curves are obtained by the \(\beta\)-stable nucleonic matter EoS. In the left panel, thin (thick) solid curves are obtained by the QHT-matter EoSs with Q2 and Q3 (Q4). The dotted curve is by the hadronic-matter EoS including hyperons. In the cases of Q2, Q3 and Q4, the maximum masses are \(M_{max}/M_{\odot}\)= 2.23, 2.30, 2.40, respectively, and the radii at 2.0\(M_{\odot}\) are 11.8 km, 12.2 km, 12.5 km, respectively. In the right panel, thin (thick) solid curves are obtained by the quarkyonic-matter EoSs for \(\Lambda\)350 and \(\Lambda\)320 (\(\Lambda\)380) with use of B1 for nuclear interactions. In the cases of \(\Lambda\)380, \(\Lambda\)350 and \(\Lambda\)320, the maximum masses are \(M_{max}/M_{\odot}\)= 2.64, 2.79, 2.76, respectively, and the radii at 2.0\(M_{\odot}\) are 12.6 km, 13.1 km, 13.5 km, respectively. Figure 4: (Color online) The square of the sound speed \(c_{s}^{2}\) in units of \(c^{2}\) as a function of baryonic number density \(\rho_{B}\). The dot-dashed (dotted) curve is that in \(\beta\)-stable nucleonic (hadronic) matter. Solid curves are those in quarkyonic matter for \(\Lambda\)380, \(\Lambda\)350 and \(\Lambda\)320 with B1. The dashed curve is for the QHT matter with Q4. In both panels, the horizontal lines indicate \(R_{1.4M_{\odot}}=12.56^{+1.00}_{-1.07}\) km and \(R_{2.0M_{\odot}}=12.41^{+1.00}_{-1.10}\) km, and the rectangle indicates the region of mass \(M_{max}/M_{\odot}=2.21^{+0.31}_{-0.21}\)[7]. The thick solid curve for Q4 in the left panel and that for \(\Lambda\)380 in the right panel are found to be consistent with the criterion Eq.(1.1), and the key features of \(R_{2M_{\odot}}\approx R_{1.4M_{\odot}}\) are found in these cases. Then, it should be noted that the maximum mass \(2.64M_{\odot}\) for \(\Lambda\)380 is substantially larger than the value \(2.40M_{\odot}\) for Q4. The reason for such a difference between maximum masses can be understood by comparing the \(P(\rho_{B})\) curves in Fig.2, where the solid curve for \(\Lambda\)380 increases more rapidly at onset of the quarkyonic matter than the dashed curve for Q4 at onset of quark matter. This means that the stiffness for the former is larger than that for the latter. In the case of QHT matter, it is not possible to obtain such a rapid increase of \(P(\rho_{B})\) in the low-density region, even if the \(QQ\) repulsions are strengthened. In the case of hadronic (nucleonic) matter, shown by the dotted (dot-dashed) curve in the left panel, the maximum mass is \(1.82M_{\odot}\) (\(2.19M_{\odot}\)). The reduction of \(0.37M_{\odot}\) is due to the EoS softening by hyperon (\(\Lambda\) and \(\Sigma^{-}\)) mixing. This softening is mainly caused by \(\Sigma^{-}\) mixing: If only \(\Lambda\) mixing is taken into account, the maximum mass is obtained as \(2.06M_{\odot}\), being close to the value of \(2.19M_{\odot}\) without hyperon mixing (dot-dashed curve). Thus, massive stars with \(M>2M_{\odot}\) cannot be obtained by the hadronic matter EoSs with hyperon (\(\Lambda\) and \(\Sigma^{-}\)) mixing [14; 15; 16]. On the other hand, the value of \(R_{1.4M_{\odot}}\) is \(12.4\) (\(12.5\)) km in the case of hadronic (nucleonic) matter, which means that \(R_{1.4M_{\odot}}\) does not depend much on the hyperon mixing. In Fig.7, star masses are given as a function of central baryon density \(\rho_{Bc}\). The dot-dashed curves are by the \(\beta\)-stable nucleonic matter EoS. The solid curve is obtained by the quarkyonic-matter EoS for \(\Lambda\)380 with B1, and the dashed curve is by the QHT-matter EoS for Q4, where the onset density in the former (latter) is \(0.39\) (\(0.33\)) fm\({}^{-3}\). Both of them are consistent with Eq.(1.1), but the former mass curve for \(\rho_{Bc}\) is considerably above the latter one, as well as the corresponding \(MR\) curves. In Fig.8, star masses are given as a function of radius \(R\). The solid curve is obtained by the quarkyonic-matter EoS for \(\Lambda\)380 with use of B1 (\(V_{nn}\)) for nuclear interactions, given also in Fig.6. Dashed and short-dashed curves are by the quarkyonic-matter EoSs for \(\Lambda\)380 and \(\Lambda\)400, respectively, in the case of using B2 (\(V_{nn}\)+\(V_{nnn}\)) instead of B1. The difference between solid and dashed curves demonstrates the effect of the three-neutron repulsion \(V_{nnn}\), giving the larger maximum mass and larger value of \(R_{2.0M_{\odot}}\). The short-dashed curve for \(\Lambda\)400 indicates that this effect of \(V_{nnn}\) to increase mass and radius is cancelled out by taking larger values of \(\Lambda\). In Fig.9, star masses are given as a function of radius \(R\). The solid curve is obtained by the quarkyonic-matter EoS for \(\Lambda\)380 with \(\kappa=0.3\) in the case of using B1, given also in Fig.6. The dashed curve is obtained by the approximation used in [45], where the \(QQ\) interactions are neglected and the quark energy density Eq.(2.24) is replaced by the kinetic energy density. Figure 5: (Color online) In the left panel, solid curves are pressures \(P\) in quarkyonic phases as a function of baryonic number density \(\rho_{B}\) for \(\Lambda\)380 in the cases of using B1 and B2, and short-dashed (dashed) curves are partial pressures of neutrons (quarks) in respective cases. The dot-dashed curve is for \(\beta\)-stable nucleonic matter. In the right panel, solid (dot-dashed) curves are neutron chemical potentials \(\mu_{n}\) in quarkyonic (\(\beta\)-stable nucleonic) phases as a function of \(\rho_{B}\) for \(\Lambda\)380 in the cases of using B1 and B2. The dot-dashed curve gives neutron chemical potential in \(\beta\)-stable nucleonic matter. Then, the difference between short-dashed and dashed curves is due to this approximation. The short-dashed curve is obtained by taking \(\kappa=0.4\) under this approximation. The similarity between solid and short-dashed curves means that the deviation due to this approximation is canceled out by adjusting the value of \(\kappa\). In the same case of \(\Lambda\)380 and \(\kappa=0.3\) with B1, the dotted curve is obtained by replacing the potential energy density in Eq.(20) to Eq.(21), being the approximated treatment in [45]. This approximation to use Eq.(21) is found to reduce masses and to increase radii. ### Discussion In [7], they present the neutron-star properties such as maximum mass, radius, tidal deformability, pressure and central density inferred from their analysis, for which the median and 90% highest-probability-density credible regions are given. From Table II of [7], we choose the quantities in the case of w/J0740+6620 Miller+ in order to compare with the corresponding values obtained from our QHT-matter and the quarkyonic matter EoSs. In Table 1, tabulated are maximum masses \(M_{max}\), pressures \(p\) at \(\rho_{0}\), \(2\rho_{0}\) and \(6\rho_{0}\), radii \(R\) and dimensionless tidal deformabilities \(\Lambda\) at \(1.4M_{\odot}\) and \(2.0M_{\odot}\), central densities \(\rho_{c}\) at \(1.4M_{\odot}\), \(2.0M_{\odot}\) and \(M_{max}\). Here, our results are for the \(\beta\)-stable nucleonic matter EoS denoted as NUC, the QHT-matter EoS Q4 and the quarkyonic matter EoS \(\Lambda\)380. These EoSs are adjusted so as to reproduce \(R_{1.4M_{\odot}}\) with an accuracy of a few hundred meters. Then, the key feature of \(R_{2M_{\odot}}\approx R_{1.4M_{\odot}}\) is found in the cases of Q4 and \(\Lambda\)380 EoSs, contrastively to the case of the nucleonic EoS giving \(R_{2M_{\odot}}<R_{1.4M_{\odot}}\). The values of \(R_{2.0M_{\odot}}\), central densities and tidal deformabilities for Q4 and \(\Lambda\)380 EoSs are far closer to the median values than those for the nucleonic EoS, demonstrating the clear impacts of quark phases in Q4 and \(\Lambda\)380 EoSs. The deviations from the median values in the latter are considerably larger than those in the former. Especially, the values of \(\Lambda_{1.4}\) and \(\Lambda_{2.0}\) for the nucleonic EoS are noted to be out of the 90% credible regions. In the case of the quarkyonic matter EoS for \(\Lambda\)380, the values of \(M_{max}\) and \(p(6\rho_{0})\) are found to be far larger than those for the nucleonic EoS. It is interesting that such a large value of \(M_{max}\) can be obtained straightforwardly from the quarkyonic-matter EoS, considering the implication of the large mass (\(2.35\pm 0.17\))\(M_{\odot}\) for PSR J0952-0607 [4]. The reason why a large value of \(M_{max}\) is obtained in the case of the quarkyonic matter EoS is because the pressure rises rapidly in the region of \(\rho_{B}\sim 2\rho_{0}\), as found in Fig.2. In the McLerran-Reddy model of the quarkyonic matter, the resulting EoS is mainly controlled by the one parameter \(\Delta_{qyc}\) for the Fermi-layer thickness. Then, it is difficult to reproduce simultaneously \(M_{max}=2.2M_{\odot}\) and \(R_{2.0M_{\odot}}=12.4\) km. ## IV Conclusion The observed masses and radii of neutron stars give constraints on the dense matter EoSs and resulting \(MR\) diagrams. In this sense, the observations of massive stars over \(2M_{\odot}\) and the NICER implication of \(R_{2M_{\odot}}\approx R_{1.4M_{\odot}}\) are critically important for restricting neutron-star matter EoSs. In the case of hadronic matter, even if the nucleonic matter EoS is constructed so as to be stiff enough to give the maximum mass over \(2M_{\odot}\), the hyperon mixing brings about a remarkable softening of the EoS.
The EoS-softening by hyperon mixing can be reduced, for instance, by introducing many-body repulsions which work universally for every kind of baryons. However, such a repulsive effect does not cancel out completely the EoS softening by hyperon mixing: In the case of hadronic matter EoS with hyperon mixing, it is difficult to obtain maximum masses over \(2M_{\odot}\). The most promising approach to solve this "hyperon puzzle" is to assume the existence of quark phases in inner cores of neutron stars, namely hybrid stars having quark matter in their cores. When quark deconfinement phase transitions from a hadronic-matter EoS to a sufficiently stiff quark-matter EoS are taken into account in the neutron-star interiors, repulsive effects such as \(QQ\) repulsions in quark phases are needed in order to obtain sufficiently stiff EoSs resulting in massive hybrid stars with masses over \(2M_{\odot}\). In our QHT matter, it is possible to reproduce maximum masses over \(2M_{\odot}\) consistently with the NICER implication, where the \(QQ\) repulsion is taken to be strong enough and the quark-hadron transition density is adjusted so as to be about \(2\rho_{0}\) by tuning of the density dependence of effective quark mass. \begin{table} \begin{tabular}{|c|c c c c|} \hline & NUC & Q4 & \(\Lambda\)380 & Ref.[7] \\ \hline \(M_{max}/M_{\odot}\) & 2.19 & 2.40 & 2.64 & \(2.21^{+0.31}_{-0.21}\) \\ \(p(\rho_{0})\) (\(10^{33}\)dyn/cm\({}^{2}\)) & 5.27 & 5.27 & 5.27 & \(4.30^{+3.37}_{-3.80}\) \\ \(p(2\rho_{0})\) (\(10^{34}\)dyn/cm\({}^{2}\)) & 2.76 & 5.09 & 4.42 & \(4.38^{+2.46}_{-2.96}\) \\ \(p(6\rho_{0})\) (\(10^{35}\)dyn/cm\({}^{2}\)) & 6.94 & 12.0 & 22.6 & \(7.41^{+5.87}_{-4.18}\) \\ \(R_{1.4M_{\odot}}\) (km) & 12.5 & 12.7 & 12.5 & \(12.56^{+1.00}_{-1.07}\) \\ \(R_{2.0M_{\odot}}\) (km) & 11.8 & 12.5 & 12.6 & \(12.41^{+1.00}_{-1.10}\) \\ \(R_{2.0M_{\odot}}-R_{1.4M_{\odot}}\) (km) & \(-0.72\) & \(-0.14\) & \(+0.03\) & \(-0.12^{+0.83}_{-0.85}\) \\ \(\Lambda_{1.4}\) & 779 & 525 & 473 & \(507^{+234}_{-242}\) \\ \(\Lambda_{2.0}\) & 128 & 46 & 49 & \(44^{+34}_{-30}\) \\ \(\rho_{c}(1.4M_{\odot})\) (\(10^{14}\)g/cm\({}^{3}\)) & 7.9 & 6.6 & 6.8 & \(6.7^{+1.7}_{-1.3}\) \\ \(\rho_{c}(2.0M_{\odot})\) (\(10^{14}\)g/cm\({}^{3}\)) & 12. & 9.1 & 8.0 & \(9.7^{+3.6}_{-3.1}\) \\ \(\rho_{c}(M_{max})\) (\(10^{15}\)g/cm\({}^{3}\)) & 1.8 & 1.6 & 1.3 & \(1.5^{+0.3}_{-0.4}\) \\ \hline \end{tabular} \end{table} Table 1: Maximum masses \(M_{max}\), pressures \(p\) at \(\rho_{0}\), \(2\rho_{0}\) and \(6\rho_{0}\), radii \(R\) and tidal deformabilities \(\Lambda\) at \(1.4M_{\odot}\) and \(2.0M_{\odot}\), central densities \(\rho_{c}\) at \(1.4M_{\odot}\), \(2.0M_{\odot}\) and \(M_{max}\). Results for the \(\beta\)-stable nucleonic matter EoS denoted as NUC, the QHT-matter EoS Q4 and the quarkyonic matter EoS \(\Lambda\)380 are compared with the values taken from [7]. Figure 9: (Color online) Star masses as a function of radius \(R\). The dot-dashed curves are by the \(\beta\)-stable nucleonic matter EoS. The solid curve is obtained by the quarkyonic-matter EoS for \(\Lambda\)380 with \(\kappa=0.3\) in the case of using B1. The dashed (short-dashed) curve is for \(\Lambda\)380 with \(\kappa=0.3\) (\(\kappa=0.4\)) by the approximation to neglect potential sectors in quark energy densities. The dotted curve is obtained by replacing the potential energy density in Eq.(20) to Eq.(21). The horizontal lines indicate \(R_{1.4M_{\odot}}=12.56^{+1.00}_{-1.07}\) km and \(R_{2.0M_{\odot}}=12.41^{+1.00}_{-1.10}\) km.
In the quarkyonic matter, the degrees of freedom inside the Fermi sea are treated as quarks, and nucleons exist at the surface of the Fermi sea. The existence of free quarks inside the Fermi sea gives nucleons extra kinetic energy by pushing them to higher momenta. This mechanism of increasing pressure is completely different from the above mechanism of EoS stiffening by strong \(QQ\) repulsions in the QHT matter. In calculations of \(MR\) diagrams with the quarkyonic-matter EoS, the critical quantity is the thickness \(\Delta_{qyc}\) of the Fermi layer controlled by the parameters \(\Lambda\) and \(\kappa\). With the reasonable choice of these parameters, the \(MR\) curves of quarkyonic hybrid stars are obtained so as to be consistent with the NICER implication. As well as \(R_{2.0M_{\odot}}\), central densities and tidal deformabilities are inferred from the analysis of the NICER data. The QHT-matter and quarkyonic EoSs can be adjusted so as to reproduce these inferred quantities far closer to the median values than those for the nucleonic matter EoS, demonstrating the clear impacts of quark phases in these cases. Thus, the reasonable \(MR\) curves of neutron stars can be derived from both QHT-matter and quarkyonic-matter EoSs, having completely different mechanisms to stiffen EoSs. However, when both EoSs are adjusted so as to be consistent with the NICER implication, the maximum mass for the quarkyonic-matter EoS is considerably larger than that for the QHT-matter EoS. ###### Acknowledgements. The authors would like to thank D. Blaschke for valuable comments and fruitful discussions.
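The \(MR\) curves discussed above are generated by integrating the TOV equation with the hybrid EoSs. As a minimal, self-contained illustration of that final step, the sketch below produces a single \((M,R)\) point from an assumed EoS and central density; the \(\Gamma=2\) polytrope and all numerical values are toy stand-ins, not the QHT or quarkyonic EoSs of this paper.

```python
# Minimal TOV integration sketch: one (mass, radius) point from an assumed EoS
# and a chosen central density. The Gamma = 2 polytrope is a toy stand-in; a
# real MR curve is obtained by repeating this over many central densities with
# the tabulated hybrid EoS.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
MSUN = 1.989e30      # kg

K, GAMMA = 6.0e-3, 2.0                    # toy polytrope P = K * rho^Gamma (SI units)
def pressure(rho): return K * rho ** GAMMA
def density(p):    return (p / K) ** (1.0 / GAMMA)

def tov_point(rho_c, dr=1.0):
    """Integrate dm/dr and the TOV dP/dr outward with a simple Euler step."""
    r, m, p = dr, 0.0, pressure(rho_c)
    while p > 1e22:                       # stop near the (toy) stellar surface
        rho = density(p)                  # toy mass density from the polytrope
        dm = 4.0 * math.pi * r**2 * rho * dr
        dp = (-G * (rho + p / C**2) * (m + 4.0 * math.pi * r**3 * p / C**2)
              / (r**2 * (1.0 - 2.0 * G * m / (C**2 * r)))) * dr
        m, p, r = m + dm, p + dp, r + dr
    return m / MSUN, r / 1.0e3            # (M in solar masses, R in km)

mass, radius = tov_point(rho_c=1.0e18)    # toy central density in kg/m^3
print(f"toy star: M ~ {mass:.2f} Msun, R ~ {radius:.1f} km")
```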
2309.00020
Reply to the comment on "Quantum principle of relativity"
We discuss critical remarks raised by Horodecki towards our work on the connection between superluminal extension of special relativity and fundamental aspects of quantum theory.
Andrzej Dragan, Artur Ekert
2023-08-31T08:14:12Z
http://arxiv.org/abs/2309.00020v1
# Reply to the comment on "Quantum principle of relativity" ###### Abstract We discuss critical remarks raised by Horodecki [8] towards our work on the connection between superluminal extension of special relativity and fundamental aspects of quantum theory. In a recent paper [1], we demonstrated that extending special relativity to superluminal frames of reference inevitably leads to fundamental indeterminacy known from quantum theory. Furthermore, we argued that within this extension, motion along multiple paths also becomes inevitable, drawing parallels with quantum superpositions. Finally, we established that a probabilistic and covariant description of such motions necessitates the use of complex probability amplitudes. These claims elicited several concerns by Grudka and Wojcik [2] as well as Del Santo and Horvat [3], which we have previously addressed [4; 5]. Recent questions and critiques posed by Horodecki [8] are addressed below. Horodecki's main concern questions whether extended special relativity alone can be used to deduce quantum theory in its entirety, or if the proposed "quantum principle of relativity" remains incomplete in this regard. Our short answer is that we don't know yet. While we've demonstrated that some of the most profound properties of quantum theory, such as its inherent randomness and the superposition principle, can be deduced from extended relativity, this doesn't imply that all formal aspects of the theory can be similarly derived. In fact, we've illustrated that the probability amplitudes formalism represents merely the simplest covariant solution, leaving room for other potential descriptions. Horodecki references Heisenberg's classification of possible universes based on the values of physical constants \(\frac{1}{c}\) and \(\hbar\). Specifically, he considers the scenario where \(\frac{1}{c}=0\) and \(\hbar\neq 0\), representing a quantum but not relativistic model of reality, arguing that the universe could be quantum, without being relativistic. However, taking a low-energy limit of Dirac's theory and arriving at the approximate Pauli equation does not mean that the resulting theory is truly non-relativistic. In our work we have argued that the reason we have to consider probabilistic description involving superpositions is due to relativity. At this stage it is secondary, whether the dynamical equation is strict, or only approximate. It is best illustrated by the fact that the "non-relativistic" Pauli theory still involves spin with the gyromagnetic factor \(g=2\) which is truly relativistic. It is also in principle possible to imagine a universe, in which the speed of light is infinite. However this does not invalidate our claims, that quantum effects are a consequence of relativity, either. To show that, let us draw an analogy. Elliptical trajectories of planets arise from Newton's law of gravitation. However, it's conceivable for a universe to exhibit such trajectories even in the absence of gravity, driven by entirely different, alternative laws of physics. Obviously, this doesn't detract from the earlier assertion that Kepler's laws directly originate from Newtonian gravity. It is conceivable that in an alternative universe quantum theory might have emerged "out of the blue". However, our reasoning suggests that, in our universe, several fundamental aspects of quantum theory are a consequence of relativity. 
This can also lead to a speculation that fundamental constants may not be entirely independent, in particular \(\lim_{c\rightarrow\infty}\hbar=0\). Another observation made by Horodecki is that there are fundamentally two types of indeterministic events in quantum theory: those due to spontaneous decay processes, and those that occur within the process of measurement. Horodecki writes, "We have no convincing evidence that measurement randomness can always be attributed to the randomness associated with particle decays." If we interpret "particle decays" more broadly as particle interactions (considering that for tachyons a decay can be Lorentz-transformed into a collision), this statement becomes contentious. Presently, our understanding of physics hinges on elementary interactions within the standard model. Beyond gravity, there are no other known forces that could account for the emerging laws of physics. For example, photo-avalanche detectors produce unpredictable readings, initiated with an interaction between a single photon and a single electron, leading to a microscopic electric current which is then amplified by the detection mechanism. Consequently, there is no reason to reject the claim that, according to the present understanding of physics, quantum unpredictability can be traced back to the unpredictability of individual particle interactions. And, as we argued, the unpredictability of the latter can be understood using relativistic arguments. The next question raised by Horodecki is whether the laws of physics, such as the Born rule, can remain consistent in all reference frames, including superluminal ones, in the 1+3 dimensional case, when the former and the latter can be distinguished (not being fully invariant). The ability to differentiate between conventional (subluminal) reference frames is not a concern when considering non-inertial frames of reference. For instance, in the Rindler frame, which corresponds to a relativistically uniformly accelerated observer, the Born rule continues to apply seamlessly. Distinguishability of that frame from inertial frames is not a problem. Hence, the fact that superluminal inertial observers employ a different metric than their subluminal counterparts should not present any problem neither. In fact, general covariance, which is the foundation of general relativity, states that the laws of physics should not be affected by the choice of coordinates. The quantum principle of relativity that we proposed simply extends this domain of possible coordinate systems to include superluminal observers. Horodecki notices that in the derivation of the expressions for complex probability amplitudes we only employ conventional, subluminal covariance between frames. This is true. However the questions that have to be answered first are: why do we need to look for a probabilistic description of particle dynamics, and why do we need to consider motion along multiple paths in the first place? These questions are justified only by considering superluminal frames of reference. And they provide us with a justification to look for relativistically invariant probability distributions. Finally, Horodecki raises a crucial and well-founded critique: does our proposal yield any measurable and potentially observable effects? The straightforward answer is: if tachyons existed, then this would undoubtedly be the case. 
It is worth noting that we have recently demonstrated that a covariant quantum field theory of tachyons with a positive-definite spectrum and a stable, invariant vacuum can be constructed [7]. In this study, as well as in our previous work [6], we emphasize that the Higgs mechanism incorporates tachyonic fields. As a result, our ongoing research opens the way to study fully quantized theory of spontaneous symmetry breaking. This leaves us with a hope that the answer to the last question posed by Horodecki is affirmative.
2302.14333
Noether Symmetry analysis in Chameleon Field Cosmology
This work deals with chameleon field cosmology (a scalar field nonminimally coupled to cold dark matter) in the background of flat Friedmann-Lemaitre-Robertson-Walker (FLRW) space-time. Both classical and quantum cosmology have been investigated using Noether symmetry analysis of the underlying physical system. The Wheeler-DeWitt (WD) equation has been constructed on the minisuperspace and solutions have been obtained using the conserved charge.
Roshni Bhaumik, Sourav Dutta, Subenoy Chakraborty
2023-02-28T06:02:30Z
http://arxiv.org/abs/2302.14333v1
# Noether Symmetry analysis in Chameleon Field Cosmology ###### Abstract This work deals with chameleon field cosmology (a scalar field nonminimally coupled to cold dark matter) in the background of flat Friedmann-Lemaitre-Robertson-Walker (FLRW) space-time. Both classical and quantum cosmology have been investigated using Noether symmetry analysis of the underlying physical system. The Wheeler-DeWitt (WD) equation has been constructed on the minisuperspace and solutions have been obtained using the conserved charge. **Keywords**: Noether Symmetry; quantum cosmology; chameleon scalar field. ## I Introduction Standard cosmology has been facing a great challenge since the end of the last century. The observational evidence since 1998 [1; 2; 3; 4; 5] is not in favour of decelerated expansion (the prediction of standard cosmology); rather, it favours accelerated expansion. So far, two options have been proposed by cosmologists to accommodate this observational evidence. One of the possibilities is to introduce some exotic matter (known as dark energy (DE)) within the framework of Einstein gravity. This mysterious matter component is of totally unknown nature except for its large negative pressure. At first, cosmologists took the cosmological constant as the DE candidate. But due to two severe drawbacks (namely, the discrepancy between its predicted and observed value, and the coincidence problem [6]) the cosmological constant is not a well-accepted DE model; rather, dynamical DE models [7; 8; 9; 10] are widely used in the literature. This work is an example of using a dynamical DE model. Usually, a scalar field having potential \(V(\phi)\) is chosen as the DE candidate so that the pressure component \(p_{\phi}=\frac{1}{2}\dot{\phi}^{2}-V(\phi)\) can evolve to have the required negative value for the observed accelerated expansion. Here, the scalar field (chosen as dynamical DE) is nonminimally coupled to dark matter (DM) through an interference term in the action [11]. As a result, there is a new source term in the matter conservation equation. This kind of DE model is termed a chameleon field. This model is quite useful to obtain accelerated expansion of the universe and other interesting cosmological consequences [12] (for details see Ref. [13]). On the other hand, since the last century, symmetry analysis has played a significant role in studying global continuous symmetries (i.e., translation and rotation) as well as local gauge symmetries, internal symmetries of the space-time (in cosmology) and permutation symmetry in quantum field theory [14; 15]. In particular, geometrical symmetries of the space-time and symmetries of the physical system have a great role in analyzing any physical motion. From the perspective of Noether symmetry, the conserved charge has been used to identify the actual one among similar physical processes. Furthermore, in the Noether symmetry approach, the Noether integral (i.e., the first integral) has been chosen as a tool for simplification of a system of differential equations or for the integrability of the system [16; 17; 18; 19; 20; 21]. In addition, an advantage of applying Noether symmetry to any physical system involving arbitrary physical parameters or some arbitrary functions of the field variables is that symmetry analysis uniquely determines these physical parameters or arbitrary functions (for details see Ref. [22]). Also, in the recent past, symmetry analysis has been used for physical systems in Riemannian spaces [23; 24; 25; 26; 27; 28; 29; 30]. 
Moreover, Noether symmetry analysis has opened new windows in studying quantum cosmology with suitable operator ordering, and the Wheeler-DeWitt (WD) equation so constructed on the minisuperspace is associated with Lie point symmetries. It is possible to have a subset of the general solutions of the WD equation with oscillatory behaviour [31; 32] by imposing Noether symmetries. The Noether symmetries with the Hartle criterion can identify those classical trajectories in minisuperspace [33; 34] which are solutions of the cosmological evolution equations, i.e., one may consider Noether symmetries as a bridge between quantum cosmology and the classical observable universe. This work is another example of extensive use of Noether symmetry analysis in both classical and quantum cosmology for the chameleon field DE model. By imposing Noether symmetry on the Lagrangian and making a canonical transformation of the dynamical variables it is possible to obtain classical solutions of the coupled nonlinear Einstein field equations. The WD equation is constructed for the present chameleon DE cosmological model in the background of FLRW space-time and Noether symmetry is used as a tool to solve the WD equation. The plan of the paper is as follows: a brief overview of the Noether symmetry approach is given in Section II; Section III presents the Noether symmetry and cosmological solutions of the chameleon field DE model; Section IV deals with quantum cosmology in the minisuperspace approach as a general prescription; the formation of the WD equation in the present cosmological model and its possible solution with Noether symmetry are presented in Section V; finally, the paper ends with a conclusion in Section VI. ## II A brief overview of Noether symmetry approach Noether's first theorem states that any physical system is associated with some conserved quantities provided the Lagrangian of the system is invariant with respect to the Lie derivative [35; 36] along an appropriate vector field (\(\mathcal{L}_{\overrightarrow{V}}f=\overrightarrow{V}(f)\)). By imposing these symmetry constraints, the evolution equations of the physical system can either be solved exactly or simplified to a great extent [37; 38]. 
For a point like canonical Lagrangian \(L[q^{\alpha}(x^{i}),\dot{q}^{\alpha}(x^{i})]\), the Euler-Lagrange equations \[\partial_{j}\left(\frac{\partial L}{\partial\partial_{j}q^{\alpha}}\right)= \frac{\partial L}{\partial q^{\alpha}} \tag{1}\] can be contracted with some unknown functions \(\lambda^{\alpha}(q^{\beta})\) as follows: \[\lambda^{\alpha}\bigg{[}\partial_{j}\left(\frac{\partial L}{\partial \partial_{j}q^{\alpha}}\right)-\frac{\partial L}{\partial q^{\alpha}}\bigg{]}=0 \tag{2}\] i.e, \[\lambda^{\alpha}\frac{\partial L}{\partial q^{\alpha}}+(\partial_{j}\lambda^ {\alpha})\left(\frac{\partial L}{\partial\partial_{j}q^{\alpha}}\right)= \partial_{j}\left(\lambda^{\alpha}\frac{\partial L}{\partial\partial_{j}q^{ \alpha}}\right)\] Thus \[\mathcal{L}_{\overrightarrow{X}}L=\lambda^{\alpha}\frac{\partial L}{\partial q ^{\alpha}}+(\partial_{j}\lambda^{\alpha})\frac{\partial L}{\partial\left( \partial_{j}q^{\alpha}\right)}=\partial_{j}\left(\lambda^{\alpha}\frac{ \partial L}{\partial\partial_{j}q^{\alpha}}\right) \tag{3}\] So according to Noether theorem the vector field [39; 40] \[\overrightarrow{X}=\lambda^{\alpha}\frac{\partial}{\partial q^{\alpha}}+( \partial_{j}q^{\alpha})\frac{\partial}{\partial\left(\partial_{j}q^{\alpha} \right)} \tag{4}\] can be chosen appropriately so that the Lagrangian of the system is invariant along the vector field i.e, \(\mathcal{L}_{\overrightarrow{X}}L=0\) and consequently, the physical system is called invariant under Noether symmetry with \(\overrightarrow{X}\), the infinitesimal generator of the symmetry. It is to be noted that the above symmetry vector as well as the Lagrangian is defined on the tangent space of configurations: \(TQ\{q^{\alpha},\dot{q}^{\alpha}\}\). In general Noether symmetry approach is very much relevant to identify conserved quantities of a physical system. The above symmetry condition is associated with a constant of motion for the Lagrangian having conserved phase flux along the vector field \(\overrightarrow{X}\). Furthermore, from Eq.(3) this symmetry criteria is associated with a constant of motion of the system [16; 17; 37] \[Q^{i}=\lambda^{\alpha}\frac{\partial L}{\partial\left(\partial_{i}q^{\alpha} \right)} \tag{5}\] satisfying \[\partial_{i}Q^{i}=0 \tag{6}\] So \(Q^{i}\) is identified as Noether current or conserved current. Furthermore, the energy function associated with system is \[E=\dot{q}^{\alpha}\frac{\partial L}{\partial\dot{q}^{\alpha}}-L \tag{7}\] The energy function (also known as Hamiltonian of the system) is a constant of motion provided there is no explicit time dependence in the Lagrangian [16; 17; 37]. Moreover, if the conserved current due to Noether symmetry has some physical meaning, [16; 17; 37] then symmetry analysis can identify reliable models. In the following, we shall show how symmetry analysis will simplify the present coupled cosmological model and as a result classical cosmological solutions can be obtained easily. In the context of quantum cosmology, Hamiltonian formulation is very useful and Noether symmetry condition is rewritten as follows: [38] \[\mathcal{L}_{\overrightarrow{X}_{H}}H=0 \tag{8}\] with \[\overrightarrow{X}_{H}=\dot{q}\frac{\partial}{\partial q}+\ddot{q}\frac{ \partial}{\partial\dot{q}}\] In minisuperspace models of quantum cosmology, symmetry analysis determines appropriate interpretation of the wave function. 
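As a minimal, self-contained illustration of the symmetry condition (3) and the conserved charge (5) before moving on, consider the textbook case of a free particle with \(L=\frac{1}{2}\dot{q}^{2}\) and the translation generator (\(\lambda=1\)): the Lie derivative of \(L\) vanishes and the conserved charge is the linear momentum \(\dot{q}\). The sketch below is ours, not part of the paper, and uses sympy with illustrative symbol names.

```python
# Illustrative sketch (not from the paper): Noether condition L_X L = 0 and
# conserved charge Q = lambda * dL/d(qdot) for a toy Lagrangian L = qdot^2 / 2
# with the translation generator lambda = 1.
import sympy as sp

t = sp.symbols('t')
q = sp.Function('q')(t)
qdot = sp.diff(q, t)

L = sp.Rational(1, 2) * qdot**2   # free-particle Lagrangian
lam = sp.Integer(1)               # generator coefficient: X = d/dq (translation)

# Lie derivative of L along the prolonged vector field, cf. Eq. (3):
lie_L = lam * sp.diff(L, q) + sp.diff(lam, t) * sp.diff(L, qdot)
print(sp.simplify(lie_L))         # 0 -> translation is a Noether symmetry

# Conserved charge, cf. Eq. (5): Q = lambda * dL/d(qdot) = qdot (momentum).
# On-shell the Euler-Lagrange equation gives qddot = 0, so dQ/dt = 0.
Q = lam * sp.diff(L, qdot)
print(Q)
```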
The conserved canonically conjugate momenta due to Noether symmetry can be written as follows: \[\Pi_{l}=\frac{\partial L}{\partial\dot{q}^{l}}=\Sigma_{l} \tag{9}\] \(l=1,2,...,m\), where '\(m\)' denotes the number of symmetries. Also, the operator version (i.e., quantization) of Eq. (9), i.e., \[-i\partial_{q^{l}}\left|\psi\right\rangle=\Sigma_{l}\left|\psi\right\rangle \tag{10}\] identifies a translation along the \(q^{l}\)-axis through symmetry analysis. Also Eq. (10) has an oscillatory solution for a real conserved quantity \(\Sigma_{l}\), i.e., \[\left|\psi\right\rangle=\sum\limits_{l=1}^{m}e^{i\Sigma_{l}q^{l}}\left|\phi(q^{k})\right\rangle,k<n, \tag{11}\] where the index '\(k\)' stands for directions along which there is no symmetry, with \(n\) the dimension of the minisuperspace. Thus the oscillatory part of the wave function implies the existence of Noether symmetry, and the conjugate momenta along the symmetry directions should be conserved, and vice-versa [41]. Due to symmetries the first integrals of motion identify the classical trajectories. In fact, for a 2D minisuperspace, it is possible to have a complete solution of the system by Noether symmetry. ## III Noether symmetry and cosmological solutions to chameleon field DE model This section is devoted to the study of the chameleon field DE cosmological model. This model consists of a canonical scalar field (having a self-interaction potential) nonminimally coupled to DM. So the potential function and the coupling function are the two unknown functions of the scalar field. The action integral of the model has the explicit form [42; 43] \[I=\int\left[\frac{R}{16\pi G}+\frac{1}{2}\phi_{,\mu}\phi^{,\mu}-V(\phi)+f(\phi)L_{m}\right]\sqrt{-g}d^{4}x \tag{12}\] where as usual \(R\) is the Ricci scalar, \(G\) is the Newtonian gravitational constant and \(\phi\) is the chameleon scalar field having potential \(V(\phi)\). Here, \(L_{m}\) is the Lagrangian for DM which is nonminimally coupled to the chameleon scalar field with \(f(\phi)\) (an analytic function), the coupling function. By choosing the DM to be an ideal gas, the matter Lagrangian can be chosen as \(L_{m}\simeq\rho_{m}\)[44]. In the background of flat FLRW space-time the point-like Lagrangian for the above cosmological model takes the following form: \[L(a,\dot{a},\phi,\dot{\phi})=3a\dot{a}^{2}-a^{3}\left(\frac{\dot{\phi}^{2}}{2}-V(\phi)\right)-\rho_{m}f(\phi)a^{3} \tag{13}\] Now the Euler-Lagrange equations (i.e., the Einstein field equations) for the Lagrangian (13) are given by \[3H^{2}=\rho_{m}f(\phi)+\frac{1}{2}\dot{\phi}^{2}+V(\phi), \tag{14}\] \[2\dot{H}+3H^{2}=-\frac{1}{2}\dot{\phi}^{2}+V(\phi)+\rho_{m}\omega f(\phi), \tag{15}\] where an overdot indicates differentiation with respect to the cosmic time '\(t\)'. Furthermore, the equation of motion \(T^{\mu\nu}_{;\nu}=0\) for the cosmological fluid with energy-momentum tensor \(T_{\mu\nu}=T^{(\phi)}_{\mu\nu}+T^{(m)}_{\mu\nu}\) is given by \[\dot{\phi}\ddot{\phi}+3H\dot{\phi}^{2}+V^{\prime}(\phi)\dot{\phi}+\rho_{m}f^{\prime}(\phi)\dot{\phi}+\dot{\rho_{m}}f(\phi)+3H(1+\omega)\rho_{m}f(\phi)=0 \tag{16}\] One may note that among these three evolution equations (14)-(16), only two are independent, while (14) is termed the constraint equation. 
As in the present cosmological model there is the interaction term \(f(\phi)\) so one has \(\left(T^{(\phi)\mu\nu}\right)_{;\nu}=-Q\), \(\left(T^{(m)\mu\nu}\right)_{;\nu}=Q\) or equivalently \[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)+f^{\prime}(\phi)\rho_{m}=-\frac{Q}{\dot{\phi}} \tag{17}\] \[\dot{\rho_{m}}f(\phi)+3H(1+\omega)\rho_{m}f(\phi)=Q \tag{18}\] As the present model reduces to that of Weyl integrable gravity with \(f(\phi)=f_{0}e^{\lambda\phi}\) ([45]) and setting \(Q=\alpha\rho_{m}f^{\prime}(\phi)\dot{\phi}\) (where \(\alpha\) is a nonzero constant), Eqs. (17) and (18) become \[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)+(1+\alpha)f^{\prime}(\phi)\rho_{m}=0 \tag{19}\] and \[\dot{\rho_{m}}f(\phi)+3H(1+\omega)\rho_{m}f(\phi)-\alpha f^{\prime}(\phi)\dot{\phi}\rho_{m}=0 \tag{20}\] The matter conservation equation (20) can be integrated to have \[\rho_{m}(t)=\rho_{0}a^{-3(1+\omega)}\{f(\phi)\}^{\alpha} \tag{21}\] Thus the scalar field evolution equation (i.e., the modified Klein-Gordon equation) (19) becomes \[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)=-\rho_{0}a^{-3(1+\omega)}\left[\{f(\phi)\}^{\alpha+1}\right]^{\prime}. \tag{22}\] The configuration space for the present model is a 2D space \((a,\phi)\) and the infinitesimal generator for the Noether symmetry takes the form \[\overrightarrow{X}=p\frac{\partial}{\partial a}+q\frac{\partial}{\partial\phi}+\dot{p}\frac{\partial}{\partial\dot{a}}+\dot{q}\frac{\partial}{\partial\dot{\phi}}, \tag{23}\] where \(p=p(a,\phi)\) and \(q=q(a,\phi)\) are the unknown coefficients with \(\dot{p}=\frac{\partial p}{\partial a}\dot{a}+\frac{\partial p}{\partial\phi}\dot{\phi}\) and similarly for \(\dot{q}\). These coefficients of the Noether symmetry vector are determined from an overdetermined system of partial differential equations, obtained by imposing Noether symmetry on the Lagrangian, i.e., \[\mathcal{L}_{\overrightarrow{X}}L=0\] i.e., \[p+2a\frac{\partial p}{\partial a} = 0\] \[3p+2a\frac{\partial q}{\partial\phi} = 0\] \[6\frac{\partial p}{\partial\phi}-a^{2}\frac{\partial q}{\partial a} = 0 \tag{24}\] with a differential equation for the potential and coupling function as follows: \[3\rho_{0}\omega pa^{-3\omega-1}F(\phi)+3pa^{2}V(\phi)+qa^{3}V^{\prime}(\phi)-\rho_{0}a^{-3\omega}qF^{\prime}(\phi)=0 \tag{25}\] where \(F(\phi)=\{f(\phi)\}^{\alpha+1}\). The above set of partial differential equations (24) is solvable using the method of separation of variables, i.e., \(p(a,\phi)=p_{1}(a)p_{2}(\phi)\), \(q(a,\phi)=q_{1}(a)q_{2}(\phi)\), as \[p=a^{-\frac{1}{2}}\left(c_{p}e^{m\phi}+c_{q}e^{-m\phi}\right)\] \[q=-4ma^{-\frac{3}{2}}\left(c_{p}e^{m\phi}-c_{q}e^{-m\phi}\right) \tag{26}\] where \(m^{2}=\frac{3}{8}\), \(c_{p}\), \(c_{q}\) and \(q_{0}\) are arbitrary constants. Using the above solutions (26) in (25), the solutions for \(V(\phi)\) and \(f(\phi)\) can take the form (with \(\omega=-1\)) \[V(\phi)-\rho_{0}F(\phi)=k\left(c_{p}e^{m\phi}-c_{q}e^{-m\phi}\right)^{2} \tag{27}\] where \(k\) is a positive integration constant. Thus, the infinitesimal generator of the Noether symmetry is determined (except for arbitrary integration constants) by imposing the symmetry condition, which in turn determines a relation between the potential function and the coupling function. Another important issue related to Noether symmetry is the conserved quantities associated with it. In general, for a field theory in curved space there is no well-defined notion of energy. However, the conserved quantity derived from Noether's theorem is the energy-momentum tensor. 
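The overdetermined system (24) and the solution (26) can be verified directly with a computer algebra system. The short sympy sketch below is ours (for illustration only; the symbol names mirror the text) and confirms that \(p\) and \(q\) from Eq. (26), with \(m^{2}=3/8\), satisfy all three equations in (24).

```python
# Illustrative check (not from the paper): the coefficients p(a, phi) and
# q(a, phi) of Eq. (26) satisfy the symmetry equations (24) when m**2 = 3/8.
import sympy as sp

a, phi, cp, cq = sp.symbols('a phi c_p c_q', positive=True)
m = sp.sqrt(sp.Rational(3, 8))

p = a**sp.Rational(-1, 2) * (cp*sp.exp(m*phi) + cq*sp.exp(-m*phi))
q = -4*m*a**sp.Rational(-3, 2) * (cp*sp.exp(m*phi) - cq*sp.exp(-m*phi))

eq1 = p + 2*a*sp.diff(p, a)                     # first equation of (24)
eq2 = 3*p + 2*a*sp.diff(q, phi)                 # second equation of (24)
eq3 = 6*sp.diff(p, phi) - a**2*sp.diff(q, a)    # third equation of (24)

print([sp.simplify(e) for e in (eq1, eq2, eq3)])  # expected: [0, 0, 0]
```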
In particular, when the system has a time-like Killing vector, then there is an associated conserved energy. Though FLRW space-time has no time-like Killing vector field, the Lagrangian density is explicitly time-independent. Hence, in analogy with the point-like Lagrangian, it is possible to define an energy which will be conserved in nature. Thus, in the context of Noether symmetry for the present cosmological model, one can have two conserved quantities, namely the conserved charge (defined in Eq. (5)) and the conserved energy (defined in Eq. (7)), having the explicit forms \[Q=6\dot{a}a^{\frac{1}{2}}\left(c_{p}e^{m\phi}+c_{q}e^{-m\phi}\right)+a^{3}\dot{\phi}\bigg{\{}4ma^{-\frac{3}{2}}\left(c_{p}e^{m\phi}-c_{q}e^{-m\phi}\right)\bigg{\}}\] \[E=3a\dot{a}^{2}-\frac{1}{2}a^{3}\dot{\phi}^{2}-a^{3}V(\phi)+\rho_{0}F(\phi)a^{-3\omega} \tag{28}\] Usually, associated with Noether symmetry there is a conserved current (defined in Eq. (5)), whose time component, integrated over the spatial volume, gives a conserved charge. But in the present context, as all the variables depend on time only, \(Q\) defined in (28) is the Noether charge. Moreover, the above conserved charge can be expressed geometrically as the inner product of the infinitesimal generator with the Cartan one-form [46] as follows: \[Q=i_{\overrightarrow{X}}\theta_{L} \tag{29}\] where \(i_{\overrightarrow{X}}\) denotes the inner product with the vector field \(\overrightarrow{X}\) and \[\theta_{L}=\frac{\partial L}{\partial a}da+\frac{\partial L}{\partial\phi}d\phi \tag{30}\] is termed the Cartan one-form. On the other hand, this geometric inner product representation is useful for finding cyclic variables in the Lagrangian. In the context of solving coupled nonlinear evolution equations, determination of cyclic variables will be very useful, as not only the Lagrangian but also the evolution equations will be simplified to a great extent. In the present context the transformation of the 2D augmented space \((a,\phi)\rightarrow(u,v)\) transforms the symmetry vector as \[\overrightarrow{X_{T}}=\left(i_{\overrightarrow{X}}du\right)\frac{\partial}{\partial u}+\left(i_{\overrightarrow{X}}dv\right)\frac{\partial}{\partial v}+\left\{\frac{d}{dt}\left(i_{\overrightarrow{X}}du\right)\right\}\frac{d}{d\dot{u}}+\left\{\frac{d}{dt}\left(i_{\overrightarrow{X}}dv\right)\right\}\frac{d}{d\dot{v}} \tag{31}\] Geometrically, \(\overrightarrow{X_{T}}\) may be interpreted as the lift of a vector field on the augmented space. Now, without any loss of generality we restrict the above point transformation to [46] \[i_{\overrightarrow{X}}du=1\ \ \text{and}\ \ i_{\overrightarrow{X}}dv=0 \tag{32}\] so that \[\overrightarrow{X_{T}}=\frac{\partial}{\partial u}\ \ \text{and}\ \ \frac{\partial L_{T}}{\partial u}=0 \tag{33}\] i.e., \(u\) is the cyclic variable. The above geometric process of identification of cyclic variables can be interpreted so as to choose the transformed infinitesimal generator along any co-ordinate line (identified as the cyclic variable) [47]. Now the explicit form of the above point transformation (32) is a set of first-order linear partial differential equations, whose solutions are as follows: **Case I:** \(c_{p}=c_{q}\) \[u = \frac{2}{3}a^{\frac{3}{2}}\cosh m\phi,\] \[v = a^{\frac{3}{2}}\sinh m\phi. \tag{34}\] **Case II: \(c_{p}\neq c_{q}\)** \[u = \frac{1}{6c_{p}c_{q}}a^{\frac{3}{2}}\left(c_{p}e^{m\phi}+c_{q}e^{-m\phi}\right),\] \[v = a^{\frac{3}{2}}\left(c_{p}e^{m\phi}-c_{q}e^{-m\phi}\right). 
\tag{35}\] The simplified Lagrangian in the new variables has the following forms: \[L = 3\dot{u}^{2}-\frac{4}{3}\dot{v}^{2}+4kc_{p}^{2}v^{2},\ \ \ \ \ \mbox{(Case I)} \tag{36}\] \[= 12c_{p}c_{q}\dot{u}^{2}-\frac{1}{3c_{p}c_{q}}\dot{v}^{2}+kv^{2} \ \ \ \ \ \mbox{(Case II)}. \tag{37}\] The conserved quantities in the new variables can be expressed as follows: \[Q = 6\dot{u}\] \[E = 3\dot{u}^{2}-\frac{4}{3}\dot{v}^{2}-4kc_{p}^{2}v^{2}\ \ \ \ \ \mbox{( Case I)}\] and \[Q = 24c_{p}c_{q}\dot{u}\] \[E = 12c_{p}c_{q}\dot{u}^{2}-\frac{1}{3c_{p}c_{q}}\dot{v}^{2}-kv^{2} \ \ \ \ \ \mbox{(Case II)}\] Now solving the Euler-Lagrange equations for the new Lagrangian, the new augmented variables have the following forms: \[u = At+B\] \[v = k_{1}\cos\sqrt{3k}c_{p}t+k_{2}\sin\sqrt{3k}c_{p}t\ \ \ \ \ \ \mbox{( Case I)}\] and \[u = rt+s\] \[v = k_{1}^{\prime}\cos\sqrt{3c_{p}c_{q}k}t+k_{2}^{\prime}\sin\sqrt{3 c_{p}c_{q}k}\ \ \ \ \ \mbox{(Case II)}\] Hence, the cosmic scale factor and the chameleon scalar field have the following explicit expressions: \[a(t) = \left[\frac{9}{4}\left(At+B\right)^{2}-\left(k_{1}\cos\sqrt{3k}c_ {p}t+k_{2}\sin\sqrt{3k}c_{p}t\right)^{2}\right]^{\frac{1}{3}}\] \[\phi(t) = \frac{2\sqrt{2}}{3}\tanh^{-1}\left[\frac{2\left(k_{1}\cos\sqrt{3 k}c_{p}t+k_{2}\sin\sqrt{3k}c_{p}t\right)}{3(At+B)}\right]\ \ \ \ \ \mbox{( Case I)}\] and \[a(t) = \left[9c_{p}c_{q}\left(rt+s\right)^{2}-\frac{1}{4c_{p}c_{q}} \left(k_{1}^{\prime}\cos\sqrt{3c_{p}c_{q}k}t+k_{2}^{\prime}\sin\sqrt{3c_{p}c_{ q}k}t\right)^{2}\right]^{\frac{1}{3}}\] \[\phi(t) = \frac{2\sqrt{2}}{\sqrt{3}}\ln\frac{6c_{p}c_{q}\left(rt+s\right)+ \left(k_{1}^{\prime}\cos\sqrt{3c_{p}c_{q}k}t+k_{2}^{\prime}\sin\sqrt{3c_{p}c_{ q}k}t\right)}{2c_{p}\left[9c_{p}c_{q}\left(rt+s\right)^{2}-\frac{1}{4c_{p}c_{q}} \left(k_{1}^{\prime}\cos\sqrt{3c_{p}c_{q}k}t+k_{2}^{\prime}\sin\sqrt{3c_{p}c_{ q}k}t\right)^{2}\right]^{\frac{1}{2}}}\ \ \ \ \mbox{(Case II)}\] In the above solutions \((A,B,k_{1},k_{2})\) and \((r,s,k_{1}^{\prime},k_{2}^{\prime})\) are arbitrary integration constants. Quantum cosmology in the minisuperspace approach: a general prescription Minisuperspaces are considered as restrictions of geometrodynamics of the superspace and physically important and interesting models are defined on minisuperspaces. In cosmology, the simplest and widely used minisuperspace models are homogeneous and isotropic merics and matter fields and consequently the lapse function is homogeneous (i.e, \(N=N(t)\)) while shift function vanishes identically. So in 4D manifold, using \((3+1)\)-decomposition the metric can be written as follows: \[ds^{2}=-N^{2}(t)dt^{2}+h_{ab}(x,t)dx^{a}dx^{b} \tag{38}\] and the Einstein-Hilbert action can be written as follows: \[I(h_{ab},N)=\frac{m^{2}{}_{p}}{16\pi}\int dt\ d^{3}xN\sqrt{h}\left[k_{ab}k^{ ab}-k^{2}+(3)_{R}-2\Lambda\right], \tag{39}\] where \(k_{ab}\) is the extrinsic curvature of the 3 space, \(k=k_{ab}h^{ab}\) is the trace of the extrinsic curvature, \((3)_{R}\) is the curvature scalar of the three space and \(\Lambda\) is the cosmological constant. 
Now due to homogeneity of the three space, the metric \(h_{ab}\) is characterized by a finite number of time functions \(q^{\alpha}(t)\), \(\alpha=0,1,2,...,n-1\) and the above action can be written in the form of a relativistic point particle with self-interacting potential in an \(n\)D curved space-time as [42; 43] \[I\left(q^{\alpha}(t),N(t)\right)=\int_{0}^{1}dtN\left[\frac{1}{2N^{2}}f_{\alpha\beta}(q)\dot{q}^{\alpha}\dot{q}^{\beta}-V(q)\right] \tag{40}\] So the equation of motion of the (equivalent) relativistic particle can be written as (considering variation of the action with respect to the field variables \(q^{\alpha}(t)\)) \[\frac{1}{N}\frac{d}{dt}\left(\frac{\dot{q}^{\alpha}}{N}\right)+\frac{1}{N^{2}}\Gamma^{\alpha}_{\mu\nu}\dot{q}^{\mu}\dot{q}^{\nu}+f^{\alpha\beta}\frac{\partial V}{\partial q^{\beta}}=0 \tag{41}\] with \(\Gamma^{\alpha}_{\beta\gamma}\) being the Christoffel symbols in the minisuperspace. Also there is a constraint equation obtained by variation with respect to the lapse function as follows: \[\frac{1}{2N^{2}}f_{\alpha\beta}\dot{q}^{\alpha}\dot{q}^{\beta}+V(q)=0 \tag{42}\] For the canonical quantization scheme one has to switch over to the Hamiltonian formulation. The momenta canonical to \(q^{\alpha}\) are given by \[p_{\alpha}=\frac{\partial L}{\partial\dot{q}^{\alpha}}=f_{\alpha\beta}\frac{\dot{q}^{\beta}}{N}, \tag{43}\] so the Hamiltonian is defined as follows: \[H=p_{\alpha}\dot{q}^{\alpha}-L=N\left[\frac{1}{2}f^{\alpha\beta}p_{\alpha}p_{\beta}+V(q)\right]=N\mathcal{H} \tag{44}\] with \(f^{\alpha\beta}\) being the inverse metric. Using the definition of \(p_{\alpha}\) (i.e., equation (43)) in the constraint equation (42), one obtains \[\mathcal{H}(q^{\alpha},p_{\alpha})\equiv\frac{1}{2}f^{\alpha\beta}p_{\alpha}p_{\beta}+V(q)=0 \tag{45}\] Now, writing \(p_{\alpha}\) as \(-i\hbar\frac{\partial}{\partial q^{\alpha}}\) in the quantization scheme and acting with the operator version of the above constraint equation (45) on a time-independent function (the wave function of the universe), one gets the WD equation in quantum cosmology as follows: \[\mathcal{H}\left(q^{\alpha},-i\hbar\frac{\partial}{\partial q^{\alpha}}\right)\psi(q^{\alpha})=0 \tag{46}\] In general, the minisuperspace metric depends on \(q^{\alpha}\), so the above WD equation has an operator ordering problem. However, by imposing the quantization in minisuperspace to be covariant in nature, one may resolve the above operator ordering problem. Furthermore, in the context of quantum cosmology, for the probability measure there exists a conserved current for the hyperbolic-type partial differential equation as follows: \[\overrightarrow{J}=\frac{i}{2}(\psi^{*}\nabla\psi-\psi\nabla\psi^{*}) \tag{47}\] with \(\overrightarrow{\nabla}.\overrightarrow{J}=0\). Here, \(\psi\) is the solution of the hyperbolic-type WD differential equation. Thus it is possible to define the probability measure on the minisuperspace as follows: \[dp=|\psi(q^{\alpha})|^{2}dV \tag{48}\] where \(dV\) is a volume element on minisuperspace. 
## V Formation of WD equation in the present cosmological model and possible solution with Noether symmetry In the present cosmological model, the 2D configuration space \(\{a,\phi\}\) is associated with the conjugate momenta given by \[p_{a} = \frac{\partial L}{\partial\dot{a}}=6a\dot{a}\] \[p_{\phi} = \frac{\partial L}{\partial\dot{\phi}}=-a^{3}\dot{\phi} \tag{49}\] So the Hamiltonian of the system (also known as the Hamiltonian constraint) can be expressed as follows: \[\mathcal{H}=\frac{1}{12a}p_{a}^{2}-\frac{1}{2a^{3}}p_{\phi}^{2}-a^{3}V(\phi)+\rho_{0}F(\phi)a^{-3\omega} \tag{50}\] with equivalent Hamilton's equations of motion \[\dot{a} = \frac{1}{6a}p_{a}\] \[\dot{\phi} = -\frac{1}{a^{3}}p_{\phi}\] \[\dot{p_{a}} = \frac{1}{12a^{2}}p_{a}^{2}-\frac{3}{2a^{4}}p_{\phi}^{2}+3a^{2}V(\phi)+3\rho_{0}\omega F(\phi)a^{-3\omega-1}\] \[\dot{p_{\phi}} = a^{3}V^{\prime}(\phi)-\rho_{0}F^{\prime}(\phi)a^{-3\omega} \tag{51}\] Furthermore, the Lagrangian (i.e., Eq. (13)) of the system can be interpreted geometrically, dividing it into two parts. The first two terms are known as the kinetic part and the remaining two terms constitute the dynamic part. Also the kinetic part may be viewed as a 2D pseudo-Riemannian space with line element \[ds^{2}=-6ada^{2}+a^{2}d\phi^{2} \tag{52}\] This 2D Lorentzian manifold \((a,\phi)\) is known as minisuperspace (in quantum cosmology). The wave function of the universe in quantum cosmology is a solution of the WD equation, a second-order hyperbolic partial differential equation defined over minisuperspace, and it is the operator version of the Hamiltonian constraint. Furthermore, in the context of the WKB approximation one can write the wave function as \(\psi(x^{k})\sim e^{i\delta(x^{k})}\) and hence the WD equation (46) becomes a first-order nonlinear partial differential equation which is nothing but the (null) Hamilton-Jacobi (H-J) equation in the same geometry. In the quantization of the model one has to construct the WD equation \(\hat{\cal H}\psi(u,v)=0\), with \(\hat{\cal H}\) the operator version of the Hamiltonian (50) and \(\psi(u,v)\), the wave function of the universe. In the course of conversion to the operator version there is a problem related to the ordering of a variable and its conjugate momentum [48]. In the first term of the Hamiltonian (50) there is a product of '\(a\)' and '\(p_{a}\)', so one has to address the ordering ambiguity when substituting \(p_{a}\rightarrow-i\partial_{a},\ \ p_{\phi}\rightarrow-i\partial_{\phi}\). As a result there is a two-parameter family of WD equations \[\bigg{[}-\frac{1}{12}\frac{1}{a^{l}}\frac{\partial}{\partial a}\frac{1}{a^{m}}\frac{\partial}{\partial a}\frac{1}{a^{n}}+\frac{1}{a^{3}}\frac{\partial^{2}}{\partial\phi^{2}}-a^{3}V(\phi)+\rho_{0}F(\phi)a^{-3\omega}\bigg{]}\psi(a,\phi)=0 \tag{53}\] with the triplet of real numbers \((l,m,n)\) satisfying \(l+m+n=1\). Due to the infinitely many possible choices for \((l,m,n)\), one may have an infinite number of possible orderings. Also the semi-classical limit, namely, the Hamilton-Jacobi equation (obtained by substituting \(\psi=\exp(is)\)), does not depend on the above triplet. In fact, the following choices are commonly used: i) \(l=2,m=-1,n=0\) : D'Alembert operator ordering. ii) \(l=0=n,m=1\): Vilenkin operator ordering. iii) \(l=1,m=0=n\): no ordering. Thus factor ordering affects the behaviour of the wave function while semi-classical results will not be influenced by the above ordering problem. 
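Before proceeding, Hamilton's equations (51) quoted above can be cross-checked symbolically against the Hamiltonian constraint (50). The sketch below is ours (not part of the paper); \(V\) and \(F\) are kept as generic functions of \(\phi\), and the printed right-hand sides reproduce the expressions listed in (51).

```python
# Illustrative cross-check (not from the paper): Hamilton's equations generated
# by the Hamiltonian constraint (50) reproduce the right-hand sides of (51).
import sympy as sp

a = sp.symbols('a', positive=True)
phi, pa, pphi, rho0, omega = sp.symbols('phi p_a p_phi rho_0 omega', real=True)
V = sp.Function('V')(phi)
F = sp.Function('F')(phi)

H = pa**2/(12*a) - pphi**2/(2*a**3) - a**3*V + rho0*F*a**(-3*omega)

adot     = sp.diff(H, pa)      # expected:  p_a / (6 a)
phidot   = sp.diff(H, pphi)    # expected: -p_phi / a**3
pa_dot   = -sp.diff(H, a)      # expected:  p_a**2/(12 a**2) - 3 p_phi**2/(2 a**4)
                               #            + 3 a**2 V + 3 rho_0 omega F a**(-3 omega - 1)
pphi_dot = -sp.diff(H, phi)    # expected:  a**3 V'(phi) - rho_0 F'(phi) a**(-3 omega)

for name, expr in [('adot', adot), ('phidot', phidot),
                   ('pa_dot', pa_dot), ('pphi_dot', pphi_dot)]:
    print(name, '=', sp.simplify(expr))
```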
Now choosing the third option (i.e., no ordering) the WD equation for the present model has the following explicit form: \[\bigg{[}-\frac{1}{12a}\frac{\partial^{2}}{\partial a^{2}}+\frac{1}{2a^{3}}\frac{\partial^{2}}{\partial\phi^{2}}-a^{3}V(\phi)+\rho_{0}F(\phi)a^{-3\omega}\bigg{]}\psi(a,\phi)=0 \tag{54}\] The general solution of the above second-order hyperbolic partial differential equation is known as the wave function of the universe. This solution can be constructed from the separation of the eigenfunctions of the above WD operator as follows [41]: \[\psi(a,\phi)=\int W(Q)\psi(a,\phi,Q)\ dQ \tag{55}\] with \(\psi\) being an eigenfunction of the WD operator, \(W(Q)\) being a weight function and \(Q\) being the conserved charge. Now it is desirable to have a wave function in quantum cosmology that is consistent with the classical theory. In other words, one has to construct a coherent wave packet having good asymptotic behaviour in the minisuperspace that is maximized around the classical trajectory. As the minisuperspace variables \(\{a,\phi\}\) are highly coupled in the WD operator, it is not possible to have any explicit solution of the WD equation even with the separation of variables method. Thus one may analyze the present model in the context of quantum cosmology using the new variables \((u,v)\) (obtained by point transformation) in the augmented space. **Case-I**: \(c_{p}=c_{q}\) In this case the Lagrangian is given by Eq. (36) for which \(u\) is the cyclic variable. So one has \[p_{u} = \frac{\partial L}{\partial\dot{u}}=6\dot{u}=\mbox{Conserved}\] \[p_{v} = \frac{\partial L}{\partial\dot{v}}=-\frac{8}{3}\dot{v} \tag{56}\] Hence, the Hamiltonian of the system takes the form \[{\cal H}=\frac{1}{12}p_{u}^{2}-\frac{3}{16}p_{v}^{2}-4kc_{p}^{2}v^{2} \tag{57}\] Thus the WD equation takes the following form: \[\bigg{[}-\frac{1}{12}\frac{\partial^{2}}{\partial u^{2}}+\frac{3}{16}\frac{\partial^{2}}{\partial v^{2}}-4kc_{p}^{2}v^{2}\bigg{]}\chi(u,v)=0 \tag{58}\] The operator version of the conserved momentum in Eq. (56) can be written as follows: \[i\frac{\partial\chi(u,v)}{\partial u}=\Sigma_{0}\ \chi(u,v) \tag{59}\] Now writing \(\chi(u,v)=A(u)B(v)\), one has \[i\frac{dA}{du} = \Sigma_{0}\ A\] \[\mbox{i.e., }A(u) = A_{0}\exp(-i\Sigma_{0}\ u) \tag{60}\] with \(A_{0}\) being the constant of integration. Using Eq. (60) the WD equation (58) becomes a differential equation in \(B\) as follows: \[\frac{3}{16}\frac{d^{2}B}{dv^{2}}-4kc_{p}^{2}v^{2}B+\frac{\Sigma_{0}^{2}}{12}B = 0\] \[\mbox{i.e., }\frac{d^{2}B}{dv^{2}}-(\lambda v^{2}-\mu)B = 0 \tag{61}\] with \(\lambda=\frac{64}{3}kc_{p}^{2},\ \mu=\frac{4}{9}\Sigma_{0}^{2}\). **Case-II**: \(c_{p}\neq c_{q}\) The Lagrangian of the system (given by Eq. 
(37)) shows the variable '\(u\)' to be cyclic and the conserved momentum has the following expression: \[p_{u}=\frac{\partial L}{\partial\dot{u}}=24c_{p}c_{q}\dot{u}=\Lambda_{0},\ \mbox{a constant} \tag{62}\] while the momentum conjugate to the variable '\(v\)' is given by \[p_{v}=\frac{\partial L}{\partial\dot{v}}=-\frac{2}{3c_{p}c_{q}}\dot{v} \tag{63}\] Hence, the Hamiltonian of the system is expressed as follows: \[{\cal H}=\frac{1}{48c_{p}c_{q}}p_{u}^{2}-\frac{3c_{p}c_{q}}{4}p_{v}^{2}-kv^{2} \tag{64}\] and consequently the WD equation takes the following form: \[\bigg{[}\frac{1}{48c_{p}c_{q}}\frac{\partial^{2}}{\partial u^{2}}+\frac{3c_{p}c_{q}}{4}\frac{\partial^{2}}{\partial v^{2}}-kv^{2}\bigg{]}\xi(u,v)=0 \tag{65}\] The operator version of the conserved momentum as before shows \[\xi(u,v) = C(u)D(v)\] \[\text{with }C(u) = C_{0}\exp(-i\Lambda_{0}u) \tag{66}\] Thus from the above WD equation (65) and using the separation of variables, the differential equation for \(D\) reduces to \[\frac{d^{2}D}{dv^{2}}-(lv^{2}-s)D=0 \tag{67}\] with \(l=\frac{4k}{3c_{p}c_{q}},\ s=\frac{\Lambda_{0}^{2}}{36c_{p}^{2}c_{q}^{2}}\). The solution of this second-order differential equation takes the following form: \[D(v) = C_{1}\sqrt{v}J_{\frac{1}{4}}\bigg{(}\frac{1}{2}\sqrt{-l}v^{2}\bigg{)}+C_{2}\sqrt{v}Y_{\frac{1}{4}}\bigg{(}\frac{1}{2}\sqrt{-l}v^{2}\bigg{)}\ \ \ \ \text{when }(s=0) \tag{68}\] \[= \frac{C_{1}M_{\frac{s}{4\sqrt{l}},\frac{1}{4}}\bigg{(}\sqrt{l}v^{2}\bigg{)}}{\sqrt{v}}+\frac{C_{2}W_{\frac{s}{4\sqrt{l}},\frac{1}{4}}\bigg{(}\sqrt{l}v^{2}\bigg{)}}{\sqrt{v}}\ \ \ \ \text{when }(s\neq 0),\] where \(J\) and \(Y\) are the usual Bessel functions and \(M\) and \(W\) are known as Whittaker functions. We have represented the wave function graphically both for zero and non-zero \(s\) in Figs. 3 and 4, respectively. From the figures, we see that at \(u=0\), \(v=0\) (i.e., \(a=0\)), the wave function has a finite nonzero value. Therefore in the present model it is possible to avoid the Big Bang singularity using quantum cosmology near the initial singularity. ## VI Conclusion This work is an example where symmetry analysis, particularly Noether symmetry, has been extensively used both in classical and quantum cosmology. Here, the chameleon field DE model has been considered in the background of homogeneous and isotropic flat FLRW space-time. Although the full quantum theory is described on the infinite-dimensional superspace, here we confine ourselves to the minisuperspace, which is a 2D Lorentzian manifold. Although the Einstein field equations are coupled nonlinear differential equations, using a transformation in the augmented space and introducing the geometric inner product it is possible to identify the cyclic variable(s), so that the field equations are simplified to a great extent and, consequently, classical cosmological solutions are evaluated. There are two sets of solutions for two different choices of the arbitrary constants involved. Both the solutions show an expanding model of the universe with accelerating and decelerating phases (depending on the choices of the arbitrary constants involved). In particular, the present model describes the decelerating phase only for the choice \(c_{p}=c_{q}\) (Fig. 1), while the model makes a transition from a decelerating phase to an accelerating phase and then returns to a decelerating phase for the choice \(c_{p}\neq c_{q}\) (Fig. 2). 
On the other hand, the application of Noether symmetry to the minisuperspace shows the path for solving the WD equation. The conserved momentum due to Noether symmetry, after converting to the quantum version, shows an oscillatory solution to the WD equation and consequently it gives the semi-classical limit of quantum cosmology. Furthermore, the nonoscillatory part of the WD equation is an ordinary differential equation having a solution in the form of Bessel functions or Whittaker functions. The graphical presentation of this part of the solution has been shown in Figs. 3 and 4, which clearly shows that the present quantum cosmological model can overcome the big bang singularity, i.e., the present model may describe the early era of evolution without any singularity. Finally, one may conclude that Noether symmetry analysis is very useful in describing quantum cosmology in the minisuperspace model and also leads to a possible solution of the WD equation. Figure 3: The wave function when \(s=0\). Figure 4: Graphical representation of the wave function when \(s\neq 0\).
2309.16656
Visual In-Context Learning for Few-Shot Eczema Segmentation
Automated diagnosis of eczema from digital camera images is crucial for developing applications that allow patients to self-monitor their recovery. An important component of this is the segmentation of the eczema region from such images. Current methods for eczema segmentation rely on deep neural networks such as convolutional (CNN)-based U-Net or transformer-based Swin U-Net. While effective, these methods require a high volume of annotated data, which can be difficult to obtain. Here, we investigate the capabilities of visual in-context learning that can perform few-shot eczema segmentation with just a handful of examples and without any need for retraining models. Specifically, we propose a strategy for applying in-context learning for eczema segmentation with a generalist vision model called SegGPT. When benchmarked on a dataset of annotated eczema images, we show that SegGPT with just 2 representative example images from the training dataset performs better (mIoU: 36.69) than a CNN U-Net trained on 428 images (mIoU: 32.60). We also discover that using a larger number of examples for SegGPT may in fact be harmful to its performance. Our result highlights the importance of visual in-context learning in developing faster and better solutions to skin imaging tasks. Our result also paves the way for developing inclusive solutions that can cater to minorities in the demographics who are typically heavily under-represented in the training data.
Neelesh Kumar, Oya Aran, Venugopal Vasudevan
2023-09-28T17:55:24Z
http://arxiv.org/abs/2309.16656v1
# Visual In-Context Learning for Few-Shot Eczema Segmentation ###### Abstract Automated diagnosis of eczema from digital camera images is crucial for developing applications that allow patients to self-monitor their recovery. An important component of this is the segmentation of the eczema region from such images. Current methods for eczema segmentation rely on deep neural networks such as convolutional (CNN)-based U-Net or transformer-based Swin U-Net. While effective, these methods require a high volume of annotated data, which can be difficult to obtain. Here, we investigate the capabilities of visual in-context learning that can perform few-shot eczema segmentation with just a handful of examples and without any need for retraining models. Specifically, we propose a strategy for applying in-context learning for eczema segmentation with a generalist vision model called SegGPT. When benchmarked on a dataset of annotated eczema images, we show that SegGPT with just 2 representative example images from the training dataset performs better (mIoU: 36.69) than a CNN U-Net trained on 428 images (mIoU: 32.60). We also discover that using a larger number of examples for SegGPT may in fact be harmful to its performance. Our result highlights the importance of visual in-context learning in developing faster and better solutions to skin imaging tasks. Our result also paves the way for developing inclusive solutions that can cater to minorities in the demographics who are typically heavily under-represented in the training data. Keywords: Eczema Segmentation, In-context, Vision. ## 1 Introduction Eczema is one of the most common skin disorders, with over 10% of the population affected by it in the United States alone [2]. While there can be no substitute for expert dermatologists, automated diagnosis of the disease can enable patients to self-monitor their recovery using self-acquired images, potentially leading to more effective recovery due to the psychological effects [14, 10]. An important part of this automated analysis is developing a sufficiently robust algorithm that can accurately segment the eczema region from the patient-acquired digital camera images [15, 8, 23]. The state-of-the-art approaches for automated eczema segmentation rely on deep neural networks (DNN) for supervised learning, with the most popular architecture being a convolutional neural network-based U-Net (CNN U-Net) [18, 5]. More recent approaches, such as Swin U-Net, leverage attention-based transformers that can capture long-range dependencies [7]. These methods have demonstrated remarkable performance improvements over traditional approaches that relied on hand-crafted visual features [8, 24]. However, a common theme across all these methods is the need for sufficient data to train the DNN, which increases with the increasing complexity of the network architecture [3]. For applications such as skin lesion segmentation that require expert annotations, acquiring a large dataset can be prohibitive from the point of view of both cost and time [26]. This is further exacerbated by the need for a diverse dataset covering all sources of variations such as skin tone, images from different body parts, and varying levels of disease [25, 15]. Current methods to train high-capacity networks using limited data employ transfer learning techniques such as domain adaptation, knowledge distillation, or finetuning [27, 11]. 
While these techniques have proven to be effective for many learning tasks, including skin segmentation [4], these methods still require enough labeled and unlabeled training data to work well. A long-standing goal in artificial intelligence is to learn task-agnostic representations that can be used across various tasks without requiring further learning [12]. This few/zero-shot form of learning eliminates the necessity of training task-specific models and hence any need for training data. While the goal seems ambitious, we are already seeing an outpouring of results from the Natural Language Processing (NLP) community [21, 17]. Known popularly as large language models (LLMs), these models learn task-agnostic representations through pretraining on a large corpus of text data, and exhibit remarkable generalization on NLP downstream applications [17]. The operating principle is In-context Learning, where a few domain-specific input-output pairs are provided as in-context examples (prompts) to the model, along with the input test example [32, 20]. The prompt and the input test example are used to predict the corresponding output for the test example without updating model weights [32]. A growing body of work is now attempting to replicate this success of NLP for vision tasks using a paradigm known as visual in-context learning [6, 32, 28, 29, 19, 16]. The idea is similar to that in NLP: a high-capacity network such as a Vision Transformer (ViT) [9] is trained on a large corpus of images using Masked Image Modeling (MIM) [31, 13]. The complexity of the MIM task, i.e., predicting large missing patches in the image, forces the network to gain semantic and contextual understanding of the images, resulting in powerful general representations [13]. These learned representations are versatile enough to perform various downstream vision tasks such as keypoint detection, segmentation, depth estimation, etc., following a prompt-style input representation [28, 29]. With visual in-context learning, it is possible to perform few-shot segmentation of eczema images without requiring an abundance of training data. This work presents an automated approach to few-shot eczema segmentation using visual in-context learning (Figure 1). Specifically, we employ a pretrained generalist vision model called SegGPT [29] and evaluate it for eczema segmentation on a dataset of skin images acquired from the web and a consumer study. We report that SegGPT with just two example images in the prompt performs better (mIoU: 36.69) than a CNN U-Net trained on 428 images (mIoU: 32.60). The key to this result is the strategy for prompt selection: the examples in the prompt must be representative of the task. Our simple framework for prompt selection retrieves nearest neighbors of the test image from the training dataset, and uses them as examples to construct the prompt. We also discover that the performance of SegGPT is strongly dependent on the number of examples in the prompt, but surprisingly does not have a linear relation. Rather, the performance increases up to a certain number \(k<4\), and then starts decreasing. Our results highlight the promise of foundational generalist vision models in developing faster and better solutions for skin lesion segmentation tasks, paving the way for developing inclusive solutions that can also cater to minorities in the demographics who are typically heavily under-represented in the training data. 
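To make the prompting idea sketched above concrete before turning to the methods, the snippet below illustrates the two ingredients just mentioned: retrieving nearest-neighbor examples for a test image and stitching them, together with the test image and a blank target, into a single prompt canvas. This is our own illustrative sketch, not the authors' code: it assumes images are already resized to a common resolution and scaled to [0, 1], scikit-image is used for SSIM, the function names are ours, and the exact canvas layout used by SegGPT may differ.

```python
# Illustrative sketch (ours, not the authors' implementation): nearest-neighbor
# prompt selection plus stitching the retrieved (image, mask) pairs with the
# test image and a blank target slot into one prompt canvas.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def ssim_distance(a, b):
    # SSIM is a similarity score; turn it into a distance.
    return 1.0 - ssim(a, b, channel_axis=-1, data_range=1.0)

def frobenius_distance(a, b):
    return np.linalg.norm(a.astype(float) - b.astype(float))

def select_prompt_examples(test_image, train_images, train_masks,
                           k=2, metric=ssim_distance):
    """Return the (image, mask) pairs of the k training images closest to test_image."""
    dists = [metric(test_image, img) for img in train_images]
    nearest = np.argsort(dists)[:k]
    return [(train_images[i], train_masks[i]) for i in nearest]

def make_prompt_canvas(example_pairs, test_image):
    """Each row of the canvas is [input | target]; the last row's target is blank."""
    rows = [np.concatenate([img, mask], axis=1) for img, mask in example_pairs]
    rows.append(np.concatenate([test_image, np.zeros_like(test_image)], axis=1))
    return np.concatenate(rows, axis=0)

# Toy usage with random arrays standing in for 448 x 448 RGB images and masks.
rng = np.random.default_rng(0)
train_imgs = [rng.random((448, 448, 3)) for _ in range(10)]
train_msks = [rng.integers(0, 2, (448, 448, 3)).astype(float) for _ in range(10)]
test_img = rng.random((448, 448, 3))
pairs = select_prompt_examples(test_img, train_imgs, train_msks, k=2)
print(make_prompt_canvas(pairs, test_img).shape)  # (3 * 448, 2 * 448, 3)
```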
## 2 Methods The core of our method is a visual in-context learning model called SegGPT [29]. The input to the model is a visual prompt which is constructed by stitching together example input-output image pairs. The test input image with a blank output image is appended to the prompt. The task of the model is to complete the prompt, i.e., predict the blank output image for the given test input image (Figure 1) without updating any model weights. Figure 1: **Overview. For each test image, its \(k\)-nearest neighbors are retrieved from the training dataset. The retrieved images and their masks, along with the test image are stitched together to construct a prompt that is fed to a pretrained SegGPT model. The task of the model is to predict the missing output image for the given test image.** ### SegGPT Training The segmentation task in SegGPT is formulated as an image in-painting task [28]: given an input image, the prediction task is to inpaint the desired but missing output image. This allows for a standard image input and image output interface for the model, because of which the model can be trained on a large corpus of vision data irrespective of the actual vision task. So long as the input and output of the vision task can be represented as images, the model can leverage the dataset associated with the task for training. The training of SegGPT is based on MIM [13]. During training, two images from the same segmentation task are stitched into a larger image, along with their corresponding masks. MIM is then applied to the pixels of output mask images. The large masking ratio forces the model to gain contextual and semantic understanding of the image to complete the in-painting task [13, 31]. This understanding allows the trained model to understand what areas to segment from the example input-output pairs in the prompt. The model in SegGPT employs a vanilla vision transformer (ViT) [9] as the encoder consisting of stacked transformer blocks. A three-layer head comprising a linear layer, a 3 x 3 convolution layer, and another linear layer is used to map the features of each patch to the original resolution. A simple smooth \(l_{1}\) regression loss is used to train the network. ### Prompt Selection The performance of visual in-context learning models depends strongly upon the choice of prompts [32]. Inspired by the results in [32], we adopt a similarity-based method to retrieve the prompt from the training dataset. Specifically, for each test image, we compute its \(k\) nearest neighbors from the training dataset based on a distance metric such as the Frobenius norm or the structural similarity index (SSIM). These neighbors are then used to construct the prompts for the given test image. ## 3 Experiments and Results The goal of our experiments was to investigate whether visual in-context learning can perform as well as traditional methods that rely on training on large datasets. To that effect, we compared the performance of a pretrained SegGPT against a CNN-based U-Net trained to segment eczema images. ### Dataset description Our dataset consists of 528 high-resolution images collected from two primary sources: a) the public Dermnet dataset of eczema images [1], and b) an in-house consumer study where images were taken directly by the participants using their smartphone cameras. The resulting dataset has multiple sources of variations: eczema images from different body parts, varying skin tones, and varying levels of eczema along with varying illumination, background, etc., 
which increases the complexity of the segmentation task. The dataset was labelled using human annotators. The human annotators were provided with written instructions from domain experts on how to create the masks. Several examples were shown to them before they started the task. Each image was annotated by one human annotator. The resulting masks were then inspected by multiple domain experts and were found to be satisfactory. All images were resized to 448 x 448, which is the input dimension for the pretrained SegGPT model. Further, the images were normalized using z-score normalization with the ImageNet dataset statistics. The dataset was partitioned into 428 training images and 100 evaluation images. The baseline U-Net model used all 428 images for its training. The SegGPT model used a handful of \(k\) samples from the training set to construct prompts. ### Baseline for comparison As the baseline, we used a CNN-based U-Net, which is widely used for skin segmentation [22]. The architecture was a typical 5-stage U-Net comprising serial contracting and expansive paths [22]. Similar to [22], the contracting path consisted of repeated blocks of two 3x3 unpadded convolutions, followed by batch normalization, ReLU and maxpooling with window size 2x2. The number of channels was doubled after every stage. The expansive path followed the inverse operations of the contracting path: upsampling of feature maps followed by repeated up-convolutions that halve the number of channels, with ReLU non-linearity. The network was trained on 428 training images using the Adam optimizer with a small weight decay factor for additional regularization. The learning rate was set to \(1e-4\). The batch size was set to 8. The network was trained for 50 epochs with a cross-entropy loss function. To evaluate, we use the intersection over union score averaged over all test images (mIoU). As the name suggests, mIoU measures the area of overlap between the predicted segmentation mask and the ground-truth mask, and hence is a more suitable metric to measure segmentation performance than pixel-wise accuracy. ### SegGPT Network Architecture and Prompt Retrieval We used an off-the-shelf pretrained SegGPT model that employs a vanilla ViT-large as its encoder. The model was trained to optimize the smooth \(l_{1}\) regression loss. The authors in [29] pretrained the model on a large collection of benchmark segmentation datasets: ADE20K, COCO panoptic, Cityscapes, COCO semantic, LIP person, PASCAL VOC, PACO, iSAID and loveDA aerial, CHASEDB, DRIVE, HRF and STARE retinal vessel. For prompt selection, we selected the \(k\) nearest neighbors of each test image from the training dataset, varying \(k\) from 1 to 15 (Figure 1). We employed the following distance metrics: i) the commonly used Frobenius norm to compute the Euclidean difference between two matrices; ii) SSIM, which is a better indicator of similarity between images by virtue of being a perception-based metric. ### Segmentation of Eczema We evaluated the pretrained SegGPT and the baseline U-Net for segmentation of eczema. While U-Net achieved an mIoU score of 32.60 on the test images, the pretrained SegGPT with the optimal hyperparameter \(k=2\) achieved an mIoU score of 36.69 (Table 1). Not only did SegGPT outperform U-Net, it did so by using only 2 representative images from the training set and without changing any of its weights. This amounts to a 12.6% increase in performance with a 213-fold decrease in the training data requirement. 
The key is that the images in the prompt must be representative of the segmentation task. The quantitative improvement is further highlighted in the qualitative comparison in Figure 2, where SegGPT produced comparable or higher quality masks than U-Net. \begin{table} \begin{tabular}{l r} \hline \hline Method & mIoU \\ \hline U-Net & 32.60 \\ SegGPT (\(k=2\); SSIM) & 36.69 \\ \hline \hline \end{tabular} \end{table} Table 1: Segmentation Performance Figure 2: Qualitative comparison of eczema segmentation performance of U-Net and SegGPT for 4 example images. SegGPT produces masks that are closer to the ground truth. #### 3.2.2 Effect of \(k\) and distance metric To understand the dependence of the performance of SegGPT on the number of examples used to construct the prompt, we evaluated its performance for a range of values of \(k\). We show in Figure 3 that the mIoU increases as we increase \(k\) up to a certain level, and then starts to decrease. This indicates that picking a higher number of examples to construct the prompt may not result in better performance. Although this may seem counter-intuitive, we believe that the reason for this observation is as follows: As we increase \(k\), we pick examples that are further away from the test image. Given the limited size of the dataset, images that are too far from the test image may not be representative. To highlight this point, we show the \(k\) nearest neighbors for two representative test images for both distance metrics in Figure 4. Since the SegGPT model is not retrained/conditioned on the prompt, it is not capable of distinguishing examples that may not be representative of the segmentation task, and instead places equal emphasis on all the examples in the prompt. As one might expect, the performance obtained when SSIM is used as a distance metric is slightly better than when the Frobenius norm is used (Figure 3). Figure 4: Nearest neighbor images for two example test images. As \(k\) increases, the retrieved neighbor is further away from the test image. Figure 3: Dependence of the segmentation performance on the number of neighbors \(k\) and the distance metric. The performance increases up until a low value of \(k\) and then decreases. Using SSIM as the distance metric results in marginally better performance than using the Frobenius norm. ## 4 Discussion In this work, we present a visual in-context learning approach for segmentation of eczema from patient-acquired images. We showed that with just two examples of the segmentation task in the prompt, and the right prompt-selection strategy, the pretrained SegGPT can perform better than the state-of-the-art CNN-based U-Net despite the fact that the latter sees 428 examples in its training. Our result adds to the mounting evidence that learning task-agnostic features on large diverse datasets with high-capacity models eliminates the need for performing any training or finetuning for downstream tasks [17, 20, 19, 16]. A diverse dataset for medical and skin imaging that is representative of the underlying demographics is hard to get and even harder to annotate [26]. The current methods to deal with the limited data use either self-supervision, which relies on having enough unlabelled data, or finetuning, which again requires sufficient labelled data [27]. On the other hand, visual in-context learning can leverage a handful of representative labeled examples to perform the task with competitive performance, drastically reducing the time and effort for data collection. 
While our current method uses the entire training dataset to search for representative examples, in a consumer-facing application, patients will need to provide just 1 or 2 annotated images for the model to make accurate predictions. In addition, the approach holds significance for application areas where it is impractical to wait for enough labelled data to arrive before accurate predictions can be made. Such application areas include consumer-facing applications where patients can self-monitor the trajectory of the improvement of their skin condition when following a treatment protocol [14, 10, 30]. In such cases, asking patients to wait until we have enough data from them may not be prudent, and in-context learning can play a key role. Given the fact that the performance of in-context learning depends strongly on the choice of prompts, more systematic ways of prompt retrieval can be investigated. The current approach relies on pixel-level distance between the two images. However, measuring similarity between images at feature-level may result in more effective prompts. The work in [32] presents two such approaches relying on supervised and unsupervised learning. Additionally, although the focus of this work is to obtain competitive performance without any training or finetuning, if additional performance improvement is desired, the ViT can further be finetuned on domain-specific data. A key issue in skin imaging is the under-representation of certain demographics in the training data, as a result of which methods may be biased towards heavily represented groups. In-context learning, with its ability to generalize from just a few data points, has the potential to tackle this issue. More experiments are needed on benchmark evaluation datasets that contain data from under-represented groups to confirm the hypothesis. Overall, our work highlights the importance of the increasingly-popular in-context learning framework, and the possible directional shift from the traditional train-test-finetune paradigm.
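As a pointer for the feature-level similarity suggested in the discussion above, one possible (untested) variant of the retrieval distance swaps the pixel-level comparison for embeddings from a pretrained backbone. The sketch below is ours, not part of the paper; it assumes torchvision is available, and ResNet-18 is chosen only as a convenient stand-in for any feature extractor.

```python
# Possible variant (ours, not from the paper): feature-level similarity for
# prompt retrieval using embeddings from a pretrained ResNet-18 backbone.
import torch
import torchvision.models as models

weights = models.ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()   # keep the 512-d pooled feature vector
backbone.eval()
preprocess = weights.transforms()   # resize / normalize as expected by the weights

@torch.no_grad()
def embed(pil_image):
    x = preprocess(pil_image).unsqueeze(0)              # 1 x 3 x H x W
    return torch.nn.functional.normalize(backbone(x), dim=-1).squeeze(0)

def feature_distance(pil_image_a, pil_image_b):
    # Cosine distance between embeddings; lower means more similar.
    return 1.0 - float(embed(pil_image_a) @ embed(pil_image_b))
```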
2309.10227
Learning Dynamic MRI Reconstruction with Convolutional Network Assisted Reconstruction Swin Transformer
Dynamic magnetic resonance imaging (DMRI) is an effective imaging tool for diagnosis tasks that require motion tracking of a certain anatomy. To speed up DMRI acquisition, k-space measurements are commonly undersampled along spatial or spatial-temporal domains. The difficulty of recovering useful information increases with increasing undersampling ratios. Compressed sensing was invented for this purpose and remained the most popular method until deep learning (DL) based DMRI reconstruction methods emerged in the past decade. Nevertheless, existing DL networks are still limited in long-range sequential dependency understanding and computational efficiency and are not fully automated. Considering the success of the Transformer's positional embedding and "swin window" self-attention mechanism in the vision community, especially in natural video understanding, we hereby propose a novel architecture named Reconstruction Swin Transformer (RST) for 4D MRI. RST inherits the backbone design of the Video Swin Transformer, with a novel reconstruction head introduced to restore pixel-wise intensity. A convolution network called SADXNet is used for rapid initialization of 2D MR frames before RST learning to effectively reduce the model complexity, GPU hardware demand, and training time. Experimental results on the cardiac 4D MR dataset further substantiate the superiority of RST, achieving the lowest RMSE of 0.0286 +/- 0.0199 and 1 - SSIM of 0.0872 +/- 0.0783 on 9 times accelerated validation sequences.
Di Xu, Hengjie Liu, Dan Ruan, Ke Sheng
2023-09-19T00:42:45Z
http://arxiv.org/abs/2309.10227v1
Learning Dynamic MRI Reconstruction with Convolutional Network Assisted Reconstruction Swin Transformer ###### Abstract Dynamic magnetic resonance imaging (DMRI) is an effective imaging tool for diagnosis tasks that require motion tracking of a certain anatomy. To speed up DMRI acquisition, k-space measurements are commonly under-sampled along spatial or spatial-temporal domains. The difficulty of recovering useful information increases with increasing under-sampling ratios. Compressed sensing was invented for this purpose and remained the most popular method until deep learning (DL) based DMRI reconstruction methods emerged in the past decade. Nevertheless, existing DL networks are still limited in long-range sequential dependency understanding and computational efficiency and are not fully automated. Considering the success of the Transformer's positional embedding and "swin window" self-attention mechanism in the vision community, especially in natural video understanding, we hereby propose a novel architecture named Reconstruction Swin Transformer (RST) for 4D MRI. RST inherits the backbone design of the Video Swin Transformer, with a novel reconstruction head introduced to restore pixel-wise intensity. A convolution network called SADXNet is used for rapid initialization of 2D MR frames before RST learning to effectively reduce the model complexity, GPU hardware demand, and training time. Experimental results on the cardiac 4D MR dataset further substantiate the superiority of RST, achieving the lowest RMSE of 0.0286\(\pm\)0.0199 and 1-SSIM of 0.0872\(\pm\)0.0783 on 9 times accelerated (9x) validation sequences. Transformer, Dynamic MRI, Reconstruction, Deep Learning. ## 1 Introduction Tracking dynamic processes using time-resolved magnetic resonance imaging (MRI) can reveal anatomical and physiological anomalies that evade detection based on static images [1]. Theoretically, a dynamic target can be acquired frame-wise under the assumption that the motion in each frame at the time of acquisition is within the chosen pixel size. However, this assumption requires extremely high spatial-temporal (ST) resolutions and is thus often violated due to slow data acquisition speed [2]. To speed up the process of dynamic MRI (DMRI) acquisition, a wide range of approaches have been proposed. Advanced fast imaging sequences [3] and modern gradient systems allow efficient data sampling in k-space. Supported by massive coil arrays [4], parallel imaging [5, 6] has increased scanning speed by a factor of 2-4 in clinical practice. Following that, further acceleration has been achieved through reducing
2309.00034
Ultraluminous X-ray sources are beamed
We show that magnetar models for ULX behaviour have serious internal inconsistencies. The magnetic fields required to increase the limiting luminosity for radiation pressure above the observed (assumed isotropic) luminosities are completely incompatible with the spin-up rates observed for pulsing ULXs. We note that at least one normal Be-star + neutron star system, with a standard (non-magnetar) field, is observed to become a ULX during a large outburst, and return to its previous Be-star binary state afterwards. We note further that recent polarimetric observations of the well-studied binary Cyg X-3 reveal that it produces strong emission directed away from the observer, in line with theoretical suggestions of its luminosity from evolutionary arguments. We conclude that the most likely explanation for ULX behaviour involves radiation beaming by accretion disc winds. A large fraction of X-ray binaries must pass through a ULX state in the course of their evolution.
Jean-Pierre Lasota, Andrew King
2023-08-31T17:56:01Z
http://arxiv.org/abs/2309.00034v2
# Ultraluminous X-ray sources are beamed ###### Abstract We show that magnetar models for ultraluminous X-ray sources (ULXs) have serious internal inconsistencies. The magnetic fields required to increase the limiting luminosity for radiation pressure above the observed (assumed isotropic) luminosities are completely incompatible with the spin-up rates observed for pulsing ULXs. We note that at least one normal Be-star + neutron star system, with a standard (non-magnetar) field, is observed to become a ULX during a large outburst and return to its previous Be-star binary state afterwards. We note further that recent polarimetric observations of the well-studied binary Cyg X-3 reveal that it produces strong emission directed away from the observer, in line with theoretical predictions of its total accretion luminosity from evolutionary arguments. We conclude that the most likely explanation for ULX behaviour involves radiation beaming by accretion disc winds. A large fraction of X-ray binaries must pass through a ULX state in the course of their evolution. keywords: accretion, accretion discs - black hole physics - binaries: close - pulsars: general - X-rays: binaries. ## 1 Introduction Ultraluminous X-ray sources (ULXs) are defined by the two conditions (i) apparent luminosities (assumed isotropic) \(L_{X}>10^{39}\,\rm erg\,s^{-1}\), and (ii) locations away from galaxy centres. These restrictions select a group of objects not straightforwardly explained either as accreting stellar-mass binaries, or as more massive accretors. Condition (i) requires \(L_{X}\) to exceed the Eddington luminosity for a \(10\rm M_{\odot}\) black hole, i.e. \[L_{\rm Edd}=1.3\times 10^{38}m\,\rm erg\,s^{-1}, \tag{1}\] with \(m=M/\rm M_{\odot}=10\), which implies a corresponding Eddington accretion rate \[\dot{M}_{\rm Edd}\equiv\frac{L_{\rm Edd}}{\eta c^{2}} = 1.4\times 10^{18}\eta_{0.1}^{-1}m\,\rm g\,s^{-1}\] \[= 2.2\times 10^{-8}\,\eta_{0.1}^{-1}m\,\rm M_{\odot}yr^{-1}, \tag{3}\] where \(\eta=0.1\eta_{0.1}\) is the radiative efficiency of accretion. Condition (ii) rules out the central massive black holes in galaxies. ULXs were identified as a separate class of objects at the end of the previous millenium (Colbert & Mushotzky, 1999). By now, only two models of ULX behaviour remain under serious consideration. The older of these two current models for ULX behaviour is disc-wind beaming (King et al., 2001). This asserts that the assumption of isotropic emission made in computing \(L_{X}\) from observations is not valid for binary systems transferring mass at rates \(\dot{m}\dot{M}_{\rm Edd}\), with \(\dot{m}\gg 1\), because in this case radiation pressure expels most of the transferred mass in quasispherical winds which are opaque except along narrow channels along the accretion disc axis (Shakura & Sunyaev, 1973). This means that most of the emitted accretion luminosity \(\sim L_{\rm Edd}\) is beamed along these channels. ULXs are sources where the observer lies in one of the beams: the effect is that the apparent (assumed isotropic) luminosity inferred is \[L_{\rm sph}\sim\frac{1}{b}L_{\rm Edd}\gg L_{\rm Edd}, \tag{4}\] where the total solid angle of the two channnels is \(4\pi b\). The more recent model for ULX behaviour, which we shall refer to as the'magnetar model', was inspired by the discovery by Bachetti et al. (2014) that the source ULX-2 in the galaxy M82 is pulsed. This implies that the accretor is a magnetized neutron star1. 
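As a quick numerical check of Eqs. (1)-(3), the short sketch below evaluates the Eddington luminosity and accretion rate; the unit constants are assumptions made here for illustration, not values quoted in the paper.

```python
M_SUN_G = 1.989e33      # solar mass [g]
YEAR_S  = 3.156e7       # year [s]
C_CM_S  = 2.998e10      # speed of light [cm/s]

def l_edd(m):
    """Eddington luminosity, Eq. (1): 1.3e38 * m erg/s for mass m in solar units."""
    return 1.3e38 * m

def mdot_edd(m, eta=0.1):
    """Eddington accretion rate, Eq. (3): L_Edd / (eta c^2), in g/s."""
    return l_edd(m) / (eta * C_CM_S**2)

m = 10.0                                                  # a 10 Msun black hole
print(f"{l_edd(m):.2e} erg/s")                            # ~1.3e39 erg/s
print(f"{mdot_edd(m):.2e} g/s")                           # ~1.4e19 g/s, i.e. 1.4e18 * m
print(f"{mdot_edd(m) * YEAR_S / M_SUN_G:.2e} Msun/yr")    # ~2.3e-7, i.e. ~2.2e-8 * m
```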
The magnetar model asserts that unusually strong surface fields of neutron-star accretors reduce the electron scattering opacity defining the Eddington luminosity, making \(L_{\rm Edd}\) numerically larger: the high luminosities of ULXs are actually _sub_-critical in this picture. For magnetar-strength fields (\(\gtrsim 10^{14}\) G) the modified \(L_{\rm Edd}\) exceeds the assumed-isotropic luminosity \(L_{X}\). Footnote 1: This discovery had the effect of making an early model for ULXs invoking accretion on to more massive (‘intermediate–mass’, or ‘IMBH’) black holes relatively unattractive. We note that the two models imply fundamentally different significances for the ULX phenomenon. Disc-wind beam ing asserts that the ULX state is one that a large fraction of otherwise standard X-ray binaries pass through during a particular phase of their evolution, whereas the magnetar hypothesis reduces the ULX class to a relatively small subset of these systems defined by very strong magnetic fields. The aim of this paper is to evaluate recent evidence allowing a clear decision between beaming and strong magnetic fields as the basic cause of ULX behaviour. ## 2 Beaming The suggestion by King et al. (2001) of beaming as the explanation for the high apparent luminosities of ULXs was motivated by study of the X-ray binary Cyg X-2 (King and Ritter, 1999). The neutron star in this system has evidently survived the companion star attempting to transfer \(\sim 3{\rm M}_{\odot}\) to it at a highly super-Eddington rate, without retaining more than a small fraction of it. This corresponds closely to the picture of how a disc deals with a super-Eddington mass rate suggested by Shakura and Sunyaev (1973). A radiation-pressure powered wind from the disc surface keeps the disc accretion rate at the local Eddington limit corresponding to each disc radius. This raises the true total emitted accretion luminosity only by a logarithmic factor, to \[L_{\rm acc}\simeq L_{\rm Edd}[1+\ln\dot{m}], \tag{5}\] so that even a huge (by X-ray binary standards) accretion rate of \(\sim 10^{4}M_{\rm Edd}\) would give a total accretion luminosity of only \(10L_{\rm Edd}\), i.e \(\sim 2\times 10^{39}\,{\rm erg\,s^{-1}}\) for a \(1.4{\rm M}_{\odot}\) neutron star. But importantly the emission is now highly anisotropic: the outflowing wind is densest near the radius at which the full Eddington luminosity is attained, and has a large optical depth both along the disc plane and in the vertical direction. Thus most of the disc radiation emitted within the wind region diffuses by scattering until it is able to escape through the central open funnels parallel to the disc axis. Since the funnel is tall and thin and has scattering walls, the escaping radiation is beamed by a factor \(b\ll 1\), so that the apparent luminosity deduced by an observer in the beam, who assumes the luminosity to be isotropic, is \[L_{\rm app}=\frac{1}{b}L_{\rm Edd}[1+\ln\dot{m}]\gg L_{\rm Edd}. \tag{6}\] King (2009) showed that for \(\dot{m}\gg 1\), the observed correlation \(L_{bb}\propto T_{bb}^{-4}\) between ULX soft X-ray blackbody luminosity and temperature implies that \[b\simeq\frac{73}{\dot{m}^{2}}. \tag{7}\] This agrees with deductions from simple accretion disc theory, as conditions far from the disc centre are set by the mass supply rate, while those near the disc centre all converge to what is set by a near-Eddington central accretion rate. 
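A small numerical sketch of Eqs. (6)-(7) is given below (an illustration, not from the paper); the 1.4 Msun Eddington luminosity and the capping of \(b\) at 1 are assumptions made here for simplicity.

```python
import numpy as np

L_EDD_NS = 1.3e38 * 1.4   # Eddington luminosity of a 1.4 Msun neutron star [erg/s]

def beaming_factor(mdot):
    """Eq. (7): b ~ 73 / mdot^2; capped at 1, since Eq. (7) only applies for mdot >~ sqrt(73)."""
    return min(1.0, 73.0 / mdot**2)

def apparent_luminosity(mdot):
    """Eq. (6): L_app = L_Edd (1 + ln mdot) / b."""
    return L_EDD_NS * (1.0 + np.log(mdot)) / beaming_factor(mdot)

for mdot in (10, 30, 69, 100):
    print(mdot, f"b = {beaming_factor(mdot):.3f}",
          f"L_app = {apparent_luminosity(mdot):.2e} erg/s")
# mdot ~ 69 gives b ~ 1/65, matching the Cyg X-3 value quoted in Section 6.
```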
King and Lasota (2020) noted that when the accretor is a magnetized neutron star its magnetic axis is not necessarily aligned with the disc (i.e. funnel) axis, and it is very common for the neutron star spin to be misaligned from the binary orbit defining the accretion disc plane. When these three axes are not aligned the system appears as a pulsing ULX, or PULX. For a neutron-star spin axis strongly misaligned from the central disc axis at the spherization radius, large polar caps produce the sinusoidal pulse light curves observed in PULXs since a significant part of the pulsed emission can escape without scattering, giving a large pulse fraction. Using this disc-wind-beaming model King et al. (2017); King and Lasota (2019, 2020) (see also King et al., 2023) were able to obtain self-consistent sets of parameters for the 10 known PULXs, finding magnetic fields in the range of \(\sim 2\times 10^{10}-10^{13}\)G, mass-transfer rates \(\dot{m}\) between \(\sim 10\) and \(\sim 100\), and beaming factors from \(\sim 0.01\) to \(\sim 0.5\). ## 3 Magnetar Models But as we noted above, soon after the discovery of the first PULX a different explanation of the apparent super-Eddington luminosities observed in these X-ray sources became possible. Dall'Osso et al. (2015); Eksi et al. (2015) assumed that the PULX magnetic fields had magnetar (\(\gtrsim 10^{14}\) G) fieldstrengths. These substantially reduce the scattering cross-sections and so increase the critical luminosity at which the radiation pressure force equals the pull of gravity. In this scenario PULX luminosities are above the usual Eddington luminosity, but actually sub-critical, so that accretion proceeds in the same way as in other X-ray pulsars. Indeed, very strong magnetic fields lower the Thomson and Compton scattering opacity (Canuto et al., 1971; Herold, 1979) for photons with energies \(E_{\gamma}\) lower than the cyclotron frequency \(E_{\rm cyc}\): \[\frac{\sigma_{B1}}{\sigma_{T}}\approx \sin^{2}\theta+\left(\frac{E_{\gamma}}{E_{\rm cyc}}\right)^{2} \cos^{2}\theta \tag{8}\] \[\frac{\sigma_{B2}}{\sigma_{T}}\approx \left(\frac{E_{\gamma}}{E_{\rm cyc}}\right)^{2},\,\rm{for}\,\frac{E_{ \gamma}}{E_{\rm cyc}}\ll 1, \tag{9}\] where indices 1 and 2 correspond to the two linear photon polarizations, \(\sigma_{T}\) is the Thomson cross-section and \(\theta\) is the angle between the directions of the magnetic field and light propagation. The opacities depend on the photon polarization, but as shown by Paczynski (1992) their Rosseland means differ at most by a factor 2, depending on the angle between the direction of the photon propagation and the field lines. Therefore in the presence of a very strong magnetic field, the critical luminosity corresponding to the equality of the radiation pressure and gravitational forces can be written as \[L_{\rm crit}\approx 2B_{12}^{4/3}\left(\frac{g}{2\times 10^{14}{\rm cm\,s^{-2}}} \right)^{-1/3}L_{\rm Edd}, \tag{10}\] where \(g=GM/R^{2}\)(Paczynski, 1992). Thus in this picture the apparent (assumed isotropic) PULX luminosities \(\gtrsim 10^{40}\,{\rm erg\,s^{-1}}\) must be emitted by a plasma permeated by magnetar-strength fields \(>10^{14}\)G. Although at first sight attractive, the idea of magnetars in PULXs faces the difficulty that these very strongly magnetized neutron stars have never been observed in binary systems (see King and Lasota, 2019; King et al., 2023 and references therein). 
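To see the magnitudes implied by Eq. (10), here is a short numerical sketch (the 1.4 Msun mass and 10 km radius of the neutron star are parameter choices made for this illustration):

```python
G_CGS = 6.674e-8          # gravitational constant [cgs]
M_SUN = 1.989e33          # solar mass [g]

def l_crit_over_l_edd(B_gauss, M=1.4 * M_SUN, R=1.0e6):
    """Eq. (10): L_crit / L_Edd ~ 2 B_12^(4/3) (g / 2e14 cm s^-2)^(-1/3)."""
    g = G_CGS * M / R**2          # surface gravity, ~1.9e14 cm/s^2 here
    return 2.0 * (B_gauss / 1e12) ** (4.0 / 3.0) * (g / 2e14) ** (-1.0 / 3.0)

for B in (1e12, 1e13, 1e14, 1e15):
    print(f"B = {B:.0e} G  ->  L_crit / L_Edd ~ {l_crit_over_l_edd(B):.0f}")
# Only for magnetar-strength fields (>~1e14 G) does L_crit reach the apparent
# PULX luminosities of >~1e40 erg/s quoted in the text.
```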
Accepting it requires belief in a cosmic conspiracy making them detectable in binaries only when these have high mass transfer rates. We shall see in the next Section that this idea disagrees with observations in any case. We now know that out of the \(\sim 1800\) observed ULXs (see King et al., 2023 and references therein) at least 10 contain magnetized neutron stars, detected through their periodic pulses (PULXs). Four of them are transient: they are members of Be-X binary systems, which become X-ray sources when the eccentric orbit of the compact companion (in most if not all cases a neutron star) of the massive Be star crosses its circumstellar disc. In most cases this disc-crossing produces sub-Eddington-luminosity outbursts (called "Type I"), but from time to time, most probably because of von Zeipel-Kozai-Lidov oscillations of the circumstellar disc (Martin et al., 2014), it results in a giant (super-Eddington; "Type II") outburst. Swift/XRT observations of galaxies NGC 4945, NGC 7793 and M81 suggest that although persistent ULXs dominate the high end of galaxy luminosity functions, the number of systems emitting ULX luminosities are probably dominated by transient sources. These transients are most probably not Be-X systems (Brightman et al., 2023). ## 4 Magnetic Fields in ULXs cannot have magnetar strengths There is a simple physical argument that rules out the presence of magnetars in observed PULXs. The argument is based on the value of their spin-up rate \(\dot{\nu}\) (\(\nu\) is the pulsar's spin frequency). After the discovery of the first PULX M82 ULX-2, Kluzniak & Lasota (2015) pointed out that it differs from other X-ray pulsars (XRPs) not only through its higher luminosity but also in its extremely high spin-up rate. It is immediately obvious that both 'normal' X-ray systems and PULXs lie on exactly the same strong correlation between spin-up rate \(\dot{\nu}\) and X-ray luminosity \(L_{X}\) - see Fig 1. This correlation extends more than seven orders of magnitude in luminosity, and arises because the spin-up results from the accretion torque on the neutron star: \[\dot{\nu}=\frac{\dot{J}(R_{M})}{2\pi I}=\frac{\dot{M}(GMR_{M})^{1/2}}{2\pi I} \propto\dot{M}^{6/7}\mu^{2/7}, \tag{11}\] where \(R_{\rm M}\propto\dot{M}^{-2/7}\mu^{4/7}\) (from Eq. 12) is the magnetospheric radius, \(\mu=BR^{3}\) the neutron star's magnetic moment (with \(B\) the field and \(R\) the neutron-star radius) and \(I\) is the neutron star's moment of inertia. The magnetospheric radius is defined by the equation (Frank et al., 2002) \[R_{\rm M}=2.6\times 10^{8}q\,\left(\frac{\dot{M}}{10^{17}\,{\rm g}\,{\rm s}^{- 1}}\right)^{-2/7}\left(\frac{\dot{M}}{{\rm M}_{\odot}}\right)^{-3/7}\mu_{30}^{ 4/7}\,{\rm cm}, \tag{12}\] where the factor \(q\sim 1\) takes into account the geometry of the accretion flow at the magnetosphere and \(\mu=10^{30}\mu_{30}{\rm Gcm}^{3}\). Assuming \(M\approx 1{\rm M}_{\odot}\), \(q\approx 1\), \(I=10^{45}{\rm g}\,{\rm cm}^{2}\) and using Eq. (12), Eq. (11) gives \[\dot{M}\approx 5.7\times 10^{18}\dot{\nu}_{-10}^{7/6}\mu_{30}^{-1/3}\,{\rm g }\,{\rm s}^{-1} \tag{13}\] as the accretion rate required to spin up a magnetised neutron star at the rate \(\dot{\nu}\). Now we can calculate the luminosity produced by this accretion rate. Supercritical luminosities are not proportional to the accretion rate (see Eq. 5). But very strong magnetic fields make the critical luminosity much larger than the Eddington value, i.e. \(L_{\rm crit}\gg L_{\rm Edd}\) (see Eq. 10). 
So for \(L_{\rm crit}>L_{X}\gtrsim L_{\rm Edd}\) the standard formula \(L_{X}=0.1\dot{M}c^{2}\) applies, even though \(L_{X}\) exceeds the usual Eddington value. Then for magnetar PULXs (\(\mu\gtrsim 10^{31}{\rm Gcm}^{3}\)) we get from Eq. (13) the luminosity \[L_{X}\approx 2\times 10^{38}\dot{\nu}_{-10}^{7/6}\mu_{31}^{-1/3}\,{\rm erg }\,{\rm s}^{-1}\approx L_{\rm Edd}. \tag{14}\] But in deriving this equation we assumed \(L\gtrsim L_{\rm crit}\gg L_{\rm Edd}\), which would require a much smaller field (i.e. \(\mu_{31}\ll 1\)).2 This contradiction shows that magnetars cannot be present in systems with both \(L_{X}>10^{39}\,{\rm erg}\,{\rm s}^{-1}\) and \(\dot{\nu}\gtrsim 10^{-10}{\rm s}^{-2}\). Footnote 2: Eq. (14) explains why sub–Eddington–luminosity XRPs (\(\mu_{31}\sim 0.001-1\)) have \(\dot{\nu}<10^{-10}{\rm s}^{-2}\). In other words, PULXs cannot contain magnetars. This in turn means that the super-Eddington luminosity observed in PULXs is not intrinsic, and must presumably be anisotropic, i.e. beamed. Importantly, since \(L_{\rm crit}\sim B^{4/3}\), in a dipole field the critical luminosity decreases radially outwards as \(R^{-4}\). So at radius \(\sim 100\) stellar radii all of the cross-section suppression is lost, well inside the magnetosphere. Then a hyper-Eddington luminosity emitted near the neutron-star surface would blow away all of the gas in the upper part of the accretion column, thus cutting off the mass supply supposedly producing the posited hyper-Eddington emission. This rules out the interpretation of the CRSF observed in the magnetized, non-pulsing ULX-8 in M51 as an effect of protons orbiting a \(9\times 10^{14}{\rm G}\) magnetic field, as envisaged by Brightman et al. (2018), and provides another strong argument against the presence of magnetars in ULXs3. Footnote 3: We are grateful to the anonymous referee of the present paper for suggesting this line of argument ## 5 Magnetic Fields in PULXs We conclude from the last Section that neutron stars in PULXs have magnetic fields spanning the same range as the usual XRPs - from \(10^{8}\) G to several \(10^{13}\) G (Revnivtsev & Mereghetti, 2018). They are evidently normal XRPs observed in a special phase of the evolution of their parent binary systems, as is implicit in the original suggestion by King et al. Figure 1: The \(L_{29}\) –\(\dot{\nu}_{-10}\) diagram for XRPs and PULXs. Red dots: the ten PULXs with known spin-up rates. Blue diamonds: selected (for comparison) sub–Eddington–luminosity X-ray pulsars (For details see King et al., 2023) (2001). We can see examples of this in real time in observations of Be-star PULXs. These are normal XRPs for most of their lifetimes, and become PULXs only during their occasional giant outbursts. This allows one to follow the transformation of an XRB into a PULX and its return to 'normal' again. The best studied case is that of the binary SMC X-3. It shows that as the system enters the ULX phase, the neutron-star spin evolution becomes dominated by the accretion torque, as assumed in Eq. (11). Between giant outbursts this sources is an XRP which spins down. In Fig. 2 (Townsend et al., 2017) this corresponds to the time right up to the beginning of a giant outburst, on MJD 57599; then a significant spin-up is observed. From SMC X-3's long-term spin history Townsend et al. 
(2017) deduce that the angular momentum transferred by accretion during the 5-month giant outburst was larger than the total angular momentum lost by magnetic braking over the previous 18 years of the spin-down phase. The long-term spin-down rate of SMC X-3 is about 500 times lower than the rate of spin-up observed during the giant outburst, showing that the torques acting during this outburst are far larger than during the out-of-outburst phases. During weaker (Type I) outbursts, the spin period continues to increase, but during the giant outburst the spin-up rate is tightly correlated with the X-ray luminosity through the super-Eddington phase (Weng et al., 2017), in agreement with Eq. (11). This means that in PULXs the spin-up rate is strongly correlated with the X-ray luminosity both in time and over the population. There are no magnetars in PULXs. ## 6 Cyg X-3: The Second Hidden ULX in the Galaxy Quite recently, Veledina et al. (2023) performed X-ray polarimetry indicating 'unambiguously' that the Wolf-Rayet X-ray binary Cyg X-3, consisting of a helium star transferring mass to a black hole on its thermal timescale, is a ULX with a beaming factor4\(b\approx 1/65\), but seen from the side. This system is assumed to contain a black hole. Earlier inferred examples of'sideways' ULXs notably include the extreme source SS433 (cf Begelman et al., 2006; King & Muldrew, 2016). Footnote 4: We use here the symbol \(b\) as defined in King (2009); by contrast Veleedina et al. 2023 use \(b\) to denote his \(1/b\). From Eq. (7) we find that this requires an Eddington factor \(\dot{m}\simeq 69\). This is consistent with the estimates of the mass transfer rate found by Lommen et al. (2005) on evolutionary grounds. Together with the similar estimates for SS433 (Begelman et al., 2006; King & Muldrew, 2016) this appears to be explicit confirmation that compact binaries with mass transfer rates exceeding the Eddington rate produce beamed emission, as first suggested by King et al. (2001). Moreover, the estimate (7) appears to be in reasonable agreement with observation. The good match between the luminosities of some ULX nebulae and the luminosity of their ULX irradiators has been used as an argument against strong geometrical beaming in these ultraluminous sources since in these cases the nebula would see an isotropic emission. However, the sources in question are spectrally soft and it is not surprising that the irradiating luminosity inferred from photoionisation modelling is consistent with the observed luminosity (see King et al. 2023 for a detailed discussion). ## 7 Conclusion We have shown that magnetar models for ULX behaviour have serious internal inconsistencies. In particular the field-strengths required to increase the radiation pressure luminosity limit above the observed (assumed isotropic) luminosities are completely incompatible with the spinup rates observed for PULXs. In addition we note that at least one normal Be-star system, with a standard (non-magnetar) field, is observed to become a ULX during a large outburst. In contrast, recent polarimetric observations of the well-studied binary Cyg X-3 reveal that it produces strong emission beamed away from the observer. We conclude that ULXs are beamed. ## 8 Data Availability No new data were generated or analysed in support of this research.
2301.07179
Modeling Vascular Branching Alterations in Polycystic Kidney Disease
The analysis of biological networks encompasses a wide variety of fields from genomic research of protein-protein interaction networks, to the physiological study of biologically optimized tree-like vascular networks. It is certain that different biological networks have different optimization criteria and we are interested in those networks optimized for fluid transport within the circulatory system. Many theories currently exist. For instance, distributive vascular geometry data is typically consistent with a theoretical model that requires simultaneous minimization of both the power loss of laminar flow and a cost function proportional to the total volume of material needed to maintain the system (Murray's law). However, how this optimized system breaks down (or is altered) due to disease has yet to be characterized in detail in terms of branching geometry and geometric interrelationships. This is important for understanding how vasculature remodels under changes of functional demands. For instance, in polycystic kidney disease (PKD), drastic cyst development may lead to a significant alteration of the vascular geometry (or vascular changes may be a preceding event). Understanding these changes could lead to a better understanding of early disease as well as development and characterization of treatment interventions. We have developed an optimal transport network model which simulates distributive vascular systems in health as well as disease in order to better understand changes that may occur due to PKD. We found that reduced perfusion territories, dilated distributive vasculature, and vessel rarefaction are all consequences of cyst development derived from this theoretical model and are a direct result of the increased heterogeneity of local renal tissue perfusion demands.
Timothy L. Kline
2022-12-20T14:12:56Z
http://arxiv.org/abs/2301.07179v1
# Modeling Vascular Branching Alterations in Polycystic Kidney Disease ###### Abstract The analysis of biological networks encompasses a wide variety of fields from genomic research of protein-protein interaction networks, to the physiological study of biologically optimized tree-like vascular networks. It is certain that different biological networks have different optimization criteria and we are interested in those networks optimized for fluid transport within the circulatory system. Many theories currently exist. For instance, distributive vascular geometry data is typically consistent with a theoretical model that requires simultaneous minimization of both the power loss of laminar flow and a cost function proportional to the total volume of material needed to maintain the system (Murray's law). However, how this optimized system breaks down (or is altered) due to disease has yet to be characterized in detail in terms of branching geometry and geometric interrelationships. This is important for understanding how vasculature remodels under changes of functional demands. For instance, in polycystic kidney disease (PKD), drastic cyst development may lead to a significant alteration of the vascular geometry (or vascular changes may be a preceding event). Understanding these changes could lead to a better understanding of early disease as well as development and characterization of treatment interventions. We have developed an optimal transport network model which simulates distributive vascular systems in health as well as disease in order to better understand changes that may occur due to PKD. We found that reduced perfusion territories, dilated distributive vasculature, and vessel rarefaction are all consequences of cyst development derived from this theoretical model and are a direct result of the increased heterogeneity of local renal tissue perfusion demands. Graph theory, the study of network structures in terms of vertices and edges, is an applicable approach in disparate research studies; from the study of protein-protein interaction networks [1; 2], to the study of vascular networks [3; 4]. For instance, the branching geometry of vascular trees is investigated by analyzing the geometric properties of different vascular structures [5; 6]. In this sense, the vascular branching geometry is modeled as a collection of graph edges (interbranch segments) and nodes (bifurcation points), and properties such as interbranch segment length and diameter are measured. The experimental data can then be compared to theoretical models of optimal transport networks [7; 8] such as that derived by Murray [9]. Murray's law is based on the concept that the geometry of the fluid transport system (i.e. vasculature) is consistent with simultaneous minimization of both the power loss of laminar flow and of a cost function proportional to the total volume of material needed to maintain the system (i.e. lumenal contents) - factors that have opposing geometric consequences. Knowledge from vasculature has been used in the design of optimized microfluidic channels [10] and synthetic vessel constructs [11]. Utilizing an optimal transport network modeling approach (grounded in graph theory), random fluctuations have been attributed to the loop-like venation in leaves [12; 13]. 
Here, individual nodes are modeled to behave as current sources (an analogy to electrical transport networks), the leaf's root is modeled as a current drain (i.e., the source of nutrients or pathway for waste removal), and the connections (edges) between the nodes carry a variable current. The conductance (which is related to the diameter of an edge) is modeled to minimize the network's dissipation, while constrained to do so utilizing a limited amount of material. The occurrence of loop-like networks in the case of varying current sources suggests that random attacks (such as bugs feeding on the leaf) are partially overcome by the optimal transport network being composed of multiple pathways to a single perfusion region. Thus, in certain biological systems, it is suggested that loops may form which thereby serve to increase the robustness of the transport network. Here we develop an optimal transport network model in order to study distributive vascular systems subjected to varying degrees of heterogeneous demands in order to simulate cystic development in polycystic kidney disease (PKD). We hypothesize that this model will help further our understanding of disease-based vascular changes which may precede other structural and functional changes and thereby serve as early disease biomarkers. We consider a formulation of the optimal transport network problem which involves the minimization of the total dissipation rate, under the constraint that a limited amount of material is available [14]. In graph theory, a network can be modeled as a set of vertices or nodes \(k\), and a set of edges or connections between the nodes, \((k,l)\). Our analogous system is an electrical transport network where each node acts as either a current source or drain (exact arrangement discussed later), and a variable current \(I_{kl}\) flows through the edges. Each edge has an associated conductance \(\kappa_{kl}\), and a length \(L_{kl}\). The dissipation rate \(J\) is related to the currents and conductances by \[J=\sum_{(k,l)}\frac{I_{kl}^{2}}{\kappa_{kl}L_{kl}}. \tag{1}\] The dissipation rate is then minimized through the currents and conductances with the local constraint that \(i_{k}=\sum_{l}I_{kl}\) (Kirchhoff's current law), and a global constraint that the sum of the conductances raised to a given power is kept constant, as in \[K^{\gamma}=\sum_{(k,l)}\kappa_{kl}^{\gamma}. \tag{2}\] \(\kappa_{kl}^{\gamma}\) can be recognized as the building cost of a channel (i.e., a vessel branch segment) and the variable \(\gamma\) depends on the nature of the network. As discussed by Corson [12], for the case of electrical wires, \(\gamma=1\), and for a model based on Murray's law, \(\gamma=0.5\). In the case of Murray's law, the total luminal contents (i.e. surface area of blood vessels) is assumed fixed, thus \(\sum_{(k,l)}V=K^{0.5}\). Also, \(\gamma>1\) is of little relevance since in this case it is more economical to build several parallel links having a small conductance rather than a large one of equivalent capacity. Note that this is necessary in the case that distinct elements need to be carried between certain nodes (e.g., what may occur if planning a city street layout). For \(\gamma<0\) the model is degenerate, and thus \(\gamma\) is said to lie between 0 and 1, though network properties regarding the network's phase transition when \(\gamma\) crosses 1 have been explored [14].
Using a Lagrange multiplier \(\lambda\), we define the minimization problem as \[\Xi(\kappa_{kl},I_{kl})=\sum_{(k,l)}\frac{I_{kl}^{2}}{\kappa_{kl}L_{kl}}-\lambda\sum_{(k,l)}\kappa_{kl}^{\gamma}. \tag{3}\] The minimum of the dissipation, at constant \(K\), satisfies the necessary conditions \[\frac{\partial\Xi}{\partial I_{kl}}=0,\quad\frac{\partial\Xi}{\partial\kappa_{kl}}=0. \tag{4}\] Solving for the derivative of \(\Xi\) with respect to \(\kappa_{kl}\), Bohn et al [14] gave an explicit scaling relation between the currents and the conductances in the minimal configuration \[\kappa_{kl}=\frac{K\left(I_{kl}^{2}/L_{kl}\right)^{1/(1+\gamma)}}{\left(\sum_{(m,n)}\left(I_{mn}^{2}/L_{mn}\right)^{\gamma/(1+\gamma)}\right)^{1/\gamma}}. \tag{5}\] This is the equation used to solve for each edge conductance (i.e., to derive the optimal branching geometry in terms of vessel diameters and arrangement). To set up our network model, we begin with a collection of \(n^{2}\) nodes, arranged in an \(n\times n\) grid. Our problem is formulated in \(\mathbb{R}^{2}\), meaning that each node is connected to at most 4 adjacent nodes (side nodes are connected to 3 other nodes, and corner nodes are connected to 2 other nodes) through edges which have the conductance \(\kappa_{kl}\) and carry the current \(I_{kl}\) between node \(k\) and node \(l\). The length of each edge \(L_{kl}\) is here set to unity. The network consists of \(s\) sources and \(d\) drains, where \(s+d=n^{2}\) and \(\sum i_{s}=-\sum i_{d}\). The initial condition consists of a random distribution of conductances. Then, the potentials \(U_{k}\) at the individual nodes are found by solving the system of linear equations \(i_{k}=\sum_{l}\kappa_{kl}(U_{k}-U_{l})\) (which are solved by noting that \(AX=B\) in matrix notation). The currents \(I_{kl}\) can then be determined and plugged into Eq. 5 to determine the new conductances. These new conductances are then used in the next iteration of the minimization algorithm. These steps are repeated until the values of the conductivity have converged. In order to understand how the changes in vascular branching likely impact regional perfusion, we modeled the flow properties of the vascular network. A map was created that contains information pertaining to the resistance to flow at each pixel within the network. The map assigns a value to each pixel by use of the Hagen-Poiseuille equation [15] \[\Delta P=\frac{8\mu\cdot l\cdot Q}{\pi\cdot r^{4}}, \tag{6}\] which describes slow viscous incompressible flow along a tubular pathway. \(\Delta P\) is the pressure difference between an inlet and outlet, \(\mu\) is the dynamic viscosity of the fluid, \(l\) is the length of the path, \(Q\) is the volumetric flow rate, and \(r\) is the radius along the path. The map's value at each pixel therefore characterizes the resistance by \[R_{mn}=\sum\frac{l}{r^{4}}\,, \tag{7}\] where \(l\) is the length, \(r\) is the radius, and \(R_{mn}\) is the determined resistance value at pixel \((m,n)\). To create the map we have implemented the fast marching method [16], using a starting point at the vessel tree root. The fast marching method is a numerical scheme for solving the Eikonal equation \[\left|\nabla u_{mn}\right|=F_{mn}\,, \tag{8}\] where \(u\) is the arrival time function and \(F\) corresponds to the weight function or the speed of the front progression.
The method relies on an approximation to this gradient given by \[\begin{bmatrix}\max(D_{mn}^{-x}u,-D_{mn}^{x},0)^{2}+\\ \max(D_{mn}^{-y}u,-D_{mn}^{y},0)^{2}\end{bmatrix}^{1/2}=F_{mn}\,, \tag{9}\] where the \(D_{mn}\) are represented by, \[D_{mn}^{-x}u=\frac{u_{j}-u_{j-1}}{h}\,,D_{mn}^{+x}u=\frac{u_{j+1}-u_{j}}{h}. \tag{10}\] Here \(h\) is the pixel's length along \(m\) (relations are similar for \(D\) in the \(n\) direction). By using a weighting function going as \(r^{4}\), the difficulty of perfusing a particular region, based on network geometry, can be obtained. Shown in Fig. 1 are the results for the computed optimal transport network composed of 30 x 30 nodes (panel A) and interconnections (panel B) used to simulate a distributive vasculature system supplying rather homogeneous tissue (locations with nutrient supply/waste disposal demands held constant during optimization). In all cases, \(\gamma\) was set to 0.5 (to simulate vascular networks characterized by Murray's law and bounded to have a constant vessel surface area). The individual sources were initialized to all have randomly assigned current values, and the single drain located at the bottom left node, was initialized to have the negative of the sum of all other source nodes. Two example vessel trees are shown (panel C and panel D), highlighting how the random initialization results in very different network architectures. Shown in Fig. 2 are the results for the computed optimal transport network again composed of 30 x 30 nodes used to simulate a distributive vasculature system supplying heterogeneous tissue demands due to cystic development. Vascular network with no cysts is shown (panel A), as well as with cysts with increased demand (panel B). In addition, vessel rarefaction is seen to increase as a result of an increased demand from cystic development. Shown in Fig. 3 are network architectures for no cysts, as well as 5 cysts requiring 15x tissue demand. As demand from cysts increases, the overall efficiency of the network in non-cystic regions (locations away from cysts) is seen to decrease. Shown in Fig. 4 are reduced perfusion territories resulting from cystic development. Autosomal dominant polycystic kidney disease (ADPKD) is the most common genetic disorder involving a single gene and is the fourth leading cause of end-stage renal disease (ESRD) [17, 18]. Despite the renal complications associated with cyst formation compromising renal function, cardiovascular disease is the main cause of morbidity and mortality in ADPKD patients [19]. In addition, other extra-renal manifestations such as intracranial aneurysms, subarachnoid hemorrhage, and spontaneous cervicocephalic artery dissections may cause debilitating injury and often premature death [20-23]. The Figure 1: Network initialization and example vascular networks. Panel A: network nodes used to simulate local demands for supply of nutrients/waste disposal. Panel B: network edges used to simulate the available vascular network to be optimized by our model system. Panels C and D: example vascular networks generated by the algorithmic approach. Due to random initialization of the source nodes, rather different network architectures can result. Colormap value and thickness of the segments correspond to branch diameter. Figure 3: Vessel rarefaction is a consequence of cystic burden and is elucidated by our model. Panel A: no cysts. Panel B: 5 cysts with 15x demand. 
Red arrows highlight vascular regions lost due to the more heterogeneous tissue demand of the cystic regions. The darker and thicker the branch segment, the larger the diameter. Figure 2: Panel A: vascular network with no cysts. Panel B: vascular network with 5 cystic regions with a 15x local demand on supply. Notice how vasculature dilates in response to the increased need to supply certain regions more than others. importance of our understanding of the vascular phenotype of ADPKD is thus crucially important. Currently, endothelial dysfunction is the earliest observable manifestation of ADPKD [24; 25]. Oxidative stress and vascular inflammation have been linked to this vascular dysfunction which includes increased contraction and decreased relaxation of the renal distributive vasculature, which results in tissue ischemia (a stimulus for angiogenesis of the exchange vasculature of the kidneys) [21]. Evidence of angiogenesis on the surface of renal cysts has been shown and high levels of angiogenic growth factors including vascular endothelial growth factor (VEGF) have been reported in cyst fluid and in the circulation system [26]. Other notable findings include the expression of polycystic (the large protein encoded by the PKD1 and PKD2 genes) in arterial smooth muscle [27], defective nitric oxide generation from diminished vasodilator cNOS activity [28; 29], up-regulation of the endothelin isoform ET-1 contributing to vasoconstriction [30], and high levels of lipoprotein(a) which more than likely contributes to the high incidence of cardiovascular events in ADPKD [24]. Parameters known to antedate the decrease in renal function of ADPKD patients include renal structure, renal blood flow (RBF), and mean arterial pressure (MAP) [31; 32]. Renal blood flow reduction has been shown to parallel total kidney volume (TKV) increases, to precede decreases in glomerular filtration rate (GFR), and to predict structural and functional disease progression [33]. The proliferation of renal epithelial cells and the formation and growth of cysts that replace normal parenchyma of the kidney suggests that a great deal of remodeling and expansion of the vasculature must occur to provide oxygenation and nutrition to the cyst cells. Well-defined vascular networks surrounding cysts, dilated capillaries, as well as the loss of normal vascular architecture have been shown in scanning electron microscopy studies of vascular casts [34]. In addition, decreased vascular densities have been revealed by micro-CT where the vasculature of kidney samples was injected with a lead-based polymer [35]. Evidently, a debilitating feedback loop promoting allocation of vascular maintenance away from renal vascular supporting healthy tissue is indicated. In this current study, we modeled the expected changes that should occur in response to the demands of cystic development. Dilation of the distributive vasculature results from the optimal transport modeling approach used in this study, as well as vessel rarefaction and reduced perfusion territories. We believe that providing models to characterize vascular changes occurring due to disease will facilitate a better understanding of disease mechanisms and help in developing earlier disease biomarkers. This study was supported in part by the Mayo Clinic Robert M. and Billie Kelley Pirnie Translational PKD Center and the NIDDK grants P30DK090728 and K01DK110136.
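A compact numerical sketch of the conductance-relaxation loop described by Eqs. (1)-(5) follows; it is an illustrative reimplementation, not the author's code. Edge lengths are set to unity as in the text, so Eq. (5) reduces to rescaling \(\kappa_{kl}\propto|I_{kl}|^{2/(1+\gamma)}\) so that the constraint of Eq. (2) holds; the grid size, iteration count, and least-squares solve for the singular Kirchhoff system are choices made for this sketch. A cystic scenario can be simulated by passing a demand vector with locally elevated source values.

```python
import numpy as np

def optimize_network(n=10, gamma=0.5, K=1.0, n_iter=100, demand=None, seed=0):
    """Relax edge conductances of an n x n grid network (Eqs. 1-5),
    with unit edge lengths, node 0 as the single drain, and all other
    nodes acting as current sources."""
    rng = np.random.default_rng(seed)
    N = n * n
    edges = [(i * n + j, i * n + j + 1) for i in range(n) for j in range(n - 1)]
    edges += [(i * n + j, (i + 1) * n + j) for i in range(n - 1) for j in range(n)]
    edges = np.array(edges)

    i_node = rng.random(N) if demand is None else np.asarray(demand, dtype=float)
    i_node[0] = 0.0
    i_node[0] = -i_node.sum()          # drain balances all sources

    kappa = rng.random(len(edges)) + 0.1
    for _ in range(n_iter):
        # Weighted graph Laplacian L and Kirchhoff system L U = i.
        L = np.zeros((N, N))
        for (a, b), c in zip(edges, kappa):
            L[a, a] += c; L[b, b] += c
            L[a, b] -= c; L[b, a] -= c
        U = np.linalg.lstsq(L, i_node, rcond=None)[0]     # potentials, up to a constant
        I = kappa * (U[edges[:, 0]] - U[edges[:, 1]])      # edge currents
        # Eq. (5) with L_kl = 1: kappa ~ |I|^(2/(1+gamma)), rescaled so that
        # sum(kappa^gamma) = K^gamma (Eq. 2).
        kappa = np.abs(I) ** (2.0 / (1.0 + gamma)) + 1e-12
        kappa *= K / (kappa ** gamma).sum() ** (1.0 / gamma)
    return edges, kappa
```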
2303.18146
On homogeneous spaces for diagonal ind-groups
We study the homogeneous ind-spaces $\mathrm{GL}(\mathbf{s})/\mathbf{P}$ where $\mathrm{GL}(\mathbf{s})$ is a strict diagonal ind-group defined by a supernatural number $\mathbf{s}$ and $\mathbf{P}$ is a parabolic ind-subgroup of $\mathrm{GL}(\mathbf{s})$. We construct an explicit exhaustion of $\mathrm{GL}(\mathbf{s})/\mathbf{P}$ by finite-dimensional partial flag varieties. As an application, we characterize all locally projective $\mathrm{GL}(\infty)$-homogeneous spaces, and some direct products of such spaces, which are $\mathrm{GL}(\mathbf{s})$-homogeneous for a fixed $\mathbf{s}$. The very possibility for a $\mathrm{GL}(\infty)$-homogeneous space to be $\mathrm{GL}(\mathbf{s})$-homogeneous for a strict diagonal ind-group $\mathrm{GL}(\mathbf{s})$ arises from the fact that the automorphism group of a $\mathrm{GL}(\infty)$-homogeneous space is much larger than $\mathrm{GL}(\infty)$.
Lucas Fresse, Ivan Penkov
2023-03-31T15:29:42Z
http://arxiv.org/abs/2303.18146v1
# On homogeneous spaces for diagonal ind-groups ###### Abstract. We study the homogeneous ind-spaces \(\operatorname{GL}(\mathbf{s})/\mathbf{P}\) where \(\operatorname{GL}(\mathbf{s})\) is a strict diagonal ind-group defined by a supernatural number \(\mathbf{s}\) and \(\mathbf{P}\) is a parabolic ind-subgroup of \(\operatorname{GL}(\mathbf{s})\). We construct an explicit exhaustion of \(\operatorname{GL}(\mathbf{s})/\mathbf{P}\) by finite-dimensional partial flag varieties. As an application, we characterize all locally projective \(\operatorname{GL}(\infty)\)-homogeneous spaces, and some direct products of such spaces, which are \(\operatorname{GL}(\mathbf{s})\)-homogeneous for a fixed \(\mathbf{s}\). The very possibility for a \(\operatorname{GL}(\infty)\)-homogeneous space to be \(\operatorname{GL}(\mathbf{s})\)-homogeneous for a strict diagonal ind-group \(\operatorname{GL}(\mathbf{s})\) arises from the fact that the automorphism group of a \(\operatorname{GL}(\infty)\)-homogeneous space is much larger than \(\operatorname{GL}(\infty)\). Key words and phrases: Diagonal ind-group; generalized flag; embedding of flag varieties 2010 Mathematics Subject Classification: 22E65; 14M15; 14M17 ###### Contents * 1 Introduction * 2 The ind-group \(\operatorname{GL}(\mathbf{s})\) * 3 On embeddings of flag varieties * 4 A review of generalized flags * 5 Embedding of flag varieties arising from diagonal embedding of groups * 6 Ind-varieties of generalized flags as homogeneous spaces of \(\operatorname{GL}(\mathbf{s})\) * 7 The case of direct products of ind-varieties of generalized flags
## 1 Introduction An example of a diagonal embedding is the homomorphism \(\operatorname{GL}(n)\to\operatorname{GL}(2n)\) given by the map \[x\mapsto\begin{pmatrix}x&0\\ 0&x\end{pmatrix}.\] A general definition of a diagonal Lie algebra has been given by A. Baranov and A. Zhilinskii in [1], and this definition carries over in a straightforward way to classical Lie groups, producing the class of diagonal Lie groups. Locally projective homogeneous ind-spaces of diagonal ind-groups have been studied much less extensively than those of \(\mathrm{GL}(\infty)\), see [4] and [2]. In this paper, we undertake such a study for a class of diagonal ind-groups which we call strict diagonal ind-groups of type A. These ind-groups are characterized by supernatural numbers \(\mathbf{s}\), and are denoted \(\mathrm{GL}(\mathbf{s})\). We consider reasonably general parabolic subgroups \(\mathbf{P}\subset\mathrm{GL}(\mathbf{s})\) and describe the homogeneous ind-space \(\mathrm{GL}(\mathbf{s})/\mathbf{P}\) as a direct limit of embeddings \[G_{n-1}/P_{n-1}\to G_{n}/P_{n}\] of usual flag varieties. Our main result is an explicit formula for the embeddings so arising, and this formula is an analogue of the formula for standard extensions introduced in [10] (and used in a particular case in [3]). The class of locally projective homogeneous ind-spaces of strict (and of general) diagonal ind-groups will require further detailed studies.
In the current paper we restrict ourselves to the following application of the above explicit formula: we determine which locally projective homogeneous ind-spaces of \(\mathrm{GL}(\infty)\), i.e., ind-varieties of generalized flags [3], are also \(\mathrm{GL}(\mathbf{s})\)-homogeneous for a given infinite supernatural number \(\mathbf{s}\). Furthermore, we also characterize explicitly direct products of ind-varieties of generalized flags which are \(\mathrm{GL}(\mathbf{s})\)-homogeneous. The very possibility of an ind-variety of generalized flags being a homogeneous space for \(\mathrm{GL}(\mathbf{s})\), where \(\mathbf{s}\) is an infinite supernatural number, is an interesting phenomenon, and can be seen as one possible motivation for our studies of \(\mathrm{GL}(\mathbf{s})\)-homogeneous ind-spaces. Indeed, recall the following fact for a finite-dimensional algebraic group. If \(G\) is a centerless simple algebraic group of classical type and rank at least four and \(P\) is a parabolic subgroup, a well-known result of A. Onishchik [9] implies that the connected component of unity of the automorphism group of the homogeneous space \(G/P\) coincides with \(G\), except in two special cases when \(G/P\) is a projective space and \(G\) is a symplectic group, and when \(G/P\) is a maximal orthogonal isotropic grassmannian and \(G\) is an orthogonal group of type B. Consequently, unless \(G/P\) is a projective space or a maximal isotropic grassmannian, \(G/P\) cannot be a homogeneous \(G^{\prime}\)-space for a centerless algebraic group \(G^{\prime}\not\cong G\). The explanation of why the situation is very different if one replaces \(G\) by the ind-group \(\mathrm{GL}(\infty)\), is that, as shown in [7], the automorphism group of an ind-variety of generalized flags is much larger than \(\mathrm{GL}(\infty)\). In this way, our results provide embeddings of \(\mathrm{GL}(\mathbf{s})\) into such automorphism groups, with the property that the action of \(\mathrm{GL}(\mathbf{s})\) on the respective ind-variety of generalized flags is transitive. As a corollary we obtain that a "generic" ind-variety of generalized flags is \(\operatorname{GL}(\mathbf{s})\)-homogeneous also for any ind-group \(\operatorname{GL}(\mathbf{s})\). This statement is in some sense opposite to the classical statement in the finite-dimensional case. The paper is organized as follows. Sections 2, 3, 4 are devoted to preliminaries. We start by introducing the ind-groups \(\operatorname{GL}(\mathbf{s})\) where \(\mathbf{s}\) is a supernatural number. We then discuss Cartan, Borel, and parabolic ind-subgroups of \(\operatorname{GL}(\mathbf{s})\). In Section 3 we review the notions of linear embedding of flag varieties and standard extension of flag varieties, and in Section 4 we recall the necessary results on ind-varieties of generalized flags. In Section 5 we prove our explicit formula for embeddings of partial flag varieties \(\operatorname{GL}(n)/Q\hookrightarrow\operatorname{GL}(dn)/P\) induced by pure diagonal embeddings \(\operatorname{GL}(n)\hookrightarrow\operatorname{GL}(dn)\). In Section 6 we use this formula to describe all \(\operatorname{GL}(\mathbf{s})\)-homogeneous ind-varieties of generalized flags. Finally, in Section 7 we characterize direct products of ind-varieties of generalized flags, which are \(\operatorname{GL}(\mathbf{s})\)-homogeneous. ### Acknowledgement The work of I. P. was supported in part by DFG Grant PE 980/8-1. ## 2. 
The ind-group \(\operatorname{GL}(\mathbf{s})\)

### Direct systems associated to a supernatural number

Throughout this paper we consider a fixed supernatural number \(\mathbf{s}\), in other words \[\mathbf{s}=\prod_{p\in\mathcal{P}}p^{\alpha_{p}}\] where \(\mathcal{P}\) is a (possibly infinite) set of prime numbers and \(\alpha_{p}\) is either a positive integer or \(\infty\). Moreover, we suppose that \(\mathbf{s}\) is infinite, that is, at least one of the exponents \(\alpha_{p}\) is infinite or the set \(\mathcal{P}\) is infinite. By \(\mathcal{D}(\mathbf{s})\) we denote the set of finite divisors of \(\mathbf{s}\). Let \(\mathcal{A}\) be a direct system of sets with injective maps. We say that \(\mathcal{A}\) is _associated to the supernatural number_ \(\mathbf{s}\) if the sets in \(\mathcal{A}\) \[A(s),\quad s\in\mathcal{D}(\mathbf{s})\] are parametrized by the finite divisors of \(\mathbf{s}\), and the injective maps \[\delta_{s,s^{\prime}}:A(s)\hookrightarrow A(s^{\prime})\] correspond to pairs of divisors \(s,s^{\prime}\in\mathcal{D}(\mathbf{s})\) such that \(s|s^{\prime}\). Then, if \(L(\mathcal{A})=\lim_{\rightarrow}A(s)\), the resulting map \[\delta_{s}:A(s)\hookrightarrow L(\mathcal{A})\] is injective for every \(s\in\mathcal{D}(\mathbf{s})\).

**Definition 2.1**.: We call _exhaustion of \(\mathbf{s}\)_ any sequence \(\{s_{n}\}_{n\geq 1}\) of integers such that

* \(s_{n}\in\mathcal{D}(\mathbf{s})\) for all \(n\),
* \(s_{n}\) divides \(s_{n+1}\) for all \(n\),
* any \(s\in\mathcal{D}(\mathbf{s})\) divides \(s_{n}\) for some \(n\).

**Lemma 2.2**.: _Let \(\{s_{n}\}_{n\geq 1}\) be an exhaustion of \(\mathbf{s}\). Then \(L(\mathcal{A})\) coincides with the limit of the inductive system formed by the sets \(A(s_{n})\) and the maps \(\delta_{n}=\delta_{s_{n},s_{n+1}}:A(s_{n})\hookrightarrow A(s_{n+1})\)._

Proof.: Straightforward.

According to the lemma, the limit \(L(\mathcal{A})\) can be described in terms of an exhaustion \[L(\mathcal{A})=\bigcup_{n}A(s_{n}).\]

* In the case where the \(A(s)\) are vector spaces and the maps \(\delta_{s,s^{\prime}}\) are linear, \(L(\mathcal{A})\) is the direct limit in the category of vector spaces.
* In the case where the \(A(s)\) are algebraic varieties and the maps \(\delta_{s,s^{\prime}}\) are closed embeddings, the limit \(L(\mathcal{A})\) is an ind-variety as defined in [11] and [8].
* In the case where the \(A(s)\) are algebraic groups and the maps \(\delta_{s,s^{\prime}}\) are group homomorphisms, the limit is both an ind-variety and a group. It is in particular an ind-group (an _ind-group_ is an ind-variety with a group structure such that the multiplication \((x,y)\mapsto xy\) and the inversion \(x\mapsto x^{-1}\) are morphisms of ind-varieties).

### Definition of the groups \(\mathrm{GL}(\mathbf{s})\) and \(\mathrm{SL}(\mathbf{s})\)

Whenever \(s,s^{\prime}\) are two positive integers such that \(s\) divides \(s^{\prime}\), we have a diagonal embedding \[\delta_{s,s^{\prime}}:\mathrm{GL}(s)\to\mathrm{GL}(s^{\prime}),\ x\mapsto \mathrm{diag}(\underbrace{x,\ldots,x}_{\frac{s^{\prime}}{s}\text{ blocks}}).\] We refer to the embeddings \(\delta_{s,s^{\prime}}\) as _strict diagonal embeddings_. A more general definition of _diagonal embeddings_ is given, at the Lie algebra level, in [1]. The groups \(\mathrm{GL}(s)\) (for \(s\in\mathcal{D}(\mathbf{s})\)) and the maps \(\delta_{s,s^{\prime}}\) (for all pairs of integers \(s,s^{\prime}\in\mathcal{D}(\mathbf{s})\) such that \(s\) divides \(s^{\prime}\)) form a direct system.
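As an added illustration of Definition 2.1 (not part of the original exposition): for \(\mathbf{s}=3\cdot 2^{\infty}\) we have \(\mathcal{D}(\mathbf{s})=\{2^{a},\,3\cdot 2^{a}:a\geq 0\}\), and the sequence \(s_{n}=3\cdot 2^{n}\) is an exhaustion of \(\mathbf{s}\), since every finite divisor \(2^{a}\) or \(3\cdot 2^{a}\) divides \(s_{a}=3\cdot 2^{a}\). By contrast, \(s_{n}=2^{n}\) is not an exhaustion of this \(\mathbf{s}\), because the divisor \(3\) divides no \(s_{n}\).

The strict diagonal embeddings can also be realized concretely as Kronecker products with identity matrices, and the compatibility \(\delta_{s^{\prime},s^{\prime\prime}}\circ\delta_{s,s^{\prime}}=\delta_{s,s^{\prime\prime}}\), which is what makes the groups \(\mathrm{GL}(s)\), \(s\in\mathcal{D}(\mathbf{s})\), into a direct system, becomes the identity \(I_{s^{\prime\prime}/s^{\prime}}\otimes(I_{s^{\prime}/s}\otimes x)=I_{s^{\prime\prime}/s}\otimes x\). The following minimal Python sketch, added here purely for illustration (the helper name `delta` is ours, not from the text), checks this on a small example.

```python
import numpy as np

def delta(s, s_prime, x):
    """Strict diagonal embedding GL(s) -> GL(s'): x |-> diag(x, ..., x) with s'/s blocks."""
    assert s_prime % s == 0, "s must divide s'"
    return np.kron(np.eye(s_prime // s), x)

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])                 # an invertible 2x2 matrix, i.e. an element of GL(2)
lhs = delta(6, 12, delta(2, 6, x))         # GL(2) -> GL(6) -> GL(12)
rhs = delta(2, 12, x)                      # GL(2) -> GL(12) directly
assert np.allclose(lhs, rhs)               # the two embeddings of x into GL(12) agree
```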
By definition, the ind-group \(\mathrm{GL}(\mathbf{s})\) is the limit of this direct system. The group \(\mathrm{GL}(\mathbf{s})\) can be viewed as the group of infinite \(\mathbb{Z}_{>0}\times\mathbb{Z}_{>0}\)-matrices consisting of one diagonal block of size equal to any (finite) divisor \(s\) of \(\mathbf{s}\), repeated infinitely many times along the diagonal: \[\mathrm{GL}(\mathbf{s}) = \left\{\begin{pmatrix}x&0&\cdots\\ 0&x&\ddots\\ \vdots&\ddots&\ddots\end{pmatrix}:x\in\mathrm{GL}(s),\ s\in\mathcal{D}( \mathbf{s})\right\}. \tag{2.1}\] Similarly, we define \(\mathrm{SL}(\mathbf{s})\) as the limit of the direct system formed by the groups \(\mathrm{SL}(s)\) and the same maps \(\delta_{s,s^{\prime}}\). In fact, \(\mathrm{SL}(\mathbf{s})\) is the derived group of \(\mathrm{GL}(\mathbf{s})\). By \(\mathfrak{gl}(\mathbf{s})\) and \(\mathfrak{sl}(\mathbf{s})\), we denote the Lie algebras of \(\mathrm{GL}(\mathbf{s})\) and \(\mathrm{SL}(\mathbf{s})\), respectively. Thus \(\mathfrak{sl}(\mathbf{s})=[\mathfrak{gl}(\mathbf{s}),\mathfrak{gl}(\mathbf{s})]\). **Remark 2.3**.: Lemma 2.2 shows that the group \(\operatorname{GL}(\mathbf{s})\) can be obtained through any exhaustion \[\operatorname{GL}(\mathbf{s})=\bigcup_{n}\operatorname{GL}(s_{n})\] where \(\{s_{n}\}_{n\geq 1}\) is an exhaustion of \(\mathbf{s}\) (see Definition 2.1). However, the ind-group \(\mathbf{GL}(\mathbf{s})\) has various other exhaustions. If we set \[\mathbf{K}(n):=\underbrace{\operatorname{GL}(s_{n})\times\cdots\times \operatorname{GL}(s_{n})}_{\frac{s_{n+1}}{s_{n}}\text{ factors}}\] and \[\psi_{n}:\mathbf{K}(n)\to\mathbf{K}(n+1),\ (x_{1},\ldots,x_{d_{n}})\mapsto( \underbrace{\operatorname{diag}(x_{1},\ldots,x_{d_{n}}),\ldots,\operatorname {diag}(x_{1},\ldots,x_{d_{n}})}_{\frac{s_{n+2}}{s_{n+1}}\text{ terms}}),\] then the direct system \(\{\mathbf{K}(n)\stackrel{{\psi_{n}}}{{\to}}\mathbf{K}(n+1)\}\) intertwines in a natural way with the direct system \(\{\operatorname{GL}(s_{n})\stackrel{{\delta_{s_{n},s_{n+1}}}}{{ \longrightarrow}}\operatorname{GL}(s_{n+1})\}\) considered above. This yields an equality \[\operatorname{GL}(\mathbf{s})=\lim_{\to}\operatorname{GL}(s_{n})=\lim_{\to} \mathbf{K}(n).\] \(\blacksquare\) We say that two exhaustions \(\mathbf{G}=\bigcup_{n}G_{n}=\bigcup_{n}G^{\prime}_{n}\) of a given ind-group are _equivalent_ if there are \(n_{0}\geq 1\) and a commutative diagram such that the vertical arrows are isomorphisms of algebraic groups and the horizontal arrows are the embeddings of the exhaustions. **Lemma 2.4**.: (a) _Any exhaustion of \(\operatorname{SL}(\mathbf{s})\) by almost simple, simply connected algebraic groups is equivalent to \(\{\operatorname{SL}(s_{n}),\delta_{s_{n},s_{n+1}}\}_{n\geq 1}\) for an exhaustion \(\{s_{n}\}_{n\geq 1}\) of \(\mathbf{s}\)._ (b) _Any exhaustion of \(\operatorname{GL}(\mathbf{s})\) by classical groups (i.e. by groups of the form \(\operatorname{GL}(n)\), \(\operatorname{SL}(n)\), \(\operatorname{SO}(n)\), or \(\operatorname{Sp}(n)\)) is equivalent to \(\{\operatorname{GL}(s_{n}),\delta_{s_{n},s_{n+1}}\}_{n\geq 1}\) for an exhaustion \(\{s_{n}\}_{n\geq 1}\) of \(\mathbf{s}\)._ Proof.: (a) It suffices to prove the claim at the level of Lie algebras. Let \(\mathfrak{sl}(\mathbf{s})=\bigcup_{n}\mathfrak{g}_{n}\) be an exhaustion by simple Lie algebras, hence of classical type for \(n\) large enough. 
There is a subsequence \(\{\mathfrak{g}_{k_{n}}\}_{n\geq 1}\) and an exhaustion \(\{s_{n}\}_{n\geq 1}\) of \(\mathbf{s}\) such that we have a commutative diagram of embeddings By [1, Lemma 2.7], the embeddings \(\eta_{n}\) and \(\xi_{n}\) are diagonal, in the sense that there is an isomorphism of \(\mathfrak{sl}(s_{n})\)-modules \[W_{n}\cong V_{n}^{\oplus t}\oplus V_{n}^{*\oplus r}\oplus\mathbb{C}^{\oplus s}\] and an isomorphism of \(\mathfrak{g}_{k_{n}}\)-modules \[V_{n+1}\cong W_{n}^{\oplus t^{\prime}}\oplus W_{n}^{*\oplus r^{\prime}}\oplus \mathbb{C}^{\oplus s^{\prime}}\] for some triples of nonnegative integers \((t,r,s)\) and \((t^{\prime},r^{\prime},s^{\prime})\), where \(V_{n}\) and \(W_{n}\) denote the natural representations of \(\mathfrak{sl}(s_{n})\) and \(\mathfrak{g}_{k_{n}}\), and \(\mathbb{C}\) is a trivial representation. Also since \(\delta_{s_{n},s_{n+1}}\) is strict diagonal, we have an isomorphism of \(\mathfrak{sl}(s_{n})\)-modules \[V_{n+1}\cong\underbrace{V_{n}\oplus\ldots\oplus V_{n}}_{\frac{s_{n+1}}{s_{n}} \text{ copies}}. \tag{2.2}\] Arguing by contradiction, assume that \(\mathfrak{g}_{k_{n}}\) is not of type A. Then [1, Proposition 2.3] implies that \(t=r\). Moreover, \(t^{\prime}+r^{\prime}>0\) since otherwise \(V_{n+1}\) would be a trivial representation of \(\mathfrak{sl}(s_{n})\). Altogether this implies that \(V_{n}^{*}\) is isomorphic to a direct summand of \(V_{n+1}\) considered as an \(\mathfrak{sl}(s_{n})\)-module, which is impossible in view of (2.2). We conclude that \(\mathfrak{g}_{k_{n}}\) is of type A for all \(n\). Moreover, from (2.2), we obtain \(s=s^{\prime}=1\) and either \(r=r^{\prime}=0\) or \(t=t^{\prime}=0\). Up to replacing \(\mathfrak{g}_{k_{n}}=\mathfrak{sl}(W_{n})\) by \(\mathfrak{sl}(W_{n}^{*})\), we can assume that \(r=r^{\prime}=0\), and so \(\mathfrak{g}_{k_{n}}\cong\mathfrak{sl}(s^{\prime}_{k_{n}})\) for some integer such that \(s_{n}|s^{\prime}_{k_{n}}\), \(s^{\prime}_{k_{n}}|s_{n+1}\), and the embedding \(\mathfrak{g}_{k_{n}}\hookrightarrow\mathfrak{g}_{k_{n+1}}\) is induced by \(\delta_{s^{\prime}_{k_{n}},s^{\prime}_{k_{n+1}}}\). If \(k:=k_{n}+1<k_{n+1}\), we get a commutative diagram where the horizontal arrows are embeddings. Relying as above on [1, Proposition 2.3], we get that \(\mathfrak{g}_{k}\) is necessarily of type A, and up to replacing \(\mathfrak{g}_{k}=\mathfrak{sl}(W)\) by \(\mathfrak{sl}(W^{*})\), we can assume that \(\mathfrak{g}_{k}\cong\mathfrak{sl}(s^{\prime}_{k})\) for some \(s^{\prime}_{k}\) with \(s^{\prime}_{k_{n}}|s^{\prime}_{k}\), \(s^{\prime}_{k}|s^{\prime}_{k_{n+1}}\), and that the embeddings \(\mathfrak{g}_{k_{n}}\hookrightarrow\mathfrak{g}_{k}\hookrightarrow\mathfrak{g} _{k_{n+1}}\) are induced by \(\delta_{s^{\prime}_{k_{n}},s^{\prime}_{k}}\) and \(\delta_{s^{\prime}_{k},s^{\prime}_{k_{n+1}}}\). By iterating the reasoning, we obtain an exhaustion \(\{s^{\prime}_{n}\}_{n\geq 1}\) of \(\mathbf{s}\) such that the exhaustions \(\mathfrak{sl}(\mathbf{s})=\bigcup_{n}\mathfrak{g}_{n}\) and \(\mathfrak{sl}(\mathbf{s})=\bigcup_{n}\mathfrak{sl}(s^{\prime}_{n})\) are equivalent. This shows (a). (b) From (a) it follows that for every \(n\), the derived group \((G_{n},G_{n})\) is isomorphic to \(\operatorname{SL}(s_{n})\) and, after identifying \((G_{n},G_{n})\) with \(\operatorname{SL}(s_{n})\) and \((G_{n+1},G_{n+1})\) with \(\operatorname{SL}(s_{n+1})\), the map \((G_{n},G_{n})\hookrightarrow(G_{n+1},G_{n+1})\) becomes the restriction of \(\delta_{s_{n},s_{n+1}}\). 
This implies that \(G_{n}\) is either isomorphic to \(\operatorname{SL}(s_{n})\) or to \(\operatorname{GL}(s_{n})\). For \(n\geq 1\) large enough, \(G_{n}\) has to contain the center \(Z(\operatorname{GL}(\mathbf{s}))\), which is isomorphic to \(\mathbb{C}^{*}\). Since the connected component of the center of \(\operatorname{SL}(s_{n})\) is trivial, this forces \(G_{n}\cong\operatorname{GL}(s_{n})\). Moreover, since \(G_{n}=Z(G_{n})(G_{n},G_{n})\) and the embedding \(G_{n}\hookrightarrow G_{n+1}\) maps \(Z(G_{n})=Z(\operatorname{GL}(\mathbf{s}))\) into \(Z(G_{n+1})\), we deduce that this embedding \(G_{n}\hookrightarrow G_{n+1}\) coincides with \(\delta_{s_{n},s_{n+1}}:\operatorname{GL}(s_{n})\hookrightarrow\operatorname{ GL}(s_{n+1})\) after suitably identifying \(G_{n}\) with \(\operatorname{GL}(s_{n})\) and \(G_{n+1}\) with \(\operatorname{GL}(s_{n+1})\).. The following statement is a corollary of the classification of general diagonal Lie algebras [1]. We give a proof for the sake of completeness. **Proposition 2.5**.: (a) _If \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\) are two different infinite supernatural numbers, then the ind-groups \(\operatorname{GL}(\mathbf{s})\) and \(\operatorname{GL}(\mathbf{s}^{\prime})\) (resp. \(\operatorname{SL}(\mathbf{s})\) and \(\operatorname{SL}(\mathbf{s}^{\prime})\)) are not isomorphic._ (b) _If \(\mathbf{s}\) is an infinite supernatural number, then \(\operatorname{GL}(\mathbf{s})\) is not isomorphic to \(\operatorname{GL}(\infty)\), and \(\operatorname{SL}(\mathbf{s})\) is not isomorphic to \(\operatorname{SL}(\infty)\)._ Proof.: (a) Since \(\operatorname{SL}(\cdot)\) is the derived group of \(\operatorname{GL}(\cdot)\), it suffices to establish the claim concerning \(\operatorname{SL}(\mathbf{s})\) and \(\operatorname{SL}(\mathbf{s}^{\prime})\). Assume there is an isomorphism of ind-groups \(\varphi:\operatorname{SL}(\mathbf{s}^{\prime})\to\operatorname{SL}(\mathbf{s})\). Then any exhaustion \(\{s^{\prime}_{n}\}\) of \(\mathbf{s}^{\prime}\) yields an exhaustion \(\operatorname{SL}(\mathbf{s})=\bigcup_{n}\varphi(\operatorname{SL}(s^{\prime}_{ n}))\) of the group \(\operatorname{SL}(\mathbf{s})\), and Lemma 2.4 implies \(\mathbf{s}=\mathbf{s}^{\prime}\), a contradiction. (b) By definition, \(\operatorname{SL}(\infty)\) has an exhaustion by the groups \(\operatorname{SL}(n)\) (\(n\geq 1\)) via the standard embeddings \(\operatorname{SL}(n)\to\operatorname{SL}(n+1)\), \(x\mapsto\begin{pmatrix}x&0\\ 0&1\end{pmatrix}\). Clearly this exhaustion is not equivalent to \(\{\operatorname{SL}(s_{n}),\delta_{s_{n},s_{n+1}}\}_{n\geq 1}\) for any exhaustion \(\{s_{n}\}_{n\geq 1}\) of \(\mathbf{s}\). Therefore, the ind-groups \(\operatorname{SL}(\mathbf{s})\) and \(\operatorname{SL}(\infty)\) are not isomorphic by Lemma 2.4 (a). The same argument shows that \(\operatorname{GL}(\mathbf{s})\) and \(\operatorname{GL}(\infty)\) are not isomorphic. ### Parabolic and Borel subgroups An ind-subgroup \(\mathbf{H}\subset\operatorname{GL}(\mathbf{s})\) is said to be a _(locally splitting) Cartan subgroup_ if there is an exhaustion \(\operatorname{GL}(\mathbf{s})=\bigcup_{n}G_{n}\) by classical groups such that \(G_{n}\cap\mathbf{H}\) is a Cartan subgroup of \(G_{n}\) for all \(n\). For instance, the subgroup of invertible periodic diagonal matrices in the realization (2.1) is a Cartan subgroup of \(\operatorname{GL}(\mathbf{s})\). 
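To unpack the phrase "periodic diagonal matrices" (an illustration added here, not part of the original text): for \(\mathbf{s}=2^{\infty}\) with the exhaustion \(s_{n}=2^{n}\), an element of this Cartan subgroup is an infinite diagonal matrix whose diagonal is the periodic sequence \[\operatorname{diag}(t_{1},\ldots,t_{2^{n}},t_{1},\ldots,t_{2^{n}},t_{1},\ldots)\] for some \(n\geq 0\) and some nonzero scalars \(t_{1},\ldots,t_{2^{n}}\). Its intersection with \(\operatorname{GL}(2^{m})\) for \(m\geq n\) is the diagonal maximal torus of \(\operatorname{GL}(2^{m})\), so the defining condition for a Cartan subgroup is satisfied along this exhaustion.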
If \(\mathbf{P}\) is an ind-subgroup of \(\operatorname{GL}(\mathbf{s})\), then the quotient \(\operatorname{GL}(\mathbf{s})/\mathbf{P}\) is an ind-variety obtained as the direct limit of the quotients \(\operatorname{GL}(s)/\mathbf{P}(s)\) for \(s\in\mathcal{D}(\mathbf{s})\). For the purposes of this paper, we say that an ind-subgroup \(\mathbf{P}\subset\operatorname{GL}(\mathbf{s})\) is a _parabolic subgroup_ if there exists an exhaustion \(\operatorname{GL}(\mathbf{s})=\bigcup_{n}G_{n}\) by classical groups such that \(G_{n}\cap\mathbf{P}\) is a parabolic subgroup of \(G_{n}\) for all \(n\) (cf. [2]). This implies in particular that the ind-variety \(\operatorname{GL}(\mathbf{s})/\mathbf{P}\) is _locally projective_ as it has an exhaustion \[\operatorname{GL}(\mathbf{s})/\mathbf{P}=\bigcup_{n}G_{n}/(G_{n}\cap\mathbf{P})\] by projective varieties. If, in addition, the unipotent radical of \(G_{n}\cap\mathbf{P}\) is contained in the unipotent radical of \(G_{n+1}\cap\mathbf{P}\) for every \(n\), then we say that \(\mathbf{P}\) is a _strong parabolic subgroup_. An ind-subgroup \(\mathbf{B}\subset\operatorname{GL}(\mathbf{s})\) is said to be a _Borel subgroup_ if it is locally solvable and parabolic. This means equivalently that there is an exhaustion \(\operatorname{GL}(\mathbf{s})=\bigcup_{n}G_{n}\) as above for which \(G_{n}\cap\mathbf{B}\) is a Borel subgroup of \(G_{n}\) for all \(n\). Note that a Borel subgroup is necessarily a strong parabolic subgroup. **Lemma 2.6**.: _A subgroup \(\mathbf{G}^{\prime}\) of \(\operatorname{GL}(\mathbf{s})\) is a Cartan (respectively, parabolic or Borel) subgroup of \(\mathbf{G}\) if and only if there is an exhaustion \(\{s_{n}\}_{n\geq 1}\) of \(\mathbf{s}\) such that for every \(n\) the intersection \(\mathbf{G}^{\prime}\cap\operatorname{GL}(s_{n})\) is a Cartan (respectively, parabolic or Borel) subgroup of \(\operatorname{GL}(s_{n})\)._ Proof.: This follows from Lemma 2.4. The following example shows that for a given parabolic subgroup \(\mathbf{P}\subset\operatorname{GL}(\mathbf{s})\), the property that the group \(G_{n}\cap\mathbf{P}\) is a parabolic subgroup of \(G_{n}\) may no longer hold for a refinement of the exhaustion used to define \(\mathbf{P}\). **Example 2.7**.: Let \(\mathbf{s}=2^{\infty}\), \(s_{n}=2^{2n-2}\), and \(s^{\prime}_{n}=2^{n-1}\). Then both \(\{s_{n}\}_{n\geq 1}\) and \(\{s^{\prime}_{n}\}_{n\geq 1}\) are exhaustions of \(\mathbf{s}\), and \(\{s^{\prime}_{n}\}_{n\geq 1}\) is a refinement of \(\{s_{n}\}_{n\geq 1}\). Let \(H_{n}\subset\operatorname{GL}(s_{n})\) be the subgroup of diagonal matrices. We define a Borel subgroup \(B_{n}\subset\operatorname{GL}(s_{n})\) that contains \(H_{n}\), by induction in the following way: \(B_{1}:=\operatorname{GL}(1)\), and \[B_{n+1}:=\left(\begin{array}{cccc}B_{n}&*&*&*\\ 0&B_{n}&*&*\\ 0&0&B_{n}&0\\ 0&0&*&B_{n}\end{array}\right)\] for \(n\geq 2\), where all the blocks are square matrices of size \(s_{n}\). Then \(B_{n+1}\cap\operatorname{GL}(s_{n})=B_{n}\) for all \(n\), which implies that \(\mathbf{B}=\bigcup_{n\geq 1}B_{n}\) is a well-defined Borel subgroup of \(\operatorname{GL}(\mathbf{s})\) arising from the exhaustion \(\{s_{n}\}_{n\geq 1}\) of \(\mathbf{s}\). However, for all \(n\), \[\mathbf{B}\cap\operatorname{GL}(s^{\prime}_{2n})=\begin{pmatrix}B_{n}&0\\ 0&B_{n}\end{pmatrix}\] is not a Borel subgroup (nor a parabolic subgroup) of \(\operatorname{GL}(s^{\prime}_{2n})\). ## 3. 
On embeddings of flag varieties In this section we review some preliminaries on finite-dimensional (partial) flag varieties. In particular, we recall the notions of linear embedding and standard extension introduced in [10]. ### Grassmannians and (partial) flag varieties Let \(V\) be a finite-dimensional vector space. For an integer \(0\leq p\leq\dim V\), we denote by \(\operatorname{Gr}(p;V)\) the grassmannian of \(p\)-dimensional subspaces in \(V\). This grassmannian can be realized as a projective variety by the Plucker embedding \(\operatorname{Gr}(p;V)\hookrightarrow\mathbb{P}(\bigwedge^{p}V)\). Moreover, the Picard group \(\operatorname{Pic}(\operatorname{Gr}(p;V))\) is isomorphic to \(\mathbb{Z}\) with generator \(\mathcal{O}_{\operatorname{Gr}(p;V)}(1)\), the pull-back of the line bundle \(\mathcal{O}(1)\) on \(\mathbb{P}(\bigwedge^{p}V)\). For a sequence of integers \(0<p_{1}<\ldots<p_{k-1}<p_{k}<\dim V\), we denote by \(\operatorname{Fl}(p_{1},\ldots,p_{k};V)\) the variety of (partial) flags \[\operatorname{Fl}(p_{1},\ldots,p_{k};V)=\{(V_{1},\ldots,V_{k})\in \operatorname{Gr}(p_{1};V)\times\cdots\times\operatorname{Gr}(p_{k};V):V_{1} \subset\ldots\subset V_{k}\}.\] We have \[\operatorname{Pic}(\operatorname{Fl}(p_{1},\ldots,p_{k};V))\cong\mathbb{Z}^{k}.\] If we let \(L_{i}\) be the pull-back \[L_{i}=\operatorname{proj}_{i}^{*}\mathcal{O}_{\operatorname{Gr}(p_{i};V)}(1)\] along the projection \[\operatorname{proj}_{i}:\operatorname{Fl}(p_{1},\ldots,p_{k};V)\to \operatorname{Gr}(p_{i};V)\] (for \(i=1,\ldots,k\)), then \([L_{1}],\ldots,[L_{k}]\) is a set of generators of the Picard group. By _embedding of flag varieties_ we mean a closed immersion \[\varphi:X=\operatorname{Fl}(p_{1},\ldots,p_{k};V)\hookrightarrow Y= \operatorname{Fl}(q_{1},\ldots,q_{\ell};W).\] If \(\mathcal{F}=\{F_{1},\ldots,F_{k}\}\in X\) is a variable point, we set \[C_{i}(\varphi)=\bigcap_{\mathcal{F}\in X}\varphi(\mathcal{F})_{i}.\] Then \(C_{1}(\varphi)\subset\ldots\subset C_{\ell}(\varphi)\) is a chain of subspaces of \(W\) with possible repetitions. We define the _support_ of \(\varphi\) to be the set of indices \(i\in\{1,\ldots,\ell\}\) such that \(\dim C_{i}(\varphi)<q_{i}\). ### Linear embedding Let \[Q:=\operatorname{Gr}(q_{1};W_{1})\times\cdots\times\operatorname{Gr}(q_{\ell };W_{\ell})\] where \(W_{1},\ldots,W_{\ell}\) is a sequence of vector spaces and \(0<q_{j}<\dim W_{j}\) for all \(j\). Consider an embedding \[\psi:X=\operatorname{Fl}(p_{1},\ldots,p_{k};V)\to Q.\] We use the notation of the previous section for \(X\). The Picard group of \(Q\) is isomorphic to \(\mathbb{Z}^{\ell}\), with generators associated to the line bundles \(M_{j}=\operatorname{proj}_{j}^{*}\mathcal{O}_{\operatorname{Gr}(q_{j};W_{j}) }(1)\). **Definition 3.1**.: We say that the embedding \(\psi\) is _linear_ if we have \[[\psi^{*}M_{j}]=0\quad\text{or}\quad[\psi^{*}M_{j}]\in\{[L_{1}],\ldots,[L_{k}]\}\] for all \(j\in\{1,\ldots,\ell\}\). Let \[\varphi:X=\operatorname{Fl}(p_{1},\ldots,p_{k};V)\hookrightarrow Y= \operatorname{Fl}(q_{1},\ldots,q_{\ell};W)\] be an embedding of flag varieties. The following definition is equivalent to [10, Definition 2.1]. **Definition 3.2**.: The embedding \(\varphi\) is said to be _linear_ if the composed embedding \(\psi=\pi\circ\varphi\) is linear, where \(\pi:=\prod_{j=1}^{\ell}\operatorname{proj}_{j}:Y\to\prod_{j=1}^{\ell} \operatorname{Gr}(q_{j};W)\). 
\(\blacksquare\) ### Standard extension **Definition 3.3** ([10]).: (a) The embedding \(\varphi:\operatorname{Fl}(p_{1},\ldots,p_{k};V)\hookrightarrow\operatorname{ Fl}(q_{1},\ldots,q_{\ell};W)\) is said to be a _strict standard extension_ if there are * a decomposition of vector spaces \(W=V^{\prime}\oplus Z\) with a linear isomorphism \(\varepsilon:V\stackrel{{\sim}}{{\to}}V^{\prime}\), * a chain of subspaces \(Z_{1}\subset\ldots\subset Z_{\ell}\) of \(Z\) (with possible repetitions), * a nondecreasing map \(\kappa:\{1,\ldots,\ell\}\to\{0,1,\ldots,k,k+1\}\), such that \[\varphi\big{(}\{V_{1},\ldots,V_{k}\}\big{)}=\{\varepsilon(V_{\kappa(1)})+Z_{1 },\ldots,\varepsilon(V_{\kappa(\ell-1)})+Z_{\ell-1},\varepsilon(V_{\kappa(\ell )})+Z_{\ell}\} \tag{3.1}\] where \(V_{0}:=0\) and \(V_{k+1}:=V\). (b) More generally, we say that \(\varphi\) is a _standard extension_ if \(\varphi\) itself is a strict standard extension or its composition of \(\varphi\) with the duality map \(\operatorname{Fl}(q_{1},\ldots,q_{\ell};W)\to\operatorname{Fl}(\dim W-q_{1}, \ldots,\dim W-q_{\ell};W^{*})\) is a strict standard extension. \(\blacksquare\) **Remark 3.4**.: Since the map \(\varphi\) of (3.1) is an embedding of flag varieties, the following conditions must hold: \(1,\ldots,k\) have preimages by \(\kappa\), and the map \(j\in\{1,\ldots,\ell\}\mapsto(\kappa(j),Z_{j})\) is injective and does not contain \((0,0)\) nor \((k+1,Z)\) in its image. \(\blacksquare\) Note that, if \(\varphi\) is a strict standard extension, then \(C_{i}(\varphi)=Z_{i}\) for all \(i\in\{1,\ldots,\ell\}\), and the support of \(\varphi\) is the interval \(\kappa^{-1}([1,k])\). Also, a composition of standard extensions is a standard extension. **Example 3.5**.: Let \(W=V\oplus Z\), where \(\dim Z=d\). For \(1\leq k_{0}\leq k+1\), we consider the embeddings \[\varphi:\operatorname{Fl}(p_{1},\ldots,p_{k};V) \hookrightarrow \operatorname{Fl}(q_{1},\ldots,q_{k};W)\] \[\{V_{1},\ldots,V_{k}\} \mapsto \{V_{1},\ldots,V_{k_{0}-1},V_{k_{0}}+Z,\ldots,V_{k}+Z\}\] and \[\bar{\varphi}:\operatorname{Fl}(p_{1},\ldots,p_{k};V) \hookrightarrow \operatorname{Fl}(\bar{q}_{1},\ldots,\bar{q}_{k+1};W)\] \[\{V_{1},\ldots,V_{k}\} \mapsto \{V_{1},\ldots,V_{k_{0}-1},V_{k_{0}-1}+Z,V_{k_{0}}+Z,\ldots,V_{k} +Z\}\] where \[q_{i}=\left\{\begin{array}{ll}p_{i}&\text{if $1\leq i<k_{0}$},\\ p_{i}+d&\text{if $k_{0}\leq i\leq k$}\end{array}\right.\quad\text{and}\quad\bar{q}_{i}= \left\{\begin{array}{ll}p_{i}&\text{if $1\leq i<k_{0}$},\\ p_{i-1}+d&\text{if $k_{0}\leq i\leq k+1$}.\end{array}\right.\] Here, we still use the convention that \(V_{0}:=0\) and \(V_{k+1}:=V\), and we set accordingly \(p_{0}:=0\) and \(p_{k+1}:=\dim V\). Then \(\varphi\) and \(\bar{\varphi}\) are strict standard extensions, associated with the respective chains of subspaces \[\underbrace{0\subset\ldots\subset 0}_{k_{0}-1\text{ times}}\subset\underbrace{Z\subset\ldots \subset Z}_{k+1-k_{0}\text{ times}}\quad\text{and}\quad\underbrace{0\subset\ldots \subset 0}_{k_{0}-1\text{ times}}\subset\underbrace{Z\subset\ldots\subset Z}_{k+2-k_{0} \text{ times}}\] and respective maps \(\kappa\) and \(\bar{\kappa}\), where \(\kappa(i)=i\) for all \(i\), \(\bar{\kappa}(i)=i\) for \(i\leq k_{0}-1\), \(\bar{\kappa}(i)=i-1\) for \(i\geq k_{0}\). **Remark 3.6**.: Every strict standard extension is the composition of, possibly several, maps \(\varphi\) and \(\bar{\varphi}\) as in Example 3.5. ## 4. 
A review of generalized flags ### Generalized flags Let \(V\) be an infinite-dimensional vector space of countable dimension and let \(E=\{e_{1},e_{2},\ldots\}\) be a basis of \(V\). By \(\langle S\rangle\), we denote the span of vectors in a subset \(S\subset V\). Following [3], we call _generalized flag_ a collection \(\mathcal{F}\) of subspaces of \(V\) that satisfies the following conditions: * \(\mathcal{F}\) is totally ordered by inclusion; * every subspace \(F\in\mathcal{F}\) has an immediate predecessor or an immediate successor in \(\mathcal{F}\); * \(V\setminus\{0\}=\bigcup_{(F^{\prime},F^{\prime\prime})}(F^{\prime\prime} \setminus F^{\prime})\), where the union is over pairs of consecutive subspaces in \(\mathcal{F}\). Moreover, a generalized flag \(\mathcal{F}\) is said to be _\(E\)-compatible_ if every subspace \(F\in\mathcal{F}\) is spanned by elements of \(E\). An \(E\)-compatible generalized flag \(\mathcal{F}\) can be encoded by a (not order preserving) surjective map \(\sigma:\mathbb{Z}_{>0}\to A\) onto a totally ordered set \((A,\leq)\) such that \(\mathcal{F}=\{F^{\prime}_{a},F^{\prime\prime}_{a}\}_{a\in A}\) where \(F^{\prime}_{a}=\langle e_{k}:\sigma(k)<a\rangle\) and \(F^{\prime\prime}_{a}=\langle e_{k}:\sigma(j)\leq a\rangle\). More generally, a generalized flag \(\mathcal{F}\) is said to be _weakly \(E\)-compatible_ if it is \(E^{\prime}\)-compatible for some basis \(E^{\prime}\) of \(V\) differing from \(E\) in finitely many vectors. Let \[\operatorname{GL}(E)=\{g\in\operatorname{GL}(V):g(e_{k})=e_{k}\text{ for all but finitely many }k\}.\] Then \(\operatorname{GL}(E)\) is an ind-group, isomorphic to the finitary classical ind-group \(\operatorname{GL}(\infty)\). The group \(\operatorname{GL}(E)\) acts on the set of all weakly \(E\)-compatible generalized flags. Furthermore, it is established in [3] that weakly \(E\)-compatible generalized flags \(\mathcal{F}\) of \(V\) are in one-to-one correspondence with splitting parabolic subgroups \(\mathbf{P}\subset\operatorname{GL}(E)\). More precisely, the map \[\mathcal{F}\mapsto\mathbf{P}=\operatorname{Stab}_{\operatorname{GL}(E)}( \mathcal{F})\] is a bijection between these two sets. By a _natural representation_ of \(\operatorname{GL}(\mathbf{s})\) we mean a direct limit of natural representations of \(\operatorname{GL}(s)\) for \(s\in\mathcal{D}(\mathbf{s})\). Two natural representations do not have to be isomorphic; see [5]. Assume now that \(V\) is a natural representation for \(\operatorname{GL}(\mathbf{s})\), \(\mathbf{H}\subset\operatorname{GL}(\mathbf{s})\) is a Cartan subgroup such that there is a basis \(E\) of \(V\) consisting of eigenvectors of \(\mathbf{H}\). The group \(\mathrm{GL}(\mathbf{s})\) acts in a natural way on the generalized flags in \(V\), and a generalized flag is \(E\)-compatible if and only if it is \(\mathbf{H}\)-stable. However, generalized flags are less suited for describing parabolic subgroups of \(\mathrm{GL}(\mathbf{s})\) than for describing parabolic subgroups of \(\mathrm{GL}(\infty)\cong\mathrm{GL}(E)\), since the stabilizer of a generalized flag in \(\mathrm{GL}(\mathbf{s})\) is not always a parabolic subgroup. Moreover, there are parabolic subgroups of \(\mathrm{GL}(\mathbf{s})\) which cannot be realized as stabilizers of generalized flags in a prescribed natural representation. These observations are illustrated by the following two examples. 
**Example 4.1**.: For every \(n\geq 0\), we define inductively a subset \(I_{n}\subset\{1,\ldots,2^{n+1}\}\) by setting \[I_{0}:=\{1\}\subset\{1,2\},\qquad I_{n}:=I_{n-1}\cup\{2^{n}+i:i\in\{1,\ldots,2 ^{n}\}\setminus I_{n-1}\}\ \ \text{for}\ n\geq 1.\] Note that \(\{I_{n}\}_{n\geq 0}\) is a nested sequence of sets, and let \(I:=\bigcup_{n\geq 0}I_{n}\). For \(V=\langle e_{1},e_{2},\ldots\rangle\) as above, put \[W:=\langle e_{i}:i\in I\rangle.\] Thus \(\mathcal{F}:=\{0\subset W\subset V\}\) is a generalized flag. By Lemma 2.4 (b), any exhaustion of \(\mathrm{GL}(2^{\infty})\) by classical groups is equivalent to \(\{\mathrm{GL}(s_{n}),\delta_{s_{n},s_{n+1}}\}_{n\geq 1}\) for an exhaustion \(\{s_{n}=2^{k_{n}}\}_{n\geq 1}\) of \(\mathbf{s}\). Every element \(g\in\mathrm{GL}(2^{k_{n}})\) stabilizing \(\mathcal{F}\) should be such that the blockwise diagonal matrix \[\begin{pmatrix}g&0\\ 0&g\end{pmatrix}\] stabilizes \(\langle e_{i}:i\in I_{k_{n}}\rangle=\langle e_{i}:i\in I_{k_{n}-1}\rangle \oplus\langle e_{2^{k_{n}-1}+i}:i\in\{1,\ldots,2^{k_{n}}\}\setminus I_{k_{n}-1}\rangle\), hence \(g\) should stabilize both subspaces \(\langle e_{i}:i\in I_{k_{n}-1}\rangle\) and \(\langle e_{i}:i\in\{1,\ldots,2^{k_{n}}\}\setminus I_{k_{n}-1}\rangle\). This implies that the stabilizer of \(W\) in \(\mathrm{GL}(2^{k_{n}})\) is not a parabolic subgroup, for all \(n\geq 1\). Therefore, \(\mathrm{Stab}_{\mathrm{GL}(2^{\infty})}(\mathcal{F})\) is not a parabolic subgroup of \(\mathrm{GL}(2^{\infty})\). \(\blacksquare\) **Example 4.2**.: (a) Let \(V=\bigcup_{n}\mathbb{C}^{2^{n}}\) be seen as a natural representation of \(\mathrm{GL}(2^{\infty})\) where the embedding \(\mathbb{C}^{2^{n}}\cong\mathbb{C}^{2^{n}}\times\{0\}^{2^{n}}\subset\mathbb{C} ^{2^{n+1}}\) is considered. For \(n\geq 1\), let \(P_{n}\subset\mathrm{GL}(2^{n})\) be the stabilizer of \(L_{n}:=\{0\}^{2^{n}-1}\times\mathbb{C}\), the line spanned by the \(2^{n}\)-th vector of the standard basis of \(\mathbb{C}^{2^{n}}\). Then \(P_{n+1}\cap\mathrm{GL}(2^{n})=P_{n}\) for all \(n\geq 1\), hence \(\mathbf{P}:=\bigcup_{n\geq 1}P_{n}\) is a parabolic subgroup of \(\mathrm{GL}(2^{\infty})\). However, \(\mathbf{P}\) acts transitively on the nonzero vectors of \(V\), so that there is no nonzero proper subspace of \(V\) which is stable by \(\mathbf{P}\). Therefore, \(\mathbf{P}\) cannot be realized as the stabilizer of a generalized flag in \(V\). (b) If in part (a) we replace the embeddings defining the structure of natural representation on \(V\) by \(\mathbb{C}^{2^{n}}\cong\{0\}^{2^{n}}\times\mathbb{C}^{2^{n}}\subset\mathbb{C} ^{2^{n+1}}\), then \(L_{n}=L_{1}\) for all \(n\geq 1\) and the parabolic subgroup \(\mathbf{P}\) of (a) becomes the stabilizer of the generalized flag \(\{0\subset L_{1}\subset V\}\). \(\blacksquare\) ### Ind-varieties of generalized flags **Definition 4.3**.: (a) Two generalized flags \(\mathcal{F}\) and \(\mathcal{G}\) are said to be \(E\)_-commensurable_[3] if \(\mathcal{F}\) and \(\mathcal{G}\) are weakly \(E\)-compatible and there is an isomorphism of totally ordered sets \(\phi:\mathcal{F}\to\mathcal{G}\) and there is a finite-dimensional subspace \(U\subset V\) such that, for all \(F\in\mathcal{F}\), \(F+U=\phi(F)+U\) and \(\dim F\cap U=\dim\phi(F)\cap U\). (b) Given an \(E\)-compatible generalized flag \(\mathcal{F}\), we define \(\operatorname{Fl}(\mathcal{F},E)\) as the set of all generalized flags which are \(E\)-commensurable with \(\mathcal{F}\). Let \(\mathcal{F}\) be an \(E\)-compatible generalized flag. 
We now recall the ind-variety structure on \(\operatorname{Fl}(\mathcal{F},E)\)[3]. To do this, we write \(E=\{e_{k}\}_{k\geq 1}\) and, for \(n\geq 1\), set \(V_{n}:=\langle e_{1},\ldots,e_{n}\rangle\). The collection of subspaces \(\{F\cap V_{n}:F\in\mathcal{F}\}\) determines a flag \(F_{1}^{(n)}\subset\ldots\subset F_{p_{n}-1}^{(n)}\) in \(F_{p_{n}}^{(n)}:=V_{n}\); furthermore we set \(d_{i}^{(n)}:=\dim F_{i}^{(n)}\) and \[X_{n}:=\operatorname{Fl}(d_{1}^{(n)},\ldots,d_{p_{n}-1}^{(n)};V_{n}).\] We define an embedding \(\eta_{n}:X_{n}\to X_{n+1}\) in the following way. Let \(i_{0}\in\{1,\ldots,p_{n+1}\}\) be minimal such that \(e_{n+1}\in F_{i_{0}}^{(n+1)}\). We have either \(p_{n+1}=p_{n}\) or \(p_{n+1}=p_{n}+1\). In the former case we set \[\eta_{n}:\{M_{1},\ldots,M_{p_{n}-1}\}\mapsto\{M_{1},\ldots,M_{i_{0}-1},M_{i_{0 }}\oplus\langle e_{n+1}\rangle,\ldots,M_{p_{n}-1}\oplus\langle e_{n+1}\rangle\}.\] In the latter case, we define \[\eta_{n}:\{M_{1},\ldots,M_{p_{n}-1}\}\mapsto\{M_{1},\ldots,M_{i_{0}-1},M_{i_{0 }-1}\oplus\langle e_{n+1}\rangle,\ldots,M_{p_{n}-1}\oplus\langle e_{n+1}\rangle\}.\] **Proposition 4.4** ([3]).: (a) _The maps \(\{\eta_{n}\}_{n\geq 1}\) are strict standard extensions and they yield an exhaustion \(\operatorname{Fl}(\mathcal{F},E)=\bigcup_{n\geq 1}X_{n}\). This endows \(\operatorname{Fl}(\mathcal{F},E)\) with a structure of locally projective ind-variety._ (b) _If \(\mathbf{P}=\operatorname{Stab}_{\operatorname{GL}(E)}(\mathcal{F})\), then there is a natural isomorphism of ind-varieties \(\operatorname{GL}(E)/\mathbf{P}\stackrel{{\sim}}{{\to}} \operatorname{Fl}(\mathcal{F},E)\)._ Note also that, up to isomorphism, the ind-variety \(\operatorname{Fl}(\mathcal{F},E)\) only depends on the _type_ of \(\mathcal{F}\), i.e., on the isomorphism type of the totally ordered set \((\mathcal{F},\subset)\) and on the dimensions \(\dim F^{\prime\prime}/F^{\prime}\) of the quotients of consecutive subspaces in \(\mathcal{F}\). ## 5. Embedding of flag varieties arising from diagonal embedding of groups In this section we study embeddings of flag varieties induced by strictly diagonal embeddings of general linear groups. Let us fix the following data: * positive integers \(m<n\) such that \(m\) divides \(n\), and \(d:=\frac{n}{m}\); * \(\operatorname{GL}(m)\) seen as a subgroup of \(\operatorname{GL}(n)\) through the diagonal embedding \[x\mapsto\operatorname{diag}(x,\ldots,x);\] * a decomposition of the natural representation \(V:=\mathbb{C}^{n}\) of \(\operatorname{GL}(n)\) as \[V=W^{(1)}\oplus\ldots\oplus W^{(d)}\] where \(W^{(i)}:=\{0\}^{(i-1)m}\times\mathbb{C}^{m}\times\{0\}^{(d-i)m}\); let \(\chi_{i}:W:=\mathbb{C}^{m}\to W^{(i)}\) be the natural isomorphism. For a subspace \(M\subset W\), we write \(M^{(i)}:=\chi_{i}(M)\). ### Restriction of parabolic subgroup Let \(\{e_{1},\ldots,e_{n}\}\) be a basis of \(V\) such that \(\{e_{1},\ldots,e_{m}\}\) is a basis of \(W^{(1)}\cong W\). By \(H=H(n)\subset\operatorname{GL}(n)\) we denote the maximal torus for which \(e_{1},\ldots,e_{n}\) are eigenvectors. Then \(H^{\prime}:=H\cap\operatorname{GL}(m)\) is a maximal torus of \(\operatorname{GL}(m)\). A parabolic subgroup \(P=P(n)\subset\operatorname{GL}(n)\) that contains \(H\) is the stabilizer of a flag \[\mathcal{F}_{\alpha}=\{\langle e_{i}:\alpha(i)\leq j\rangle\}_{j=1}^{p-1}\] for some surjective map \(\alpha:\{1,\ldots,n\}\to\{1,\ldots,p\}\). The following statement determines under what condition the intersection \(P\cap\operatorname{GL}(m)\) is a parabolic subgroup. 
**Lemma 5.1**.: _Consider the map_ \[\beta:\{1,\ldots,m\}\to\{1,\ldots,p\}^{d},\ r\mapsto(\alpha(r),\alpha(m+r), \ldots,\alpha((d-1)m+r)),\] _and denote by \(\mathcal{I}\) the image of \(\beta\). Let \(\leq\) denote the partial order on \(\{1,\ldots,p\}^{d}\) such that \((x_{1},\ldots,x_{d})\leq(y_{1},\ldots,y_{d})\) if \(x_{i}\leq y_{i}\) for all \(i\)._ (a) _The intersection \(Q:=P\cap\operatorname{GL}(m)\) is a parabolic subgroup of \(\operatorname{GL}(m)\) if and only if \(\leq\) restricts to a total order on \(\mathcal{I}\). Moreover, letting \(b_{1},\ldots,b_{q}\) be the elements of \(\mathcal{I}\) written in increasing order, we have_ \[Q=\operatorname{Stab}_{\operatorname{GL}(m)}(\mathcal{F}_{\beta})\] _where_ \[\mathcal{F}_{\beta}=\{\langle e_{i}:\beta(i)\leq b_{j}\}_{j=1}^{q-1}.\] _In particular, if \(d_{j}=\#\beta^{-1}(\{b_{1},\ldots,b_{j}\})\) then \(\operatorname{GL}(m)/Q\) can be identified with the flag variety \(\operatorname{Fl}(d_{1},\ldots,d_{q-1};W)\)._ (b) _If \(Q\) is a parabolic subgroup, the inclusion \(U_{Q}\subset U_{P}\) of unipotent radicals holds if and only if any two distinct elements \((x_{1},\ldots,x_{d}),(y_{1},\ldots,y_{d})\) of \(\mathcal{I}\) satisfy \(x_{i}\neq y_{i}\) for all \(i\in\{1,\ldots,d\}\)._ Proof.: (a) We have a decomposition \[\mathfrak{gl}(n)=\mathfrak{gl}(V)=\mathfrak{h}\oplus\bigoplus_{1\leq i\neq j \leq n}\mathfrak{g}_{i,j}\] where \(\mathfrak{h}=\operatorname{Lie}H\) and \(\mathfrak{g}_{i,j}=\mathbb{C}(e_{i}\otimes e_{j}^{*})\). With this notation, \[\mathfrak{p}:=\operatorname{Lie}P=\mathfrak{h}\oplus\bigoplus_{\alpha(i)\leq \alpha(j)}\mathfrak{g}_{i,j}\supset\mathfrak{nil}(\mathfrak{p})=\bigoplus_{ \alpha(i)<\alpha(j)}\mathfrak{g}_{i,j}, \tag{5.1}\] where \(\mathfrak{nil}(\mathfrak{p})\) is the nilpotent radical of \(\mathfrak{p}\). There is a similar decomposition \[\mathfrak{gl}(m)=\mathfrak{gl}(W)=\mathfrak{h}^{\prime}\oplus\bigoplus_{1\leq i \neq j\leq m}\mathfrak{g}_{i,j}^{\prime}.\] Set \(\mathfrak{q}:=\operatorname{Lie}Q\) where \(Q=P\cap\operatorname{GL}(m)\) as before. Since we already know that \(\mathfrak{h}^{\prime}\subset\mathfrak{q}\), the subalgebra \(\mathfrak{q}\) is parabolic if and only if \[1\leq i\neq j\leq m\quad\Longrightarrow\quad(\mathfrak{g}^{\prime}_{i,j} \subset\mathfrak{q}\quad\text{or}\quad\mathfrak{g}^{\prime}_{j,i}\subset \mathfrak{q}). \tag{5.2}\] In view of (5.1) and the diagonal embedding \(\mathfrak{gl}(m)\subset\mathfrak{gl}(n)\), whenever \(1\leq i\neq j\leq m\) we have the equivalence \[\mathfrak{g}^{\prime}_{i,j}\subset\mathfrak{q} \iff \mathfrak{g}_{i+km,j+km}\subset\mathfrak{p}\ \ \forall k=0,\ldots,d-1\] \[\iff \alpha(i+km)\leq\alpha(j+km)\ \forall k=0,\ldots,d-1\] \[\iff \beta(i)\leq\beta(j).\] Hence, from (5.2) we obtain that \(\mathfrak{q}\) is a parabolic subalgebra of \(\mathfrak{gl}(m)\) if and only if \[1\leq i\neq j\leq m\quad\Longrightarrow\quad(\beta(i)\leq\beta(j)\quad\text{ or}\quad\beta(j)\leq\beta(i)).\] The condition means that \(\leq\) is a total order set on \(\mathcal{I}\). We also have the equality \[\mathfrak{q}=\mathfrak{h}\oplus\bigoplus_{\beta(i)\leq\beta(j)} \mathfrak{g}^{\prime}_{i,j} = \{X\in\mathfrak{gl}(W):X(\langle e_{i}:\beta(i)\leq b_{j}\rangle) \subset\langle e_{i}:\beta(i)\leq b_{j}\rangle\ \forall j\}\] \[= \operatorname{Lie}(\operatorname{Stab}_{\operatorname{GL}(m)}( \mathcal{F}_{\beta}))\] which implies that \(Q=\operatorname{Stab}_{\operatorname{GL}(m)}(\mathcal{F}_{\beta})\). (b) Assume that \(Q\) is a parabolic subgroup of \(\operatorname{GL}(m)\). 
The inclusion \(U_{Q}\subset U_{P}\) holds if and only if the similar inclusion holds for the nilradicals of the Lie algebras. Through the diagonal embedding of \(\mathfrak{gl}(m)\) into \(\mathfrak{gl}(n)\), the nilradical of \(\mathfrak{q}\) can be described as \[\mathfrak{nil}(\mathfrak{q})=\bigoplus_{\begin{subarray}{c}1\leq i\neq j\leq m \\ \beta(i)<\beta(j)\end{subarray}}(\mathfrak{g}^{\prime}_{i,j}\oplus\mathfrak{g}^ {\prime}_{i+m,j+m}\oplus\ldots\oplus\mathfrak{g}^{\prime}_{i+(d-1)m,j+(d-1)m }).\] Therefore, the desired inclusion \(\mathfrak{nil}(\mathfrak{q})\subset\mathfrak{nil}(\mathfrak{p})\) holds if and only if, for all \(i,j\in\{1,\ldots,m\}\), \[\beta(i)<\beta(j)\quad\Longleftrightarrow\quad\alpha(i+km)<\alpha(j+km)\ \ \forall k\in\{0,\ldots,d-1\}.\] This condition is equivalent to the one stated in (b) (knowing that the partial order \(\leq\) restricts to a total order on \(\mathcal{I}\), due to (a)). ### Diagonal embedding of flag varieties Assuming that the condition of Lemma 5.1 (a) is fulfilled, we now describe the embedding of partial flag varieties \[\phi:\operatorname{GL}(m)/Q=\operatorname{Fl}(d_{1},\ldots,d_{q-1};W)\to \operatorname{GL}(n)/P \tag{5.3}\] obtained in this case. We rely on a combinatorial object, introduced in the next definition. **Definition 5.2**.: (a) We call _E-graph_ an unoriented graph with the following features: * The vertices consist of two sets \(\{l_{1},\ldots,l_{q}\}\) ("left vertices") and \(\{r_{1},\ldots,r_{p}\}\) ("right vertices"), displayed from top to bottom in two columns, and two vertices are joined by an edge only if they belong to different sets. * The edges display into \(d\) subsets \(E_{c}\) corresponding to a given colour \(c\in\{1,\ldots,d\}\). * Every vertex is incident with at least one edge, and every vertex is incident with at most one edge of a given colour. The vertex \(l_{q}\) is incident with exactly \(d\) edges (one per colour). * Two edges of the same colour never cross, that is, if \((l_{i},r_{j})\) and \((l_{k},r_{\ell})\), with \(i<k\), are joined with two edges of the same colour, then \(j<\ell\). In an E-graph, we call "bounding edges" the edges passing through \(l_{q}\), and we call "ordinary edges" all other edges. (b) With the notation of Lemma 5.1, we define the E-graph \(\mathcal{G}(\alpha,\beta)\) such that * we put an edge of colour \(k\) between \(l_{i}\) and \(r_{j}\) whenever \(b_{i}=(x_{1},\ldots,x_{d})\) satisfies \(x_{k}=j\) and \(i\) is maximal for this property. (The conditions given in Lemma 5.1 (a) justify that \(\mathcal{G}(\alpha,\beta)\) is a well-defined E-graph.) \(\blacksquare\) In the following statement we describe explicitly the embedding \(\phi\) of (5.3) and its properties in terms of the E-graph \(\mathcal{G}(\alpha,\beta)\). 
**Proposition 5.3**.: (a) _The map \(\phi:Y=\operatorname{GL}(m)/Q=\operatorname{Fl}(d_{1},\ldots,d_{q-1};W)\to X= \operatorname{GL}(n)/P\) is given by_ \[\phi:\{F_{1},\ldots,F_{q-1}\}\mapsto\{V_{1},\ldots,V_{p-1}\}\] _where for all \(j\in\{1,\ldots,p-1\}\) we have_ \[V_{j}=V_{j-1}+F_{i_{1}}^{(1)}\oplus\ldots\oplus F_{i_{d}}^{(d)}, \tag{5.4}\] _where \(V_{0}=F_{0}:=0\), \(F_{q}:=W\), and_ \[i_{k}:=\left\{\begin{array}{ll}i&\mbox{if the vertex $r_{j}$ is incident with an edge $(l_{i},r_{j})$ of colour $k$ in $\mathcal{G}(\alpha,\beta)$,}\\ 0&\mbox{if there is no edge of colours $k$ passing through $r_{j}$.}\end{array}\right.\] _We have also_ \[V_{j}=F_{i^{\prime}_{1}}^{(1)}\oplus\ldots\oplus F_{i^{\prime}_{d}}^{(d)},\] _where \(i^{\prime}_{k}\) is the index of the left end point of the last edge of colour \(k\) arriving at or above \(r_{j}\) in \(\mathcal{G}(\alpha,\beta)\), with \(i^{\prime}_{k}=0\) if there is no such edge._ (b) _Let \(([L_{1}],\ldots,[L_{p-1}])\) and \(([M_{1}],\ldots,[M_{q-1}])\) denote the sequences of preferred generators of \(\operatorname{Pic}X\) and \(\operatorname{Pic}Y\), respectively. The map \(\phi^{*}:\operatorname{Pic}X\to\operatorname{Pic}Y\) is given by_ \[\phi^{*}[L_{j}]=\sum_{k=1}^{d}[M_{i^{\prime}_{k}}],\] _where we set by convention \([M_{0}]=[M_{q}]=0\)._ (c) _The map \(\phi\) is linear if and only if, whenever \(r_{j},r_{j^{\prime}}\) with \(j<j^{\prime}\) are incident with edges of the same colour \(c\) in the graph \(\mathcal{G}(\alpha,\beta)\), every ordinary edge arriving at \(r_{j^{\prime\prime}}\) for \(j\leq j^{\prime\prime}<j^{\prime}\) is also of colour \(c\)._ (d) _The map \(\phi\) is a standard extension if and only if all ordinary edges of \(\mathcal{G}(\alpha,\beta)\) are of the same colour. Moreover, in this case, \(\phi\) is a strict standard extension._ Proof.: (a) As in Section 5.1, we write \(P=\operatorname{Stab}(\mathcal{F}_{\alpha})\) where \(\alpha:\{1,\ldots,n\}\to\{1,\ldots,p\}\) is surjective. Then we have \(Q=\operatorname{Stab}(\mathcal{F}_{\beta})\) where \(\beta:\{1,\ldots,m\}\to\mathcal{I}\subset\{1,\ldots,p\}^{d}\) is described in Lemma 5.1. Let \(\hat{\phi}:\operatorname{GL}(m)/Q\to\operatorname{GL}(n)/P\) be the map given by formula (5.4). Thus we have to show that \(\hat{\phi}=\phi\). Since the maps \(\phi\) and \(\hat{\phi}\) are \(\operatorname{GL}(m)\)-equivariant, it suffices to show that \(\hat{\phi}(\mathcal{F}_{\beta})=\mathcal{F}_{\alpha}\). We write \(\mathcal{F}_{\alpha}=\{F_{\alpha,1},\ldots,F_{\alpha,p-1}\}\) and \(\mathcal{F}_{\beta}=\{F_{\beta,1},\ldots,F_{\beta,q-1}\}\). For \(j\in\{1,\ldots,p\}\), we have \[F_{\alpha,j}=\langle e_{i}:\alpha(i)\leq j\rangle=F_{\alpha,j-1}+\langle e_{i} :\alpha(i)=j\rangle \tag{5.5}\] where \(F_{\alpha,0}:=0\). Every \(i\in\{1,\ldots,n\}\) can be written \(i=(k-1)m+r\in\{1,\ldots,n\}\) with \(k\in\{1,\ldots,d\}\) and \(r\in\{1,\ldots,m\}\), so that \(e_{i}=\chi_{k}(e_{r})\). Assume that \(\alpha(i)=j\). Then there is \(b_{i^{\prime}}=(x_{1},\ldots,x_{d})\in\mathcal{I}\) with \(i^{\prime}\in\{1,\ldots,q\}\) maximal such that \(x_{k}=j\). Moreover there is \(s\in\{r,\ldots,m\}\) such that \(x_{\ell}=\alpha((\ell-1)m+s)\) for all \(\ell\in\{1,\ldots,d\}\). This implies that the graph \(\mathcal{G}(\alpha,\beta)\) contains an edge of colour \(k\) joining \(b_{i^{\prime}}\) and \(j\), and we have \[e_{i}=\chi_{k}(e_{r})\in\chi_{k}(F_{\beta,i^{\prime}})=F_{\beta,i^{\prime}}^{( k)}\] where \(F_{\beta,q}:=W\). 
Conversely, assume that there is an edge of colour \(k\) joining \(b_{i^{\prime}}\) and \(j\). The subspace \(F_{\beta,i^{\prime}}^{(k)}\) is spanned by vectors of the form \(\chi_{k}(e_{r})\) with \(r\in\{1,\ldots,m\}\) such that \(\beta(r)=(\alpha((\ell-1)m+r)_{\ell=1}^{d}\leq b_{i^{\prime}}\). The latter inequality implies \(\alpha((k-1)m+r)\leq j\). Hence \(\chi_{k}(e_{r})=e_{(k-1)m+r}\in F_{\alpha,j}\). Combining these observations with (5.5), we deduce that \[F_{\alpha,j}=F_{\alpha,j-1}+F_{\beta,i_{1}}^{(1)}\oplus\ldots\oplus F_{\beta, i_{d}}^{(d)}\] where \(i_{1},\ldots,i_{d}\) are as defined in the statement of the proposition. Therefore, the claimed equality \(\hat{\phi}(\mathcal{F}_{\beta})=\mathcal{F}_{\alpha}\) holds. The second formula stated in (a) is an immediate consequence of (5.4). The proof of (a) is complete. Part (b) is a corollary of the second formula in (a), whereas parts (c) and (d) of the proposition easily follow from parts (a) and (b). The proof of the proposition is complete. **Remark 5.4**.: Proposition 5.3 shows how the E-graph \(\mathcal{G}(\alpha,\beta)\) describes the embedding \(\phi:Y\to X\). Moreover, the chain of constant spaces \((C_{j}(\phi))\) is expressed in the following way. We enumerate the colours \(k_{1},\ldots,k_{d}\) so that \(i_{1}\leq\ldots\leq i_{d}\) where \(r_{i_{j}}\) is the right end point of the bounding edge of colour \(k_{j}\). Then \[C_{j}(\phi)=F_{q}^{(k_{1})}\oplus\ldots\oplus F_{q}^{(k_{j})}\quad\text{for }j=1, \ldots,d.\] \(\blacksquare\) **Example 5.5**.: (a) Let us consider for instance the graph It encodes an embedding \[\phi:X=\operatorname{Fl}(d_{1},d_{2};\mathbb{C}^{n}) \rightarrow Y=\operatorname{Fl}(d_{1},d_{1}+d_{2},d_{2}+n;\mathbb{C}^{2n}= \mathbb{C}^{n}\oplus\overline{\mathbb{C}^{n}})\] \[\{V_{1},V_{2}\} \mapsto \{V_{1},V_{1}\oplus\overline{V_{2}},V_{2}\oplus\overline{ \mathbb{C}^{3}}\}.\] If we denote by \(([L_{1}],[L_{2}])\) and \(([M_{1}],[M_{2}],[M_{3}])\) the sets of preferred generators of the Picard groups of \(X\) and \(Y\) respectively, then the induced map \(\phi^{*}:\operatorname{Pic}Y\rightarrow\operatorname{Pic}X\) is given by \[[M_{1}]\mapsto[L_{1}],\quad[M_{2}]\mapsto[L_{1}]+[L_{2}],\quad[M_{3}]\mapsto [L_{2}].\] Thus \(\phi\) is not linear in this case. (b) Here we consider the graph There are two colours which means that the embedding is from a flag variety of a space \(V\) to the flag variety of a doubled space \(W=V\oplus\overline{V}\): \[\operatorname{Fl}(d_{1},\dots,d_{q-1};V)\hookrightarrow\operatorname{Fl}(d_{1 }^{\prime},\dots,d_{q}^{\prime};W=V\oplus\overline{V}).\] The embedding has the following explicit form \[\{F_{1},\dots,F_{q-1}\}\mapsto\{F_{1},\dots,F_{i-1},F_{i-1}\oplus\overline{V}, \dots,F_{q-1}\oplus\overline{V}\}. \tag{5.6}\] Note that \(\dim F_{i-1}\oplus\overline{V}/\dim F_{i-1}=\dim V\). The dimensions of the other quotients are unchanged. (c) Now consider In this case we get an embedding \[\operatorname{Fl}(d_{1},\dots,d_{q};V)\hookrightarrow\operatorname{Fl}(d_{1}^{ \prime},\dots,d_{q}^{\prime};W=V\oplus\overline{V})\] given by \[\{F_{1},\dots,F_{q-1}\}\mapsto\{F_{1},\dots,F_{i-1},F_{i}\oplus\overline{V}, \dots,F_{q-1}\oplus\overline{V}\}. \tag{5.7}\] The only quotient whose dimension changes is \(F_{i}\oplus\overline{V}/F_{i-1}\) which has dimension \(\dim V+\dim F_{i}/F_{i-1}\). 
By Proposition 5.3 (d), the embeddings of parts (a) and (b) of this example are the only possible standard extensions that can come from a diagonal embedding \(\operatorname{GL}(n)\hookrightarrow\operatorname{GL}(2n)\). (d) In the case of a diagonal embedding of the form \(\operatorname{GL}(n)\hookrightarrow\operatorname{GL}(dn)\), if the embedding of flag varieties is a standard extension, then it can be described as a composition of embeddings of the previous form, involving a subspace \(\overline{V}\) still of dimension \(n\). **Remark 5.6**.: The fact that \(U_{Q}\subset U_{P}\) is equivalent to the following property of the graph \(\mathcal{G}(\alpha,\beta)\): every left vertex is incident with exactly \(d\) edges (one per colour). Proposition 5.3 has the following corollary. **Corollary 5.7**.: _For an embedding \(\phi:Y=\operatorname{GL}(n)/Q\to X=\operatorname{GL}(m)/P\) as in Proposition 5.3 and for every \(j\in\{1,\dots,q-1\}\), we have \(\operatorname{Im}\phi^{*}\not\subset\langle[M_{i}]:i\in\{1,\dots,q-1\}\setminus \{j\}\rangle\)._ ### Application to ind-varieties **Definition 5.8**.: Let \(\{s_{n}\}_{n\geq 1}\) be an exhaustion of \(\mathbf{s}\). We call \(\mathbf{s}\)_-graph_ a graph with infinitely many columns of vertices \(B_{n}\), with \(1\leq|B_{n}|\leq s_{n}\) for all \(n\geq 1\), such that the subgraph consisting of \(B_{n},B_{n+1}\) and the corresponding edges is an E-graph. A parabolic subgroup \(\mathbf{P}\) of \(\operatorname{GL}(\mathbf{s})\) gives rise to an \(\mathbf{s}\)-graph. According to the above proposition, this graph encodes the embeddings of flag varieties in an exhaustion of \(\operatorname{GL}(\mathbf{s})/\mathbf{P}\). Conversely, any \(\mathbf{s}\)-graph arises from a parabolic subgroup \(\mathbf{P}\) of \(\operatorname{GL}(\mathbf{s})\). ## 6. Ind-varieties of generalized flags as homogeneous spaces of \(\operatorname{GL}(\mathbf{s})\) Our purpose in this section is to characterize ind-varieties of generalized flags (introduced in Section 4.2) which can be realized as homogeneous spaces \(\operatorname{GL}(\mathbf{s})/\mathbf{P}\) for the given supernatural number \(\mathbf{s}\). ### The case of finitely many finite-dimensional subspaces We start with a special situation which is easier to deal with: let \(\mathbf{X}=\operatorname{Fl}(\mathcal{F},E)\) where \(\mathcal{F}=\{F_{a}^{\prime},F_{a}^{\prime\prime}\}_{a\in A}\) is an \(E\)-compatible generalized flag, for an arbitrary totally ordered set \((A,\leq)\), but with the assumption that \[\dim F_{a}^{\prime\prime}/F_{a}^{\prime}=+\infty\ \ \text{for all but finitely many $a\in A$.} \tag{6.1}\] **Theorem 6.1**.: _If condition (6.1) holds, then for every supernatural number \(\mathbf{s}\), there is an isomorphism of ind-varieties \(\operatorname{Fl}(\mathcal{F},E)\cong\operatorname{GL}(\mathbf{s})/\mathbf{P}\) for an appropriate parabolic subgroup \(\mathbf{P}\subset\operatorname{GL}(\mathbf{s})\)._ Proof.: In the situation of the theorem, the ind-variety \(\mathbf{X}=\operatorname{Fl}(\mathcal{F},E)\) has an exhaustion \[X_{1}\hookrightarrow X_{2}\hookrightarrow\cdots\hookrightarrow X_{n}\overset{ \phi_{n}}{\hookrightarrow}X_{n+1}\hookrightarrow\cdots\] such that \(X_{n}\) is a finite-dimensional variety of flags in the space \(\mathbb{C}^{s_{n}}\) for some exhaustion \(\{s_{n}\}_{n\geq 1}\) of \(\mathbf{s}\) with \(s_{1}\) sufficiently large, and \(\phi_{n}:X_{n}\to X_{n+1}\) is one of the two maps from Example 5.5 (b) and (c). 
Using the maps \(\phi_{n}\), one constructs nested parabolic subgroups \(P_{n}\subset\operatorname{GL}(s_{n})\) such that \(X_{n}\cong\operatorname{GL}(s_{n})/P_{n}\) and \(P_{n}=\operatorname{GL}(s_{n})\cap P_{n+1}\) for all \(n\). The union \(\mathbf{P}=\bigcup_{n\geq 1}P_{n}\) is then a parabolic subgroup of \(\operatorname{GL}(\mathbf{s})\) which satisfies the conditions of the theorem. ### The general case To treat the general case, we need to start with a definition. **Definition 6.2**.: Let \(\mathcal{F}=\{F^{\prime}_{a},F^{\prime\prime}_{a}\}_{a\in A}\) be an \(E\)-compatible generalized flag and let \(A^{\prime}=\{a\in A:\dim F^{\prime\prime}_{a}/\dim F^{\prime}_{a}<+\infty\}\). We say that the ind-variety \(\operatorname{Fl}(\mathcal{F},E)\) is _\(\mathbf{s}\)-admissible_ if either \(A^{\prime}\) is finite or \(A^{\prime}\) is infinite and there are a exhaustion \(\{s_{n}\}_{n\geq 1}\) for \(\mathbf{s}\) and a numbering \(A^{\prime}=\{k_{n}\}_{n\geq 1}\) (not necessarily compatible with the total order on \(A^{\prime}\)) such that, for all \(n\geq 0\): \[\tfrac{\dim F^{\prime\prime}_{k_{n}}/F^{\prime}_{k_{n}}}{s_{n}}\in\{1,\ldots, \tfrac{s_{n+1}}{s_{n}}-1\}\text{ and }s_{n}|\dim F^{\prime\prime}_{a}/F^{\prime}_{a}\text{ for all }a\in A^{\prime}\setminus\{k_{1},\ldots,k_{n}\}.\] **Theorem 6.3**.: _The following conditions are equivalent:_ * _The ind-variety_ \(\operatorname{Fl}(\mathcal{F},E)\) _is_ \(\mathbf{s}\)_-admissible._ * _There is a parabolic subgroup_ \(\mathbf{P}\subset\operatorname{GL}(\mathbf{s})\) _and an isomorphism of ind-varieties_ \(\operatorname{Fl}(\mathcal{F},E)\cong\operatorname{GL}(\mathbf{s})/\mathbf{P}\)_._ Proof.: (i)\(\Rightarrow\)(ii): The ind-variety \(\operatorname{Fl}(\mathcal{F},E)\) admits an exhaustion \(\operatorname{Fl}(\mathcal{F},E)=\bigcup_{n}X_{n}\) with embeddings of the form \[\phi_{n}:X_{n}=\operatorname{Fl}(p_{1},\ldots,p_{k_{n}};V_{n}) \to X_{n+1}=\operatorname{Fl}(q_{1},\ldots,q_{\ell_{n}};V_{n}\oplus C _{n}),\] \[\{F_{1},\ldots,F_{k_{n}}\} \mapsto \{F_{\tau(1)}\oplus C_{1}^{n},\ldots,F_{\tau(\ell_{n})}\oplus C _{\ell_{n}}^{n}\} \tag{6.2}\] (with \(F_{0}:=0\), \(F_{k_{n}+1}:=V_{n}\)) for a nondecreasing surjective map \(\tau:\{1,\ldots,\ell_{n}\}\to\{0,1,\ldots,k_{n},k_{n}+1\}\) and a sequence \(C_{1}^{n}\subset\ldots\subset C_{\ell_{n}}^{n}\) (with possible repetitions) of subspaces of \(C_{n}\). Assume that there is another exhaustion \(\operatorname{Fl}(\mathcal{F},E)=\bigcup_{n}Y_{n}\) for which the embeddings are as described in Proposition 5.3, where \(Y_{n}=\operatorname{Fl}(r_{1},\ldots,r_{m_{n}};W_{n})\) and \(\dim W_{n}=s_{n}\) for an exhaustion \(\{s_{n}\}\) of \(\mathbf{s}\). Then the two exhaustions interlace, and there is no loss of generality in assuming that the interlacing holds for the sequences \((X_{n})\) and \((Y_{n})\), and not only for subsequences: _Claim_.: The embedding \(\xi_{n}\) is a standard extension. First we show that \(\xi_{n}\) is linear. Arguing by contradiction, assume that there is a generator \([M_{i}]\) among the sequence \([M_{1}],\ldots,[M_{q-1}]\) of preferred generators of \(\operatorname{Pic}X_{n}\) such that \(\xi_{n}^{*}[M_{i}]\) is neither \(0\) nor a preferred generator of \(\operatorname{Pic}Y_{n}\). 
Since \(\phi_{n}^{*}=\chi_{n}^{*}\circ\xi_{n+1}^{*}\), we have the inclusion \(\operatorname{Im}\phi_{n}^{*}\subset\operatorname{Im}\chi_{n}^{*}\), and due to Corollary 5.7 we get that there is a generator \([L]\in\operatorname{Pic}Y_{n+1}\) such that \[\chi_{n}^{*}[L]=\sum_{j=1}^{q-1}\lambda_{j}[M_{j}]\quad\text{with }\lambda_{i} \neq 0.\] Since the map \(\chi_{n}\) is an embedding, we have \(\lambda_{j}\geq 0\) for all \(j\) and in particular \(\lambda_{i}\geq 1\). The same argument applied to \(\xi_{n}\) implies that \(\xi_{n}^{*}[M_{j}]\) should be a linear combination of the preferred generators of \(\operatorname{Pic}Y_{n}\) with nonnegative integer coefficients. This implies that \(\psi_{n}^{*}[L]=\xi_{n}^{*}\chi_{n}^{*}[L]\) is neither \(0\) nor a preferred generator of \(\operatorname{Pic}Y_{n}\), contradicting the linearity of the standard extension \(\psi_{n}\). Recall that in [10] the notion of an embedding factoring through a direct product is introduced. Note that \(\xi_{n}\) cannot factor through a direct product: otherwise, \(\psi_{n}\) would also factor through a direct product, which is impossible since this is a standard extension. Consequently, \(\xi_{n}\) is a standard extension, and the claim is established. Now we can assume that \(\xi_{1}\) is a strict standard extension. Since the maps \(\phi_{n}\) are strict standard extensions, by using the formula for \(\psi_{n}\) in Proposition 5.3 we derive that \(\xi_{n}\) is a strict standard extension for all \(n\geq 1\). Due to (6.2) and Proposition 5.3, one has \(W_{n}=V_{n}\oplus Z_{n}\) and the map \(\xi_{n}\) has the form \[\xi_{n}:\{F_{1},\ldots,F_{k_{n}}\}\mapsto\{F_{\sigma(1)}\oplus Z_{1}^{n}, \ldots,F_{\sigma(p_{n})}\oplus Z_{p_{n}}^{n}\}.\] Since this applies likewise to \(\xi_{n+1}\), and taking into account the form of \(\phi_{n}\) in (6.2), we see that the map \(\psi_{n}\) has the form \[\{F_{1},\ldots,F_{k_{n}},F_{1}^{\prime},\ldots,F_{p_{n}}^{\prime}\}\mapsto\{F_ {1},\ldots,F_{k_{n}},R_{1}^{n},\ldots,R_{\ell_{n}}^{n},\Gamma_{1}^{n},\ldots, \Gamma_{\delta_{n}}^{n}\}\] for an arbitrary map \(\zeta_{n}:\{F_{1}^{\prime},\ldots,F_{p_{n}}^{\prime}\}\mapsto\{R_{1}^{n}, \ldots,R_{\ell_{n}}^{n}\}\) as described in Proposition 5.3, and where \(\Gamma_{i}^{n}\) are constant subspaces which are copies of \(W_{n}\) in \(W_{n+1}=\bigoplus_{i=1}^{d_{n}}W_{n}^{(i)}\). This implies \(\dim V_{n}=d_{n}^{\prime}s_{n}\) for some \(d_{n}^{\prime}\in\{1,\ldots,d_{n}=\frac{s_{n+1}}{s_{n}}\}\). Since \(\{V_{n}\oplus Z_{1}^{n},\ldots,V_{n}\oplus Z_{p_{n}}^{n}\}=\{\Gamma_{1}^{n-1}, \ldots,\Gamma_{\delta_{n-1}}^{n-1}\}\), we must have \(p_{n}=\delta_{n-1}\) and the dimension of \(Z_{i}^{n}\) is a multiple of \(s_{n-1}\). Therefore \(\dim R_{i}^{n}\) is also a multiple of \(s_{n-1}\) for all \(i\). Condition (ii) is established. (ii)\(\Rightarrow\)(i): Let \(d_{n}^{\prime}=\frac{\dim F_{k_{n}}}{s_{n}}\in\{1,\ldots,d_{n}\}\) and set \(V_{n}=F_{k_{n}}\). 
The conditions imply that we can choose a decomposition \(W_{n}=V_{n}\oplus W_{n-1}^{(1)}\oplus\ldots\oplus W_{n-1}^{(d_{n}-d_{n}^{ \prime})}\) where the \(W_{n-1}^{(i)}\)'s are copies of \(W_{n-1}\) such that the strict standard extension \(\operatorname{Fl}_{n}(\mathcal{F},E)\to\operatorname{Fl}_{n+1}(\mathcal{F},E)\) is given by \[\phi_{n}:\{F_{1},\ldots,F_{k_{n}}\}\mapsto\{F_{k_{1}}+C_{1}^{n},\ldots,F_{k_{ n+1}}+C_{k_{n+1}}^{n}\}\] with \(C_{i}^{n}=W_{n-1}^{(1)}\oplus\ldots\oplus W_{n-1}^{(m_{i})}=C_{i}^{\prime n} \oplus C_{i}^{\prime\prime n}\) for some nondecreasing sequence \(m_{1},\ldots,m_{k_{n+1}}\). Letting \(\xi_{n}:\operatorname{Fl}_{n}(\mathcal{F},E)\to\operatorname{Fl}(\mathbf{t}_ {n};W_{n})\) be given by \[\xi_{n}:\{F_{1},\ldots,F_{k_{n}}\}\mapsto\{F_{k_{1}}+C_{1}^{\prime n},\ldots, F_{k_{n+1}}+C_{k_{n+1}}^{\prime n}\},\] and \(\psi_{n}:\operatorname{Fl}(\mathbf{t}_{n};W_{n})\to\operatorname{Fl}(\mathbf{ t}_{n+1};W_{n+1})\) \[\psi_{n}:\{F_{1},\ldots,F_{k_{n}}\}\mapsto\{F_{1}+C_{1}^{\prime\prime n+1}, \ldots,F_{\ell_{n+1}}+C_{\ell_{n+1}}^{\prime\prime n+1}\}\] (for suitable types \(\mathbf{t}_{n}\)), we get exhaustions of \(\operatorname{Fl}(\mathcal{F},E)\) and a homogeneous space for \(\operatorname{GL}(\mathbf{s})\), which interlace. Hence if \(\mathcal{F}\) satisfies the condition above, then we can realize \(\operatorname{Fl}(\mathcal{F},E)\) as a homogeneous space for \(\operatorname{GL}(\mathbf{s})\). **Remark 6.4**.: It is shown in [2, Corollary 5.40] that \(\operatorname{GL}(\mathbf{s})/\mathbf{B}\) is never projective when \(\mathbf{B}\) is a Borel subgroup. On the other hand, according to [3, Proposition 7.2], an ind-variety of generalized flags is projective if and only if the total order on the flag can be induced by a subset of \((\mathbb{Z},\leq)\), and Theorem 6.3 shows that in many situations \(\operatorname{GL}(\mathbf{s})/\mathbf{P}\) is projective. ## 7. The case of direct products of ind-varieties of generalized flags In this section, we point out that many direct products of ind-varieties of generalized flags can be homogeneous spaces for the group \(\operatorname{GL}(\mathbf{s})\). ### Direct products of ind-varieties Let \(\mathbf{X}_{i}=\bigcup_{n\geq 1}X_{i,n}\) (\(i\in I\)) be a collection of ind-varieties indexed by \(\mathbb{Z}_{>0}\) or a finite subset of it. For each \(i\in I\) we pick an element \(x_{i}\in X_{i,1}\) and we set \(X_{i,0}=\{x_{i}\}\). The direct product in the category of pointed ind-varieties is then given by \[\prod_{i\in I}\mathbf{X}_{i}:=\bigcup_{n\geq 1}\prod_{i\in I}X_{i,\phi_{i}(n)}\] for a collection of increasing maps \(\phi_{i}:\mathbb{Z}_{>0}\to\mathbb{Z}_{\geq 0}\) such that for every \(n\in\mathbb{Z}_{>0}\) we have \(\phi_{i}(n)=0\) for all but finitely many \(i\in I\) (the definition does not depend essentially on the choice of the maps \(\phi_{i}\)). **Remark 7.1**.: (a) As a set, the direct product can be identified with the set of sequences \((y_{i})_{i\in I}\) where \(y_{i}\in\mathbf{X}_{i}\) for all \(i\in I\) and \(y_{i}=x_{i}\) for all but finitely many \(i\in I\). (b) For a finite set of indices \(I\), as a set, \(\prod_{i\in I}\mathbf{X}_{i}\) coincides with the usual cartesian product, and its structure of ind-variety is given by the exhaustion \(\prod_{i\in I}\mathbf{X}_{i}:=\bigcup_{n\geq 1}X_{i,n}\). 
Fixing an index \(i_{0}\in I\), there are a canonical projection \[\operatorname{proj}_{i_{0}}:\prod_{i\in I}\mathbf{X}_{i}\to\mathbf{X}_{i_{0}},\ (y_{i})\mapsto y_{i_{0}}\] and an embedding \[\operatorname{emb}_{i_{0}}:\mathbf{X}_{i_{0}}\to\prod_{i\in I}\mathbf{X}_{i},\ x\mapsto(y_{i})\text{ with }y_{i}=\left\{\begin{array}{ll}x_{i}&\text{if }i\neq i_{0},\\ x&\text{if }i=i_{0},\end{array}\right.\] which are morphisms of ind-varieties. If the product is endowed with an action of a group \(G\), then each ind-variety \(\mathbf{X}_{i}\) inherits an action of \(G\) defined through the maps \(\operatorname{proj}_{i}\) and \(\operatorname{emb}_{i}\). Conversely, if every ind-variety \(\mathbf{X}_{i}\) is endowed with an action of a group \(G\), then we obtain an action of \(G\) on the product defined diagonally provided that the following condition is fulfilled: (7.1) every \[g\in G\] fixes \[x_{i}\] for all but finitely many \[i\in I\] (This condition is automatically satisfied in the case where \(I\) is finite.) Moreover, in both directions, when \(G=\mathbf{G}\) is an ind-group, we have that the obtained action is algebraic provided that the initial one is. The following lemma is an immediate consequence of this discussion. **Lemma 7.2**.: _Assume that the direct product \(\prod_{i\in I}\mathbf{X}_{i}\) is a homogeneous space for an ind-group \(\mathbf{G}\). Then, every ind-variety \(\mathbf{X}_{i}\) is also a homogeneous space for \(\mathbf{G}\)._ Note also that a direct product \(\prod_{i\in I}\mathbf{X}_{i}\) is locally projective if and only if it is the case of \(\mathbf{X}_{i}\) for all \(i\in I\). ### The case of ind-varieties of generalized flags We start with an example. **Example 7.3**.: Let \(\mathbf{s}=2^{\infty}\). We consider the space \(V\) of countable dimension, endowed with its fixed basis \(E=\{e_{k}\}_{k\in\mathbb{Z}_{>0}}\), and we set \(V_{n}:=\langle e_{1},\dots,e_{n}\rangle\). We have the exhaustion \(\operatorname{GL}(\mathbf{s})=\bigcup_{n\geq 1}\operatorname{GL}(V_{2^{n}})\) defined through the diagonal embedding \(\operatorname{GL}(V_{2^{n}})\hookrightarrow\operatorname{GL}(V_{2^{n+1}})\), \(x\mapsto\begin{pmatrix}x&0\\ 0&x\end{pmatrix}\). Consider the sequence of parabolic subgroups \[P_{n}:=\operatorname{Stab}_{\operatorname{GL}(V_{2^{n}})}(V_{1},V_{2^{n}-1}), \quad n\geq 2.\] In this way, \(P_{n}\cap\operatorname{GL}(V_{2^{n-1}})=P_{n-1}\) for all \(n\geq 3\). Moreover, every quotient \(\operatorname{GL}(V_{2^{n}})/P_{n}\) is a flag variety formed by flags (\(F_{1}\subset F_{2}\subset V_{2^{n}}\)) of length \(2\), and we have an embedding of flag varieties \[\operatorname{GL}(V_{2^{n-1}})/P_{n-1}\to\operatorname{GL}(V_{2^{n}})/P_{n},\ (F_{1},F_{2})\mapsto(F_{1},V_{2^{n-1}}+\overline{F_{2}})\] where as before the map \[V_{2^{n-1}}=\langle e_{1},\ldots,e_{2^{n-1}}\rangle\to\overline{V_{2^{n-1}}}= \langle e_{2^{n-1}+1},\ldots,e_{2^{n-1}+2^{n-1}}\rangle,\ v\mapsto\overline{v},\] is the isomorphism from Section 5. For every \(n\), this embedding factors through a direct product of grassmannians \[\operatorname{GL}(V_{2^{n-1}})/P_{n-1}\to\operatorname{Gr}(1;V_{2^{n-1}})\times \operatorname{Gr}(2^{n-1}-1;\overline{V_{2^{n-1}}})\to\operatorname{GL}(V_{2^ {n}})/P_{n},\] which allows us to check that the ind-variety \(\operatorname{GL}(\mathbf{s})/\mathbf{P}\), where \(\mathbf{P}=\bigcup_{n}P_{n}\), is isomorphic as an ind-variety to a direct product of two ind-grassmannians. 
The following theorem shows that many homogeneous spaces for \(\operatorname{GL}(\mathbf{s})\) can be isomorphic to direct products of ind-varieties of generalized flags. **Theorem 7.4**.: _Let \(\mathbf{X}=\operatorname{GL}(\mathbf{s})/\mathbf{P}\) be a homogeneous space, defined by a parabolic subgroup \(\mathbf{P}\). Assume that we have an exhaustion \(\mathbf{X}=\bigcup_{n\geq 1}\operatorname{GL}(s_{n})/P_{s_{n}}\) determined by an exhaustion \(\{s_{n}\}_{n\geq 1}\) of \(\mathbf{s}\), where each embedding \(\operatorname{GL}(s_{n})/P_{s_{n}}\hookrightarrow\operatorname{GL}(s_{n+1})/P_ {s_{n+1}}\) is linear. Then, \(\mathbf{X}\) is isomorphic as an ind-variety to a direct product of ind-varieties of generalized flags \(\prod_{i\in I}\operatorname{Fl}(\mathcal{F}^{i},E^{i})\) where \(I\) is either \(\mathbb{Z}_{>0}\) or a finite subset of it._ Proof.: Fix \(n\geq 1\). Let \(d_{n}=\frac{s_{n+1}}{s_{n}}\) and fix a decomposition \[V_{n}=W_{n}^{(1)}\oplus\ldots\oplus W_{n}^{(d_{n})} \tag{7.2}\] of the space \(V_{n}=\mathbb{C}^{s_{n+1}}\) as in Section 5. Thus we are in the setting of Proposition 5.3, and the embedding \[\phi_{n}:\operatorname{GL}(s_{n})/P_{s_{n}}\hookrightarrow\operatorname{GL}(s_ {n+1})/P_{s_{n+1}}\] can be encoded by an E-graph with \(d_{n}\) colours in the sense of Proposition 5.3 (a). The formula therein, combined with the characterization of \(\phi_{n}\) given in Proposition 5.3 (c), yields a commutative diagram where \(\psi^{(i)}\) is the embedding corresponding to the subgraph of \(\mathcal{G}(\alpha,\beta)\) formed by removing all the ordinary edges which are not of colour \(i\) (\(\psi^{(i)}\) is a strict standard extension due to Proposition 5.3 (c)-(d)), \(\underline{\ell}^{(i)}\) is an appropriate dimension vector, and the embedding \(\xi_{n}\) is induced by the decomposition (7.2). The theorem follows from this construction. The above proof yields the following sharpening of Theorem 7.4. **Corollary 7.5**.: _In the framework of Theorem 7.4, let \(\mathcal{G}\) be the \(\mathbf{s}\)-graph corresponding to \(\mathbf{P}\) in the sense of Section 5.3. Let \(\mathcal{G}=\bigcup_{i\in\mathcal{I}}\mathcal{G}_{i}\) be a decomposition into subgraphs so that all ordinary edges of \(\mathcal{G}_{i}\) are of the same colour. Then, the ind-variety \(\mathbf{X}\) is isomorphic to a direct product of ind-varieties of generalized flags \(\prod_{i\in\mathcal{I}}\mathbf{X}_{i}\) where \(\mathbf{X}_{i}\) has an exhaustion with embeddings encoded by \(\mathcal{G}_{i}\)._ ## Outlook We see the results of this paper as a small first step in the study of locally projective homogeneous ind-spaces of locally reductive ind-groups. One inevitable question for a future such study is, given two non-isomorphic locally reductive ind-groups \(\mathbf{G}\) and \(\mathbf{G}^{\prime}\), when are two homogeneous spaces \(\mathbf{G}/\mathbf{P}\) and \(\mathbf{G}^{\prime}/\mathbf{Q}\) isomorphic as ind-varieties? A further natural direction of research could be a comparison of Bott-Borel-Weil type results on \(\mathbf{G}/\mathbf{P}\) and \(\mathbf{G}^{\prime}/\mathbf{Q}\). We finish the paper by pointing out that the reader can verify that Theorem 6.1 remains valid if one replaces \(\mathrm{GL}(\mathbf{s})\) by any pure diagonal ind-group in the terminology of [1].
2309.14727
Effective Multi-Agent Deep Reinforcement Learning Control with Relative Entropy Regularization
In this paper, a novel Multi-agent Reinforcement Learning (MARL) approach, Multi-Agent Continuous Dynamic Policy Gradient (MACDPP), is proposed to tackle the issues of limited capability and sample efficiency in various scenarios controlled by multiple agents. It alleviates the inconsistency of multiple agents' policy updates by introducing relative entropy regularization into the Centralized Training with Decentralized Execution (CTDE) framework with the Actor-Critic (AC) structure. Evaluated on multi-agent cooperation and competition tasks and traditional control tasks including OpenAI benchmarks and robot arm manipulation, MACDPP demonstrates significant superiority in learning capability and sample efficiency compared with both related multi-agent and widely implemented single-agent baselines, and therefore expands the potential of MARL in effectively learning challenging control scenarios.
Chenyang Miao, Yunduan Cui, Huiyun Li, Xinyu Wu
2023-09-26T07:38:19Z
http://arxiv.org/abs/2309.14727v1
# Effective Multi-Agent Deep Reinforcement Learning Control with Relative Entropy Regularization ###### Abstract In this paper, a novel Multi-agent Reinforcement Learning (MARL) approach, Multi-Agent Continuous Dynamic Policy Gradient (MACDPP) was proposed to tackle the issues of limited capability and sample efficiency in various scenarios controlled by multiple agents. It alleviates the inconsistency of multiple agents' policy updates by introducing the relative entropy regularization to the Centralized Training with Decentralized Execution (CTDE) framework with the Actor-Critic (AC) structure. Evaluated by multi-agent cooperation and competition tasks and traditional control tasks including OpenAI benchmarks and robot arm manipulation, MACDPP demonstrates significant superiority in learning capability and sample efficiency compared with both related multi-agent and widely implemented signal-agent baselines and therefore expands the potential of MARL in effectively learning challenging control scenarios. The open source code of MACDPP is available at [https://github.com/AdrienLin1/MACDPP](https://github.com/AdrienLin1/MACDPP). ## I Introduction Guided by task-related reward functions, Reinforcement Learning (RL) provides an effective solution to autonomously explore and gradually learn optimal or near-optimal control strategies by iteratively interacting with the environment in the absence of task-specific prior knowledge [1, 2]. Utilizing the power of deep neural networks [3] to adapt abstract features from high-dimensional input states, RL has demonstrated superior performances than humans in various complex scenarios including board games [4], video games [5], and robot control [6]. Based on the successful implementations of single-agent RL approaches, people naturally attempt to develop Multi-agent Reinforcement Learning (MARL) to effectively explore optimal control policies of large-scale systems and achieve promising results in a wide range of tasks [7, 8, 9, 10, 11, 12, 13, 14]. On the other hand, transferring RL from single-agent environments to multi-agent environments raises a new challenge: the environments affected by the joint actions from multiple agents become non-stationary, and each agent faces a moving-target problem while its optimal strategy strongly depends on the frequently changing policies of other agents. This characteristic not only breaks the Markov property of the environment but also greatly compromises the learning capability and converge velocity of traditional RL approaches designed for single agent [15]. Compared with the approaches like Independent Q-Learning (IQL) [16] that directly implemented single-agent RL approaches in multi-agent scenarios to separately explore independent polices [17, 18], MARL methods based on the Centralized Training with Decentralized Execution (CTDE) framework provides an appealing prospect for addressing the issue of non-stationary [19]. It enables the multiple agents to learn decentralized policies in a centralized end-to-end fashion: all agents are accessible to the global information during the training stage while the decision-making of each agent is independent during the interaction with the environment. From the perspective of the value function, Value-Decomposition Networks (VDN) [20] decomposed the value function for multiple agents under a CTDE framework and achieved better cooperation behaviors in a simulation maze environment. 
QMIX [21] further proposed a network-based mixture strategy for multiple agents' value function based on VDN and enjoyed astonishing results performances in StarCraft Multi-Agent Challenge (SMAC) [22]. Based on the policy-based RL approaches, Multi-agent Proximal Policy Optimization (MAPPO) [23] demonstrated better performances than traditional MARL methods in both Multi-Agent Particle Environment (MPE) and SMAC. Employing an Actor-Critic (AC) structure, Multi-Agent Deep Deterministic Policy Gradient (MADDPG) [24] combines the strengths of both value function-based and policy-based approaches and has achieved good learning performances in both cooperative and competitive tasks. Based on this approach, Multi-agent TD3 (MATD3) [25] tackled the issue of overestimated value function with additional critic networks. The minimax algorithm was further introduced by MiniMax Multi-agent Deep Deterministic Policy Gradient (M3DDPG) [26] for enhanced learning capability in both cooperative and competitive tasks. As demonstrated above, CTDE-based MARL approaches have improved the learning capability of agents from the perspective of structure. However, at the algorithmic level, the inherent inconsistency of multiple agents' policy updates and the resulting deterioration of learning performance has not been sufficiently addressed. By incorporating the relative entropy between the current and previous policies as a regularization term into the value function, Dynamic Policy Programming (DPP) [27] effectively constrains excessive policy updates in single-agent environments. Theoretically, DPP significantly reduces the estimated error of the value function with superior error bounds [28, 29]. In engineering applications of single-agent scenarios, DPP has demonstrated superior sample efficiency and robustness in several robot control tasks [30, 31]. As one of the pioneering works applying relative entropy regularized RL to multi-agent scenarios, Factorial Kernel Dynamic Policy Programming (FKDPP) [32] was proposed to control large-scale chemical plants with multiple DPP agents. It outperformed the control strategy designed by human experts in production rate, profit, and safety on a simulated Vinyl Acetate Monomer (VAM) plant [33] and has been successfully implemented in a real-world chemical plant for 35 days 1. Although this work fully indicates the great potential of relative entropy regularized MARL in real-world systems, FKDPP was developed based on neither deep neural networks nor CTDE framework and was limited in discrete action space without supporting the AC structure. These characteristics restrict its application scope in more challenging and flexible control scenarios. Footnote 1: This implementation of FKDPP was conducted by Yokogawa Electric Corporation and JSR Corporation. For more details, please see: [https://www.yokogawa.com/news/press-releases/2022/2022-03-2/](https://www.yokogawa.com/news/press-releases/2022/2022-03-2/). This paper focuses on integrating the relative entropy regularization to the modern MARL under the CTDE framework in order to alleviate the inconsistency of multiple agents' policy updates in various control scenarios. 
According to the characteristics compared with MARL baselines in Table I, our proposed approach, Multi-Agent Continuous Dynamic Policy Gradient (MACDPP)2 naturally extends the power of FKDPP from kernel-based value function approximation and discrete action space to the CTDE framework with AC structure with superior learning capability and sample-efficiency. MACDPP reduces the intractable computational burden of FKDPP in the actor network with continuous actions by Monte Carlo sampling and naturally obtains a superior exploration strategy based on the Boltzmann softmax operator. Evaluated by both cooperation and competition tasks in MPE environment and OpenAI benchmark control task where multiple agents collaborate to control one high-dimensional system, the proposed method successfully demonstrated superiority in both learning capability and sample-efficiency compared with various multi-agent and signal-agent RL baselines. The contributions of this paper can be summarized as: Footnote 2: Code available [https://github.com/AdrienLin1/MACDPP](https://github.com/AdrienLin1/MACDPP) 1. Our work first attempts to integrate relative entropy regularization into the CTDE framework-based MARL to address the inherent inconsistency of policy updates for multiple agents at an algorithmic level. We propose a novel MARL approach that is compatible with both cooperative and competitive tasks in a multi-agent scenario, as well as single systems collaboratively controlled by multiple agents. 2. As one natural extension of previous works [32, 33] with successful engineering applications to not only the deep neural networks function approximator but also the AC structure with continuous action space, the proposed MACDPP can be seen as a comprehensive upgrade to FKDPP, targeting enhanced learning capability and control flexibility. 3. The proposed method was evaluated by several benchmarks from MPE to traditional control tasks in terms of the learning capability and sample efficiency compared to both related CTDE framework-based MARL and widely-applied single-agent RL approaches. We further analyzed the impact of relative entropy regularization in the CTDE framework-based MARL on convergence and control behaviors. The remainder of this paper is organized as follows. Section II introduces the preliminaries of Markov games, MARL, and CTDE framework. Section III details the proposed approach MACDPP. The experimental results are presented in Section IV. Finally, Section V concludes this paper. ## II Preliminaries ### _Markov Games_ Markov games are widely utilized to model a multi-agent environment satisfying partially observable Markov processes (POMDP). It is generally defined by a sextuple \((N,\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\). \(N\) represents the number of agents in the target environment, \(\mathcal{S}\) defines the general state space. The locally observed state of the \(i\)-th agent is denoted as \(\mathbf{s}_{i}\) which is a subset of the global observed state \(\mathbf{s}\in\mathcal{S}\). The joint action is made up of the local actions from all agents \(\mathbf{a}=\mathbf{a}_{1}\times\cdots\times\mathbf{a}_{N}\in\mathcal{A}\). The subspace of each agent's action is presented as \(\mathcal{A}_{i}\). The state transition probability over all agents is presented as \(\mathcal{P}\). 
\(\mathcal{R}\) is a set of reward functions for specific tasks, and each agent has its own reward function \(R_{i}(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})\in\mathcal{R}\) based on its local state, action and the next step state. The discount factor \(\gamma\in[0,1)\) is utilized to gradually ignore the accumulative rewards in a long-term horizon. Based on the Markov games, MARL introduces the value function \(V_{i,\pi_{i}}\) and the state-action value function \(Q_{i,\pi_{i}}\) to measure the long-term accumulative rewards obtained by the \(i\)-th agent under its policy: \[V_{i,\pi_{i}}(\mathbf{s}_{i})\!=\!\mathbb{E}_{\begin{subarray}{c}\mathbf{s}_{i+1} \sim\mathcal{P}\\ \mathbf{a}_{i,\epsilon}\sim\pi_{i}\end{subarray}}\!\!\left[\sum_{t=0}^{\infty} \gamma^{t}R_{i}(\mathbf{s}_{i,t},\mathbf{a}_{i,t},\mathbf{s}_{i,t+1})\mid\mathbf{s}_{i,0}=\bm {s}_{i}\right], \tag{1}\] \[Q_{i,\pi_{i}}(\mathbf{s}_{i},\mathbf{a}_{i})\!=\!\mathbb{E}_{\begin{subarray}{c}\mathbf{s}_ {i+1}\sim\mathcal{P}\\ \mathbf{a}_{i,\epsilon}\sim\pi_{i}\end{subarray}}\!\!\left[\sum_{t=0}^{\infty} \gamma^{t}R_{i}(\mathbf{s}_{i,t},\mathbf{a}_{i,t},\mathbf{s}_{i,t+1})\mid\begin{subarray}{ c}\mathbf{s}_{i,0}=\mathbf{s}_{i}\\ \mathbf{a}_{i,0}=\mathbf{s}_{i}\end{subarray}\right], \tag{2}\] where the global state in the next time step \(\mathbf{s}_{t+1}\) is determined by the current global state \(\mathbf{s}_{t}\) and action \(\mathbf{a}_{t}\) under \(\mathcal{P}\). Define \(\mathcal{P}^{\mathbf{a}}_{\mathbf{s}\mathbf{s}^{\prime}}\) as the probability of translating from state \(\mathbf{s}\) to state \(\mathbf{s}^{\prime}\) under action \(\mathbf{a}\) in a global perspective, the goal of each agent in MARL is to learn an optimal control policy to maximize its optimal value function following a Bellman equation: \[V_{i}^{*}(\mathbf{s}_{i})=\max_{\pi_{i}}\sum_{\begin{subarray}{c}\mathbf{a} _{i}\in\mathcal{A}\\ \mathbf{a}^{\prime}\in\mathcal{S}\end{subarray}}\pi_{i}(\mathbf{a}_{i}|\mathbf{s}_{i}) \mathcal{P}_{\mathbf{s}\mathbf{s}^{\prime}}^{\mathbf{a}}\left(R_{i}(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})+\gamma V_{i}^{*}(\mathbf{s}_{i}^{\prime})\right). \tag{3}\] ### _Multi-agent Reinforcement Learning in CTDE framework_ In the CTDE-framework based MARL like MADDPG [24], the AC structure is implemented to separately estimate the state-action value function and model control policy by critic network \(\hat{Q}_{i}(\cdot,\mathbf{\theta}_{i})\) and actor network \(\hat{\pi}_{i}(\cdot,\mathbf{\phi}_{i})\) for each agent where \(\mathbf{\theta}_{i}\) and \(\mathbf{\phi}_{i}\) are the corresponding parameters. In the centralized training process, All critic networks are globally updated with the shared observation information. Define one global training sample from sample set \(\mathcal{D}\) as \((\mathbf{s},\mathbf{a},\mathbf{s}^{\prime},\mathbf{r})\) where \(\mathbf{r}=[R_{1}(\mathbf{s}_{1},\mathbf{a}_{1},\mathbf{s}_{1}^{\prime}),...,R_{N}(\mathbf{s}_{N}, \mathbf{a}_{N},\mathbf{s}_{N}^{\prime})]\) is the vector of reward signal for all agents, the \(i\)-th agent's critic networks receive the states and actions from all agents \(\hat{Q}_{i}(\mathbf{s},\mathbf{a},\mathbf{\theta}_{i})\) and measure its own long-term reward of \(R_{i}(\mathbf{s}_{i},\mathbf{a}_{i})\). 
Determining the global action of the next step by all agents according to their local states \(\mathbf{a}_{i}^{\prime}=\hat{\pi}_{i}(\mathbf{s}_{i}^{\prime},\mathbf{\phi}_{i}),\forall i =1,...,N\), the corresponding Temporal-Difference (TD) errors that guide the update of critic networks in gradient descent optimization is calculated following: \[\mathcal{L}(\mathbf{\theta}_{i})=\left(\hat{Q}_{i}(\mathbf{s},\mathbf{a},\mathbf{\theta}_{i}) -R_{i}(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})-\gamma\hat{Q}_{i}(\mathbf{s}^{ \prime},\mathbf{a}^{\prime},\mathbf{\theta}_{i})\right)^{2}. \tag{4}\] The actor networks are updated to maximize the returns of the current critic networks based on the local observation of each agent. The corresponding gradient of error is defined as: \[\nabla_{\mathbf{\phi}_{i}}\leftarrow\nabla_{\mathbf{a}_{i}^{\prime}=\hat{\pi}_{i}(\bm {a}_{i},\mathbf{\phi}_{i})}\hat{Q}_{i}(\mathbf{s},\mathbf{a}^{*},\mathbf{\theta}_{i})\nabla_{ \mathbf{\phi}_{i}}\hat{\pi}_{i}(\mathbf{s}_{i},\mathbf{\phi}_{i}) \tag{5}\] where \(\mathbf{a}^{*}\) is selected by all agents with shared information. In the decentralized execution process, on the other hand, the control actions of each agent are determined by the actor only without the consideration of other agents. ## III Approach In this section, the proposed method MACDPP was detailed. It naturally extended the relative entropy regularization term from DPP [27] and FKDPP [32] to the modern MARL under the CTDE framework and AC structure. The multi-agent critic networks regularized by relative entropy were introduced in Section III-A, and the corresponding actor that supports continuous actions was introduced in Section III-B. The factorization strategy of MACDPP for cooperative and competitive tasks was discussed in Section III-C with a summary of MACDPP's learning procedure. ### _Relative Entropy Regularized Critics_ Following the existing relative entropy regularized RL approaches [27, 28], the difference between the current policy \(\pi_{i}\) and previous policy \(\bar{\pi}_{i}\) of the \(i\)-th agent on state \(\mathbf{s}_{i}\) was defined as: \[\mathbb{D}_{\mathrm{KL}}(\mathbf{s}_{i})=\sum_{\mathbf{a}\in\mathcal{A}}\pi_{i}(\mathbf{a} _{i}|\mathbf{s}_{i})\log\left(\frac{\pi_{i}(\mathbf{a}_{i}|\mathbf{s}_{i})}{\bar{\pi}_{i}( \mathbf{a}_{i}|\mathbf{s}_{i})}\right). \tag{6}\] This term was then incorporated into the value function as a regularization term controlled by a parameter \(\eta\): \[V_{i}^{\prime}(\mathbf{s}_{i})=\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t}\left(R_{i}(\mathbf{s}_{i,t},\mathbf{a}_{i,t},\mathbf{s}_{i,t+1})-\frac{1}{\eta} \mathbb{D}_{\mathrm{KL}}(\mathbf{s}_{i,t})\right)\right]. \tag{7}\] Combining Eqs.(3) and (7), the resulted optimal value function was still a Bellman equation with an additional term \(-\frac{1}{\eta}\mathbb{D}_{\mathrm{KL}}(\mathbf{s}_{i,t})\) in Eq. (3). 
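As a concrete illustration of Eqs. (6)–(7), the short NumPy sketch below evaluates the relative entropy term for a discrete action distribution and shows how it penalizes the per-step reward. It is a minimal sketch for intuition only, and the helper names (`kl_term`, `kl_regularized_reward`) are our own rather than part of the MACDPP implementation.

```python
import numpy as np

def kl_term(pi_new, pi_old, eps=1e-12):
    """Eq. (6): relative entropy between the current and previous policy at one state.

    pi_new, pi_old: arrays of shape (num_actions,) holding action probabilities.
    """
    pi_new = np.clip(pi_new, eps, 1.0)
    pi_old = np.clip(pi_old, eps, 1.0)
    return float(np.sum(pi_new * np.log(pi_new / pi_old)))

def kl_regularized_reward(reward, pi_new, pi_old, eta):
    """One summand of Eq. (7): the reward penalized by (1/eta) * KL."""
    return reward - kl_term(pi_new, pi_old) / eta

# Toy example: a small policy update on a 3-action problem.
pi_old = np.array([0.2, 0.5, 0.3])
pi_new = np.array([0.1, 0.7, 0.2])
print(kl_term(pi_new, pi_old))                       # > 0, grows with the size of the update
print(kl_regularized_reward(1.0, pi_new, pi_old, eta=0.1))
```

A larger jump between the previous and the current policy yields a larger penalty, which is exactly what discourages overly aggressive and mutually inconsistent updates across agents.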
Assume the action of each agent \(\mathbf{a}_{i}\in\mathcal{A}_{i}\) is discrete, an iterative update form of both value function and policy can be found based on DPP [27]: \[V_{i,t+1}^{\prime}(\mathbf{s}_{i})=\frac{1}{\eta}\log\sum_{\mathbf{a}_{i}\in\mathcal{A }_{i}}\exp\left(\eta\cdot\Psi_{i,t}\left(\mathbf{s}_{i},\mathbf{a}_{i}\right)\right), \tag{8}\] \[\bar{\pi}_{i,t+1}(\mathbf{a}_{i}|\mathbf{s}_{i})=\frac{\exp\left(\eta\cdot\Psi_{i,t} \left(\mathbf{s}_{i},\mathbf{a}_{i}\right)\right)}{\sum_{\mathbf{a}_{i}^{\prime}\in \mathcal{A}_{i}}\exp\left(\eta\cdot\Psi_{i,t}\left(\mathbf{s}_{i},\mathbf{a}_{i}^{ \prime}\right)\right)} \tag{9}\] where \(t\) is the iteration of update, \(\Psi_{i,t}(\cdot)\) is the action preferences function [1] which can be treat as a regularized Q function: \[\begin{split}\Psi_{i,t}(\mathbf{s}_{i},\mathbf{a}_{i})=& \sum_{\mathbf{s}^{\prime}\in\mathcal{S}}\mathcal{P}_{\mathbf{s}\mathbf{s}^{ \prime}}^{\mathbf{a}}\left(R_{i}(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})+\gamma V _{i,t}^{\prime}\left(\mathbf{s}_{i}^{\prime}\right)\right)\\ &+\frac{1}{\eta}\log\bar{\pi}_{i,t}(\mathbf{a}_{i}|\mathbf{s}_{i}).\end{split} \tag{10}\] With a discrete action space, once the critic network accurately estimates the action preferences function \(\Psi_{i}(\cdot)\), both the value function and policy can be directly calculated. In practice, the transition probability matrix \(\mathcal{P}\) is usually too large and inaccessible. DPP proposed an update rule of \(\Psi_{i}(\cdot)\) based on sampling by inserting Eqs (8) and (9) into Eq. (10). Given a sample \((\mathbf{s},\mathbf{a},\mathbf{s}^{\prime},\mathbf{r})\), the action preferences function of the \(i\)-th agent was updated following: \[\begin{split}\Psi_{i,t+1}(\mathbf{s}_{i},\mathbf{a}_{i})=& \Psi_{i,t}(\mathbf{s}_{i},\mathbf{a}_{i})-\mathcal{B}_{\eta}\Psi_{i,t}(\mathbf{s}_{i})\\ &+R_{i}(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})+\mathcal{B}_{ \eta}\Psi_{i,t}(\mathbf{s}_{i}^{\prime}),\end{split} \tag{11}\] \[\mathcal{B}_{\eta}\Psi_{i,t}(\mathbf{s}_{i})=\sum_{\mathbf{a}_{i}\in\mathcal{A}_{i}} \frac{\exp\left(\eta\cdot\Psi_{i,t}(\mathbf{s}_{i},\mathbf{a}_{i})\right)\Psi_{i,t}(\bm {s}_{i},\mathbf{a}_{i})}{\sum_{\mathbf{a}_{i}^{\prime}\in\mathcal{A}_{i}}\exp\left( \eta\cdot\Psi_{i,t}\left(\mathbf{s}_{i},\mathbf{a}_{i}^{\prime}\right)\right)}. \tag{12}\] \(\mathcal{B}_{\eta}(\cdot)\) is a Boltzmann softmax operator. It is straightforward to estimate the action preferences function instead of the Q function by the critic neural networks under CTDE framework when the action is discrete. Let the critic networks receive the global information of all agents, the loss function of the critic networks \(\hat{\Psi}_{i}(\mathbf{s},\mathbf{a},\mathbf{\theta}_{i})\) is calculated by integrating Eq. (11) into Eq. (4): \[\mathcal{L}(\mathbf{\theta}_{i})=\left(\hat{\Psi}_{i}(\mathbf{s},\mathbf{a},\mathbf{\theta}_{i })-y(\mathbf{s},\mathbf{a},\mathbf{s}_{i}^{\prime},\mathbf{\theta}_{i}^{-})\right)^{2}, \tag{13}\] \[\begin{split} y(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime},\mathbf{\theta}_{i}^{-}) &=R_{i}(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})+\hat{\Psi}_{i}( \mathbf{s},\mathbf{a},\mathbf{\theta}_{i}^{-})\\ &-\mathcal{B}_{\eta}\hat{\Psi}_{i}(\mathbf{s},\mathbf{\theta}_{i}^{-})+ \gamma\mathcal{B}_{\eta}\hat{\Psi}_{i}(\mathbf{s}^{\prime},\mathbf{\theta}_{i}^{-}) \end{split}\] Where \(\mathbf{\theta}_{i}^{-}\) is the parameters of the corresponding target networks. 
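For intuition about the discrete-action case, the following NumPy sketch evaluates the Boltzmann softmax operator of Eq. (12) on a small table of action preferences and assembles the regression target \(y\) of Eqs. (13)–(14). The tabular stand-in `Psi` and the function names are illustrative assumptions, not the paper's released code; the continuous-action treatment follows next.

```python
import numpy as np

def boltzmann_softmax(psi_row, eta):
    """Eq. (12): Boltzmann softmax of the action preferences at one (global) state."""
    z = eta * psi_row
    w = np.exp(z - z.max())            # shift by the max for numerical stability
    return float(np.dot(w / w.sum(), psi_row))

def td_target(Psi, s, a, r, s_next, eta, gamma):
    """Eq. (14): target y for the critic regression loss of Eq. (13)."""
    return (r
            + Psi[s, a]
            - boltzmann_softmax(Psi[s], eta)
            + gamma * boltzmann_softmax(Psi[s_next], eta))

# Toy example: a tabular stand-in for the critic with 2 global states and 3 joint actions.
Psi = np.array([[0.5, 1.0, -0.2],
                [0.0, 0.3,  0.8]])
y = td_target(Psi, s=0, a=1, r=1.0, s_next=1, eta=1.0, gamma=0.95)
print(y)   # the critic output at (s, a) would be regressed towards this value
```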
The Boltzmann softmax operator is conducted over the global action of all agents: \[\mathcal{B}_{\eta}\hat{\Psi}_{i}(\mathbf{s},\mathbf{\theta}_{i}^{-})=\sum_{\mathbf{a}\in \mathcal{A}}\frac{\exp\left(\eta\cdot\hat{\Psi}_{i}(\mathbf{s},\mathbf{a},\mathbf{\theta}_{ i}^{-})\right)\hat{\Psi}_{i}(\mathbf{s},\mathbf{a},\mathbf{\theta}_{i}^{-})}{\sum_{\mathbf{a}^{ \prime}\in\mathcal{A}}\exp\left(\eta\cdot\hat{\Psi}_{i}\left(\mathbf{s},\mathbf{a}^{ \prime},\mathbf{\theta}_{i}^{-}\right)\right)}. \tag{15}\] However, the application of the relative entropy regularization in the AC structure remains limited due to the intractable calculation over the whole continuous action space in \(\mathcal{B}_{\eta}(\cdot)\). We detailed our solution in the next subsection. ### _Actors with Boltzmann Softmax Sampling_ To effectively calculate Eq. (15) in continuous action space, we estimated it within a local range of the input action in MACDPP. The global action \(\mathbf{a}\) was extended to a vector with \(M+1\) Monte Carlo samples: \[\mathcal{A}^{\text{MC}}=[\mathbf{a},\mathbf{a}+\mathbf{e}^{1},\mathbf{a}+\mathbf{e}^{2},...,\mathbf{a }+\mathbf{e}^{M}] \tag{16}\] where \(\mathbf{e}^{j}\sim\mathcal{N}(\mathbf{0},\mathbf{\zeta}^{\text{MC}})\) for \(j=1,...,M\) is the Monte Carlo sampling noise controlled by \(\mathbf{\zeta}^{\text{MC}}\). The locally estimation of Eq. (12) therefore was calculated as: \[\mathcal{B}_{\eta}\hat{\Psi}_{i}(\mathbf{s},\mathbf{\theta}_{i}^{-})\approx\sum_{\bm {a}\in\mathcal{A}^{\text{MC}}}\frac{\exp\left(\eta\cdot\hat{\Psi}_{i}(\mathbf{s}, \mathbf{a},\mathbf{\theta}_{i}^{-})\right)\hat{\Psi}_{i}(\mathbf{s},\mathbf{a},\mathbf{\theta}_{i }^{-})}{\sum_{\mathbf{a}^{\prime}\in\mathcal{A}^{\text{MC}}}\exp\left(\eta\cdot \hat{\Psi}_{i}\left(\mathbf{s},\mathbf{a}^{\prime},\mathbf{\theta}_{i}^{-}\right)\right)} \tag{17}\] According to [34], one critical issue of the Boltzmann softmax operator in RL is its multiple fixed points without non-expansion property which guarantees the convergence of "Q-learning like" algorithms to a unique fixed point in theory. One effective solution is to replace it with the Mellowmax operator with a unique fix point and non-expansion property: \[\mathcal{M}_{\eta}\hat{\Psi}_{i}(\mathbf{s},\mathbf{\theta}_{i}^{-})\approx\frac{1}{ \eta}\log\left(\frac{\sum_{\mathbf{a}\in\mathcal{A}^{\text{MC}}}\exp\left(\eta \cdot\hat{\Psi}_{i}(\mathbf{s},\mathbf{a},\mathbf{\theta}_{i}^{-})\right)}{M+1}\right). \tag{18}\] Algorithm 1 summarized the calculation of \(\mathcal{M}_{\eta}\hat{\Psi}_{i}(\mathbf{s},\mathbf{\theta}_{i}^{-})\). In practice, numerical issues usually arise in Eq. (18) with a large \(\eta\). We alternatively calculated it following: \[\mathcal{M}_{\eta}\hat{\Psi}_{i}(\mathbf{s},\mathbf{\theta}_{i}^{-})\approx \tag{19}\] \[\frac{1}{\eta}\log\left(\frac{\sum_{\mathbf{a}\in\mathcal{A}^{\text{ MC}}}\exp\left(\eta\cdot\hat{\Psi}_{i}(\mathbf{s},\mathbf{a},\mathbf{\theta}_{i}^{-})-C \right)}{M+1}\right)+C\] where \(C=\eta\cdot\max_{\mathbf{a}^{\prime}\in\mathcal{A}^{\text{MC}}}[\hat{\Psi}_{i}( \mathbf{s},\mathbf{a}^{\prime},\mathbf{\theta}_{i}^{-})]\). Employing the Monte Carlo sampling to estimate \(\mathcal{M}_{\eta}\hat{\Psi}_{i}(\cdot)\), any policy network maps the local states to the local actions can be used as an actor. 
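The sketch below illustrates the Monte Carlo estimation of the Mellowmax operator in Eqs. (16)–(19), using the standard max-shift so that the exponentials stay bounded for large \(\eta\). The critic is stubbed out as a plain callable, and all names here are assumptions made for illustration rather than the released implementation.

```python
import numpy as np

def mc_mellowmax(psi_fn, s, a, eta, M, zeta_mc, rng):
    """Eqs. (16)-(19): Mellowmax of the critic around action a via M Monte Carlo samples."""
    noise = rng.normal(0.0, zeta_mc, size=(M, a.shape[0]))      # Eq. (16) sampling noise
    candidates = np.vstack([a[None, :], a[None, :] + noise])    # A^MC, M+1 candidate actions
    values = np.array([psi_fn(s, c) for c in candidates])       # critic values at each candidate
    C = eta * values.max()                                       # max-shift against overflow
    return (np.log(np.exp(eta * values - C).mean()) + C) / eta

# Toy stand-in for a trained critic: a smooth function of the (state, action) pair.
psi_fn = lambda s, a: -np.sum((a - 0.3) ** 2) + np.sum(s)

rng = np.random.default_rng(0)
s = np.array([0.1, -0.2])
a = np.array([0.0, 0.0])
print(mc_mellowmax(psi_fn, s, a, eta=10.0, M=16, zeta_mc=0.1, rng=rng))
```

Because the shift only rescales the terms inside the logarithm, the returned value coincides with Eq. (18) while avoiding overflow even for large values of \(\eta\).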
In the decentralized execution process, the trained agent made decisions based on its own actor with local observation, \(\mathbf{a}_{i}^{*}=\hat{\pi}_{i}(\mathbf{s}_{i},\mathbf{\phi}_{i})\). The actor was updated in the centralized training following the gradient below: \[\nabla_{\mathbf{\phi}_{i}}\leftarrow\nabla_{\mathbf{a}_{i}^{*}=\hat{\pi}_{i}(\mathbf{s}_{i},\mathbf{\phi}_{i})}\hat{\Psi}_{i}(\mathbf{s},\mathbf{a}^{*},\mathbf{\theta}_{i})\nabla_{\mathbf{\phi}_{i}}\hat{\pi}_{i}(\mathbf{s}_{i},\mathbf{\phi}_{i}) \tag{20}\] where \(\mathbf{a}^{*}\) was jointly calculated by all agents' policies.

```
Function MC_Estimate(\(\mathbf{s},\mathbf{a},\mathbf{\theta}_{i}^{-}\)):
    for \(m=1\) to \(M\) do
        \(\mathbf{e}^{m}\sim\mathcal{N}(\mathbf{0},\mathbf{\zeta}^{\text{MC}})\)
    \(\mathcal{A}^{\text{MC}}=[\mathbf{a},\mathbf{a}+\mathbf{e}^{1},\mathbf{a}+\mathbf{e}^{2},...,\mathbf{a}+\mathbf{e}^{M}]\)
    \(C=\max_{\mathbf{a}^{\prime}\in\mathcal{A}^{\text{MC}}}\hat{\Psi}_{i}(\mathbf{s},\mathbf{a}^{\prime},\mathbf{\theta}_{i}^{-})\)
    return \(\frac{1}{\eta}\log\left(\frac{\sum_{\mathbf{a}^{\prime}\in\mathcal{A}^{\text{MC}}}\exp\left(\eta\cdot(\hat{\Psi}_{i}(\mathbf{s},\mathbf{a}^{\prime},\mathbf{\theta}_{i}^{-})-C)\right)}{M+1}\right)+C\)
```
**Algorithm 1** Estimation by Monte Carlo Sampling

```
Function BS_Sampling(\(\mathbf{s},\mathbf{\theta}_{i}^{-}\)):
    for \(j=1\) to \(N\) do
        \(\mathbf{a}_{j}^{*}=\hat{\pi}_{j}(\mathbf{s}_{j},\mathbf{\phi}_{j})\)
    \(\mathbf{a}^{*}=[\mathbf{a}_{1}^{*},...,\mathbf{a}_{N}^{*}]\)
    for \(m=1\) to \(M\) do
        \(\mathbf{e}_{i}^{m}\sim\mathcal{N}(\mathbf{0},\mathbf{\zeta}^{\text{Sampling}})\), \(\mathbf{e}_{j}^{m}=\mathbf{0}\) for \(j\neq i\)
        \(\mathbf{e}^{m}=[\mathbf{e}_{1}^{m},...,\mathbf{e}_{N}^{m}]\)
    \(\mathcal{A}^{\text{explore}}=[\mathbf{a}^{*},\mathbf{a}^{*}+\mathbf{e}^{1},\mathbf{a}^{*}+\mathbf{e}^{2},...,\mathbf{a}^{*}+\mathbf{e}^{M}]\)
    Select \(\mathbf{a}^{\text{explore}}\in\mathcal{A}^{\text{explore}}\) following
        \(p(\mathbf{a}^{\text{explore}})=\frac{\exp\left(\eta\cdot\hat{\Psi}_{i}(\mathbf{s},\mathbf{a}^{\text{explore}},\mathbf{\theta}_{i}^{-})\right)}{\sum_{\mathbf{a}^{\prime}\in\mathcal{A}^{\text{explore}}}\exp\left(\eta\cdot\hat{\Psi}_{i}\left(\mathbf{s},\mathbf{a}^{\prime},\mathbf{\theta}_{i}^{-}\right)\right)}\)
    return \(\mathbf{a}_{i}^{\text{explore}}\)
```
**Algorithm 2** Exploration using Boltzmann Softmax

Unlike MADDPG, which explores by directly adding noise to its actions, MACDPP proposed an effective exploration strategy naturally related to the relative entropy regularized value function, based on the shared information in the centralized training process. Define the global action as \(\mathbf{a}^{*}=[\mathbf{a}_{1}^{*},...,\mathbf{a}_{N}^{*}]\) where \(\mathbf{a}_{i}^{*}=\hat{\pi}_{i}(\mathbf{s}_{i},\mathbf{\phi}_{i}),i=1,...,N\).
An exploration set for the \(i\)-th agent with \(M+1\) candidates was built following Algorithm 2: \(\mathcal{A}^{\text{explore}}=[\mathbf{a}^{*},\mathbf{a}^{*}+\mathbf{e}^{1},\mathbf{a}^{*}+\mathbf{e}^{2},...,\mathbf{a}^{*}+\mathbf{e}^{M}]\), where the exploration noises \(\mathbf{e}^{m}\) added Gaussian noise only to the local actions related to the \(i\)-th agent. Please note that the sampling variance \(\mathbf{\zeta}^{\text{Sampling}}\), which affected the decision-making of the agent, was independent of \(\mathbf{\zeta}^{\text{MC}}\) in Algorithm 1, which locally estimated the Mellowmax operation. An effective exploratory action was then randomly selected following the probability below: \[p(\mathbf{a}^{\text{explore}})=\frac{\exp\left(\eta\cdot\hat{\Psi}_{i}(\mathbf{s},\mathbf{a}^{\text{explore}},\mathbf{\theta}_{i}^{-})\right)}{\sum_{\mathbf{a}^{\prime}\in\mathcal{A}^{\text{explore}}}\exp\left(\eta\cdot\hat{\Psi}_{i}\left(\mathbf{s},\mathbf{a}^{\prime},\mathbf{\theta}_{i}^{-}\right)\right)}.\]

```
for \(i=1\) to \(N\) do
    Initialize buffer \(\mathcal{D}_{i}\) and network weights \(\mathbf{\theta}_{i}\), \(\mathbf{\phi}_{i}\)
    Copy the target networks with parameters \(\mathbf{\theta}_{i}^{-}\), \(\mathbf{\phi}_{i}^{-}\)
for \(e=1\) to \(E\) do
    for \(t=1\) to \(T\) do
        # Interaction phase
        Observe state \(\mathbf{s}\)
        for \(i=1\) to \(N\) do
            \(\mathbf{a}_{i}=\text{BS\_Sampling}(\mathbf{s},\mathbf{\theta}_{i}^{-})\)
        Execute \(\mathbf{a}=[\mathbf{a}_{1},...,\mathbf{a}_{N}]\)
        Observe next state \(\mathbf{s}^{\prime}\) and reward \(\mathbf{r}\)
        Separately store the sample to \(\mathcal{D}_{i},i=1,...,N\)
        # Centralized training phase
        for \(i=1\) to \(N\) do
            Sample a mini-batch of \(J\) samples from \(\mathcal{D}_{i}\)
            for \(j=1\) to \(J\) do
                # Restore information from \(\mathcal{D}_{k},k\neq i\)
                \(\mathbf{s}_{j}=[\mathbf{s}_{1,j},...,\mathbf{s}_{N,j}]\), \(\mathbf{a}_{j}=[\mathbf{a}_{1,j},...,\mathbf{a}_{N,j}]\), \(\mathbf{s}^{\prime}_{j}=[\mathbf{s}^{\prime}_{1,j},...,\mathbf{s}^{\prime}_{N,j}]\)
                # Calculate the next action of all agents
                for \(k=1\) to \(N\) do
                    \(\mathbf{a}^{\prime}_{k,j}=\hat{\pi}_{k}(\mathbf{s}^{\prime}_{k},\mathbf{\phi}^{-}_{k})\)
                \(\mathbf{a}^{\prime}_{j}=[\mathbf{a}^{\prime}_{1,j},...,\mathbf{a}^{\prime}_{N,j}]\)
                # Monte Carlo estimation
                \(\mathcal{M}_{\eta}\hat{\Psi}_{i}(\mathbf{s}_{j})=\text{MC\_Estimate}(\mathbf{s}_{j},\mathbf{a}_{j},\mathbf{\theta}_{i}^{-})\)
                \(\mathcal{M}_{\eta}\hat{\Psi}_{i}(\mathbf{s}^{\prime}_{j})=\text{MC\_Estimate}(\mathbf{s}^{\prime}_{j},\mathbf{a}^{\prime}_{j},\mathbf{\theta}_{i}^{-})\)
                # Calculate the TD error following Eq. (14)
                \(y_{j}=R_{i}(\mathbf{s}_{i,j},\mathbf{a}_{i,j},\mathbf{s}^{\prime}_{i,j})+\gamma\mathcal{M}_{\eta}\hat{\Psi}_{i}(\mathbf{s}^{\prime}_{j})+\hat{\Psi}_{i}(\mathbf{s}_{j},\mathbf{a}_{j},\mathbf{\theta}_{i}^{-})-\mathcal{M}_{\eta}\hat{\Psi}_{i}(\mathbf{s}_{j})\)
            # Update critic \(\hat{\Psi}_{i}(\cdot,\mathbf{\theta}_{i})\)
            \(\mathbf{\theta}_{i}\leftarrow\underset{\mathbf{\theta}_{i}}{\arg\min}\ \frac{1}{J}\sum_{j=1}^{J}(y_{j}-\hat{\Psi}_{i}(\mathbf{s}_{j},\mathbf{a}_{j},\mathbf{\theta}_{i}))^{2}\)
            # Update actor \(\hat{\pi}_{i}(\cdot,\mathbf{\phi}_{i})\) following Eq. (20)
            \(\nabla_{\mathbf{\phi}_{i}}\leftarrow\frac{1}{J}\sum_{j=1}^{J}\nabla_{\mathbf{a}_{i}^{*}=\hat{\pi}_{i}(\mathbf{s}_{i,j},\mathbf{\phi}_{i})}\hat{\Psi}_{i}(\mathbf{s}_{j},\mathbf{a}_{j}^{*},\mathbf{\theta}_{i})\nabla_{\mathbf{\phi}_{i}}\hat{\pi}_{i}(\mathbf{s}_{i,j},\mathbf{\phi}_{i})\)
            # Update target networks
            \(\mathbf{\theta}_{i}^{-}\leftarrow\tau\mathbf{\theta}_{i}+(1-\tau)\mathbf{\theta}_{i}^{-}\)
            \(\mathbf{\phi}_{i}^{-}\leftarrow\tau\mathbf{\phi}_{i}+(1-\tau)\mathbf{\phi}_{i}^{-}\)
return \(\hat{\Psi}_{i},\hat{\pi}_{i},i=1,...,N\)
```
**Algorithm 3** Learning Process of MACDPP

### _Factorization of Multi-agents in Different Tasks_

In this subsection, we detailed the training procedure of MACDPP in both multi-agent cooperative/competitive environments and single systems that are collaboratively controlled by multiple agents. The learning process of the proposed method in a multi-agent cooperative/competitive scenario was summarized in Algorithm 3. Given the number of episodes \(E\) and the length of one rollout \(T\), the parameters of both critic and actor networks were randomly initialized at the beginning as \(\mathbf{\theta}_{i},\mathbf{\phi}_{i},i=1,...,N\), and these weights were copied to the target networks as \(\mathbf{\theta}_{i}^{-},\mathbf{\phi}_{i}^{-}\). At each step, the global state \(\mathbf{s}\) was first observed, and the control action of each agent \(\mathbf{a}_{i}\) was determined by BS_Sampling\((\cdot)\) following Algorithm 2. After the global action \(\mathbf{a}=[\mathbf{a}_{1},...,\mathbf{a}_{N}]\) was conducted by all agents, the next-step global state and the vector of \(N\) reward signals \(\mathbf{r}=[R_{1}(\mathbf{s}_{1},\mathbf{a}_{1},\mathbf{s}^{\prime}_{1}),...,R_{N}(\mathbf{s}_{N},\mathbf{a}_{N},\mathbf{s}^{\prime}_{N})]\) were observed and stored in the separate replay buffers. During the centralized training phase, the update of each agent was conducted separately with its own mini-batch of \(J\) samples, while the samples from other agents' buffers were used to restore the global information. The TD error was calculated following Eq. (13) and Algorithm 1 to update the actor and critic networks, and the target networks were then smoothly updated with a smoothing parameter \(\tau\) according to \(\mathbf{\theta}_{i},\mathbf{\phi}_{i},i=1,...,N\). When implementing the proposed MACDPP to jointly control one complex system with multiple agents, following our previous work [32, 33], only one global replay buffer \(\mathcal{D}\) was built. At each step, the globally observed state \(\mathbf{s}\) was sent in parallel to all agents, the control actions of all actors were integrated as \(\mathbf{a}\) and conducted on the target system, and the resulting next-step state \(\mathbf{s}^{\prime}\) and the corresponding reward were received and stored in \(\mathcal{D}\). Please note that in this case all agents shared one reward function \(\mathbf{r}=[R(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})]\) designed for the whole system.
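To make the interaction phase concrete, the following sketch implements the Boltzmann softmax exploration step of Algorithm 2 for a single agent with NumPy; the critic and actors are plain callables standing in for the trained networks, and every identifier (`bs_sampling`, `critic_i`, `actors`) is an illustrative assumption rather than the released MACDPP code.

```python
import numpy as np

def bs_sampling(critic_i, actors, states, agent_idx, eta, M, zeta_sampling, rng):
    """Algorithm 2: pick an exploratory action for agent `agent_idx`.

    critic_i : callable (global_state, global_action) -> scalar action preference
    actors   : list of callables, actors[j](states[j]) -> local action of agent j
    states   : list of local observations, one per agent
    """
    # Greedy joint action a* proposed by all actors.
    local_actions = [actors[j](states[j]) for j in range(len(actors))]
    a_star = np.concatenate(local_actions)
    dims = [act.shape[0] for act in local_actions]
    start = int(np.sum(dims[:agent_idx]))
    end = start + dims[agent_idx]

    # M perturbed candidates: Gaussian noise only on this agent's action slice.
    candidates = [a_star]
    for _ in range(M):
        e = np.zeros_like(a_star)
        e[start:end] = rng.normal(0.0, zeta_sampling, size=end - start)
        candidates.append(a_star + e)

    # Sample one candidate with Boltzmann softmax probabilities over the critic values.
    global_state = np.concatenate(states)
    values = np.array([critic_i(global_state, c) for c in candidates])
    logits = eta * values
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    chosen = candidates[rng.choice(len(candidates), p=probs)]
    return chosen[start:end]            # the agent executes only its own action slice

# Toy usage with two 1-D agents and a quadratic critic stand-in.
rng = np.random.default_rng(0)
actors = [lambda s: np.array([0.1]), lambda s: np.array([-0.2])]
critic_i = lambda s, a: -np.sum(a ** 2)
states = [np.array([0.0]), np.array([0.5])]
print(bs_sampling(critic_i, actors, states, agent_idx=0, eta=5.0, M=8,
                  zeta_sampling=0.3, rng=rng))
```

In Algorithm 3, a routine of this kind would be called once per agent at every environment step during the interaction phase, in both the cooperative/competitive and joint control settings.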
Unlike the multi-agent cooperative/competitive case, MACDPP did not separately conduct \(J\) mini-batch samplings for each agent when jointly controlling one system, but rather shared the \(J\) samples during the update. In addition, the actor network of the \(i\)-th agent \(\hat{\pi}_{i}(\mathbf{s},\mathbf{\phi}_{i})\) directly received the globally observed state in the decision-making process. The difference between the learning processes of MACDPP in the cooperative/competitive and joint control scenarios was illustrated in Fig. 1.

Fig. 1: The differences between the learning process of MACDPP in cooperative/competitive (top) and joint control (bottom) scenarios.

## IV Experimental Results

### _Experimental Settings_

In this section, we evaluated MACDPP on multi-agent and traditional control tasks in terms of learning capability and sample efficiency. For the multi-agent scenario, we selected the Physical Deception, Covert Communication, Keep Away and Cooperative Communication tasks from the Multi-Agent Particle Environment (MPE) [24]. The first three are mixed cooperative-competitive tasks and the last one is a pure cooperative task. MADDPG [24], MATD3 [25] and M3DDPG [26] were selected as the compared MARL baselines. For the traditional control scenario, we selected Ant and HalfCheetah from the Mujoco simulation [35] and the UR5 robot arm simulation task ur_ee_position developed in robo-gym [36]. For the MARL approaches, HalfCheetah was evaluated in two scenarios: two agents separately controlled the front and back halves of the body, and six agents controlled the six joints. The Hopper and UR5 were jointly controlled by three and five agents, respectively, one for each controllable joint. We compared the proposed method not only with the MARL approach MADDPG [24] but also with the widely implemented single-agent RL approaches, including Deep Deterministic Policy Gradient (DDPG) [37], Twin Delayed Deep Deterministic Policy Gradient (TD3) [38] and Soft Actor-Critic (SAC) [39]. All benchmark control tasks were illustrated in Fig. 2. The hyperparameters of all compared methods for each task were summarized in Table II. All actors and critics shared the same network structures. The tunable hyperparameters of the proposed MACDPP, including \(\eta\), the Monte Carlo sampling numbers \(M_{1},M_{2}\) in Algorithms 1 and 2, and the sampling noises \(\zeta^{\text{MC}},\zeta^{\text{Sampling}}\), were listed in Table III. The proposed MACDPP was developed with PaddlePaddle [40] under its RL toolkit PARL4. All experiments were conducted on a workstation with an Intel Xeon W2265 CPU, an NVIDIA GeForce RTX 3080 GPU, 64GB memory and Ubuntu 20.04 OS. The experimental results were summarized over five independent trials with different random seeds for statistical evidence. Footnote 4: [https://github.com/PaddlePaddle/PARL](https://github.com/PaddlePaddle/PARL)

### _Cooperation and Competition in MPE Benchmarks_

#### IV-B1 Evaluation of the Learning Capability

We first compared the proposed method with the related MARL approaches in four benchmark tasks from the MPE environment. The learning curves of all compared approaches were shown in Fig. 3, while the maximum average returns of each method in the evaluation phase during learning were listed in Table IV (the numbers in red indicate the best result in the corresponding term). The trials of all four tasks were conducted for \(25\)k episodes, each with \(25\) steps.
Fig. 2: Eight benchmark tasks for evaluation: (a) Physical Deception from MPE (mixed cooperative-competitive); (b) Covert Communication from MPE (mixed cooperative-competitive); (c) Keep Away from MPE (mixed cooperative-competitive); (d) Cooperative Communication from MPE (pure cooperative); (e) HalfCheetah from Mujoco (joint control by two agents); (f) HalfCheetah from Mujoco (joint control by six agents); (g) Hopper from Mujoco (joint control by three agents); (h) UR5 End Effector Positioning from robo-gym (joint control by five agents).

In the Physical Deception task, MADDPG, MATD3 and M3DDPG converged to close performances near \(0\) average return after \(25\)k episodes. As a comparison, our method quickly surpassed the other baselines in the first \(2000\) episodes and converged to an average return of \(40\) after \(25\)k episodes. In the evaluation phase, MACDPP outperformed MATD3 and M3DDPG with over \(7000\%\) and \(1200\%\) maximum average return, while MADDPG achieved a negative average return. In the Covert Communication task, M3DDPG outperformed MADDPG and MATD3 in both average return and convergence speed thanks to its minimax operator. On the other hand, MACDPP converged to over \(100\%\) more average return during learning and achieved \(135\%\) maximum average return in evaluation compared with the suboptimal method M3DDPG. In the Keep Away task, MADDPG converged to a relatively low average return. MATD3 converged to near \(-10\) average return within \(10\)k episodes but could not maintain its performance. Although M3DDPG converged quickly in the first \(10\)k episodes, MACDPP outperformed it with the best maximum average return and the lowest standard deviation in the learning curve, which indicated a more stable training procedure. In the Cooperative Communication task, which requires pure cooperation over all agents, it is observed that M3DDPG, which is good at competition, failed to learn good cooperation policies. Although MACDPP, MADDPG and MATD3 all converged to a close performance, the proposed method enjoyed the fastest convergence speed. Overall, the proposed method demonstrated significantly superior learning capability to the related MARL baselines in various MPE benchmark tasks.

#### IV-B2 Evaluation of the Sample Efficiency

Fig. 3: Learning curves of MACDPP and other MARL baselines in MPE benchmark tasks. The shaded region represents the corresponding standard deviation over five trials.

Fig. 4: Number of interactions utilized by all compared approaches in MPE benchmark tasks to reach the lower boundary of maximum average return.

The sample efficiency, which is important to the implementation of RL in real-world systems, was evaluated in Fig. 4. We define the measure of sample efficiency in this subsection as the number of interactions used by each approach to reach the lower boundary of the maximum average returns in Table IV over the four benchmark tasks from the MPE environment. It is clearly observed that the proposed method achieved the overall superior sample efficiency among all compared MARL approaches: it reduced sample usage by \(70\%\), \(54.6\%\) and \(48.3\%\) compared with M3DDPG, MADDPG and MATD3 to reach a certain level of control performance. At the same time, we found that MACDPP improves sample efficiency consistently, whether in cooperative or competitive tasks. This result demonstrated the great potential of the proposed method in quickly learning proper multi-agent control policies in complex scenarios at a low sampling cost.
#### V-B3 Evaluation of the Computational Efficiency In this subsection, we investigated the impact of additional Monte Carlo sampling and Boltzmann softmax operator in MACDPP on computational efficiency. We measured the average calculation time of \(1000\) episodes over all compared MARL baselines in Table V. It is observed that MACDPP brought additional computational burdens in all four tasks. It required \(285.54\%\), \(270.19\%\) and \(27.36\%\) more calculation time compared with MADDPG, MATD3 and M3DDPG. On the other hand, considering the obvious advantages of our approach in learning ability, convergence velocity, and sample efficiency, we believed that these increased computational complexities in training and decision-making were acceptable. #### V-B4 Impact of the Specific Parameter In this subsection, we explored the impact of the special parameter \(\eta\) that controls the strength of the relative entropy term in MACDPP. The learning curves of the Keep Away task with different values of \(\eta\) using MACDPP are shown in Fig. 5. With a wide range of \(\eta\) from \(0.001\) to \(0.5\), MACDPP consistently outperformed the baseline method MADDPG in both the mean and standard deviation of the average returns. It is also observed that a proper selection of \(\eta\) could significantly improve MACDPP's learning performance. The most superior learning curve was obtained when \(\eta=0.1\). An over-small parameter \(\eta=0.001\) resulted in extremely slow convergence and a very large standard deviation of return at the beginning. As a comparison, the large one has less effect on smooth policy updates and may fail in learning more optimal control policies within a limited number of interactions. ### _Cooperation in Traditional Control Task_ #### V-C1 Evaluation of the Learning Capability In this section, we moved to the traditional control scenarios where the MARL approaches were employed to jointly control one system. In this section, we compared two MARL approaches MACDPP and MADDPG which were treated as the proposed method without using relative entropy regularization. Three widely implemented single-agent RL approaches DDPG, TD3 and SAC were also compared. The learning curves of all compared approaches in 2-agent HalfCheetah (the system was jointly controlled by MARL with two agents), 6-agent HalfCheetah, 3-agent Hopper and 5-agent UR5 robot arm were illustrated in Fig. 6. The average maximum returns in the evaluation phase are listed in Table. VI. Please note that the results of single-agent methods in two HalfCheetah were slightly different since we used difficult random seeds for each task. In the 2-agent HalfCheetah task, MADPPG learned the worst policy with the lowest average return. In this case, the joint control strategy failed to suppress the single-agent approach while introducing an additional computational burden. As a comparison, our method converged to the best average return overall compared approaches with a significant advantage in convergence velocity. Regarding the maximum average Fig. 5: Learning curves of MACDPP with different values of parameter \(\eta\) compared with MADDPG in the Keep Away task. The shaded region represents the standard deviation of the average evaluation over five trials. return in evaluation, MACDPP outperformed \(12.61\%\), \(8.57\%\) and \(6.0\%\) compared with MADDPG, TD3 and DDPG while achieving slightly better results than SAC with significantly superior converge velocity. 
In the 6-agent HalfCheetah task, the learning capability of MADPPG hugely deteriorated so that the six joints could not effectively cooperate. As a comparison, the proposed method successfully learned the task as well as other single-agent baselines with not only a superior converge velocity but also less standard deviation in the average returns. It enjoyed over \(50\%\) more maximum average returns than MADDPG which did not employ the relative entropy regularization. In the 3-agent Hopper task, MADDPG outperformed DDPG in average return and converge velocity while the proposed MACDPP achieved overall superior performances than all compared baselines. It quickly converged to the best maximum average return which was \(56.07\%\), \(25.95\%\), \(3.84\%\) and \(2.03\%\) than DDPG, MADDPG, SAC and TD3, respectively. In the more practical UR5 control scenarios where five independent joints were jointly controlled by five agents in MARL methods, both MACDPP and MADDPG outperformed single-agent RL approaches. Within \(200\)k steps, SAC and DDPG could not learn the task (SAC could not converge at all, and DDPG learned extremely slowly at the first \(70\)k steps) while only TD3 achieved a close average return to MADDPG. As a comparison, MACDPP consistently demonstrated superiority in both learning capability and converge velocity, it quickly reached an average return over \(0\) within \(70\)k steps and finally obtained \(210\%\), \(12.05\%\), \(264.6\%\) and \(16.3\%\) higher maximum average return than DDPG, MADDPG, SAC and TD3. #### Iv-A2 Evaluation of the Sample Efficiency The sample efficiency of MACDPP was evaluated in Fig. 7. Compared with MADDPG which converged slower to a certain level of control performances than the signal-agent baselines, the proposed method demonstrated great advantage in sample efficiency with the regularization of relative entropy. It only spent \(36.6\%\) samples to reach the same performance. This result indicated the importance of properly restricting large policy updates in MARL for superior effectiveness. MCDPP successfully reduced \(44.91\%\), \(37.94\%\) and \(27.98\%\) usage of interactions than DDPG, SAC and TD3. Fig. 6: Learning curves of MACDPP and all baselines in Mujoco and robo-gym benchmark tasks. The shaded region represents the corresponding standard deviation over five trials. Fig. 7: Number of interactions utilized by all compared approaches in Mujoco and robo-gym benchmark tasks to reach the lower boundary of maximum average return. #### Vi-A3 Evaluation of the Computational Efficiency The computational burden was evaluated in Table VII by summarizing the average calculation time of the first \(1000\) steps in four control scenarios. Although the proposed method required \(132.85\%\), \(213.5\%\)\(\%\), \(196.92\%\) and \(220.43\%\) more computational times compared with MADDPG, DDPG, TD3 and SAC. It was observed that the computational burden of MACDPP was alleviated in traditional control scenarios where the system operation took more time. The proposed method additionally consumed \(32.85\%\) time than MADDPG. Compared with the faster single-agent methods, our method increased the computation time from \(96\%\) to \(120\%\) while employing about four times more agents. Furthermore, with the increasing system operation and communication times (i.e., from Mujoco to robo-gym based on ROS toolkit), the phenomenon above became more and more noticeable. 
#### V-C3 Evaluation of the Computational Efficiency

The computational burden is evaluated in Table VII by summarizing the average calculation time of the first \(1000\) steps in the four control scenarios. The proposed method required \(132.85\%\), \(213.5\%\), \(196.92\%\) and \(220.43\%\) of the computation time of MADDPG, DDPG, TD3 and SAC, respectively. It is observed that the computational burden of MACDPP is alleviated in traditional control scenarios, where the system operation itself takes more time: the proposed method consumed only \(32.85\%\) more time than MADDPG, and compared with the faster single-agent methods it increased the computation time by \(96\%\) to \(120\%\) while employing about four times more agents. Furthermore, with increasing system operation and communication times (i.e., moving from Mujoco to robo-gym, which is based on the ROS toolkit), this effect became more and more noticeable, demonstrating the potential and effectiveness of MACDPP in jointly controlling large-scale systems.

#### V-C4 Case Study

In this subsection, we investigated the superior control behaviors of MACDPP through rollouts of the learned policies in both the 6-agent HalfCheetah and the 5-agent UR5 control scenarios. In the first case study, we explored the learned policies of MADDPG and MACDPP under the same parameter settings and random seed. The test rollouts of \(20\) steps and the corresponding trajectories of each joint are analyzed in Fig. 8. It is clearly observed that the learned control behavior of MACDPP is more effective than that of MADDPG. By effectively coordinating the six joints through six agents, the proposed method learned a superior control strategy: each joint applied a proper torque in time according to the current system state, resulting in faster movement. In comparison, MADDPG had significant disadvantages in coordinating the six joints by separate agents. Although the agent in charge of dimension 2 successfully learned a policy similar to that of MACDPP, the whole multi-agent system struggled to generate proper torques from the other agents: the agent of dimension 5 only produced effective torque near the \(15\)-th step, while the agent of dimension 1 was fixed at \(-1\) torque during the whole rollout. Due to the lack of relative entropy regularization from the algorithmic perspective, the multiple agents in MADDPG were unable to learn effective and cooperative control strategies.

Fig. 8: Trajectories of control actions using MADDPG and MACDPP in one test rollout of the 6-agent HalfCheetah task. The actions of MADDPG and MACDPP were drawn in purple and blue respectively.

In the next case study, we studied the test rollouts using MADDPG and MACDPP in the ur_ee_position task, which aims to control the end-effector of the UR5 robot arm to reach randomly generated targets. It is observed that MACDPP quickly drove the robot to finish the task within \(60\) steps. The action trajectories showed proper cooperation among the joints. The base and elbow joints continuously output \(-180^{\circ}\) and \(60^{\circ}\) throughout the task. At step \(25\), the shoulder joint and the wrist 1 joint were coordinated to guide the end-effector forward to the target position. Around step \(60\), the shoulder and wrist 2 joints worked together to quickly reach the target. The action trajectories of all dimensions were effective with minimal jitter. Compared with our method, MADDPG could not sufficiently learn the cooperative strategy over the five joints. The base joint could not maintain a constant output and trembled strongly between steps \(40\) and \(80\). The shoulder and wrist 1 joints failed to cooperate effectively at the beginning, resulting in redundant movements of the end-effector. After step \(40\), the shoulder and wrist 2 joints could not achieve seamless coordination; both experienced sustained tremors, which ultimately resulted in highly degraded control performance.

Fig. 9: Trajectories of control actions using MADDPG and MACDPP in one test rollout of the 5-agent UR5 control task ur_ee_position. The actions of MADDPG and MACDPP were drawn in purple and blue respectively. The last dimension (the Wrist 3 joint) was not included in the controllable actions as it is fixed to 0 in the ur_ee_position task.

The experimental results above reveal the advantages of MACDPP in the joint control of robot systems. The multiple agents reduced the exploration complexity of one system and resulted in faster policy convergence compared to single-agent approaches with the same amount of interactions. At the same time, the relative entropy regularization significantly avoided the mismatch between the updates of the multi-agent policies during learning, promoting the effectiveness of learning cooperative control strategies.
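The joint-trajectory plots in Figs. 8 and 9 come from single test rollouts of the learned policies. A generic logging sketch is shown below; the Gym-style environment API and the per-agent observation split are placeholders, not the exact interfaces used in the experiments.

```python
import numpy as np

def record_joint_trajectories(env, policies, max_steps=100):
    """Roll out one test episode with learned deterministic per-agent policies
    and log every joint's action at every step for later plotting."""
    observations = env.reset()
    actions_log = []
    for _ in range(max_steps):
        # One action dimension per agent; each agent acts on its own observation.
        action = np.concatenate([pi(obs) for pi, obs in zip(policies, observations)])
        actions_log.append(action)
        observations, _, done, _ = env.step(action)
        if done:
            break
    return np.stack(actions_log)   # shape: (num_steps, num_joints)
```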
## VI Conclusions

This article proposed a novel MARL approach, MACDPP, to improve learning capability and sample efficiency in a wide range of control scenarios, including multi-agent cooperative/competitive tasks and the joint control of a single complicated system. It naturally alleviates the inherent inconsistency among the policy updates of multiple agents by integrating relative entropy regularization into the AC structure and the CTDE framework. MACDPP extends FKDPP, which has been successfully implemented in a real-world chemical plant by Yokogawa [32, 33], towards a modern approach that supports deep neural networks, the AC structure and the CTDE framework in order to fit a wider range of control scenarios. Through evaluations on different benchmark tasks, ranging from multi-agent cooperation/competition to the Mujoco simulator and robot arm manipulation, our proposed method consistently demonstrated significant superiority in both learning capability and sample efficiency compared with related multi-agent and single-agent RL baselines. All these results indicate the potential of relative-entropy-regularized MARL in effectively learning complex systems divided into multiple agents, with lower sampling costs and better control performance.
2302.14455
Optimization of Permalloy properties for magnetic field sensors using He$^+$ irradiation
Permalloy, despite being a widely utilized soft magnetic material, still calls for optimization in terms of magnetic softness and magnetostriction for its use in magnetoresistive sensor applications. Conventional annealing methods are often insufficient to locally achieve the desired properties for a narrow parameter range. In this study, we report a significant improvement of the magnetic softness and magnetostriction in a 30 nm Permalloy film after He$^+$ irradiation. Compared to the as-deposited state, the irradiation treatment reduces the induced anisotropy by a factor ten and the hard axis coercivity by a factor five. In addition, the effective magnetostriction of the film is significantly reduced by a factor ten - below $1\times10^{-7}$ - after irradiation. All the above mentioned effects can be attributed to the isotropic crystallite growth of the Ni-Fe alloy and to the intermixing at the magnetic layer interfaces under light ion irradiation. We support our findings with X-ray diffraction analysis of the textured Ni$_{81}$Fe$_{19}$ alloy. Importantly, the sizable magnetoresistance is preserved after the irradiation. Our results show that compared to traditional annealing methods, the use of He$^+$ irradiation leads to significant improvements in the magnetic softness and reduces strain cross sensitivity in Permalloy films required for 3D positioning and compass applications. These improvements, in combination with the local nature of the irradiation process make our finding valuable for the optimization of monolithic integrated sensors, where classic annealing methods cannot be applied due to complex interplay within the components in the device.
Giovanni Masciocchi, Johannes Wilhelmus van der Jagt, Maria-Andromachi Syskaki, Jürgen Langer, Gerhard Jakob, Jeffrey McCord, Benjamin Borie, Andreas Kehlberger, Dafine Ravelosona, Mathias Kläui
2023-02-28T10:00:44Z
http://arxiv.org/abs/2302.14455v2
# Optimization of Permalloy properties for magnetic field sensors using He\({}^{+}\) irradiation ###### Abstract Permalloy, despite being a widely utilized soft magnetic material, still calls for optimization in terms of magnetic softness and magnetostriction for its use in magnetoresistive sensor applications. Conventional annealing methods are often insufficient to locally achieve the desired properties for a narrow parameter range. In this study, we report a significant improvement of the magnetic softness and magnetostriction in a 30 nm Permalloy film after He\({}^{+}\) irradiation. Compared to the as-deposited state, the irradiation treatment reduces the induced anisotropy by a factor ten and the hard axis coercivity by a factor five. In addition, the effective magnetostriction of the film is significantly reduced by a factor ten - below \(1\times 10^{-7}\) - after irradiation. All the above mentioned effects can be attributed to the isotropic crystallite growth of the Ni-Fe alloy and to the intermixing at the magnetic layer interfaces under light ion irradiation. We support our findings with X-ray diffraction analysis of the textured Ni\({}_{81}\)Fe\({}_{19}\) alloy. Importantly, the sizable magnetoresistance is preserved after the irradiation. Our results show that compared to traditional annealing methods, the use of He\({}^{+}\) irradiation leads to significant improvements in the magnetic softness and reduces strain cross sensitivity in Permalloy films required for 3D positioning and compass applications. These improvements, in combination with the local nature of the irradiation process make our finding valuable for the optimization of monolithic integrated sensors, where classic annealing methods cannot be applied due to complex interplay within the components in the device. ## I Introduction Permalloy, a typical soft magnetic Ni-Fe alloy is employed as an active sense layer in several magnetoresistive (MR) sensor applications [1]. To have a small magnetostriction and low coercivity, most of these devices are designed around the alloy composition of Ni\({}_{81}\)Fe\({}_{19}\) which also possesses significant anisotropic magnetoresistance (AMR). Optimization of Permalloy for AMR sensors has been studied for a long time [2; 3; 4] and includes different aspects: primarily, improvement of magnetic softness and low magnetostriction. To reach that, negligible crystalline anisotropy is firstly required. The single thin film elements typically feature a stripe shaped geometry to induce a strong shape anisotropy, providing the sensor with a well-defined orientation of sensitivity. Furthermore, this design of the sensitive elements ensures a fixed configuration of the magnetic domains, thus enabling a very high signal-to-noise ratio. Additional anisotropies of other sources, if not oriented in the same direction as the shape anisotropy, would hinder this sensitivity direction [5]. Moreover, to achieve low hysteresis, the coercivity in the hard axis magnetization direction must be very low and the specific AMR must be as high as possible [6] to maximize sensitivity. Eventually, to avoid parasitic anisotropies, low magnetostriction (source of magnetoelastic anisotropy) is required. In this case, strain in the material has small or negligible impact on the magnetic properties. The low magnetoelastic anisotropy is particularly important for sensors on flexible substrates [1; 7; 8; 9; 10] that have attracted great attention in recent years in wearable electronics and biomedical applications. 
To obtain this particular material property, growth optimization [11] and annealing [12] are viable options. However, none of these technique allow for a local treatment of the film. It is well known that ion irradiation is an excellent tool to tune locally the magnetic and structural properties of thin films through ordering [13; 14; 15; 16] and interface intermixing [17; 18; 19; 20]. In Permalloy films, ion irradiation has been shown to change the magnetic anisotropy [21; 22; 23] and the magneto-resistive response in the presence of exchange bias [4; 24]. However, most of these works use ion implantation [25; 26; 27] or heavy ions [28], which can result in significant damage to the sample. This can be avoided by using lighter ions - like He\({}^{+}\) - with energies in the range of 10-30 keV [17; 29]. In this way, collision cascades are absent and the structural modifications are confined to the vicinity of the ion path in a metal. Furthermore, the effect of irradiation on the magneto-elastic properties of single Permalloy films and a direct comparison between field free ion irradiation and annealing has not yet been reported [30]. In this work, we propose and explore the use of He\({}^{+}\) ion irradiation on sputtered layer of Ni\({}_{81}\)Fe\({}_{19}\)(30 nm) as material preparation for magnetic field sensors and we compare it with standard field free annealing. Using Kerr microscopy and Vibrating Sample Magnetometry (VSM) we show that 20 keV He\({}^{+}\) ions significantly reduce the coercivity and the induced magnetic anisotropy of our magnetic material. The result is a soft magnetic film with in-plane magnetic anisotropy \(<10\) J/m\({}^{3}\) and coercive field \(\simeq\) 0.05 mT, which is a further improvement over the values that can be obtained by field-free annealing process by a factor 5 and 10, respectively. The anisotropy measurements are supported by a detailed comparison using the remanent domain pattern. Additionally, we show that the polycrystalline magnetostriction can be progressively reduced by a factor ten for irradiation doses of \(5\times 10^{16}\) cm\({}^{-2}\). This reduction in magnetoelastic coupling is attributed to crystallization and changes to the interface magnetostriction caused by intermixing at the magnetic layer boundaries. We support our findings with structural characterization performed using X-ray diffraction (XRD). The results show an overall improvement in the crystallization after irradiation and annealing. We attribute the reduction in magnetic anisotropy to the absence of a preferential direction of atomic ordering and to stress relaxation during irradiation. As post growth He\({}^{+}\) ion irradiation improves magnetic softness and minimizes strain cross sensitivity of Permalloy, AMR magnetic sensors with high sensitivity and low hysteresis can be envisioned even for integrated devices. ## II Experimental methods The samples have been prepared by DC magnetron sputtering using a Singulus Rotaris system on a 1.5 \(\mu\)m thick, thermally oxidized SiOx on top of a 625 \(\mu\)m thick _Si_ substrate. A layer of Ni\({}_{81}\)Fe\({}_{19}\) (30 nm) is sputtered at room temperature in the presence of a rotating magnetic field of 5 mT on a NiFeCr (5 nm) seed layer and capped with 4 nm of Ta as shown in Fig. 1 (b). The following sputtering conditions were used for the magnetic layer growth: base pressure \(5\times 10^{-8}\) mbar, sputtering power 1200 W and Ar\({}^{+}\) flow 90 sccm. 
The seed layer is used to promote a NiFe (111) texture during growth and it is known to improve magnetoresistance [25]. After deposition, optical lithography and ion etching have been used to pattern arrays of disks (80 \(\mu\)m of diameter and 3 \(\mu\)m of spacing) on the samples in order to probe the local film properties. Multiple copies of the samples have been irradiated at an energy of 20 keV with different fluences of He\({}^{+}\) ions from \(5\times 10^{13}\) to \(5\times 10^{16}\) cm\({}^{-2}\). At these irradiation conditions, the majority of the ions reach the substrate (roughly 94% from Monte Carlo TRIM [31] simulations, not shown), resulting in homogeneous irradiation of the entire layer stack. To compare the effect of ion irradiation to thermal annealing, the same magnetic material has been consecutively annealed for three hours at 200, 265 and 300\({}^{\circ}\)C at a pressure of \(10^{-7}\) mbar. In order to avoid a magnetization induced preferential direction of ordering [26; 32], external magnetic fields have been minimized during the irradiation and annealing steps. The thin film magnetic properties have been measured with Kerr microscopy and VSM. The magnetic properties of our films are summarized in Table 1. Due to the negligible implantation [29], the value of the Young's modulus is assumed to be unaffected by our irradiation and annealing step. Electrical measurement of anisotropic magnetoresistance (AMR) have been performed with four contacts in line in the presence of a rotating magnetic field of 10 mT. To apply strain to our devices, the substrate was bent mechanically with a three-point bending method. As reported in our previous work [34] a tensile and uniaxial strain is generated [35]. Moreover the strain is uniform in the central area of the sample and thus in the measured region. As the thin films are in total 40 nm thick, we assume that the strain is entirely transferred from the substrate and that shear strain is negligible. Structural modifications caused by ion irradiation and annealing were probed by X-Ray Diffraction (XRD) using a Bruker D8 Discover system. Angular \(2\Theta/\Theta\) scans and rocking curve measurements were performed on 1 by 1 cm samples. ## III Results and discussion To compare the structural modifications induced by different material treatment on a Ni-Fe alloy, XRD measurements on the Ni\({}_{81}\)Fe\({}_{19}\) (30 nm) film as-deposited and after irradiation and annealing are performed and reported in Fig. 1. Fig. 1 (a) shows \(2\Theta/\Theta\) angular scan of the Permalloy film. A well defined crystalline texture of NiFe (111) (and its second order peak) is present for the material in the as-deposited state and persists after irradiation and annealing in all the fluence and temperature range explored. The full width at half maximum (FWHM) of the (111) peak is reported in Fig. 1 (c) as a function of the irradiation fluence (blue diamonds) and the temperature during annealing (orange pentagrams). In both cases, the FWHM of the (111) peak decreases by about 15% with increasing ion fluence and annealing temperature with respect to the as-deposited case. The crystallite size (or the size of a coherently diffracting domain in the material) is a fundamental property that can be extracted from XRD profile [36]. 
According to the Scherrer equation [37], \[D=\frac{K\lambda}{\beta cos\theta} \tag{1}\] \begin{table} \begin{tabular}{||c c c c c c|} \hline \(Ni_{81}Fe_{19}\) & \(M_{s}\) (T) & \(K_{u}\) (J/m\({}^{3}\)) & \(H_{c}\) (mT) & \(\lambda_{s}\) x\(10^{-6}\) & \(Y\) (GPa) \\ \hline \hline as-deposited & 0.95(1) & 78(5) & 0.20(5) & -0.7(1) & 200\({}^{33}\) \\ \hline Ann. 265\({}^{\circ}\)C & 0.95(1) & 70(5) & 0.15(5) & +0.04(9) & 200\({}^{33}\) \\ \hline He\({}^{+}\) \(5\times 10^{16}\) cm\({}^{-2}\) & 0.91(1) & 8(7) & 0.05(5) & +0.01(9) & 200\({}^{33}\) \\ \hline \end{tabular} \end{table} Table 1: Parameters of the magnetic materials (thickness 30 nm) after deposition, annealing and He\({}^{+}\) ion irradiation. The values without reference are quantified experimentally. Here, \(M_{s}\) is the saturation magnetization, \(K_{u}\) is the uniaxial anisotropy constant, \(H_{c}\) is the coercive field, \(\lambda_{s}\) is the saturation magnetostriction and \(Y\) is the Young’s modulus. The same value for \(Y\) is considered in all cases. the size of crystallites is inversely proportional to the FWHM of a diffraction peak. Here \(K=0.9\) is a dimensionless shape factor, D the crystallite size, \(\lambda\) the wavelength of the Cu-K\(\alpha\) radiation, \(\theta\) the diffraction angle and \(\beta\) is the line broadening at FWHM of the XRD peak in radians, after subtracting the instrumental line broadening. As our measurements show, both annealing at 265\({}^{\circ}\)C and ion irradiation with a fluence of \(5\times 10^{16}\) cm\({}^{-2}\) increase the size of crystallites in our films. The estimated size of the diffracting domains using eq. 1 is 22(1) nm for the as-deposited case and 24(1) nm after the two material treatments. Additionally, rocking curve measurements of the NiFe (111) peak were preformed and more information can be found in section S1 of the supplementary material. Both for the irradiated samples and for the annealed ones a decrease in the FWHM of the rocking curve is observed indicating improvement in the film crystalline phase [38]. The major effect of room temperature irradiation has been shown to be improved material uniformity [39] and interface intermixing [17]. In the same way, thermal annealing is widely used to induce crystallization [40] and promote atomic diffusion [41]. Similar effects have been observed in literature for amorphous alloys, where annealing [42] and He\({}^{+}\) irradiation [15; 16] providing high short range atomic mobility allow a mechanism for growth of the ordered phase at the expense of its disordered or less ordered counterpart. The thin film magnetic properties have been measured with Kerr microscopy and are reported in Fig. 2. Figs. 2 (a)-(c) report the hysteresis curves for the NiFeCr(5 nm)/Ni\({}_{81}\)Fe\({}_{19}\)(30 nm)/Ta(4 nm) sample for two perpendicular in-plane directions of the applied magnetic field: (a) for the as-deposited state, (b) after annealing and (c) after irradiation. The curves refer to the magnetic contrast of the structured film into 80 \(\mu\)m disks. The magnetic response of the Permalloy film in the as-deposited state can be seen in Fig. 2 (a). As the magnetization curves at \(\Phi=0^{\circ}\) and \(\Phi=90^{\circ}\) are different, a weak uniaxial magnetic anisotropy \(K_{u}\), is present in the as-deposited Ni\({}_{81}\)Fe\({}_{19}\) and might be associated to internal stresses during the material growth or asymmetries in the deposition system [43]. 
The value of \(K_{u}=80(7)\)\(\frac{J}{m^{2}}\) has been obtained subtracting the area between the easy and hard axis loop of the as-deposited state. The direction of the magnetic easy axis anisotropy can be seen in the orientation of the magnetic domains at the remanent state (inset of Fig. 2 (a)). The field was applied along \(\Phi=0^{\circ}\) and then reduced to zero. A vector image of the in-plane magnetization is obtained by the sum of the horizontal and vertical components of the magnetic contrast. In this case, the domains align along the easy axis direction. The measurement has been repeated for the same film after annealing and is reported in Fig. 2 (b). After the annealing, the in-plane hysteresis loops still show the presence of uniaxial magnetic anisotropy. This is confirmed by the remanent magnetic state (inset of Fig. 2 (b)) as the magnetic domains again orient in the easy axis direction \(\Phi\simeq 90^{\circ}\). Interestingly, the magnetic response of the irradiated Permalloy reported in Fig. 2 (c) is significantly different with respect to the as-deposited and annealed case. The hysteresis loops now show a negligible angular dependence on \(\Phi\). Both the magnetic anisotropy and the hard axis coercivity \(H_{c}\) are significantly reduced. A confirmation of the extremely low magnetic anisotropy of the irradiated Permalloy can be seen in the inset of Fig. 2 (c). The remanent magnetic configuration is a vortex state, which is formed as the low induced anisotropy is negligible compared to the shape anisotropy of the patterned disks. Fig. 2 (d) reports the angular plot of the normalized remanent magnetization for the three samples considered. The as-deposited and the annealed case (in blue and green, respectively), show a signature of uniaxial magnetic anisotropy with easy axis and sizable remanent magnetization at \(\Phi\simeq 90^{\circ}\). The irradiated sample instead, shows reduced remanent magnetization for all the angles. The low remanent magnetization is typical for the vortex state in inset of Fig. 2 (c). To further understand the improvement to the magnetic softness of our Permalloy after irradiation, we have gradually increased the He\({}^{+}\) fluence (ions/cm\({}^{2}\)) keeping ion energy constant. The measurements of \(H_{c}\) and \(K_{u}\) as a function of the fluence of He\({}^{+}\) ions during irradiation are reported in Fig. 2 (e). The values of the film as-deposited and after annealing are given for comparison by dashed lines. For low fluences, no sizable effects are noted. At fluences larger than \(5\times 10^{13}\) cm\({}^{-2}\) the coercivity and the anisotropy are progressively reduced as the He\({}^{+}\) fluence is increased. For the maximum fluence of \(5\times 10^{16}\) cm\({}^{-2}\), \(H_{c}\) is five times lower compared to the as-deposited state while the induced anisotropy is decreased by a factor ten. We do not observe a similar substantial reduction of these magnetic parameters after the annealing. A possible explanation for this dissimilarity is the different mechanism of ordering promoted during irradiation and field-free annealing. Improved atomic ordering in Permalloy after annealing and irradiation with different ions [28; 44] has been re Figure 1: (a) 2\(\Theta/\Theta\) XRD angular scan of the NiFe samples for the sample in the as-deposited state, after annealing and after irradiation. (b) schematic of the NiFeCr(5 nm)/Ni\({}_{81}\)Fe\({}_{19}\)(30 nm)/Ta(4 nm) stack. 
(c) FWHM of the NiFe (111) peak as a function of He\({}^{+}\) fluence and as function of annealing temperature. ported in literature. Some of these studies on polycrystalline films [45; 46] show that crystalline grain growth is more homogeneous for irradiation than for thermal annealing in the temperature range from 200 to 300\({}^{\circ}\) C. This difference originates from the distinct mechanism with which chemical ordering of the alloy is changed during the two processes [47]. As we see from these studies, radiation-enhanced mobility is more isotropic in the absence of an applied magnetic field, if compared to heat-induced mobility [22; 47]. Accordingly, a stronger reduction in the magnetic anisotropy for the irradiated samples can be expected. Recently [39], a comparison between ion irradiation and thermal annealing analyzing the microscopic pinning parameters for DW motion has been conducted. In this work, the annealed sample shows strong but widely distributed pinning sites. In contrast to this, the irradiated sample exhibits weaker defects with a higher density. A possible explanation for the observed reduction in coercivity in our irradiated samples, is therefore an overall smoother DW energy landscape after irradiation, which allows for domain formation and switching of the magnetization at lower magnetic fields. In addition to that, the release of internal stresses in the film, that has been reported during irradiation [48; 49], can also be responsible for improvements to the soft magnetic properties of our Permalloy [43]. To evaluate the effect of ion irradiation and annealing on the magnetoelastic coupling of a thin magnetic Ni-Fe alloy, the strain-dependent magnetic properties have been investigated. Uniaxial in-plane strain is applied to a full film of NiFeCr(5 \(nm\))/Ni\({}_{81}\)Fe\({}_{19}\)(30 \(nm\))/Ta(4 \(nm\)) by three point bending method as previously reported [34]. Since the magnetization is coupled to the external strain via the expression of the anisotropy energy, the magnetic anisotropy before and after the application of strain is measured using Kerr microscopy. A strain of \(\varepsilon_{xx}=0.06\%\) (tensile) is applied along the in-plane direction \(\Phi=0^{\circ}\). The expression for the magnetoelastic anisotropy depends on the saturation magnetostriction \(\lambda_{s}\) of the material according to [50] \[K_{ME}=\frac{3}{2}\lambda_{s}Y\varepsilon, \tag{2}\] where \(Y\) is the Young's modulus and \(\varepsilon\) is the uniaxial tensile strain. Using eq. 2 and the values of the Young's modulus in Table 1, we calculate the effective magnetostriction of the film for different He\({}^{+}\) fluences. The calculated values are reported in Fig. 3 (a). In the as-deposited state, as well as for He\({}^{+}\) fluences in the range of \(10^{13}\) cm\({}^{-2}\), \(\lambda_{s}=-7(2)\times 10^{-7}\) is negative. In this case, a tensile strain increases the anisotropy field in the direction \(\Phi=0^{\circ}\). For larger fluences of ion during irradiation, the magnetostriction is progressively reduced and reaches values close to zero for a fluence of \(5\times 10^{16}\) cm\({}^{-2}\). In this case, the magnetoelastic anisotropy is negligible and the material is insensitive to the applied strain. For this reason, the magnetization curves before and after the application of \(\varepsilon_{xx}=0.06\%\) are almost unchanged. The saturation magnetostriction of the magnetic layer after annealing has been measured and is reported in Fig. 3 (a) for comparison. 
After the annealing \(\lambda_{s}\simeq 0\) is reported. An additional confirmation of the magnetic behavior of the stack under strain is obtained by imaging domain formation using the magneto-optical Kerr effect (MOKE). The MOKE images shown in Figs. 3(c)-(e) show how the magnetoelastic anisotropy alters the preferential direction of magnetic domains before (left) and after (right) the application of strain. Let us first consider the as-deposited state (Fig. 3(c)). Before the application of strain, the magnetization aligns to the deposition-induced anisotropy easy axis. After the application of strain, the negative magnetostriction of the as-deposited sample orients the magnetic domains along the y direction, perpendicular to the uniaxial strain \(\varepsilon_{xx}\). Fig. 3(d) shows instead the domain pattern for a sample annealed at 265\({}^{\circ}\)C. In this case the remanent magnetic state is almost not altered by the applied strain. This is in agreement with the extremely low magnetostriction measured, that results in negligible magnetoelastic anistropy \(K_{ME}<<K_{w}\). The remanent state for the sample irradiated with He\({}^{+}\) fluence \(5\times 10^{16}\) cm\({}^{-2}\) (Fig. 3 (e)) exhibits instead a magnetic vortex state that is not altered after the application of \(\varepsilon_{xx}=0.06\%\). The initial vortex state, unchanged under the application of strain, highlights that the contribution of induced and magnetoelastic anisotropy have been reduced to a point that only the shape anisotropy determines the remaining domain pattern. In order to compare more quantitatively the MOKE images and the vortex state of the irradiated sample, the average radial magnetization was calculated from the longitudinal component of the vector image for different in-plane \(\Phi\) directions [51]. Figure 2: (a) - (c) in-plane hysteresis loops of NiFeCr(5 nm)/Ni\({}_{81}\)Fe\({}_{19}\)(30 nm)/Ta(4 nm) after sputtering, after thermal annealing and after He\({}^{+}\) ion irradiation, respectively. In the inset, the corresponding remanent magnetic state (\(B_{ext}=0\) mT) for 80 \(\mu\)m disks is shown. The field was applied along \(\Phi=0^{\circ}\). (d) angular plot of the normalized remanent magnetization \(M_{r}/M_{s}\) as function of the in-plane magnetic field direction \(\Phi\) for as-deposited, irradiated and annealed samples. (e) coercive field (blue) and uniaxial magnetic anisotropy (orange) measured along the field direction \(\Phi=0^{\circ}\) on a Permalloy sample irradiated with different fluences of ions during He\({}^{+}\) irradiation. For comparison, the values after annealing and in the as-deposited state are reported with dashed lines. The average contrast is calculated for a single 80 \(\mu\)m disk for the images in Fig. 3 (e) and is reported in Fig. 3 (b). For the unstrained state seen in Fig. 3 (e) left, the disk's magnetization is a circularly-symmetric vortex, and the average contrast varies periodically with angular position on the disk. The values well follow the expression \(a\ sin(\phi b)\), black line in Fig. 3 (b). After the application of strain, as a consequence of the extremely small magnetostriction, the average contrast, red line in Fig. 3 (b), still follows the periodic behavior \(a\ sin(\phi b)\). A possible explanation for the reported reduction in saturation magnetostriction after ion irradiation and annealing is the growth in size of the crystallites in the NiFeCr(5 nm)/Ni\({}_{81}\)Fe\({}_{19}\)(30 nm)/Ta(4 nm) sample, already highlighted in Fig. 1 (c). 
The magnetostriction of isotropically oriented cubic crystallites can be written as the combination of the saturation-magnetostriction constants \(\lambda_{100}\) and \(\lambda_{111}\) in the (100) and (111) directions, respectively [52] \[\lambda_{\mathrm{s}}=\frac{2\lambda_{100}+3\lambda_{111}}{5}. \tag{3}\] In Permalloy, the two components of the magnetostriction change significantly over the Ni-Fe composition range, altering the effective magnetostriction \(\lambda_{\mathrm{s}}\). The composition used in this work, Ni\({}_{81}\)Fe\({}_{19}\), is predicted to have \(\lambda_{\mathrm{s}}\) close to zero [33]. In our XRD measurement, a 15% reduction of the (111) peak FWHM is observed after irradiation and annealing. This crystallization can alter the relative contributions of \(\lambda_{100}\) and \(\lambda_{111}\) in the magnetic layer; following eq. 3, the effective magnetostriction of the film is therefore changed. As shown in Fig. 3 (a), the magnetostriction is progressively reduced for higher fluences and annealing temperatures as the size of the crystallites grown by irradiation and annealing increases. On top of that, increased intermixing at the magnetic layer boundaries could alter the interface magnetostriction [54] (inversely proportional to the film thickness [55; 56]), thus also playing a role in the effective magnetostriction of the film. To validate the usability of our Permalloy layer for sensing applications, transport measurements have been conducted. The electrical characterization confirms that the NiFeCr(5 nm)/Ni\({}_{81}\)Fe\({}_{19}\)(30 nm)/Ta(4 nm) sample has a sizable AMR of \(\Delta R/R=1.1\) (1)%. As the AMR does not change after irradiation, the proposed material treatment is suitable for improving the magnetic properties of materials for magnetic sensing applications. AMR measurements can be found in section S2 of the supplementary material.

## IV Conclusions

In conclusion, we have investigated the effects of He\({}^{+}\) irradiation and thermal annealing on the magnetic properties of NiFeCr(5 nm)/Ni\({}_{81}\)Fe\({}_{19}\)(30 nm)/Ta(4 nm). Our XRD analysis suggests that both irradiation and annealing promote crystalline growth of the textured Ni\({}_{81}\)Fe\({}_{19}\) alloy. While the irradiation treatment strongly reduces the hard axis coercivity down to 0.05 mT and the deposition-induced anisotropy by a factor of ten, the field-free annealing does not significantly improve the magnetic softness. We mainly attribute this to stress relaxation in the film after irradiation and to the different mechanism of atomic ordering, which is completely isotropic only in the case of irradiation. In addition, the effective magnetostriction of the film is significantly reduced by a factor of ten after irradiation and annealing, as confirmed by anisotropy measurements in the presence of in-plane strain. Importantly, we have shown that the sizable magnetoresistance is preserved after the irradiation. As a result, post-growth He\({}^{+}\) ion irradiation is an excellent tool to improve the magnetic softness and minimize the strain cross sensitivity of Permalloy. In contrast to thermal annealing, ion irradiation offers the advantage of performing a local material treatment [21; 42; 57] to adjust the anisotropy and write magnetic domain patterns directly into thin-film structured devices. As a consequence, we can locally tune the properties of a magnetic material to make it suitable, for instance, for high-sensitivity and low-hysteresis integrated AMR sensors that are insensitive to strain.
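As a compact numerical illustration of the strain (in)sensitivity summarised above, eq. 2 evaluated with the nominal values of Table 1 shows why 0.06% tensile strain reorients the domains of the as-deposited film but leaves the irradiated film essentially unaffected (a short editorial sketch using only the values quoted in the text):

```python
def magnetoelastic_anisotropy(lambda_s, young_modulus, strain):
    """Eq. 2: K_ME = (3/2) * lambda_s * Y * epsilon, in J/m^3."""
    return 1.5 * lambda_s * young_modulus * strain

eps = 0.06e-2        # applied tensile strain (0.06 %)
Y = 200e9            # Young's modulus of Permalloy in Pa (Table 1)

# As-deposited (lambda_s = -0.7e-6): |K_ME| ~ 126 J/m^3, comparable to K_u,
# so the strain visibly rotates the domain pattern.
print(magnetoelastic_anisotropy(-0.7e-6, Y, eps))
# Irradiated at 5e16 cm^-2 (lambda_s ~ +0.01e-6): K_ME ~ 1.8 J/m^3, negligible
# against the shape anisotropy, so the vortex state is unchanged.
print(magnetoelastic_anisotropy(0.01e-6, Y, eps))
```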
## Supplementary Material See supplementary material for the rocking curve and AMR measurements. ###### Acknowledgements. This project has received funding from the European Union's Horizon 2020 research and innovation program un Figure 3: (a) saturation magnetostriction \(\lambda_{\mathrm{s}}\) as a function of He\({}^{+}\) ions fluence during irradiation. The values as-deposited and after annealing are reported for comparison with dashed lines. (b) average contrast for 80 \(\mu\)m disks as a function of the in plane angle \(\Phi\) for the irradiated sample in the remanent state (magnetic vortex state) before and after the application of strain. (c) - (e) remanent magnetic state for 80\(\mu\)m diameter disks before (left) and during (right) uniaxial strain 0.06% application for as-deposited, annealed and irradiated Permalloy, respectively. der the Marie Sklodowska-Curie grant agreement No 860060 "Magnetism and the effect of Electric Field" (MagnEFi), the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - TRR 173 - 268565370 (project A01 and B02), the DFG funded collaborative research center (CRC)1261 / project A6 and the Austrian Research Promotion Agency (FFG). The authors acknowledge support by the chip production facilities of Sensitive GmbH (Mainz, DE), where part of this work was carried out and the Max-Planck Graduate Centre with Johannes Gutenberg University.
2306.17440
STTracker: Spatio-Temporal Tracker for 3D Single Object Tracking
3D single object tracking with point clouds is a critical task in 3D computer vision. Previous methods usually input the last two frames and use the predicted box to get the template point cloud in previous frame and the search area point cloud in the current frame respectively, then use similarity-based or motion-based methods to predict the current box. Although these methods achieved good tracking performance, they ignore the historical information of the target, which is important for tracking. In this paper, compared to inputting two frames of point clouds, we input multi-frame of point clouds to encode the spatio-temporal information of the target and learn the motion information of the target implicitly, which could build the correlations among different frames to track the target in the current frame efficiently. Meanwhile, rather than directly using the point feature for feature fusion, we first crop the point cloud features into many patches and then use sparse attention mechanism to encode the patch-level similarity and finally fuse the multi-frame features. Extensive experiments show that our method achieves competitive results on challenging large-scale benchmarks (62.6% in KITTI and 49.66% in NuScenes).
Yubo Cui, Zhiheng Li, Zheng Fang
2023-06-30T07:25:11Z
http://arxiv.org/abs/2306.17440v1
# STTracker: Spatio-Temporal Tracker for 3D Single Object Tracking ###### Abstract 3D single object tracking with point clouds is a critical task in 3D computer vision. Previous methods usually input the last two frames and use the predicted box to get the template point cloud in previous frame and the search area point cloud in the current frame respectively, then use similarity-based or motion-based methods to predict the current box. Although these methods achieved good tracking performance, they ignore the historical information of the target, which is important for tracking. In this paper, compared to inputting two frames of point clouds, we input multi-frame of point clouds to encode the spatio-temporal information of the target and learn the motion information of the target implicitly, which could build the correlations among different frames to track the target in the current frame efficiently. Meanwhile, rather than directly using the point feature for feature fusion, we first crop the point cloud features into many patches and then use sparse attention mechanism to encode the patch-level similarity and finally fuse the multi-frame features. Extensive experiments show that our method achieves competitive results on challenging large-scale benchmarks (62.6% in KITTI and 49.66% in NuScenes). ## I Introduction Single object tracking with point clouds is one of the most important tasks in 3D computer vision. Given the 3D box of the target in the initial frame, single object tracking requires the tracker to make successive predictions of the given target in subsequent frames to obtain the target's 3D pose, which could provide useful information for downstream tasks, such as path planning in autonomous following robots. Currently, most previous methods [4, 6, 7, 8, 9, 10] use similarity computation to match the current frame point cloud with the template point cloud of the tracking target, and then find the target in the current frame point cloud. Meanwhile, since the first frame point cloud is the most accurate and has strong prior information, they usually update the template point cloud by fusing the predicted target point cloud of the previous frame with the initial frame target point cloud to achieve the best tracking results. However, the performance of this similarity-based matching paradigm is often limited due to the sparsity and disorder of point clouds. Recently, \(M^{2}\)-Tracker [11] proposed a motion-based tracking paradigm, suggesting that regressing the relative target motion from the two consecutive point cloud frames could be more suitable for the point cloud tracking task than similarity-based matching. By inputting the last two frames of point cloud and the predicted box of the previous frame, they first segmented the two point clouds to obtain the foreground point cloud, i.e., the approximate target point cloud. Then, by regressing the offset between the two foreground point clouds, a coarse current box prediction is obtained based on the previous predicted 3D box. Finally, they fused the target point cloud in two frames and refined the coarse box to get a more accurate box. However, no matter the similarity-based matching or motion-based estimation, they both only input two consecutive point cloud frames and ignore the earlier historical information of the target, which is also important for the tracking task. 
For example, if the target is taking a turn in recent frames, this long-time global motion information can be used to constrain the angle prediction in the current frame, while this motion information is difficult to be detected in only two local frames. Also, all previous similarity-based algorithms complement the template point cloud with the target information from the first frame. However, this skip-completion ignores the successive spatio-temporal information of the target in the historical frames and only superimposes the aligned point clouds, thus also not fully utilizing the spatio-temporal information during tracking. To address the above issues, in this paper, we propose a point cloud tracking algorithm based on spatio-temporal information, termed STTracker. Different from previous works, we input the point clouds of the past \(N-1\) frames and their corresponding 3D boxes of the target, as well as the point cloud of current frame to predict the current 3D box, as shown in Fig. 1. Meanwhile, the input can be of any length Fig. 1: Multi-frame point clouds input. Our input includes \(N\) frames point clouds and the past \(N-1\) frame 3D boxes of target. Different colors represent different timestamps. and any frame, such as \([t,t-1,t-2,t-3]\) or \([t,t-2,t-4]\), etc. Therefore, using two consecutive frames of point clouds as input in previous methods can be considered as one of our input modes. After getting the multi-frame input, we propose a similarity-based spatio-temporal fusion module to build correlations between multi-frame point clouds and fuse the historical information into the current frame features for prediction. Given the previous 3D boxes of the target, the fusion module could learn the motion information implicitly. Furthermore, to reduce the computational effort of the fusion module and speed up the training and inference speed, we use a sparse patched-based attention mechanism for multi-frame feature fusion. Our method proves that by learning the spatio-temporal information of the target from multiple historical frames, the similarity-based matching paradigm could break the limitations and track the target with point clouds effectively. Compared to \(M^{2}\)-Tracker [11] which only learns the short motion information between two frames, our method not only fully utilizes the long spatio-temporal information brought by multiple frames to implicitly learn motion information, but also learns the appearance similarity information between multiple frames to better locate the target position. Comprehensive evaluation results show that our STTracker achieves competitive results on KITTI [1] and NuScenes [2] datasets. Overall, our contributions are as follows: * We propose a spatio-temporal learning framework that introduces multiple frames into 3D single object tracking. * We propose a novel multi-frame features fusion method to implicitly learn the motion information of the target and build correlation among multiple frames. * Experiments on KITTI and NuScenes datasets show that our STTracker achieves promising performance, and extensive ablation studies also verify the effectiveness of our method. The rest of this paper is organized as follows. In Sec. II, we discuss the related work. Sec. III describes the proposed STTracker. In Sec. IV we first compare our methods with previous methods in KITTI and NuScenes datasets, and then conduct ablation studies on each module of our methods. We finally conclude in Sec. V. 
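Concretely, the multi-frame input sketched in Fig. 1 can be assembled by tagging every point with its relative timestamp before feature extraction, following the time-aware point definition \(\mathcal{P}_{t}=\{x,y,z,t\}\) used later in Sec. III. Below is a minimal NumPy sketch, under the assumption that all frames share a common coordinate system; the names are illustrative rather than taken from the released code.

```python
import numpy as np

def build_time_aware_input(frames):
    """Stack N point-cloud frames into one (M, 4) array of [x, y, z, t].
    `frames` is a time-ordered list of (timestamp, (M_i, 3) array); the last
    entry is the current frame. The shared coordinate frame and the
    relative-time encoding are assumptions of this sketch."""
    t_now = frames[-1][0]
    stacked = []
    for t_i, pts in frames:
        t_col = np.full((pts.shape[0], 1), t_i - t_now, dtype=pts.dtype)
        stacked.append(np.hstack([pts, t_col]))
    return np.concatenate(stacked, axis=0)

# e.g. frames = [(t - 3, P3), (t - 2, P2), (t - 1, P1), (t, P0)]
# gives a time channel in {-3, -2, -1, 0}; a sparser input such as
# [t, t-2, t-4] works the same way.
```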
## II Related Work

### _3D Single Object Tracking_

3D single object tracking with point clouds has developed fast in recent years. SC3D [4] compares template and search point clouds with cosine similarity and selects the candidate with the highest score to track. P2B [6] proposes an augmentation module to fuse the point features with the point-wise cosine similarity, and then uses VoteNet [3] to obtain an accurate 3D box. Following this pipeline, PTT [8], BAT [7], LTTR [9], V2B [10], PTTR [12] and SMAT [37] also adopt the two-branch Siamese architecture and the similarity-based matching paradigm. By enhancing the point features [7, 8, 10] and computing the similarity with transformers [9, 12, 37], the similarity-based matching pipeline has made great progress in the 3D single object tracking task. PCET [40] proposes two modules to extract discriminative features and to improve the robustness to sparse point clouds, respectively. Different from these similarity-based methods, \(M^{2}\)-Tracker [11] points out that the motion-based tracking paradigm may be more suitable for 3D SOT than similarity matching. They first segment the foreground points to find the target points and then regress the relative target motion between the two frames to get a coarse 3D box. Finally, they aggregate the target from the two successive frames by using the predicted motion state and refine the coarse 3D box to get a better prediction. Moreover, STDA [43] proposes a temporal motion model that learns spatio-temporal information and tracks the object by predicting the state and variance of the target. However, their method depends on a detector and cannot track the object in an end-to-end manner. In this paper, different from STDA [43], we propose an end-to-end network to directly learn the spatio-temporal information from multi-frame data.

### _Spatio-Temporal Learning_

Learning spatio-temporal information across multiple frames has been exploited for 3D vision tasks, such as 3D object detection and point cloud prediction. Faf [20] jointly conducts object detection, tracking, and motion forecasting by inputting multiple point cloud frames and designs two schemes for temporal fusion. StarNet [21] uses the previous detection results as priors to improve the current detection. STINet [22] inputs multiple frames to extract features and temporal proposals to detect in the current frame and predict future trajectories simultaneously. MVFuseNet [24] designs multi-view temporal fusion of LiDAR in RV and BEV for current detection and motion forecasting. 3DSTCN [23] projects the past \(N\) frames of point clouds into 2D range images and applies a U-Net-like spatio-temporal 3D CNN to obtain future 3D point cloud predictions. MGTANet [14] designs short-term feature extraction and long-term feature enhancement to learn spatio-temporal information. SpOT [42] represents tracked objects as sequences and proposes a sequence-level 4D refinement network. PF-Tracker [41] proposes a multi-camera 3D MOT framework that adopts the "tracking by attention" pipeline.

Fig. 2: Architecture of the proposed STTracker. Given \(N\) point cloud frames and the corresponding \(N-1\) 3D boxes, we **first** use a shared backbone to extract features from the point clouds. **Second**, we input the \(N\) features with the \(N-1\) 3D boxes into our spatio-temporal fusion module to learn spatio-temporal information. **Finally**, we use a center-based regression to predict the current box.
## III Methodology ### _Overall Architecture_ Given the current point cloud, and previous \(N-1\) frames point clouds with their corresponding 3D bounding boxes, we aim at estimating the current target 3D box, which could be represented as \((x,y,z,w,l,h,\theta)\), where \((x,y,z)\) is the center, \((w,l,h)\) is the size and \(\theta\) is the orientation of the box respectively. Meanwhile, following the assumption [6] that the size of the target object is known through the first frame, we only need to estimate \((x,y,z,\theta)\). As shown in Fig. 2, our proposed STTracker (**S**patial **T**emporal **T**racker) is a one-stage network that has a simple pipeline. We first input all \(N\) frames of point cloud into a shared backbone to extract per-frame point features. Unlike previous works [6, 7, 8, 9] which only input the points within the predicted 3D box in the template branch, our \(N-1\) previous frames of point cloud adopt the same input size as the current frame, as shown in Fig. 2. Meanwhile, we add the timestamps for all points to construct time-aware point cloud \(\mathcal{P}_{t}=\{x,y,z,t\}\), and use dynamic pillar [29] to extract basic features for its fast speed. We refer the readers to paper [29] for more details. Second, we input all \(N\) extracted features and previous \(N-1\) 3D boxes to our fusion module to learn spatio-temporal information. Finally, by using center-based prediction, we predict the 3D box of the target in current frame. We will introduce the details in the following subsections. ### _Spatio-Temporal Learning_ After per-frame feature extraction, we have \(N\) frames of point cloud features and their sizes are all \(W\times H\times C_{1}\), where \(N-1\) are from previous frame point clouds and the last one is from the current point cloud, \(C_{1}\) is the feature channel. Meanwhile, we multiply the voxel size with the final downsampling rate of our network to get the final grid size. Based on the point cloud range and the grid size, we generate a corresponding empty BEV (Bird's Eye View) grid map (\(H\times W\times 1\)). We assign the mask value of each grid as 1 if the center of its center is in the BEV box, otherwise as 0. Therefore, we could obtain \(N-1\) box masks. Our goal is to extract useful spatio-temporal relationships to guide current prediction. The simplest approach is concatenating them together and applying 2D convolutional block to directly extract the spatio-temporal features. However, we argue that this approach could not learn the information efficiently due to the misalignment among features at different timestamps. Because of the motion from the LiDAR sensor or the target itself, the same position in the feature maps from different timestamp features represents different point clouds. Although sometimes the ego-motion of the LiDAR sensor is available, the motion of the target is always unknown. Therefore, simple concatenation would lead to ambiguity of features [14]. Another approach is applying 3D convolutional block [23] to the concatenated features. However, the 3D convolutional block would incur huge computation cost. To align the features from different timestamps and learn the spatio-temporal information of the target, we proposed an attention-based feature fusion module, as shown in Fig. 3, termed STLM (**S**patio-**T**emporal **L**earning **M**odule). STLM adopts a similarity-based matching to fuse different timestamp features, and could be divided into spatial learning block and temporal learning block respectively. 
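The BEV box masks consumed by the fusion module can be rasterised directly from the previous boxes and the final grid geometry described above. The sketch below is a hedged illustration; the width/length axis convention and the cell-centre test are assumptions rather than the released code.

```python
import numpy as np

def bev_box_mask(box, pc_range, grid_size):
    """Rasterise one previous 3D box into an (H, W) {0, 1} BEV mask: a cell is
    1 when its centre lies inside the rotated BEV rectangle of the box.
    box = (cx, cy, w, l, yaw); pc_range = (x_min, y_min, x_max, y_max);
    grid_size = (gx, gy) is the metric size of one output cell (voxel size
    times the final downsampling rate). Conventions here are illustrative."""
    x_min, y_min, x_max, y_max = pc_range
    gx, gy = grid_size
    xs = np.arange(x_min + gx / 2.0, x_max, gx)          # cell-centre x coords (W,)
    ys = np.arange(y_min + gy / 2.0, y_max, gy)          # cell-centre y coords (H,)
    cx_grid, cy_grid = np.meshgrid(xs, ys)               # both (H, W)
    bx, by, w, l, yaw = box
    dx, dy = cx_grid - bx, cy_grid - by
    lon = dx * np.cos(yaw) + dy * np.sin(yaw)            # along the box length
    lat = -dx * np.sin(yaw) + dy * np.cos(yaw)           # along the box width
    inside = (np.abs(lon) <= l / 2.0) & (np.abs(lat) <= w / 2.0)
    return inside.astype(np.float32)
```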
**Spatial Learning Block.** The spatial learning block aims to learn the spatial information for each frame feature, thus it only involves single-frame features. Different from previous similarity-based works [6, 7, 8, 9, 10], we input previous point Fig. 3: Illustration of the proposed STLM. The STLM includes two components, spatial learning block and temporal learning block respectively. The first block learns the spatial information for each frame feature, and the second block learns the temporal information from all frame features. clouds not only including the points within the 3D box but also including the points out of the 3D box, as the same as the current search point cloud. Therefore, to distinguish the foreground points from input, we need a BEV mask to represent the box's spatial location. We propose the \(MaskFusion\) to incorporate the box mask into the extracted features. As shown in Fig. 3, we first apply a Conv2D layer named \(MaskConv\) to project the mask into features, then add the mask feature with the BEV features and further apply a Conv2D layer named \(BoxConv\) to further extract the foreground features with channel \(C_{2}\). Compared to the methods which only extract the feature from points within the 3D box, we could keep and extract much more texture information. This operation could be formulated as follows: \[\hat{F}_{i}=\text{BoxConv}(\text{MaskConv}(M_{i})+F_{i}) \tag{1}\] where \(i\in\{t-1,...,t-N\}\) and \(F_{i},M_{i}\) denote the point feature and box mask for \(i\)-th frame, respectively. Meanwhile, since we need to compute the similarity among \(N\) frames of features in the following temporal learning block, the \(W\times H\) size would incur high computation cost. Therefore, following previous vision transformer works [15, 16, 17, 18], we also divide per-frame feature map into many non-overlapping local patch grids. Specifically, we set the patch size to \(R\times R\) and crop per-frame \(W\times H\times C_{2}\) features to \(S\) patches with size of \(R\times R\times C_{2}\), then apply a Conv2D layer named \(PatchConv\) to extract the per-patch features with channel \(C_{3}\). Finally, we flatten the \(R\times R\times C_{3}\) patch features to \(S\times C_{3}\), where \(S=R\times R\). Denoting the patch transformation as \(\phi\), this procedure is represented as follows: \[P_{i}=\text{PatchConv}(\phi[\hat{F}_{i}]) \tag{2}\] where \(i\in\{t,...,t-N\}\) and \(P_{i}\) refers to the final patch features. The patch transformation could not only reduce the computation cost, but also provide a larger receptive field for the following similarity-based matching. Overall, we apply the proposed \(MaskFusion\) and patch transformation \(\phi\) for all \(N-1\) previous features, while we only apply the patch transformation \(\phi\) for current frame feature since we do not have the 3D box of current frame. **Temporal Learning Block.** Given \(N\) patch features \(P_{i}\) from different timestamps, the temporal learning block uses a sparse attention-based paradigm to fuse them and finally outputs the feature including the spatio-temporal information of the target. In particular, we first concatenate these per-frame patch features to have the fused spatial-temporal feature \(G\) of size \(N\times S\times C_{3}\), where the horizontal axis represents space and the vertical axis represents time. Then, inspired by the attention mechanism [34], we apply the deformable attention [35] to align the different timestamp patch features by themselves. 
Specially, we first use two linear layers for each query patch feature to generate sampling offsets \(\Delta G\) and attention weights \(A\) respectively. Based on the location of query patch itself and the output sampling offsets, we can sample reference patch features from the feature \(G\) by bilinear interpolation. Finally, the original patch feature could be aggregated with the reference features with their corresponding weights. The procedure could be formulated as follows: \[\Delta G=\text{MLP}_{\text{o}}(G) \tag{3}\] \[A=\text{MLP}_{\text{s}}(G) \tag{4}\] \[V_{k}=S(G,g+\Delta g_{k}) \tag{5}\] \[\hat{G}=\sum_{l=1}^{L}W_{l}\left(\sum_{k=1}^{K}A_{lk}\cdot V_{k}\right) \tag{6}\] where \(S(\cdot)\) is the bilinear sampling, \(K\) is the number of predicted offset for each grid, \(L\) is the number of attention heads. In Equ. 3 and Equ. 4, we use two MLP layers to predict \(K\) offsets \(\Delta g_{k}\) and corresponding similarity scores \(A_{k}\) for each feature grid respectively. Then, in Equ. 5, we add the offsets \(\Delta g_{k}\) to the grid coordinate \(g\) to get new grid coordinate and use bilinear sampling \(S(\cdot)\) to sample the feature at \(g_{k}+\Delta g_{k}\) from \(G\). Moreover, in Equ. 6, the sampled features \(V_{k}\) are multiplied with the similarity scores \(A_{k}\). Finally, following the multi-head mechanism, we use \(W_{l}\) to project each head feature back and sum them up. We refer the reader to paper [35] for more details. Finally, we reshape the fused feature \(\hat{G}\) back into the size of \(W\times H\times C_{4}\) and concatenate it with the original current frame feature \(W\times H\times C_{1}\) to strengthen current frame features. The final fused feature \(U\) is generated as follows: \[U=\text{Conv2D}(\text{cat}[\hat{G},F_{l}]) \tag{7}\] By using the sparse deformable attention [35], each patch could find its corresponding region to align based on the similarity. Meanwhile, the sparsity also avoids all-to-all similarity matching and further limits the computation cost. ### _Prediction_ Following CenterPoint [33], we also adopt the center-manner to predict the target 3D box. Specially, in the training phase, we first generate the heatmap according to the \((x,y)\) of the 3D box center, and then compute the offset of \((x,y)\) to compensate the error from the downsample operation. For the height and orientation, we directly regress the height value of the center and \((sin\theta,cos\theta)\). However, the original heatmap in CenterPoint [33] uses Gaussian kernel locating at the center of the box, thus the number of positive samples would be not enough for the single object tracking problem since there is only one target, as shown in Fig. 4 (a). To alleviate this problem, following SMAT [37], we also assign all points in Fig. 4: (a). The previous gaussian kernel heatmap assignment; (b). Our all foreground heatmap assignment. the box as positive samples and generate the heatmap label as follows: \[H_{p}=\begin{cases}1,&\text{if }p\in B\\ 0,&\text{else}\end{cases} \tag{8}\] where \(B\) is the 3D label box in BEV representation. In this way, we could get more positive samples during training, as shown in Fig. 4 (b). In the inference phase, after getting the predicted heatmap and predicted offset \(o\), we could compute the \(x,y\) value of the center: \[\hat{x_{c}}=(j+o_{x})\times b\times v_{x}+x_{min}, \tag{9}\] \[\hat{y_{c}}=(i+o_{y})\times b\times v_{y}+y_{min}. 
## IV Experiments ### _Experimental Setting_ **Dataset.** We evaluate our method on the KITTI [1] and NuScenes [2] datasets. For both datasets, we follow previous works [6, 7, 9, 11] to split the training and testing sets. For the NuScenes dataset, we also follow CenterPoint [33] and accumulate 10 sweeps to densify the keyframe. **Implementation Details.** Our model is implemented in PyTorch, based on the popular codebase1, and trained on an RTX 3090 GPU. For feature extraction, we use dynamic pillars [29] and the backbone of [28], which are widely used in 3D detection [27, 28]. In the training phase, we train STTracker with the AdamW [19] optimizer, with an initial learning rate of 0.003 and a weight decay of 0.01 for both datasets. Footnote 1: [https://github.com/open-mmlab/OpenPCDet](https://github.com/open-mmlab/OpenPCDet) **Evaluation metric.** One Pass Evaluation (OPE) [32] is used to measure Success and Precision. Success measures the 3D IoU between the predicted box and the ground-truth box, while Precision measures the AUC (area under curve) of the distance between the centers of the two boxes from 0 to 2 m. Moreover, the Mean value on each dataset is computed as follows: \[V_{mean}=\frac{\sum_{n=1}^{N}V_{n}*F_{n}}{\sum_{n=1}^{N}F_{n}} \tag{11}\] where \(V_{n}\) and \(F_{n}\) represent the value and the number of frames of each category, respectively. **Results on NuScenes.** On the NuScenes dataset, our method finally outperforms \(M^{2}\)-Tracker by 3.95% in Mean. We believe this is due to our ability to model the motion of the target through multiple frames of input point clouds and to embed this information into the measurement of similarity in the attention mechanism, thereby obtaining a more accurate localization of the target. Moreover, we notice that our method performs better on small-size objects (Car, Pedestrian) but not as well on large-size objects (Truck, Bus, Trailer). Following the discussion in FSD [44], we also believe that it is challenging for the CenterHead [33] to predict large objects since the object centers are usually empty (most of the points are on the surface of objects). **Results on KITTI.** As shown in Table II, STTracker achieves competitive performance on the KITTI dataset. In terms of Success and Precision, our method only trails \(M^{2}\)-Tracker by 0.3% and 0.5%, respectively. We believe the difference in performance between KITTI and NuScenes is due to the differing annotation frequencies of the two datasets. Specifically, NuScenes is annotated at 2 Hz, while KITTI is annotated at 10 Hz, making the relative motion between frames on KITTI smaller and easier to estimate; this is more advantageous for \(M^{2}\)-Tracker, which directly predicts relative motion. To verify this assumption, we further compare our method and \(M^{2}\)-Tracker [11] on a modified KITTI dataset. To obtain the same annotation frequency as the NuScenes dataset (2 Hz), we only select tracklets with more than 5 frames for training and testing, and sample one valid frame every 5 frames. We train our model and \(M^{2}\)-Tracker on the new dataset. The training settings of \(M^{2}\)-Tracker follow their public setting2. As shown in Table
III, our method shows better performance than \(M^{2}\)-Tracker on the modified KITTI dataset, verifying our assumption that our method performs better in large-motion tracking scenes. Meanwhile, although the previous similarity-based methods [7, 8, 9, 10, 12, 37, 40] had good performance in the Mean, they usually performed worse in the Pedestrian category. We believe that the small size of Pedestrians limits the similarity-based methods. However, our method outperforms PCET, which had the best performance among similarity-based methods on Pedestrian, by 3.5% and 4.3% in Success and Precision, respectively. The results show that by learning the spatio-temporal information, our STTracker achieves better performance. Moreover, our method also outperforms STDA, which also uses spatio-temporal information. Additionally, our STTracker achieves **23.6 FPS** running speed, as shown in Table IV, and we also visualize the tracking results in Fig. 5.

Fig. 5: Advantageous cases of our STTracker compared with BAT and \(M^{2}\)-Tracker on the Car, Pedestrian and Cyclist categories of the KITTI dataset.

Fig. 6: (a) The performance for different numbers of points in the first-frame point cloud of the target. (b) Comparison to \(M^{2}\)-Tracker under different numbers of distractors. (c) The performance with different patch sizes in the spatial learning block.

We also analyze the components of the temporal learning block, as shown in Table VIII. The results of T1 and T2 verify the importance of the current feature in the fusion. Meanwhile, instead of using sparse attention, T3 uses dense attention, and the performance drops by 5.1% and 3.6% in 3D Success and Precision, respectively. The results show that there is no need to compare features at all locations when fusing different frame features. Lastly, because the point feature already includes the 3D information and an extra time feature, adding a positional embedding (T4) does not improve the performance. **Ablation Experiments.** Finally, we conduct ablation experiments on the components of our method. Table V shows the results. A1 is the baseline model, which only inputs two frames. A2 shows that directly concatenating the multi-frame features does not bring an improvement but a large decrease, as analyzed in Sec. III-B. Compared to A2, A3 shows a great improvement, 11.8%\(\uparrow\) and 13.0%\(\uparrow\) in 3D Success and Precision, which verifies the effectiveness of our proposed STLM. Finally, by using the foreground heatmap assignment, A4 achieves the best performance. ## V Conclusions In this paper, we present STTracker, a multi-frame similarity-based framework for tracking 3D objects with point clouds. We propose a spatio-temporal learning module to fuse multi-frame features and fully exploit the spatio-temporal information of the 3D target. Comprehensive experiments show the effectiveness of our method. Meanwhile, we notice that our method does not have obvious advantages for large-size objects or in high-frequency scenes, and too many input frames also lead to a performance decline. Therefore, we would like to address these problems in future work.
2309.05819
Aperiodicity is all you need: Aperiodic monotiles for high-performance composites
This study introduces a novel approach to composite design by employing aperiodic monotiles, shapes that cover surfaces without translational symmetry. Using a combined computational and experimental approach, we study the fracture behavior of composites crafted with these monotiles, and compared their performance against conventional honeycomb patterns. Remarkably, our aperiodic monotile-based composites exhibited superior stiffness, strength, and toughness in comparison to honeycomb designs. This study suggests that leveraging the inherent disorder of aperiodic structures can usher in a new generation of robust and resilient materials.
Jiyoung Jung, Ailin Chen, Grace X. Gu
2023-09-11T20:54:29Z
http://arxiv.org/abs/2309.05819v1
## Aperiodicity is all you need: Aperiodic monotiles for high-performance composites ## Abstract This study introduces a novel approach to composite design by employing aperiodic monotiles, shapes that cover surfaces without translational symmetry. Using a combined computational and experimental approach, we study the fracture behavior of composites crafted with these monotiles, and compared their performance against conventional honeycomb patterns. Remarkably, our aperiodic monotile-based composites exhibited superior stiffness, strength, and toughness in comparison to honeycomb designs. This study suggests that leveraging the inherent disorder of aperiodic structures can usher in a new generation of robust and resilient materials. ## 1 Introduction Composite materials, celebrated for their customizable mechanical properties, serve as lightweight structural components that are integral in aerospace and biomedical sectors.[1, 2, 3, 4, 5] The strength of these materials lies in their composite nature - combining properties of different base materials allows the creation of a composite with a harmonious balance of multiple desired properties. This concept is beautifully exemplified in biological materials[6, 7, 8, 9, 10, 11] such as nacre and wood, which generally outperform their engineering counterparts in mechanical performance, despite being composed of relatively weak constituents. Traditional engineering composites are often characterized by repeating unit cells, a feature that simplifies the design and manufacturing processes. However, such ordered structures can lead to catastrophic failure under critical loading. Meanwhile, biological materials often present disordered structures, where the unit cells vary spatially.[12] The extent to which this disorder plays a role in the improved mechanical performance of biological materials remains a topic of ongoing research. The inherent benefits of materials with irregular or disordered microstructures have recently garnered scientific interest.[13, 14, 15] Characterized by heterogeneous microstructures, these structures could offer a fortified path for stress wave propagation, thereby increasing resilience under heavy loads.[16, 17, 18, 19] Emerging research indicates that by amplifying this irregularity, the flaw tolerance of specific cellular frameworks can be enhanced.[20] Moreover, the microscopic intricacies of polycrystalline configurations, encompassing grain boundaries, precipitates, and phases, are perceived as prospective templates for engineering materials with enhanced toughness.[21, 22] Current methodologies for creating these heterostructures involve techniques such as randomly moving nodes within regular lattice structures, constructing material foams, or stacking materials with different microstructures[23, 17, 24] However, these methods introduces a layer of complexity to design and manufacturing, especially with challenges due to the imperfect assembly of differently oriented unit cells. Addressing these challenges, our study presents the integration of aperiodic monotiles in composite designs. Aperiodic monotiles, as discovered in recent literature, have been shown to cover a surface entirely with intrinsic aperiodicity.[25] This makes them an ideal choice for creating disordered materials. The usage of aperiodic monotiles in composite design would facilitate tunable properties while maintaining excellent interface bonding. 
In this work, we explore a completely new family of architecture composed of aperiodic monotiles for creating composite materials. Specifically, we developed a numerical phase-field model to simulate the properties and crack propagation of composites consisting of aperiodic monotiles. Our models are validated with tensile experiments of additively manufactured specimens. The aperiodic monotile-based designs are benchmarked with periodic honeycomb-based design, which is one of the most widely used shapes in engineering applications due to its superior mechanical performance [26]. It is envisioned that these types of aperiodic designs could lead to the development of stronger and tougher composite materials compared to the conventional periodic designs. ## 2 Results and discussions An aperiodic monotile is a shape that can cover a two-dimensional (2D) surface without any translational symmetry or a repeating pattern [25]. An example schematic of tiling using a 'hat' polykite aperiodic monotile [25] is shown in Fig. 1 (a). Due to the characteristics of the hat monotile generated based on the hexagonal structure, the monotile can have six rotational angles and a flipped shape as shown in Fig. 1 (b, c). These hat monotiles covers the infinite plane in an irregular manner, which enables limitless designs by translation and rotation of tiles. Here, we will introduce translation and rotation of the aperiodic monotile-based design along the reference tile, part of the infinite reference tile shown in Fig. 1 (a), and study their influence on mechanical behavior. Experiments are conducted to explore the mechanical performance of the aperiodic monotile-based composites. Mechanical tensile tests are conducted, with more details in the Methods section. In terms of material selection, two base constituents are utilized, one for the boundaries and another for the inner unit cell material, affording us flexibility in generating composites with a range of mechanical properties. To evaluate the composites under realistic operational conditions, a defect is introduced, in the form of a notch, into the samples. This approach enables us to investigate the tolerance of these materials to such anomalies. For the fabrication process, Polyjet additive manufacturing (Stratasys Connex 3) is employed, utilizing digital photopolymer materials with a modulus range spanning over three orders of magnitude. Two digital photopolymer materials are used to achieve the required characteristics. In this case, TangoBlackPlus, the softer material, is used to form the boundaries, while VeroClear, the stiffer material, is used in the core areas. A specimen has dimensions of 50 mm by 125 mm by 3 mm with a notch of 20% length of the specimen width (10 mm) as shown in Fig. 2 (a). We prepared the specimens with different notch tip locations and volume fractions. For aperiodic monotile-based specimens, we have prepared three different specimens with a volume fraction of 80% VeroClear and 20% TangoBlackPlus where two specimens are subjected to a planar translation (denoted as AP80_T1 and AP80_T2) and a third specimen undergoes a rotation of 30 degrees (denoted as AP80_R1) along the infinite reference tile. Additionally, we also test two specimens with a volume fraction of 70% VeroClear and 30% TangoBlackPlus with translation and rotation (denoted as AP70_T1 and AP70_R1). These designs are shown in Fig. 2 (a). We note that infinitely many patterns can be generated by the translation or rotation of the tiling. 
As a benchmark, the honeycomb structure which has a periodic pattern is considered as shown in Fig. 2 (b). Two honeycomb-based specimens with different volume fractions are considered, denoted as HC80 and HC70. Experimental stress-strain curve results for the various samples are presented in Fig. 3 (a) and (b), where (a) and (b) are results for 80% and 70% volume fractions of VeroClear material, respectively. The shadows in the curves represent variations from three experiments for each sample design. From the curves, it can be seen that the aperiodic monotile based structures show higher stiffness, strength and toughness compared to the honeycomb structures for both volume fraction cases. In Fig. 3 (c), the crack propagation paths for different designs are displayed. The honeycomb structure exhibits a path reminiscent of brittle fracture, whereas the aperiodic monotile-based structures reveal a multifaceted crack trajectory with a combination of large and small zigzags, enhancing crack resistance. The stiffness, strength, and toughness values are compared in Fig. 3 (d). With an 80% volume fraction of VeroClear material, the aperiodic structure is 103% superior in stiffness, 34.5% in strength, and 15.9% in toughness compared to the honeycomb structure. These results indicate that the aperiodic monotile-based composites have not only higher stiffness but also higher strength and toughness compared to the honeycomb structure showing the mechanical superiority of the aperiodic structure, which can be advantageous in many applications. To probe further into the mechanisms, simulations utilizing phase-field modeling are performed for the aperiodic monotile-based design and the honeycomb-based design. More details about the phase-field model are discussed in the Methods section. The simulations are carried out on 2D specimens, each measuring 50 mm by 75 mm (without gripping section) and featuring a 20% length crack of sample, under the tensile loading. The simulations have utilized material properties obtained from the characterization of the base materials shown in **Table 1**. The stress-strain curves for the various designs are shown in Fig. 4, with the general trends matching the experimental results. Here, the aperiodic monotile-based structures show higher stiffness, strength, and toughness compared to the honeycomb structures. Additionally, it can be seen that the aperiodic monotile based structure show relatively consistent mechanical performance (similar to experiments) regardless of the crack location determined by the translation or rotation of the tiling. This points to the potential defect tolerance capabilities of these types of aperiodic structures. The crack propagation behavior of the aperiodic monotile specimen (AP80_T2 in Fig. 4 (a)) with different strains can be found in Fig. 5 (a, c). The crack propagation behavior of the honeycomb structure with different strains can be found in Fig. 5 (b, d). In this model, a phase value of 1 symbolizes complete damage (represented by the color red), while a phase value of 0 denotes no damage (represented by the color blue). Elements exhibiting more than 98% damage are not depicted. In the phase-field model, as the strain increases, the phase value of the element under stress increases, which indicates the degree of damage and crack propagation. From Fig. 5 (a, c), the aperiodic monotile structures show a complex crack path mixed with large and small zigzags, behavior that is also seen in experiments. 
Conversely, the crack path in the honeycomb specimen pursued the shortest possible trajectory as shown in Fig. 5 (b, d). These results indicate that phase-field modeling holds the potential for capturing the fracture behaviors of these unique composite systems, hence offering promise for future exploration. ## Conclusions This study introduced new architectures incorporating aperiodic monotiles into composite designs. Not only do these structures greatly simplify design and manufacturing, but their aperiodicity also offers a promising path to enhanced mechanical resilience. Tensile experiments and corresponding numerical phase-field models show that these aperiodic monotile designs outperform traditional honeycomb-based designs in terms of stiffness, strength, and toughness. Furthermore, our findings highlighted the aperiodic designs' inherent capability to tolerate defects. Through the synthesis of aperiodic materials design, advanced manufacturing techniques, and numerical simulations, this research illuminates a promising avenue for the next generation of composite materials. ## Methods **Mechanical testing:** Experimental tensile tests (Mode I fracture) on both aperiodic monotile and honeycomb structures are conducted. To secure the specimens, mechanical use action grips are utilized, clamping only the designated gripping area made of the VeroClear material. The tests are conducted at a controlled tensile displacement rate of 2 mm/min. Throughout the tests, force, displacement, and time data are recorded at a frequency of 1.04 Hz. The test terminates when the force is dropped to nearly zero, and the crack fully propagated through the transverse direction of the specimen. At least three specimens are printed and tested for each microstructure. **Phase-field modeling:** Phase-field modeling is employed due to its established capabilities in simulating intricate crack evolution phenomena including curvilinear crack paths and crack branching [27, 28, 29]. Among various versions of phase-field modeling, we adopt a hybrid formulation-based phase modeling that can be applicable for combined shear and tensile loading [30], which enables modeling crack propagation for composites with complex microstructures. The phase-field modeling is conducted using ABAQUS with user-defined element (UEL) subroutine. The model consisted of three layers including the phase-field layer, displacement layer, and visualization layer sharing the identical nodes. Approximately 150,000 quadrilateral plane stress elements are used for each layer of specimens. The y-directional displacement of the lower surface is fixed, and the displacement control is applied to the upper surface.
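As a small illustration of how the quantities compared above are typically extracted from the measured curves, the sketch below computes stiffness (initial slope), strength (peak stress), and toughness (area under the stress-strain curve) from a recorded stress-strain array. It is a generic post-processing example under the stated assumptions, not the authors' analysis script; in particular, the linear-elastic fitting window is an arbitrary illustrative choice.

```python
import numpy as np

def stress_strain_metrics(strain, stress, elastic_window=0.005):
    """Extract stiffness, strength, and toughness from a tensile stress-strain curve.

    strain: 1D array (dimensionless); stress: 1D array (MPa).
    elastic_window: strain range assumed linear-elastic for the stiffness fit (illustrative value).
    """
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    mask = strain <= elastic_window
    stiffness = np.polyfit(strain[mask], stress[mask], 1)[0]   # slope of the elastic region (MPa)
    strength = stress.max()                                    # peak stress (MPa)
    toughness = np.trapz(stress, strain)                       # area under the curve (MJ/m^3 for MPa vs. strain)
    return stiffness, strength, toughness
```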
2303.00084
Spin evolution of Venus-like planets subjected to gravitational and thermal tides
The arrival of powerful instruments will provide valuable data for the characterization of rocky exoplanets. It is then crucial to accurately model the dynamical state of exoplanets. Rocky planets with sufficiently large orbits should have non-zero eccentricities and/or obliquities. Realistic models of tides for rocky planets can allow for higher spin states than the synchronization state in the presence of eccentricities or obliquities. This work explores the secular evolution of a star-planet system under tidal interactions, both gravitational and thermal, induced respectively by the quadrupolar component of the gravitational potential and the irradiation of the planet's surface. We use the formalism of Kaula associated with an Andrade rheology to model a relevant response of a rocky planet to gravitational tides and a prescription of thermal tides fitted for Venus to model the response of the atmosphere to the thermal tides. We implemented the general secular evolution equations of tidal interactions in the secular code ESPEM (French acronym for Evolution of Planetary System and Magnetism). We show the possible spin-orbit evolution and resonances for eccentric orbits and explore the possible spin orbit resonances raised by the obliquity of the planet. Our simulations have shown that the secular evolution of the spin and obliquity can lead to the retrograde spin of the Venus-like planet if the system starts from a high spin obliquity, in agreement to previous studies. Taking into account the luminosity evolution of the Sun changes the picture. We find that the planet never reaches the equilibrium: the timescale of rotation evolution is longer than the luminosity variation timescale, which suggests that Venus may never reach a spin equilibrium state but may still evolve.
Alexandre Revol, Émeline Bolmont, Gabriel Tobie, Caroline Dumoulin, Yann Musseau, Stéphane Mathis, Antoine Strugarek, Allan-Sacha Brun
2023-02-28T21:02:21Z
http://arxiv.org/abs/2303.00084v2
# Spin evolution of Venus-like planets subjected ###### Abstract Context:The arrival of powerful instruments will provide valuable data for the characterization of rocky exoplanets. Rocky planets are mostly found in close-in orbits. They are therefore usually close to the circular-coplanar orbital state and are thus considered to be in a tidally locked synchronous spin state. For planets with larger orbits, however, exoplanets should still have nonzero eccentricities and/or obliquities, and realistic models of tides for rocky planets can allow for higher spin states than the synchronization state in the presence of eccentricities or obliquities. Aims:This work explores the secular evolution of a star-planet system under tidal interactions, both gravitational and thermal, induced by the quadrupolar component of the gravitational potential and the irradiation of the planetary surface, respectively. We show the possible spin-orbit evolution and resonances for eccentric orbits and explore the possibility of spin-orbit resonances raised by the obliquity of the planet. Then, we focus on the additional effect of a thick atmosphere on the possible resulting spin equilibrium states and explore the effect of the evolution of the stellar luminosity. Methods:We implemented the general secular evolution equations of tidal interactions in the secular code called ESPEM. In particular, we focus here on the tides raised by a star on a rocky planet and consider the effect of the presence of an atmosphere, neglecting the contribution of the stellar tide. The solid part of the tides was modeled with an anelastic rheology (Andrade model), while the atmospheric tides were modeled with an analytical formulation that was fit using a global climate model simulation.We focused on a Sun-Venus-like system in terms of stellar parameters, orbital configuration and planet size and mass. The Sun-Venus system is a good laboratory for studying and comparing the possible effect of atmospheric tides, and thus to explore the possible spin state of potential Venus-like exoplanets. Results:The formalism of Kaula associated with an Andrade rheology allows spin orbit resonances on pure rocky worlds. Similarly to the high-order spin-orbit resonances induced by eccentricity, the spin obliquity allows the excitation of high-frequency Fourier modes that allow some spin-orbit resonances to be stable. If the planet has a dense atmosphere, like that of Venus, another mechanism, the thermal tides, can counterbalance the effect of the gravitational tides. We found that thermal tides change the evolution of the spin of the planet, including the capture in spin-orbit resonances. If the spin inclination is high enough, thermal tides can drive the spin toward an anti-synchronization state, that is, a the l:1 spin-orbit resonance with an obliquity of 180 degrees. Conclusions:Through our improvement of the gravitational and thermal tidal models, we can determine the dynamical state of exoplanets better, especially if they hold a thick atmosphere. In particular, the contribution of the atmospheric tides allows us to reproduce the spin state of Venus at a constant stellar luminosity. Our simulations have shown that the secular evolution of the spin and obliquity can lead to a retrograde spin of the Venus-like planet if the system starts from a high spin obliquity, in agreement with previous studies. The perturbing effect of a third body is still needed to determine the current state of Venus starting from a low initial obliquity. 
When the luminosity evolution of the Sun is taken into account, the picture changes. We find that the planet never reaches equilibrium: the timescale of the rotation evolution is longer than the luminosity variation timescale, which suggests that Venus may never reach a spin equilibrium state, but may still evolve. ## 1 Introduction The five thousand exoplanets discovered so far1 have revealed a great diversity of worlds. As the number of discoveries continues to grow, an accurate modeling of exoplanets becomes increasingly important. In the context of the arrival of new powerful instruments such as the James Webb Space Telescope (i.e. JWST; Greene et al.2016) and the Atmospheric Remote-sensing Infrared Exoplanet Large-survey mission (i.e. ARIEL; Tinetti et al.2021; Edwards and Tinetti 2022) in the characterization of rocky planets, we need to describe the dynamical state of rocky exoplanets with more realistic models by taking their internal structure and their potential atmosphere into account. A large number of the rocky planets discovered so far are in very close-in orbits, and are therefore usually considered to be in a circulant and coplanar orbit and with a rotation that is synchronized with their mean motion, showing a permanent dayside. For planets with larger orbits, however, the rotational state and orbital elements (i.e., the semi-major axis, eccentricity, orbital inclination, etc) evolve on a much longer timescale and are expected to have nonzero eccentricities and/or obliquities. Then, eccentricity or obliquity can trap the spin in a higher rotation state, that is in spin-orbit resonances (hereafter SORs), such as a 3:2 SOR, 2:1 SOR, or higher (e.g., Makarov & Efroimsky, 2013; Makarov et al., 2018). If a planet has an atmosphere, another tidal mechanism must be taken into account: the atmospheric thermal tides. These are caused by the differential heating between day- and nightsides (Gold & Soter, 1969; Chapman & Lindzen, 1970; Dobrovolskis & Ingersoll, 1980; Ingersoll & Dobrovolskis, 1978; Correia & Laskar, 2001; Auclair-Desrotour et al., 2017). This mechanism is a possible explanation for the current state of Venus as the thermal tides can both desynchronize the planet and increase its obliquity (e.g., Correia & Laskar, 2001, 2003). The rotation rate and the obliquity affect the climate of the planet by influencing the heat distribution. For example, spin rates faster than the synchronization can help prevent atmospheres from collapsing (e.g., Wordsworth, 2015) and change the fate of a potential surface water ocean (i.e., complete vaporisation or not; Turbet et al., 2016) through a more effective heat redistribution in the atmosphere. We therefore need a complete dynamical framework with relevant tidal models to determine the rotation states of exoplanets as accurately as possible in the context of future data. In this article, we use the particular case of Venus to present our recent implementation of planetary tides (Boue & Efroimsky, 2019) in a secular code called ESPEM (French acronym for Evolution of Planetary System and Magnetism, Benbakoura et al., 2019; Ahuir et al., 2021). Here we study the case of a Venus-like planet around a Sun-like star. The rotation of Venus is thought to be an equilibrium between the gravitational bodily tides and thermal atmospheric tides (Correia & Laskar, 2001; Correia & Laskar, 2003; Correia et al., 2003). It is therefore a good laboratory for studying these mechanisms. 
Some unresolved issues still remain, however, such as whether the rotation of Venus is currently in equilibrium, and how it reached its current rotational state. The competition between the tides, gravitational and thermal, strongly depends on the internal state, but in the case of Venus, little is known about its internal structure. This will change with the next incoming mission to Venus, however, as EnVision (Widemann et al., 2020), DAVINCI (Garvin et al., 2022), and VERITAS (Smrekar et al., 2020) will bring valuable data about the internal state of Venus and the thermal atmospheric response of the planet (Bills et al., 2020). To study the spin evolution of Venus-like planets, and in particular, the capture in SORs, it is necessary to describe the internal structure of the planet and atmosphere well. In particular, Walterova & Behounkova (2020) showed that the internal structure also affects the SOR available by the planet. To ensure a good description in this work, we therefore computed the gravitational bodily tides using the formalism of Kaula (1964) along with the Andrade rheology (Andrade, 1910), using rheological parameters constrained from laboratory experiments on olivine (Castillo-Rogez et al., 2011). We computed the thermal tides using the analytical model of Leconte et al. (2015) adapted from the prescription developed by Dobrovolskis & Ingersoll (1980) to reproduce the current state of Venus. We also investigated the effect of the luminosity variation of the star on the equilibrium state between the gravitational and thermal tides. In Section 2 we introduce the tidal model we used for the solid and thermal tides and the implementation in the ESPEM code. In Section 3 we discuss the evolution of the spin of a Venus-like planet when we only consider the influence of the solid tide. In particular, we discuss the well-known eccentricity-driven SORs and the less well-known inclination driven SORs. In Section 4 we discuss the evolution of the planet taking the thermal tides for the constant and evolving stellar luminosity into account. Finally, we discuss our findings and conclude in Section 5. ## 2 New model of planetary tides in ESPEM We consider the equilibrium solid tides, which correspond to the mass redistribution of a body (i.e., the planet) under the influence of the gravitational perturbation of a massive (or close-in) orbiting body (i.e. the star). As the planet rotates, the solid bulge will be ahead from the position of the star (as illustrated in red in the Fig. 1c) if the spin of the planet is higher than the mean motion. Then, we consider the so-called thermal tides. These correspond to the mass redistribution of an atmosphere due to stellar heating. In the same manner in which the gradient of the gravitational potential causes the mass redistribution of the body, the thermal tides are raised by the differential heating through the atmosphere (Gold & Soter, 1969; Dobrovolskis & Ingersoll, 1980; Correia & Laskar, 2001). The differential temperature between the day- and nightside causes a pressure gradient and therefore a mass redistribution of the atmosphere. This pressure gradient continuously redistributes the atmospheric particles from the high-temperature side (dayside) to the low-temperature side (nightside). As Fig. 1a shows, the direction of the bulge that forms is parallel to the direction of the heating source (i.e., the star). If the planetary spin is higher than the mean motion, as shown in Fig. 
1b, the geometry of the deformation places the atmospheric bulge behind the position of the star, by analogy with the solid deformation, while the solid bulge is ahead of the position of the star. The delayed response of the atmosphere caused by its radiative damping affects the dynamics of the planet through viscous coupling at the surface. Fig. 1c shows the combination of the gravitational and thermal tides on the planet. The two tides compete until an equilibrium is found. In the following, we introduce in Section 2.1 the formalism we used (Kaula) and in Section 2.2 the complex Love number, which allows us to express the tidal potential of the deformed planet in terms of Fourier series. In Section 2.3 we extend the notion of the potential Love number to a thermal Love number and detail the way in which we account for the influence of the thermal tides. In Section 2.4 we design a homogeneous interior model for the planet that counterbalances the thermal tides for the excitation of Venus. In Section 2.5 we discuss the corresponding orbital and rotational equations to finally describe the implementation in the ESPEM code (Section 2.6). ### Kaula formalism In order to compute the tidal response of a body to a tidal perturbation, we need to use a formalism that is general enough to encapsulate the frequency-dependent response of a body. This response either requires a decomposition of the tidal potential created by the perturber (hereafter perturbing potential) into Fourier harmonic modes as developed by Kaula (1964), or a time-domain approach as proposed by Correia et al. (2014) and Gevorgyan et al. (2020). Both models allow a study of more complex and realistic rocky and icy bodies (Efroimsky & Makarov, 2013; Bolmont et al., 2020). We used the formalism developed by Darwin (1879) and adapted by Kaula (1964), hereafter the Darwin-Kaula formalism. The Darwin-Kaula theory of bodily tides provides the expression of the perturbing potential of a disturbed body in Fourier series as \[U=\sum_{l=2}^{\infty}\sum_{m=0}^{l}\sum_{p=0}^{l}\sum_{q=-\infty}^{+\infty}U_{lmpq}(a,e,i,\sigma_{lmpq}), \tag{1}\] with \(a\), \(e\), and \(i\) the semi-major axis, the eccentricity, and the inclination, respectively. The indices \(l,m,p,\) and \(q\) label the harmonic modes, where \(l,m\) are the orders associated with the associated Legendre polynomials, and \(p,q\) are the orders of the Darwin-Kaula Fourier development. Each harmonic mode corresponds to an excitation frequency \(\sigma_{lmpq}\), that is, the frequency with which the perturbing potential will affect the deformed body, defined as \(\sigma_{lmpq}=(l-2p+q)n-m\Omega\) (with \(\Omega\) and \(n\) the spin rate and the mean motion, respectively). Fig 2 shows the contribution of three different modes \((l,m,p,q)\): the \((2,2,0,0)\), \((2,2,1,1)\), and \((2,2,0,2)\) modes, which correspond to the frequencies \(2(n-\Omega)\), \(n-2\Omega\), and \(4n-2\Omega\), respectively. The \((l,m,p,q)=(2,2,0,0)\) mode corresponds to the circular coplanar case (i.e., the semi-diurnal frequency). The \((2,2,1,1)\) and \((2,2,0,2)\) modes are two of the frequencies that are excited when the eccentricity is nonzero. This formulation is general and fundamental enough to be valid for an arbitrary rheology, and can also be used in the context of the thermal tides (see Sec. 2.3). When the perturber is far enough away, we can keep the development of the gravitational potential at the quadrupolar order only, \(l=2\) (Makarov & Efroimsky 2013; Mathis & Le Poncin-Lafitte 2009).
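As a quick numerical illustration of the quadrupolar modes discussed above, the following sketch evaluates \(\sigma_{2mpq}=(2-2p+q)n-m\Omega\) for the \((m,p,q)\) combinations of the \(l=2\) expansion. It is an illustrative helper, not part of ESPEM, and the sample spin value is arbitrary.

```python
import numpy as np

def kaula_frequencies(n, omega, q_max=7):
    """Excitation frequencies sigma_{2mpq} = (2 - 2p + q) n - m Omega for l = 2.

    n:     mean motion (rad/s); omega: spin rate (rad/s)
    q_max: truncation of the eccentricity expansion (the text truncates at q = 7)
    """
    modes = {}
    for m in range(0, 3):            # m = 0..l
        for p in range(0, 3):        # p = 0..l
            for q in range(-q_max, q_max + 1):
                modes[(2, m, p, q)] = (2 - 2 * p + q) * n - m * omega
    return modes

# Example: a planet spinning 1.5 times faster than its mean motion (illustrative numbers)
n = 2 * np.pi / (224.7 * 86400.0)    # Venus-like orbital mean motion (rad/s)
freqs = kaula_frequencies(n, 1.5 * n)
print(freqs[(2, 2, 0, 0)], 2 * (n - 1.5 * n))  # the semi-diurnal mode equals 2(n - Omega)
```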
We restricted the eccentricity expansion numerically up to the order 7, which corresponds to the index 7 in the summation over \(q\) and eccentricities up to 0.3. ### Solid Love number The response of a planet to the tidal disturbance is quantified using the tidal Love number \(k_{2}\)(Love 1909). The Love number links the perturbing potential and the additional potential created by the deformed planet in response to the perturbing potential. As we used the quadrupolar component of the tidal potential, we can write each mode as \[U_{2mpq}^{\text{tidal bulge}}(\sigma_{2mpq})=\bar{k_{2}}(\sigma_{2mpq})\ U_{2mpq}^{\text{tidal perturbation}}(\sigma_{2mpq}), \tag{2}\] where \(U_{2mpq}^{\text{tidal bulge}}(\sigma_{2mpq})\) corresponds to the quadrupolar component of the potential Eq. 1 (\(l=2\)), and \(\bar{k_{2}}(\sigma_{2mpq})\) (hereafter \(\bar{k_{2}}\)) is related to the amplitude of the complex quadrupolar Love number (Efroimsky 2012a) at the quadrupolar mode \(l=2\) with a frequency dependence with \(\sigma_{2mpq}\). Physically, the quadrupolar Love number quantifies the response of a body submitted to a periodical external perturbation of frequency \(\sigma_{2mpq}\). Here the periodical perturbation corresponds to the tidal potential as described by Eq.1. It can be written as a complex number, where the real part represents the pure elastic behavior and the imaginary part is the viscous behavior, written as \[\bar{k_{2}}=\Re\ (\bar{k_{2}})+i\ \Im\ (\bar{k_{2}})\ =|\bar{k_{2}}|\exp(-i \epsilon_{2}). \tag{3}\] Thus, we can link the phase of the exponent \(\epsilon_{2}\) with the angle between the tidal bulge and the position of the perturber with \(\delta=\epsilon_{2}/2\)(Remus et al. 2012). As shown in Fig 2, each phase is associated with an excitation mode \((2mpq)\). This complex Love number can be computed for any density, shear modulus and viscosity profiles by integrating the equations of motions and Poisson's equation relating the displacement, stress, strain and induced potential in the frequency domain assuming a compressible Andrade rheology following the method described in Dumoulin et al. (2017) and Tobie et al. (2019). For a homogeneous solid body, the Love number can be determined from analytical solutions following Efroimsky (2012b). Figure 1: Tidal elongations of a rotating planet, composed of a solid core and a gaseous atmosphere, submitted to both gravitational and thermal forcing. Figures inspired from Correia & Laskar (2003). Figure 2: Schematic representation of the contribution of the tidal modes \(l,m,p,q=(2200),(2211),\) and (2202) in the Kaula formalism. Each bulge represents the tidal deformation under a component of the tidal potential \(U_{Impq}\) of the perturber (point mass on the right.) The tidal torque directly depends on the imaginary part of the Love number. 
In the circular coplanar case, the tidal torque applied on the planet is expressed as (Kaula, 1964; Goldreich, 1966; Murray and Dermott, 1999) \[T^{\rm grav}=\frac{3}{2}\frac{\mathcal{G}M_{\star}R_{p}^{5}}{a^{6}}\mathfrak{I}( \bar{k}_{2}), \tag{4}\] with \(\mathcal{G}\) the gravitational constant, \(M_{\star}\) the stellar mass, \(R_{p}\) the planetary radius, \(a\) the semi-major axis and \(\mathfrak{I}(\bar{k}_{2})\) the imaginary part of the Love number, which can be linked with the well-known dissipation factor \(Q_{2}\) and the Love number modulus \(k_{2}\) with \(\mathfrak{I}(\bar{k}_{2}(\sigma))=-k_{2}(\sigma)/Q_{2}(\sigma)\mathrm{Sign}(\sigma)\)(Goldreich and Soter, 1966; Ogilvie, 2014; Bagheri et al., 2022). Then, we need to model an appropriate Love number \(\bar{k}_{2}\) for rocky bodies in order to compute the secular tidal effects. Most models assume that planets are made of weakly viscous fluid (e.g., Hut, 1981; Goldreich, 1966) even for rocky planets. However, it has been shown that they do not reproduce the correct behavior for highly viscous solid bodies, such as the evolution of their rotation (Henning et al., 2009; Efroimsky and Makarov, 2013). We used a more realistic rheological response, the Andrade rheology (Andrade, 1910), to better reproduce the behavior of a rocky body under periodical forcing (Castillo-Rogez et al., 2011). The Andrade rheology is anelastic model built as a combination of dashpots and springs. It is composed with two first components in series, a dashpot and a spring which model the pure viscous damping and the pure elastic rigidity respectively, which correspond to the so-called Maxwell rheology (e.g., Correia et al., 2014). The Maxwell components are linked in series with an infinite number of springs and dashpots in parallel which correspond to the hereditary Andrade property, which retains some aspect of material memory (see Fig 3 and Efroimsky, 2012a for details). This model successfully reproduces a broad range of laboratory measurement of solid behavior under stress and strain, including silicate minerals, metals, and ics (Andrade, 1910, 1914; McCarthy and Castillo-Rogez, 2013). The rheological profile used in this study was computed with a multilayer model following the method published by Tobie et al. (2005, 2019) and Bolmont et al. (2020). The Love number can be computed with the method of Bolmont et al. (2020). Following Efroimsky (2012b), the complex tidal Love number \(\bar{k}_{2}^{\rm grav}\) is given by \[\bar{k}_{2}^{\rm grav}=\frac{3}{2}\frac{1}{1+A_{2}J/\bar{J}}\, \tag{5}\] with \(J=1/\mu\) the unrelaxed compliance (with \(\mu\) the unrelaxed elastic shear modulus in Pa) and \(A_{2}\) defined by \[A_{2}=\frac{57J^{-1}}{8\pi\mathcal{G}\rho^{2}R_{p}^{2}}\, \tag{6}\] with \(\rho\) the density. \(\bar{J}\) is the complex compliance of the material and defined in the formalism of the Andrade rheology with (Castillo-Rogez et al., 2011) \[\bar{J}=J+\beta(i\sigma)^{-\alpha}\Gamma(1+\alpha)-\frac{i}{\eta\sigma}\, \tag{7}\] with \(\beta\) a factor that describes the intensity of anelastic friction in the material, \(\Gamma\) the Gamma function, \(\eta\) the shear viscosity, \(\sigma\) the excitation frequency, and \(\alpha\) an experimentally fit parameter that which represents the frequency dependence of the transient response. A value of \(\alpha\) in the range of 0.23-0.28 allows us to reproduce the dissipation factor and \(k_{2}\) for the Earth at different frequencies (Tobie et al., 2019). 
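For the homogeneous case, Eqs. (5)-(7) can be evaluated directly. The sketch below computes the complex \(\bar{k}_2\) of a homogeneous Andrade body. It is an illustrative implementation, not the ESPEM routine: in particular, \(\beta\) is expressed through an Andrade timescale \(\tau_A=(\tau_A/\tau_M)\,\eta J\), which is an assumed convention, and the default numerical values combine the homogeneous-profile parameters quoted later in the text (\(\log\mu=10.02\), \(\log\eta=22.18\), \(\alpha=0.25\), \(\tau_A/\tau_M=0.89\)) with approximate Venus bulk density and radius.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

G = 6.674e-11  # gravitational constant (SI)

def k2_andrade(sigma, mu=10**10.02, eta=10**22.18, alpha=0.25,
               tau_ratio=0.89, rho=5243.0, radius=6.0518e6):
    """Complex quadrupolar Love number of a homogeneous Andrade body (Eqs. 5-7).

    sigma: excitation frequency (rad/s, > 0); mu: shear modulus (Pa); eta: viscosity (Pa s);
    alpha: Andrade exponent; tau_ratio: tau_A / tau_M; rho, radius: bulk density and radius (SI).
    """
    J = 1.0 / mu                                    # unrelaxed compliance
    tau_M = eta * J                                 # Maxwell time
    beta = J * (tau_ratio * tau_M) ** (-alpha)      # assumed: beta = J * tau_A^(-alpha)
    J_bar = J + beta * (1j * sigma) ** (-alpha) * gamma_fn(1 + alpha) - 1j / (eta * sigma)  # Eq. (7)
    A2 = 57.0 / (J * 8.0 * np.pi * G * rho**2 * radius**2)   # Eq. (6): 57 J^-1 / (8 pi G rho^2 R^2)
    return 1.5 / (1.0 + A2 * J / J_bar)             # Eq. (5)

# Example: imaginary part near the present semi-diurnal tidal frequency of Venus (illustrative)
print(k2_andrade(1.25e-6).imag)
```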
We studied the case of a Venus-like planet with different end-member temperature profiles.Little is known about the interior structure of Venus. This will likely improve with the upcoming ES EnVision mission (Widemann et al., 2020) and the NASA DaVinci (Garvin et al., 2022) and VERITAS (Smrekar et al., 2020) missions. In the meantime, we considered four possible structures. We used one multilayer profile (referred to as the reference profile) with Earth-like viscosity values as a reference, two other profiles with viscosity values divided by 10 or multiplied by 100 relative to the reference profile and one homogeneous profile (see Sec. 2.4). The multilayer structures can be considered as end members of what we think could be the real interior of Venus (e.g., Bolmont et al., 2020). The Love numbers associated with the homogeneous profile where computed following the formula described in Bolmont et al. (2020) and Efroimsky (2012b) (see section 2.4). For the multilayer reference structure, we derived the radial density and seismic velocities in the mantle of the planet by using the Perple_X code2(Connolly, 2005), which uses a temperature profile from Armann and Tackley (2012) together with the shear modulus profile from the compositional model V1 of Dumoulin et al. (2017). The viscosity was computed as a function of the temperature and pressure profiles as (Dumoulin et al., 2017) Footnote 2: [http://www.perplex.ethz.ch](http://www.perplex.ethz.ch) \[\eta=\frac{1}{2}A_{0}^{-1}d^{2.5}\mathrm{exp}\Big{(}\frac{E_{a}+PV_{a}}{RT} \Big{)}, \tag{8}\] with \(E_{a}\) and \(V_{a}\) the activation and volume energy, respectively, and \(A_{0}\) the pre-exponential factor, which are parameters from the Arrhenius equation and depend on the material and \(d\) the grain size. The parameters of the dry olivine considered in the upper mantle are \(E_{a}=300\) kJ mol\({}^{-1}\), \(A_{0}=6.08\times 10^{-19}\) Pa\({}^{-1}\) s\({}^{-1}\), with a grain size \(d=0.68\) mm. Figure 4 shows the internal profiles of the shear modulus \(\mu\) and the viscosity \(\eta\) for the homogeneous model (see Sec 2.4), the reference model (hereafter Vref) from Armann and Tackley (2012), and two other models with viscosity profiles obtained by multiplying the viscous reference structure by 0.1 or 100 (denoted V0.1 and V100, respectively). Figure 3: Schematic representation of the Andrade anelastic model used in this study (adapted from Renaud and Henning, 2018). The two first components in series represent a spring and a dashpot. The elements in parallel represent an infinite number of springs and dashpots. The metallic core structure was computed using PREM scaled to the Venusian pressure conditions (Dumoulin et al. 2017). The imaginary part of the Love numbers associated with these profiles were computed following Dumoulin et al. (2017)and Bolmont et al. (2020) and are represented in Fig. 5. A less viscous profile (dash-dotted line Fig. 4) that might correspond to a hotter mantle, will be more dissipative than a more viscous profile (dotted line Fig. 4) for frequencies higher than \(10^{-11}\)s\({}^{-1}\) (see Fig. 4). ### Thermal Love number We considered only the feedback of the tidal bulge of the atmosphere deformed by the pressure gradient at the surface, considering that the atmosphere is perfectly coupled with the surface by viscous friction (e.g., Leconte et al. 2015; Auclair-Desrotour et al. 2019). 
Thus, we neglected other feedbacks, such as the effect of the pressure gradient on the shape of the solid crust and the gravitational anomaly of the atmosphere (see Correia & Laskar 2003 for details). Because the mass redistribution of the atmosphere comes from the surface pressure anomaly, the imaginary part of the complex moment of the surface pressure field \(\Im(\delta p_{s}^{2})\) can be used as a prescription for the imaginary part of the thermal Love number (Leconte et al. 2015; Auclair-Desrotour et al. 2017). The complex moment of the surface pressure field \(\delta p_{s}^{2}\) describes the amplitude of the thermal tides, and therefore \(\Im(\delta p_{s}^{2})\) can be used to describe the dissipation. We can then relate \(\Im(\delta p_{s}^{2})\) to an imaginary thermal Love number \(\Im(k_{2}^{\text{thermal}})\). The relation between these two quantities is discussed below. To calculate the complex moment of the surface pressure field \(\delta p_{s}^{2}\), we used the work of Leconte et al. (2015), who assumed a Maxwell-like frequency dependence (Ingersoll & Dobrovolskis 1978; Gold & Soter 1969; Auclair-Desrotour et al. 2017). More realistic frequency dependences have been proposed by Auclair-Desrotour et al. (2019) as a generic formulation and a scaling law, adapted for \(N_{2}\) atmospheres with different surface pressures. These models will be studied in future developments. Leconte et al. (2015) used a 3D climate model (e.g., Leconte et al. 2013; Forget et al. 2013; Leconte et al. 2013) specifically tuned for the case of Venus to reproduce the amplitude of the thermal tides on Venus today and used this point to fit an analytical Maxwell-like solution.

Figure 4: Shear modulus and viscosity profiles for the multilayer reference model Vref and the homogeneous model considered here, shown as solid red and black lines, respectively. The dotted and dash-dotted red lines represent the two profiles V0.1 and V100 derived from the multilayer reference model with a viscosity multiplied by 0.1 and 100, respectively. The viscosity \(\eta\) is computed as in Dumoulin et al. (2017) using Eq. 8 of this work.

Figure 5: Imaginary part of the gravitational Love number \(\Im(k_{2}^{\text{grav}})\) as a function of the excitation frequency of a Venus-like planet for different viscosity profiles. The multilayer reference profile Vref derived from Armann & Tackley (2012) is shown as the solid red line. The dotted and dash-dotted red lines represent the V0.1 and V100 profiles derived from the multilayer reference one presented in Fig 4. In blue we present the imaginary Love number \(\Im(k_{2}^{\text{thermal}})\) as a function of the excitation frequency computed with Eq. 12 (in absolute values), associated with the amplitude of the thermal tides presented in Fig. 6. The green curve represents the homogeneous profile described in Sec 2.4. The vertical dotted black line represents the absolute value of the current frequency state of Venus.

The analytical formulation of the modulus of the complex moment of the pressure field is expressed as \[\delta p_{s}^{2}=-\frac{q_{0}}{1+i\frac{\sigma}{2\omega_{0}}}, \tag{9}\] such that the imaginary part can be written as
The radiative frequency can be identified with the inverse of the thermal equilibrium timescale. The parameters fit on the GCM simulation of Venus are: \(q_{0}=201\) Pa and \(\omega_{0}=3.77\times 10^{-7}\) s\({}^{-1}\)(Lecomte et al., 2015). Figure 6 shows the amplitude of the pressure bulge \(|\delta p_{s}^{2}|\) as a function of the normalized forcing frequency. The solid curve shows the analytical solution fitted from the values of the amplitude and the phase lag computed with the GCM simulation of Venus of Leconte et al. (2015). As it was not possible to run the specific GCM of Venus for different rotation states, it was not possible to constrain the Maxwell fit better. The thermal Love number \(\Im(k_{2}^{\rm thermal})\) can be determined from the complex moment of the surface pressure field \(\Im(\delta p_{s}^{2})\) by identification between the thermal and the gravitational torque. The expression of the thermal torque raised by the mass redistribution of the atmosphere is (e.g., Goldreich & Soter, 1966; Correia & Laskar, 2001, 2003; Leconte et al., 2015; Auclair-Desrotour et al., 2017) \[T^{\rm thermal}=\sqrt{\frac{24\pi}{5}}\frac{M_{\star}}{M_{p}}\frac{R_{p}^{6}} {a^{3}}\Im(\delta p_{s}^{2})\,, \tag{11}\] with \(M_{\star}\) the stellar mass, \(M_{p}\) and \(R_{p}\) the planetary mass and radius respectively, and \(a\) the semi-major axis. The thermal equivalent Love number \(\Im(k_{2})\) can be written by identification with the expression of the solid torque (Eq. 4) as \[\Im(k_{2}^{\rm thermal})=-\sqrt{\frac{32\pi}{15}}\frac{a^{3}R_{p}}{\delta M_ {\star}M_{p}}\Im(\delta p_{s}^{2})\,, \tag{12}\] We note that in contrast to \(\Im(k_{2}^{\rm grav})\), the thermal Love number \(\Im(k_{2}^{\rm thermal})\) depends on the semi-major axis of the planet as the intrinsic response of the atmosphere depends on the flux received by the planet. Then, the two torques, gravitational and thermal, do not have the same dependence as to the semi-major axis, which allows an equilibrium point at which the two tides compensate for each other. The dependence of the mass of the atmosphere is contained in the surface pressure term. We highlight that a more massive atmosphere does not necessarily lead to stronger atmospheric tides. For a more massive atmosphere, the atmospheric layers are more opaque to the stellar flux. Thus, as less stellar flux reaches the surface, the thermal tides are damped for a more massive atmosphere. This effect strongly depends on the composition of the atmosphere and requires a better model to be taken into account. A more massive atmosphere is not investigated in this study. The imaginary Love number \(\Im(k_{2}^{\rm thermal})\) as a function of the excitation frequency for the fitted analytical model (Eq. 12) is plotted in blue in Fig. 5 in absolute values. In the following, we study the presence equilibrium points as a function of tidal frequency for different internal profiles. ### Equilibrium state between gravitational and thermal tides An equilibrium state between the gravitational and thermal tides can be determined by comparing their imaginary Love numbers. The two tides compensate for each other when the addition of the two imaginary Love numbers is 0 (when the two absolute values are equal; see Fig. 5). Figure 5 shows that the \(\Im(k_{2}^{\rm grav})\) corresponding to the multilayer reference profile (solid red curve) is always higher in amplitude than the \(\Im(k_{2}^{\rm thermal})\) (solid blue curve). 
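Before turning to the spin-derivative comparison, here is a small numerical sketch of Eqs. (10) and (12) for the thermal response. It is illustrative only: it reads the symbol in front of \(M_{\star}\) in Eq. (12) as the gravitational constant \(\mathcal{G}\), by identification with the torque in Eq. (4), and it uses the fitted values \(q_0=201\) Pa and \(\omega_0=3.77\times10^{-7}\) s\(^{-1}\) together with approximate Sun-Venus parameters.

```python
import numpy as np

G = 6.674e-11   # gravitational constant (SI)
Q0 = 201.0      # Pa, quadrupole pressure amplitude at zero frequency (fit of Leconte et al. 2015)
W0 = 3.77e-7    # 1/s, radiative frequency (same fit)

def im_dp_surface(sigma):
    """Imaginary part of the surface pressure quadrupole, Eq. (10)."""
    x = sigma / (2.0 * W0)
    return Q0 * x / (1.0 + x**2)

def im_k2_thermal(sigma, a=1.082e11, r_p=6.0518e6, m_star=1.989e30, m_p=4.867e24):
    """Equivalent thermal Love number, Eq. (12), for Sun-Venus-like parameters (SI units).

    Assumption: the prefactor uses G * M_star * M_p in the denominator,
    consistent with the gravitational torque written in Eq. (4).
    """
    return -np.sqrt(32.0 * np.pi / 15.0) * a**3 * r_p / (G * m_star * m_p) * im_dp_surface(sigma)

# Example: thermal response near the present semi-diurnal frequency of Venus (~1.25e-6 rad/s)
print(im_k2_thermal(1.25e-6))
```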
Figure 7 shows the spin derivative for the multilayer reference profile, the V0.1 and V100 profiles, and the homogeneous fit profile. Comparing the cases without and with an atmosphere (in red and blue, respectively), we find that the solid tides corresponding to the multilayer reference profile are strong enough to compensate for the thermal tides at any frequency. On the one hand, the V0.1 profile is also sufficiently dissipative to compensate for the thermal tides, as it corresponds to a more dissipative interior and thus stronger solid tides. Thus, the spin derivatives associated with the reference profile and the less viscous one show no equilibrium point, and the system will therefore evolve toward the 1:1 SOR. On the other hand, the V100 profile leads to weaker solid tides. In this case, the spin derivative in Fig. 7 shows that the thermal tides are sufficient to compensate for the gravitational tides, except close to the synchronization, where the solid tides are still strong enough to make the 1:1 SOR stable. This profile, which might correspond to a colder mantle, is not dissipative enough to compensate for the thermal tides close to the current state of Venus. This is shown in Fig. 5, where \(\Im(k_{2}^{\rm thermal})\) (solid blue curve) dominates the gravitational Love number \(\Im(k_{2}^{\rm grav})\) associated with the V100 profile (dashed red curve) over a broad range of frequencies (from \(10^{-7}\) s\({}^{-1}\) to \(2\times 10^{-5}\) s\({}^{-1}\)). Thus, the two intersection points correspond to possible equilibrium states, at which the two tides compensate for each other. The equilibrium points shown in the top panel of Fig 7 (empty blue circles) correspond to the intersection point at low frequency (about \(10^{-7}\) s\({}^{-1}\) in Fig 5). Considering the slope of the derivative, however, this point is not stable.

Figure 6: Amplitude of the pressure bulge \(|\delta p_{s}^{2}|\) as a function of the normalized forcing frequency \((\Omega-n)/\Omega\). The solid line represents the analytical solution fit for the point of Venus (red dot) computed with Venus GCM simulations (see Leconte et al., 2015). The red bars on the Venus point are not strictly error bars. They represent the dispersion of the pressure bulge at the surface.

The second point, at high frequency (about \(4.5\times 10^{-5}\) s\({}^{-1}\) in Fig 5), corresponds to a fast rotation of about three days. This last point is not investigated further because, on the one hand, this state is far from the current state of Venus, and on the other hand, it belongs to the high-frequency regime. In our case, the Maxwell-like frequency-dependent model of thermal tides overestimates the strength of the thermal tides at high frequencies, as the model was fit for the low-frequency regime (Auclair-Desrotour et al., 2019). We therefore consider our approach to be valid in the low-frequency range, that is, for \(|\Omega/n|<15\). As this equilibrium point is very far from the present-day rotation of Venus, it would require a more complex model that is beyond the scope of this study, and we did not investigate it further. A better model, such as the parameterized model proposed by Auclair-Desrotour et al. (2019), will be studied in the future. None of the profiles reproduce the balance between the two contributions, gravitational and thermal, close to the frequency of Venus. Using the method of Bolmont et al. (2020) described in Sec 2.1 (Eq.
5 to 7), we fit a homogeneous profile (in density, viscosity, and rigidity) that reproduces an equilibrium point at the Venus frequency. As the profile we tried to construct is relatively close to the multilayer profile, we used the parameters given in Table 2 of Bolmont et al. (2020) for this profile of Venus at \(\alpha=0.25\) and only fit the value of the viscosity parameter \(\log(\eta)\). The other parameters are the rigidity \(\log(\mu)=10.02\) in Pa and the ratio of the Andrade and Maxwell time \(\tau_{A}/\tau_{M}=0.89\)(Castillo-Rogez et al., 2011). The homogeneous profile that fits the thermal Love number \(\mathfrak{I}(k_{2}^{\text{thermal}})\) at the Venus frequency is found with \(\log(\eta)=22.18\) and is plotted in Fig. 5 (green curve). Fig 7 (bottom panel) shows the stable equilibrium point between the gravitational tides associated with the fit profile and the thermal tides (filled blue dot) at the current frequency of Venus as well as two unstable points (empty blue points). These points correspond to the rotation states in which the two tides compensate for each other. We must highlight that the homogeneous profile allows for spin equilibrium at the current frequency of Venus in a very narrow range of internal states. As the interior temperature profile evolves on geologic timescales, it will be relevant to take the associated change of dissipation due to progressive cooling into account, which evolves on timescales of 100 Myr (Bower et al., 2019), or radiogenic decay, tidal heating, and so on, to fully characterize the spin equilibrium. This temperature dependence will be addressed in future studies. ### Secular equations The secular equations we implemented were derived from the Hamiltonian formalism by Boue & Efroimsky (2019). We used Eqs. (116) to (123) of their work, which were derived within the Darwin-Kaula formalism (see Appendix A). One important hypothesis, that was formulated to derive these equations is the gyroscopic approximation, which implies that the spin rate of a body is much faster than the evolution of the spin-axis orientation. This approximation invalidates the equations when the spin tends to zero for a noncoplanar orbit. A singularity occurs when the spin rate is zero within this approximation (see Boue & Efroimsky, 2019). The validity of the equations for an inclined orbit close to the null rotation state will need to be revisited. In this formalism, the inclination is defined by the angle between the orbital plane and the equatorial plane, which in other words corresponds to the angle between the orbital angular momentum and the planetary spin angular momentum3. Footnote 3: This angle is also often referred to as the obliquity, for instance, the obliquity of the Earth is about 23 degrees. In this study, we would thus say that the inclination of the Earth is 23 degrees. The development of Boue & Efroimsky (2019) also includes the deformation of the secondary under the tidal effect of the primary. We neglected the tidal deformation of the secondary. The resulting secular equations of the spin, eccentricity, and orbital inclination are presented in Appendix A. ### Implementation in the ESPEM code We implemented the secular equations of Boue & Efroimsky (2019) in the code ESPEM (Benbakoura et al., 2019; Ahuir et al., 2021). This is a secular code integrating the dynamical evolution of a star-planet system. 
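The homogeneous-profile fit described at the beginning of this section can be sketched numerically. The expression below is the standard incompressible homogeneous Andrade Love number (which we believe corresponds to Eqs. 5 to 7 of Bolmont et al. 2020, but this is an assumption); the rigidity, Andrade exponent, and \(\tau_{A}/\tau_{M}\) ratio are the values quoted in the text, while the target \(|\Im(k_{2}^{\rm thermal})|\) at the Venus frequency is a placeholder, so the recovered viscosity is illustrative rather than the paper's \(\log(\eta)=22.18\).

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

# Venus-like bulk properties from Table 1; rheology parameters from the text.
R, M = 0.857 * 6.371e6, 0.815 * 5.972e24
rho = M / (4 / 3 * np.pi * R**3)
g = 6.674e-11 * M / R**2
mu, alpha, ratio = 10**10.02, 0.25, 0.89      # rigidity (Pa), Andrade exponent, tau_A/tau_M

def k2_imag(sigma, log_eta):
    """Im(k2) of a homogeneous incompressible Andrade body (assumed standard form)."""
    eta = 10**log_eta
    tau_M = eta / mu
    tau_A = ratio * tau_M
    x = (sigma * tau_A) ** (-alpha) * gamma(1 + alpha)
    J = (1 + x * np.cos(alpha * np.pi / 2)) / mu \
        - 1j * (x * np.sin(alpha * np.pi / 2) + 1.0 / (sigma * tau_M)) / mu
    mu_eff = 19.0 / (2 * rho * g * R) / J     # 19 * complex rigidity / (2 rho g R)
    return np.imag(1.5 / (1 + mu_eff))

sigma_venus = 1.25e-6                          # ~2|Omega - n| for present-day Venus (s^-1)
target = 3e-3                                  # placeholder |Im k2_thermal| at that frequency
log_eta = brentq(lambda le: abs(k2_imag(sigma_venus, le)) - target, 18.0, 26.0)
print(log_eta)
```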
The code takes the coupling between the two layers of low-mass stars into account (convective and radiative layers), as well as the effect of the stellar wind, the torque due to the tides raised by the planet on the convective envelope of the star, and the torque due to the star-planet magnetic interactions for circular and coplanar orbits (Ahuir et al., 2021).

Figure 7: Spin derivative as a function of the rotation (in terms of \(\Omega/n\), \(\Omega\) and \(n\) the planetary spin and mean motion respectively). In the top panel, the red lines correspond to the solid tides, and the blue lines correspond to the cases with solid and atmospheric tides. The solid lines correspond to the reference multilayer profile Vref. The dotted and dash-dotted lines correspond to the V0.1 and V100 profiles, respectively. In the bottom panel, the green line represents the solid tides associated with the homogeneous body (see Sec. 2.2 for details). The blue line corresponds to the cases with the contribution of atmospheric tides. The vertical dashed black line represents the current frequency of Venus in both panels. The dots (filled and empty) represent the equilibrium states between the gravitational and thermal tides (stable and unstable, respectively).

The code was previously only used to compute the angular momentum exchange between the planetary orbit and the angular momenta of the stellar radiative core and convective envelope (Benbakoura et al., 2019; Ahuir et al., 2021). We have added the tidal torque of the star on the planet within the formalism described in this paper. In addition to the equation for the semi-major axis, we also implemented the equations governing further osculating elements of the planet, such as the eccentricity, orbital inclination, longitude of ascending node, argument of periastron, and planetary spin. The equations for the spin, eccentricity, inclination, longitude of ascending node, and argument of periastron can be found in Appendix A, Eqs. 2 to 6. The user needs to provide a data file describing the time evolution of the mass and radius of the star as well as the evolution of the mass and radius of the radiative and convective envelopes, of the moment of inertia, and of the stellar luminosity. The stellar evolution is computed with evolution files provided by the code STAREVOL (Amard et al., 2016), which gives the internal dissipation and the evolution of the stellar quantities (e.g., the mass, the radii of the radiative core and convective envelope, and the luminosity). The evolution of the stellar luminosity is used in Sec 4.2. The user also needs to specify the initial conditions of the osculating elements of the planet as well as the rotation rate. The code also needs a data file describing the frequency dependence of the real and imaginary parts of the tidal Love number of the planet. This latter file should also provide the mass, the radius, and the radius of gyration of the planet (which represents the internal density distribution). The Love numbers provided in these data files were computed with the method described in Sec 2.4. As explained in Sec 2.1, several frequencies are excited depending on the eccentricity and inclination. These frequencies were computed for each time step. The real and imaginary parts of the Love number were interpolated linearly from their frequency dependence in the data file. These interpolated values were then used to compute the derivatives of all the quantities mentioned before.
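The interpolation step can be illustrated with a short sketch (ours, not the ESPEM implementation); the tabulated response below is a toy stand-in for the Love-number data file described above.

```python
import numpy as np

# Toy stand-in for the Love-number data file: frequency (s^-1), Re(k2), Im(k2).
freq  = np.logspace(-9, -4, 50)
k2_re = 0.30 * np.ones_like(freq)
k2_im = -2e-2 * freq * 3e8 / (1 + (freq * 3e8) ** 2)

def k2_at(sigma):
    """Linear interpolation at a signed tidal frequency; Im(k2) is odd in frequency."""
    s = abs(sigma)
    return np.interp(s, freq, k2_re), np.sign(sigma) * np.interp(s, freq, k2_im)

# Example: the Kaula modes sigma_q = 2*Omega - (2+q)*n excited on an eccentric orbit
Omega, n = 8e-7, 3.24e-7
for q in (-1, 0, 1, 2):
    print(q, k2_at(2 * Omega - (2 + q) * n))
```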
We used the parameters of a Sun-Venus system, that is, a Venus-like mass and radius planet orbiting at 0.723 AU. The parameters we used are listed in Table 1. The initial spin rate of the ESPEM simulations is taken above \(\Omega/n=2.1\), as the SORs we studied are below this spin rate. The longitude of the ascending node and the argument of pericenter were set to zero. The effects of the stellar tides, the stellar wind, and the evolution of the stellar layers were not taken into account in this study. We focused here only on the evolution of the rotation state of the planet under the tidal perturbation of a star. We considered this approach appropriate for studying the evolution of the planetary system we consider here. The effect of the Venusian tides inside the Sun can be considered to be negligible, as the corresponding evolution timescale is of the order of \(10^{15}\) Gyr (e.g., Bolmont and Mathis, 2016).

## 3 Impact of the gravitational tides alone

First, we investigated the secular evolution of the spin, the eccentricity, and the spin inclination of a Venus-like planet orbiting a Sun-like star driven by the gravitational tides alone. In other words, we first neglected the influence of the thermal tides, which is equivalent to first considering an atmosphere-less planet. In Section 3.1 we discuss spin-orbit resonances for coplanar eccentric orbits, and in Section 3.2, we discuss spin-orbit resonances for inclined circular orbits.

### Eccentricity-driven spin-orbit resonances

Hut (1981) showed that if the orbit is eccentric, the planet reaches a pseudo-synchronization state, where the rotation rate of the planet is comparable to the mean motion around the periastron. The use of a model more appropriate for a highly viscous object, however, results in discrete stable spin states in the presence of eccentricity, namely SORs (Makarov and Efroimsky, 2013). The higher the eccentricity, the higher the SOR order. A planet beginning its evolution with a high eccentricity and a spin faster than 2.5 times its orbital motion first becomes trapped in the 5:2 SOR. Then, as the eccentricity diminishes, the planet leaves this resonance to be trapped in the next lower one, the 2:1 SOR, and later leaves this configuration for the 3:2 SOR as the eccentricity continues to decrease. Eventually, as the eccentricity decreases further, the rotation becomes trapped in the 1:1 SOR, also known as the synchronous state, or tidal locking (see also Gomes et al., 2021, with the Creep tidal model). Figure 8 shows the evolution of the rotation state and the eccentricity of a Venus-like planet with the multilayer internal reference structure (see Sec. 2.2) for three initial eccentricities (0.0, 0.1, and 0.2) in a coplanar orbit. The simulations started with an initial semi-major axis of 0.723 AU and an initial rotation period of 100 days. The figure shows that an eccentricity of 0.2 is sufficient to allow the planet to be captured in the 2:1 SOR, and an eccentricity of 0.1 in the 3:2 SOR. The planet can stay in these SORs as the eccentricity remains high enough throughout the simulation, as shown in the bottom panel of Fig. 8. The order of the resonance in which the planet is captured becomes apparent when we plot the spin derivative as a function of \(\Omega/n\) (\(\Omega\) and \(n\) the planetary spin and mean motion respectively). Figure 9 shows how this quantity evolves with \(\Omega/n\) for a fixed eccentricity (Fig. 9a) and how it evolves with \(\Omega/n\) for different eccentricities (Fig. 9b).
Figure 9a shows that for the circular case (\(e=0\), dotted blue line), only one value of the spin results in \(d\Omega/dt=0\), and therefore, only one possible equilibrium for the rotation state. The equilibrium is centered at \(\Omega/n=1\), which corresponds to the synchronization state. Higher eccentricities raise other resonances at a higher spin rate. For example, the 0.2 eccentric case (in green on Fig. 9a) shows the 3:2 SOR and the 2:1 SOR in addition to the 1:1 SOR. The 5:2 SOR is also present, but the eccentricity must be higher than 0.25 to keep this configuration stable, as the middle panel shows. Figure 9b shows the values taken by the spin derivative on a 2D map, as a function of \(\Omega/n\) and the eccentricity, from 0.0 to 0.3. As we are restricted numerically up to the order 7 in the eccentricity expansion, we computed the evolution up to 0.3. This is sufficient because the population of rocky exoplanets does not present extreme eccentricities.

\begin{table} \begin{tabular}{l l} \hline \hline Parameter & Values \\ \hline Star Mass (\(M_{\odot}\)) & 1 \\ Planet Mass (\(M_{\rm Earth}\)) & 0.815 \\ Planet Radius (\(R_{\rm Earth}\)) & 0.857 \\ Semi-major axis (AU) & 0.723 \\ Eccentricity & \(\{0,0.1,0.2\}\) \\ Spin Inclination (degrees) & \(\{0,5,50,120,130\}\) \\ \hline \end{tabular} \end{table} Table 1: Numerical values used in the case of a Sun-Venus-like system.

The equilibrium points can be found with the null torque in red. The stable equilibrium states must satisfy the condition of a positive torque (in red) to its left and a negative torque (in blue) to its right. Figure 9b shows that increasing eccentricity allows higher-order SORs. The synchronization is accessible with \(e=0\), while the 3:2 SOR becomes accessible at \(e=0.06\), the 2:1 SOR at 0.16, and the 5:2 SOR at 0.26. The eccentric cases of Fig. 9a are overplotted in the color map with the two horizontal dotted black lines. The evolutions shown in Fig. 8 are plotted in Fig. 9b and 9c with the three colored arrows (with identical colors in the two figures). The spin quickly decreases toward the SOR associated with its eccentricity. Because the eccentricity is damped by the tides, the SORs remain only until the eccentricity becomes too low to stably maintain these configurations. Figure 9c represents the eccentricity derivative map in the eccentricity versus rotation state plane. In red we show the area in which the eccentricity increases, and in blue the area in which it decreases. In particular, the eccentricity appears to be slightly excited for the \(e=0.1\) case shown in Fig. 8. This behavior can be explained with the derivative map of Fig. 9c. The eccentricity in the \(e=0.2\) case of Fig. 8 also appears to be excited before the state when the rotation reached the 2:1 SOR. The timescale of the eccentricity evolution is too long, however; the departure from the resonant states is therefore not shown here. We confirmed the eccentricity-driven spin-orbit resonances, such as the 1:1, 3:2, 2:1, and 5:2 SORs, and their dependence on the value of the eccentricity. Our results are also consistent with the work of Walterova and Behounkova (2020). We reproduce the eccentricity-driven resonances they showed for the shear modulus and viscosity of our fitted hot profile. Walterova and Behounkova (2020) pointed out that the internal composition of the planet affects the stability of the SORs. Thus, the thermal evolution of the internal structure should be investigated, starting from a warm to a colder profile.
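The structure of the map in Fig. 9b, with SORs appearing at \(\Omega/n=1,\,3/2,\,2,\,5/2\) as the eccentricity grows, can be reproduced qualitatively with a few lines of code. This is a toy sketch of ours: the Kaula eccentricity functions are truncated at leading order (the paper keeps terms up to \(e^{7}\)), the rheology is a placeholder, and the overall torque normalisation and sign convention are assumptions.

```python
import numpy as np

# Leading-order Kaula eccentricity functions G_{20q}(e), truncated.
def G20q(e):
    return {-1: -e / 2, 0: 1 - 5 * e**2 / 2, 1: 7 * e / 2, 2: 17 * e**2 / 2, 3: 845 * e**3 / 48}

def k2_im(sigma):
    """Placeholder odd-in-frequency Im(k2); the paper interpolates the Andrade tables here."""
    tau = 3e8
    return -2e-2 * sigma * tau / (1 + (sigma * tau) ** 2)

def spin_derivative(x, e, n=3.24e-7):
    """Secular dOmega/dt (arbitrary normalisation) for spin x = Omega/n and eccentricity e."""
    return sum(G**2 * k2_im(2 * x * n - (2 + q) * n) for q, G in G20q(e).items())

xs, es = np.linspace(0.5, 3.0, 600), np.linspace(0.0, 0.3, 120)
dmap = np.array([[spin_derivative(x, e) for x in xs] for e in es])
# the zero-level contour of dmap marks the SORs at Omega/n = 1, 3/2, 2, 5/2 as e grows
print(dmap.shape)
```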
As the temperature drives the viscosity and melt fraction of the mantle, the effect of the tidal heating should also be investigated and will be implemented in future developments. Then, the effect of the tidal heating should also be studied, but the tidal dissipation is not thought to be important for the case of Venus. Tidal dissipation is probably stronger for very close-in planets. The effect of the tidal heating of these planets will be the subject of future studies. ### Inclination-driven spin-orbit resonances The inclination-driven SORs has been discussed in Boue et al. (2016) in the context of gas giant planets responding to a Maxwell rheology. We show here that this behavior is also found for rocky planets with a rheology more adapted to rocky planets (Andrade), thus generalizing the findings of Boue et al. (2016). This is the first study of inclination-driven SORs with a realistic rheology for rocky exoplanets that generalizes the first study of Boue et al. (2016) for giant planets, who used a simple Maxwell rheology. Figure 10 shows the evolution of a Venus-like planet with the multilayer internal reference structure (see Sec. 2.2) after \(2\times 10^{8}\) years of evolution for three initial inclinations and an initial rotation of 100 days in a circular orbit with the parameters presented in table 1. For an initial inclination of 5 degrees (blue curve), the tides act to synchronize the rotation of the planet in \(5.5\times 10^{7}\) years, while the inclination is damped to zero on longer timescales. However, the spin can be trapped in SOR if the initial inclination is high enough. For the initial inclination of 50 and 120 degrees, the planet is captured in the 2:1 SOR for a few \(10^{7}\) yr. As in the previous section, we investigated how the derivative of the spin depends on rotation state \(\Omega/n\) and the inclina Figure 8: Evolution of a Venus-like planet with an initial rotation of 100 days for three initial eccentricities (i.e., the null eccentricity, and the 0.1 and 0.2 eccentricities). The top panel shows the evolution of the planet rotation rate in terms of \(\Omega/n\) (\(\Omega\) and \(n\) are the spin and mean motion, respectively). The bottom panel shows the variation in eccentricity \(\Delta e\) (i.e., \((e-e(0))/e(0)\)). Figure 10: Evolution of a Sun-Venus-like system. The top panel shows the evolution of the rotation state in terms of \(\Omega/n\) for an initial rotation period of about 100 days. The bottom panel shows the evolution of the orbital inclination for different initial inclinations of about 0, 50, 120, and 130 degrees. tion without eccentricity. Figure (b)b shows the spin derivative strength in the plane inclination versus rotation state \(\Omega/n\). The equilibrium points can be found with the red curves (null values of the derivative). In the same manner as in Fig 9, the stable equilibrium states must satisfy the condition of a positive torque (in red) on its left and a negative torque (in blue) on its right for positive values of the rotation state \(\Omega/n\), and inversely for negative values of \(\Omega/n\). Figure (a)a shows that only one equilibrium is possible for low inclinations: synchronous rotation. Increasing the inclination allows other SORs to appear (e.g., the 2:1 SOR). For inclinations higher than 120 degrees, the prograde rotations are no longer equilibrium points, but the retrograde rotations are, such as the -2:1 SOR at \(\Omega/n=-2\). A symmetry with respect to a 90-degree inclination exists. 
This symmetry is clearly visible in the middle panel of Fig (b)b. In particular, the torque at 130-degree inclination is symmetric of the 50-degree inclination. We highlight that no SORs lie above the 2:1 SOR in rotation. Figure (b)b shows no SORs close to the \(\Omega/n=3\) or \(=1.5\). Higher spin states were also studied, but as they do not exhibit any SORs. We therefore did not explore a spin rate higher than 100 days. Higher rotation rates require longer timescales to evolve than the present age of the Solar System and are therefore not presented in this paper. The evolution paths of Fig. 10 are overplotted in Fig. 11 for the four initial inclinations of 5, 50, 120, and 130 degrees. For two of these initial inclinations (50 and 120 degrees), we see a capture in the 2:1 SOR (in Fig. 10 and in Fig (b)b-(c)c). This resonance island is stable for inclinations greater than 15 and lower than 120 degrees. Thus, if the initial spin is higher than \(\Omega/n=2\), the spin is always be damped and trapped in the 2:1 SOR (for an inclination between 15 and 120 degrees). The spin remains in the 2:1 SOR until the inclination becomes too low to stably maintain this configuration. For an initial inclination of 50 degrees, the inclination appears to be excited by the tides and slightly increases when the spin is higher than the 2:1 SOR. This behavior can be explained with the shape of the inclination derivative \(\mathrm{d}i/\mathrm{d}t\) plotted in Fig. 12. It shows a positive-inclination derivative for a spin higher than the 2:1 SOR and an inclination lower than about 80 degrees (bottom right corner of the figure). It also shows that the inclination should also increase when the spin is slightly higher than the synchronization and for an inclination lower than 100 degrees (red area in the vicinity of the 1:1 SOR). This behavior is absent in Fig. 10 because the spin reaches the synchronization very quickly. Higher inclination cases can show an interesting behavior. Fig. (b)b clearly shows that for a high initial inclination of about 105 degrees, the synchronization state (i.e, \(\Omega/n=1\)) is no longer a stable configuration. Then, if the system starts with a positive rotation and a sufficiently high initial inclination, the spin is damped to the antisynchronization state \(\Omega/n=-1\), thus a retrograde rotation. As shown in Fig. 12, however, as the spin reaches a negative value, the inclination is driven toward 180 degrees by the tides. This will result in a stable state where the spin is retrograde, in the antisynchronization state, with an inclination of about 180 degrees. This corresponds to a prograde rotation with an orbital inclination of about 0 degrees. This is consistent with the symmetry on the \(\Omega/n=0\) axis and on the 90-degree axis of the derivative map (Fig. (b)b). We investigated spin inclination-driven SORs, such as the 1:1, the 2:1 and their symmetric, the -1:1, and -2:1 SORs, and the evolution of the spin inclination of the planet. We show the range of inclination allowing for the SORs from 0 to 105 degrees for the \(1:1\) and from 20 to 120 degrees for the \(2:1\) SOR. Our simulations in Fig 10 show the particular behavior of the inclination for a rotation rate above \(\Omega/n=2\), where the inclination appears to be excited by the tides if the inclination is lower than 80 degrees. Finally, the color maps in Fig. 11 show the symmetrical properties of the inclination-driven SORs in \(\Omega/n=0\) and \(i=90\) degrees. 
Figure 9: Spin derivative \(\mathrm{d}\Omega/\mathrm{d}t\) (\(rad/s^{2}\)). The left panel shows the spin derivative as a function of the rotation state \(\Omega/n\) (\(\Omega\) and \(n\) are the spin and mean motion, respectively). for different eccentricities. The green dots (filled and empty) represent the equilibrium states, stable (i.e., SORs) and unstable, respectively. The middle and right panels represent the spin derivative and the eccentricity derivative as a function of the rotation state \(\Omega/n\) and the eccentricity, respectively. The red colored areas depict the positives values, the blue areas depict the negative values, and the red line corresponds to \(\mathrm{d}\Omega/\mathrm{d}t=0\). The dotted lines represent the two eccentric cases in the left panel (\(e=0.1\) and \(e=0.2\)). The arrows represent the evolutions presented in Fig. 8. The effect of the thermal tides of a Venus-like atmosphere for different initial spin inclination is studied in the next section. ## 4 Venus-like atmospheric tides As previous studies showed, the current spin state of Venus cannot be reproduced by involving the solid tides alone (Gold & Soter 1969; Dobrovolskis & Ingersoll 1980; Correia & Laskar 2001, 2003; Correia et al. 2003; Correia & Laskar 2003; Leconte et al. 2015). In particular, Correia & Laskar (2001) showed that atmospheric tides can lead to four final rotation states of Venus, one of which is the retrograde rotation observed today. They showed that the current state of Venus cannot be reached for any initial configuration, however. We can consider that the current spin inclination of Venus is either high (about 177.36 degrees) and has a rotation period of 5832.6 hr, or a low spin inclination (about 2.64 degrees) and a retrograde rotation. We use the case of Venus as a reference. The next section (Sec 4.1) explores the evolution of a Venus-like planet in the spin and inclination parameter space, with a nonevolving atmosphere, a constant luminosity, and a nonevolving internal profile. Section 4.2 explores the effect of the luminosity evolution of the host star, accounting for a simple prescription for the atmospheric evolution. ### Constant luminosity, nonevolving atmosphere Leconte et al. (2015) fit the parameters of their analytical solution of the pressure bulge (Eq. 9) to their GCM simulation to model the thermal tides. These parameters are given in Section 2.3. In this section, we investigate the effect of the thermal forcing produced by the host star on the atmosphere. We considered two models for the interior: a multilayer model (introduced in Section 2.2) and the fitted homogeneous model (introduced in Section 2.4). For the thermal tides, we used the analytical model of thermal tides fit on the present-day Venus (introduced in Section 2.3). The frequency dependence of the corresponding Love numbers is given in Fig. 5. As discussed in Section 2.4, the solid tides corresponding to the multilayer model and its two variants (the V0.1 and V100 profiles) do not allow a stable-equilibrium point close to the current frequency of Venus as they are either too strong or too weak. We fit a homogeneous hot profile, using the method of Bolmont et al. (2020), in order to find an equilibrium point close to the Venusian frequency (see Sec 2.4). The shape of the spin derivative in the bottom panel of Fig. 7 shows one stable equilibrium state and two unstable equilibrium states. The two unstable states, close to the synchronization, are also present in the highly viscous V100 profile case. 
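For reference, the present-day rotation state quoted above translates into the normalized spin used throughout the figures. The quick check below uses the rotation period given in the text; the orbital period of Venus is the standard value and is not quoted in the text.

```python
import numpy as np

P_rot_hr, P_orb_day = 5832.6, 224.70          # rotation period from the text; standard orbital period
Omega = -2 * np.pi / (P_rot_hr * 3600.0)      # retrograde spin (rad/s)
n = 2 * np.pi / (P_orb_day * 86400.0)         # mean motion (rad/s)
print(Omega / n)                              # ~ -0.92: current normalized spin of Venus
print(2 * abs(Omega - n))                     # ~ 1.2e-6 s^-1: dominant semidiurnal tidal frequency
```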
The negative stable spin state was fit to correspond to the retrograde state of Venus. The \(1:1\) synchronous spin state remains stable. As in Sec 3.2, we used the derivative maps of \(d\Omega/dt\) and \(di/dt\) as a function of \(\Omega/n\) and \(i\) to represent the evolution of Figure 11: Spin derivative \(\mathrm{d}\Omega/\mathrm{d}t\) (in \(rad/s^{2}\)). The left panel shows the spin derivative as a function of the rotation state \(\Omega/n\) and for different inclinations (5, 50, 120, and 130 degrees). The middle panel represents the value of the spin derivative as a function of \(\Omega/n\) and the inclination. The red areas depict the positives values, and the blue areas depict the negative values. The dotted black lines of the middle and right panels represent the four inclination values (5, 50, 120, and 130 degrees) plotted in the left panel. The right panel shows a zoom into the square drawn in the middle panel, centered on the \(2:1\) SOR. The dashed black curves depict the paths of the simulations presented in Fig. 10. The gray areas hide the part of the figure close to the null rotation, where the equations used are no longer valid due to the gyroscopic approximation (see Sec 2.5). Figure 12: Same as Fig 10, with the inclination derivative \(\mathrm{d}i/\mathrm{d}t=f(\Omega/n,i)\), in \(rad/s\), instead of the spin derivative \(\mathrm{d}\Omega/\mathrm{d}t\). the system in Fig 13. As discussed in Sec 2.4, we constrained our study at low spin rates. Because no inclined SORs are higher than the 2:1 (see Sec 3.2), the initial spin rate of the simulations was set to \(\Omega/n=2.5\) for most of our simulations. We can find the set of initial spins and spin inclinations that can lead to a stable state close to the current rotation state of Venus (with an inclination as high as 180 degrees). Figure 13a and 13b show the evolution of the system through the map, in which each curve represents an ESPEM simulation. The solid lines show the cases leading to an inclination as high as 180 degrees with a prograde rotation, close to the current state of Venus. The dashed lines show the cases leading to the \(1:1\) SOR and an inclination of \(0\) degrees (i.e., a prograde rotation with a null inclination). The maps show that the atmospheric tides can drive the system toward the high-inclination state and keep the rotation on the prograde spin rate (i.e., prograde rotation with a high inclination) if the initial spin inclination is higher than about 150 degrees with a fast initial rotation. This configuration can be reached through the effect of a chaotic motion in the Solar System for the case of Venus (Correia & Laskar, 2003). Correia & Laskar (2001) argued that the current state of Venus can be described with four final states, depending on the evolution path of the planet. In their work, the paths leading to the current state of Venus either evolved by increasing the spin inclination toward 180 degrees and keeping the spin on a prograde rotation by either decreasing the spin toward retrograde rotation or keeping the spin inclination to zero degrees. In this study, the retrograde rotation can be reached only from the evolution of the spin inclination toward the high-inclination states. None of the paths shown in Fig 13b crosses the null spin state. Any positive rotation with a low-inclination configuration will drive the system in the synchronous state. 
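The way the evolution paths are read off the derivative maps can be illustrated with a small sketch (ours, not the ESPEM integrator). The two derivative fields below are toy stand-ins for the \(\mathrm{d}\Omega/\mathrm{d}t\) and \(\mathrm{d}i/\mathrm{d}t\) maps of Fig. 13; in practice one would build interpolators on those grids.

```python
import numpy as np

def trace_path(x0, i0_deg, dx_dt, di_dt, n_steps=20000, dt=1e12):
    """Trace an (Omega/n, inclination) evolution path through derivative maps,
    as read off Fig. 13 (explicit Euler stepping; purely illustrative)."""
    x, inc = x0, np.radians(i0_deg)
    for _ in range(n_steps):
        x += dt * dx_dt(x, inc)
        inc = np.clip(inc + dt * di_dt(x, inc), 0.0, np.pi)
    return x, np.degrees(inc)

# toy derivative fields standing in for the maps of Fig. 13
dx_dt = lambda x, i: -3e-16 * (x - 1.0) - 3e-18 * np.cos(i)
di_dt = lambda x, i: -1e-16 * np.sin(2 * i)
print(trace_path(2.5, 160.0, dx_dt, di_dt))   # a high initial inclination is driven toward 180 deg
```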
As the chaotic effect of a third body would only perturb the spin inclination of the planet, it is unlikely that the rotation has crossed the null spin during its evolution, given our set of hypotheses.

### Luminosity variation

Dynamical studies of the thermal tides (Correia & Laskar, 2001, 2003; Leconte et al., 2015) have considered a constant luminosity. As the thermal forcing depends on the heat flux of the host star, we also investigated the effect of an evolving luminosity on the rotation evolution of a Venus-like planet. The luminosity evolution of the Sun-like star in ESPEM comes from simulations with the stellar evolution code STAREVOL (Amard et al., 2016). Figure 14 shows the luminosity variation of the Sun-like star we considered. The thermal Love number was computed with its luminosity dependence, following the formulation of Auclair-Desrotour et al. (2017b), as \[\Im(k_{2}^{\rm thermal}(\sigma))=-\frac{4}{32}\frac{\kappa\tau\varsigma\epsilon L_{\star}a}{R_{A}T_{0}M_{\star}R}\frac{\sigma}{\sigma^{2}+\omega_{0}^{2}}, \tag{13}\] with \(L_{\star}\) and \(M_{\star}\) the stellar luminosity and mass respectively, \(R\) the radius of the planet, \(a\) the semi-major axis, \(\tau\) a weight parameter that gives the efficiency of the coupling between the atmosphere and the surface (\(0<\tau<1\)), \(\varsigma\) a shape factor depending on the spatial distribution of tidal heat sources, \(\kappa\) the power per unit mass radiated by the atmosphere (where the atmosphere is assumed to behave like a graybody, i.e., Newtonian cooling), \(\epsilon\) the effective fraction of power absorbed by the atmosphere, \(\sigma\) the excitation frequency, \(\omega_{0}\) the radiative frequency, \(T_{0}\) the equilibrium surface temperature of the atmosphere, and \(R_{A}\) the specific gas constant defined as \(R_{A}=R_{GP}/\mathcal{M}_{A}\) (\(R_{GP}\) and \(\mathcal{M}_{A}\) being the perfect gas constant and the mean molar mass respectively). The values of the parameters we used are presented in Table 2. We started the simulation at 100 Myr, in line with the timescale of rocky planet formation (Chambers, 2004). We considered the atmosphere to be fully formed quickly, over the first 1 Myr after the formation of the planet. Then, as the evolution of the atmosphere is very uncertain, we considered it as non-evolving. The following part presents the evolution of the system after 3.6 Gyr of evolution. Figure 16 shows the spin versus inclination maps with the ESPEM simulations overplotted, in the same manner as Figs. 13a and 13b. Figures 16a to 16h show the appearance and the evolution of the equilibrium state close to the current state of Venus (red curve appearing in the top left corner of the map from panel 16b to panel 16h). In the early stages of the simulations, the stellar flux is lower than today, and gravitational tides dominate the thermal ones. This means that the spin and inclination evolution is mainly driven by the gravitational tides, following a path consistent with Fig. 11c. Figure 16d corresponds to a situation in which an equilibrium close to the current state of Venus is found for the current age of the Solar System. We must emphasize that these maps were set to reproduce the steady state at the current state of Venus for the current solar luminosity. Figures 16e to 16h show that the solar luminosity increases faster than the spin can adjust.
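A compact sketch of the procedure (ours, not the paper's implementation) is given below. Eq. (13) is taken at face value, including the prefactor as printed; the interior response is a placeholder for the tabulated Im(\(k_{2}^{\rm grav}\)) of the fitted homogeneous profile, and the compensation condition is written with absolute values to stay agnostic about sign conventions.

```python
import numpy as np
from scipy.optimize import brentq

# Table 2 values and system constants (SI); placeholder interior response below.
T0, tau, varsig, kappa, eps, w0 = 737.0, 1.0, 0.19, 0.286, 0.04, 3.77e-7
R_A = 8.314 / 43.45e-3                       # R_GP / M_A
M_star, R_p = 1.989e30, 6.052e6
a, n = 0.723 * 1.496e11, 3.24e-7
L_sun = 3.828e26

def k2_thermal_im(sigma, L_star):
    """Eq. (13) as printed in the text."""
    return -(4.0 / 32.0) * kappa * tau * varsig * eps * L_star * a \
           / (R_A * T0 * M_star * R_p) * sigma / (sigma**2 + w0**2)

def k2_grav_im(sigma):
    """Placeholder for the tabulated Im(k2) of the fitted homogeneous interior."""
    return -5e-3 * np.sign(sigma) / (1 + abs(sigma) / 1e-7)

def omega_eq(L_star):
    """Omega/n at which the two tides compensate (|Im k2_grav| = |Im k2_thermal|),
    taking the dominant semidiurnal frequency sigma = 2(Omega - n)."""
    f = lambda x: abs(k2_grav_im(2 * (x - 1) * n)) - abs(k2_thermal_im(2 * (x - 1) * n, L_star))
    return brentq(f, 1.0001, 15.0)           # low-frequency regime only (|Omega/n| < 15)

print(omega_eq(L_sun))                       # equilibrium spin for the present-day luminosity
```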
Thus, as the luminosity increases, the spin never settles in a stable configuration but continuously evolves toward the equilibrium state. The evolution of the theoretical equilibrium rotation state between the gravitational and thermal tides can be found by finding the rotation rate \(\Omega_{\mathrm{eq}}\) that satisfies \(|\Im(k_{2}^{\rm grav}(\sigma_{\mathrm{eq}}))|=|\Im(k_{2}^{\rm thermal}(\sigma_{\mathrm{eq}}))|\). Figure 15 shows the evolution of the equilibrium rotation rate in terms of \(\Omega_{\mathrm{eq}}/n\) and the evolution of the rotation rate \(\Omega/n\) from the ESPEM simulations shown in Fig. 16. The equilibrium states were determined numerically by finding the rotation state \(\Omega_{eq}\) that verifies the compensation between the gravitational and the thermal Love numbers, \(|\Im(k_{2}^{\rm grav}(\Omega_{eq}/n))|=|\Im(k_{2}^{\rm thermal}(\Omega_{eq}/n))|\), over the evolution of the stellar luminosity. We show in Fig. 15 the evolution of the four simulations of Fig. 16 that crossed the current spin state of Venus during their evolution. These cases crossed the equilibrium state close to the current time, but the equilibrium point evolved faster than the rotational state of the simulation. Figure 16h shows that the thermal tides eventually become stronger than the gravitational tides across a large parameter space as the luminosity increases. In summary, the luminosity evolution leads to two effects. First, the equilibrium changes as the balance between the gravitational tides and thermal tides evolves (thermal tides dominate as the luminosity increases). Second, the planet cannot stay in equilibrium because the timescale of the spin evolution is longer than the timescale of the luminosity evolution. Concerning the first point, as the luminosity increases and the thermal tides become stronger, the equilibrium moves to higher spin rates (farther from synchronization). Concerning the second point, the rotation of the planet keeps chasing the equilibrium indefinitely (considering a nonevolving atmosphere).

\begin{table} \begin{tabular}{l l l} \hline \hline Parameter & Values & Units \\ \hline \(T_{0}\) & 737 & K \\ \(\tau\) & 1 & - \\ \(\varsigma\) & 0.19 & - \\ \(\kappa\) & 0.286 & - \\ \(\epsilon\) & 0.04 & - \\ \(\omega_{0}\) & \(3.77\times 10^{-7}\) & s\({}^{-1}\) \\ \(R_{GP}\) & 8.314 & J mol\({}^{-1}\) K\({}^{-1}\) \\ \(\mathcal{M}_{\mathrm{A}}\) & 43.45 & g mol\({}^{-1}\) \\ \hline \end{tabular} \end{table} Table 2: Numerical values for the thermal Love number of Eq. 13.

## 5 Conclusion

We presented the recent implementation of the effect of the tides raised by the star on a telluric Venus-like planet in the code ESPEM. We added the secular evolution of the osculating elements of the planetary orbit (\(a\), \(e\), \(i\), \(\omega\), and \(\Omega\)), that is, the semi-major axis, the eccentricity, the inclination, the argument of periastron, and the longitude of the ascending node, as well as the planetary spin. We followed the secular equations published by Boue & Efroimsky (2019), which describe the evolution of the osculating elements of the orbit of the planet under tidal perturbations following the Kaula formalism (Kaula, 1964). Our implementation includes gravitational and thermal tides, which allowed us to study the tidal effect of an arbitrary atmosphere on an arbitrary planet, provided that the tidal Love numbers \(k_{2}\) associated with its atmosphere and internal structure are known.
First we focused in Sec 3.1 on the eccentricity-driven SORs and validate our implementation by finding the 1:1, 3:2, 2:1, and 5:2 SORs, depending on the eccentricity value. Our results are consistent with the findings of Walterova & Behounkova (2020). Then we investigated in Sec 3.2 the inclination-driven SORs, as shown by Boue et al. (2016), here with the Andrade rheology. In Figure 14: Luminosity variation of a Sun-like star over the time from the beginning to the end of the MS. The dash-dotted line represents the simulation start time. The dotted lines represent the current time state. Figure 13: Left panel: Inclination derivative \(di/dt\) (in \(rad/s\)) as a function of the spin \(\Omega/n\) (\(\Omega\) and \(n\) are the planetary spin and mean motion respectively) and for different inclinations (from 0 to 180 degrees). Right panel: Spin derivative \(d\Omega/dt\) (in \(rad/s^{2}\)) as a function of the spin \(\Omega/n\) and inclinations (from 0 to 180 degrees). The red lines represent the null derivative in both panels. The arrows show the evolution of the system. They are consistent with the sign of the inclination derivative (left panel) and the spin derivative (right panel). The dotted orange lines represent the null points of the inclination derivative from the left panel (in red in the left panel). The image of Venus at the top of the two panels corresponds to the current state of Venus. The black dot represents the 1:1 synchronization state. The gray area hides the part of the plots close to the null rotation. Figure 15: Evolution of the rotational equilibrium state \(\Omega_{eq}\) (in red) evolving with the stellar luminosity (Fig. 14). In blue, we show the four curves corresponding to the rotational evolution from ESPEM that crossed the current spin state of Venus in Fig. 16. particular, we find the 1:1, the 2:1 and their symmetric, the -1:1, and -2:1 SORs. In Sec 4.1 we investigated the effect of a thick Venus-like atmosphere with the implementation of the analytical model of Leconte et al. (2015). We used their fit parameters, chosen so that GCM simulations reproduce the current state of Venus. We fit a homogeneous internal structure that allowed gravitational tides to balance thermal tides at the frequency of Venus. We must emphasize that the de-spinning of Venus is a difficult task. Then, we constrained our work to lower initial rotation rates. We find that depending on the initial spin rate and initial spin inclination, either a spin inclination of about zero in the synchronization state or a state close to the current retrograde rotation of Venus results, with a spin rate close to the synchronization and a spin inclination of 180 degrees. The synchronization state (1:1 SOR) is reached when the planet starts with a spin inclination lower than about 120 degrees and a prograde spin in our simulations. The latter state can be reached when the planet starts either with a high spin inclination (higher than about 120 degrees) and a prograde rotation, or with a spin inclination lower than about 60 degrees and a retrograde rotation. Our results are consistent with the final spin state of Venus found by Correia & Laskar (2001), who computed the evolution of the spin and obliquity of Venus under the solar tides. 
We point out, however, that Correia & Laskar (2001) used different models for the gravitational tides, which less appropriate than an Andrade rheology for silicate bodies (i.e., the CTL, Hut 1981; Goldreich 1966; Efroimsky & Makarov 2013) and they included the core-mantle friction. The core-mantle friction helps to damp the spin inclination of the planet (Correia & Laskar 2001) and should help the gravitational tides to balance the thermal tides. Furthermore, they also accounted for the chaotic motion in the Solar System (Laskar 1990). In particular, they reported that the chaotic motion helps to transition from low to high inclination. We cannot reproduce this with only two bodies. Further developments of the ESPEM code will include these effects. In Sec 4.1 we assumed that the spin state of Venus was in equilibrium state to fit an internal model to the thermal tides. Our results in Sec 4.2 showed, however, that this may not be the case, and the spin of Venus may still be evolving because of the variation in the solar luminosity. Thus, we investigated the effect of the evolving luminosity on the thermal tides. The evolution of the stellar luminosity leads to a continuous change in the balance between the gravitational and the thermal tides. The rotation of the planet will then continually increase as the luminosity increases. The luminosity evolving faster than the spin, the rotation of the planet will chase the equilibrium state without reaching it. We must highlight that the way in which the thermal tides will continue to increase as the rotation of the planet increase is unclear. In our case, the Maxwell-like frequency-dependent model of thermal tides overestimates the strength of the thermal tides at high frequencies, as the model was fit for the low-frequency regime (Auclair-Desrotour et al., 2019). Further studies with GCM simulations of Venus at higher rotation rates will help to answer this question. Exploring the complete evolution of the rotation of Venus requires calculating the dynamical evolution of the planet in (at least) three-body simulations. The effect of the evolution of the internal structure and the atmosphere of the planet must also be investigated. The strength of the gravitational and thermal tides, and thus the balance between the two contributions, should have varied strongly during the evolution of the planet since its formation. The current internal structure and atmospheric tides are not sufficiently constrained, however. Future missions to Venus, such as EnVision (Widemann et al., 2020), DAVINCI (Garvin et al., 2022), and VERITAS (Smrekar et al., 2020), will bring valuable data on the internal state of Venus and on the thermal atmospheric response of the planet (Bills et al., 2020). They will help to determine whether the planet is in equilibrium between the gravitational and the thermal tide. These constraints could also help reconstruct the thermal evolution of the planet, which would impact the competition between the gravitational and thermal tides and thus the rotational evolution. Finally, better observations of the atmosphere, together with additional modeling of the Venusian atmosphere, would help constrain the thermal tide. In particular, estimating the response of the atmosphere with a GCM to other frequencies would be extremely helpful. In the context of exoplanets, we need to consider a relevant model of tides for rocky exoplanets to characterize their surface and potential habitability. 
We have shown that a relevant tidal model for rocky planet allows a higher spin state than the synchronization, such as eccentricity-driven SORs and also inclination-driven SORs. Planets on a large orbit can keep nonzero eccentricity or obliquity because they evolve on a longer timescale, and can still be trapped in this eccentricity or obliquity-driven SORs. When a planet has an atmosphere, thermal tides can excite the spin inclination to high values because thermal tides drive the spin of Venus in its current state through the chaotic motion of the Solar System. The strength of the thermal tides also depends on the surface pressure, and thus on the total mass of the atmosphere, on the composition that determines the atmospheric absorption, and on the dynamics of the atmosphere. These dependences should be investigated in future studies. We showed that the variation in host star luminosity can also prevent the rotation of a planet from reaching equilibrium between gravitational and thermal tides. This behavior must be further studied for different types of star, that is, different radiation spectra, and with more elaborate models of thermal tides that take the wavelength dependence of the irradiation and the composition of the atmosphere into account. The new generation of instruments, that is, the JWST and ARIEL (Greene et al., 2016; Tinetti et al., 2021; Edwards & Tinetti, 2022), will provide valuable data on the atmosphere of rocky worlds. The correct modeling of the dynamical state of exoplanets is then crucial to constrain their surface condition. ###### Acknowledgements. This work has been carried out within the framework of the NCCR Planet's supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. The authors acknowledge the financial support of the SNSF (grant number: 20021_197176). All the members from CEA acknowledge support from GOLF and PLATO CNES grants of the Astrophysics Division at CEA. The computations were performed at University of Geneva on the Baobab and Sydgaskian clusters. This research has made use of NASA's Astrophysics Data System. The authors thank Drs. Jeremy Leconte, Pierre Audair-Desrotour and Gwenael Boue for interesting discussions about the thermal tides.
2309.08483
Groups elementary equivalent to finitely generated free metabelian
We describe groups elementarily equivalent to a free metabelian group with n generators.
Olga Kharlampovich, Alexei Miasnikov
2023-09-15T15:43:36Z
http://arxiv.org/abs/2309.08483v1
# Groups elementary equivalent to finitely generated free metabelian

###### Abstract

We describe groups elementarily equivalent to a free metabelian group with \(n\) generators.

###### Contents

* 1 Interpretability and bi-interpretability
* 2 Bi-interpretation of a free metabelian group with \(\mathbb{Z}\)
* 2.1 Preliminaries for metabelian groups
* 2.2 \(\mathbb{Z}\) is absolutely interpretable in \(G\)
* 2.3 Interpretation of \(\mathbb{Z}\)-exponentiation in \(G\)
* 2.4 Interpretation of \(\mathbb{Z}\bar{G}\)-module \(G^{\prime}\) in \(G\)
* 2.5 Proof of Theorem 2
* 3 Groups elementarily equivalent to a free metabelian group
* 4 \(A\)-metabelian groups

## 1 Interpretability and bi-interpretability

One can use the model-theoretic notions of interpretability and bi-interpretability to study structures elementarily equivalent to a given one. In this paper we are going to do this for free metabelian groups. We recall here some precise definitions and several known facts that may not be very familiar to algebraists. A _language_ (or _a signature_) \(L\) is a triple \((Fun,Pr,C)\), where \(Fun=\{f,\ldots\}\) is a set of functional symbols \(f\) coming together with their arities \(n_{f}\in\mathbb{N}\), \(Pr=\{P,\ldots\}\) is a set of relation (or predicate) symbols coming together with their arities \(n_{P}\in\mathbb{N}\), and \(C=\{c,\ldots\}\) is a set of constant symbols. Sometimes we write \(f(x_{1},\ldots,x_{n})\) or \(P(x_{1},\ldots,x_{n})\) to show that \(n_{f}=n\) or \(n_{P}=n\). Usually we denote variables by lowercase letters \(x,y,z,a,b,u,v,\ldots\), while the same symbols with bars \(\bar{x},\bar{y},\ldots\) denote tuples of the corresponding variables, say \(\bar{x}=(x_{1},\ldots,x_{n})\). In this paper we always assume, if not said otherwise, that the languages we consider are finite. The following languages appear frequently throughout the text: the language of groups \(\{\cdot,{}^{-1}\,,1\}\), where \(1\) is the constant symbol for the identity element, \(\cdot\) is the binary multiplication symbol, and \({}^{-1}\) is the symbol of inversion; and the language of rings \(\{+,\cdot,0\}\) with the standard symbols for addition, multiplication, and the additive identity \(0\). Sometimes we add the constant \(1\) to form the language of unitary rings (a priori, our rings are not unitary). A structure in the language \(L\) (an \(L\)-structure) with the base set \(A\) is sometimes denoted by \(\mathbb{A}=\langle A;L\rangle\) or simply by \(\mathbb{A}=\langle A;f,\ldots,P,\ldots,c,\ldots\rangle\). For a given structure \(\mathbb{A}\) we denote by \(L(\mathbb{A})\) the language of \(\mathbb{A}\). When the language \(L\) is clear from the context, we follow the standard algebraic practice and denote the structure \(\mathbb{A}=\langle A;L\rangle\) simply by \(A\). For example, we refer to a field \(\mathbb{F}=\langle F;+,\cdot,0,1\rangle\) simply as \(F\), or to a group \(\mathbb{G}=\langle G;\cdot,{}^{-1}\,,1\rangle\) as \(G\), etc. Sometimes we refer to a first-order formula in a language \(L\) as an \(L\)-formula and denote by \(\mathcal{F}_{L}\) the set of all \(L\)-formulas. Let \(\mathbb{B}=\langle B;L\rangle\) be a structure. A subset \(A\subseteq B^{n}\) is called _definable_ in \(\mathbb{B}\) if there is a formula \(\phi(x_{1},\ldots,x_{n})\) (without parameters) in \(L(\mathbb{B})\) such that \(A=\{(b_{1},\ldots,b_{n})\in B^{n}\mid\mathbb{B}\models\phi(b_{1},\ldots,b_{n})\}\).
In this case we denote \(A\) by \(\phi(B^{n})\) or \(\phi(\mathbb{B})\) and say that \(\phi\)_defines_\(A\) in \(\mathbb{B}\). Similarly, an operation \(f\) on the subset \(A\) is definable in \(\mathbb{B}\) if its graph is definable in \(\mathbb{B}\). A constant \(c\) is definable if the relation \(x=c\) is definable. An \(n\)-ary predicate \(P(x_{1},\ldots,x_{n})\) is definable in \(\mathbb{B}\) if the set \(\{(b_{1},\ldots,b_{n})\in\mathbb{B}^{n}|P(b_{1},\ldots,b_{n})\) is true\(\}\) is definable in \(\mathbb{B}\). In the same vein an algebraic structure \(\mathbb{A}=\langle A;f,\ldots,P,\ldots,c,\ldots\rangle\) is definable in \(\mathbb{B}\) if there is a definable subset \(A^{*}\subseteq B^{n}\) and operations \(f^{*},\ldots,\) predicates \(P^{*},\ldots,\) and constants \(c^{*},\ldots,\) on \(A^{*}\) all definable in \(\mathbb{B}\) such that the structure \(\mathbb{A}^{*}=\langle A^{*};f^{*},\ldots,P^{*},\ldots,c^{*},\ldots,\rangle\) is isomorphic to \(\mathbb{A}\). (Notice, that constants \(c,\ldots\) belong to the language of \(\mathbb{A}\), they are not parameters.) For example, if \(Z(G)\) is the center of a group \(G\) then it is definable as a group in \(G\). One can do a bit more in terms of definability. In the notation above if \(\sim\) is a definable equivalence relation on a definable subset \(A\subseteq B^{n}\) then we say that the quotient set \(A/\sim\) is _interpretable_ in \(\mathbb{B}\). Furthermore, an operation \(f\) or a predicate \(P\) on the quotient set \(A/\sim\) is interpretable in \(\mathbb{B}\) if the full preimage of its graph in \(A\) is definable in \(\mathbb{B}\). For example, if \(N\) is a normal definable subgroup of a group \(G\), then the equivalence relation \(x\sim y\) on \(G\) given by \(xN=yN\) is definable in \(G\), so the quotient set \(G/N\) of all right cosets of \(N\) is interpretable in \(G\). It is easy to see that the multiplication induced from \(G\) on \(G/N\) is also interpretable in \(G\). This show that the quotient group \(G/N\) is interpretable in \(G\). **Definition 1**.: _An algebraic structure \(\mathbb{A}=\langle A;f,\ldots,P,\ldots,c,\ldots\rangle\) is absolutely interpretable (or \(0\)-interpretable) in a structure \(\mathbb{B}\) if there is a subset \(A^{*}\subseteq B^{n}\) definable in \(\mathbb{B}\), an equivalence relation \(\sim\) on \(A^{*}\) definable in \(\mathbb{B}\), operations \(f^{*},\ldots,\) predicates \(P^{*},\ldots,\) and constants \(c^{*},\ldots,\) on the quotient set \(A^{*}/\!\!\sim\) all interpretable in \(\mathbb{B}\) such that the structure \(\mathbb{A}^{*}=\langle A^{*}/\!\!\sim;f^{*},\ldots,P^{*},\ldots,c^{*},\ldots,\rangle\) is isomorphic to \(\mathbb{A}\)._ Now we introduce some notation. An interpretation of \(\mathbb{A}\) in \(\mathbb{B}\) is described by the following set of formulas in the language \(L(\mathbb{B})\) \[\Gamma=\{U_{\Gamma}(\bar{x}),E_{\Gamma}(\bar{x}_{1},\bar{x}_{2}),Q_{\Gamma}(\bar {x}_{1},\ldots,\bar{x}_{t_{Q}})\mid Q\in L(\mathbb{A})\}\] (here \(\bar{x}\) and \(\bar{x}_{i}\) are \(n\)-tuples of variables) which interpret \(\mathbb{A}\) in \(\mathbb{B}\) (as in the definition 1 above). 
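As a concrete illustration of these notions (this worked example is ours, not part of the original text): the center of a group is defined by
\[
Z(G)=\phi(G),\qquad \phi(x)\;\equiv\;\forall y\,(x\cdot y=y\cdot x),
\]
and if a normal subgroup \(N\trianglelefteq G\) is defined by a formula \(\nu(x)\), then one natural code interpreting the quotient group \(G/N\) in \(G\) is
\[
U_{\Gamma}(x)\equiv(x=x),\qquad E_{\Gamma}(x_{1},x_{2})\equiv\nu(x_{1}x_{2}^{-1}),\qquad Q_{\cdot}(x_{1},x_{2},x_{3})\equiv\nu(x_{3}^{-1}x_{1}x_{2}),
\]
so that \(x_{1}\sim_{\Gamma}x_{2}\) exactly when \(x_{1}N=x_{2}N\), and the coset multiplication is defined by the formula \(Q_{\cdot}\).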
Namely, \(U_{\Gamma}\) defines in \(\mathbb{B}\) a subset \(A_{\Gamma}=U_{\Gamma}(B^{n})\subseteq B^{n}\), \(E_{\Gamma}\) defines in \(\mathbb{B}\) an equivalence relation \(\sim_{\Gamma}\) on \(A_{\Gamma}\), and the formulas \(Q_{\Gamma}\) define functions \(f_{\Gamma}\), predicates \(P_{\Gamma}\), and constants \(c_{\Gamma}\) that interpret the corresponding symbols from \(L(\mathbb{A})\) on the quotient set \(A_{\Gamma}/\sim_{\Gamma}\) in such a way that the \(L\)-structure \(\Gamma(\mathbb{B})=\langle A_{\Gamma}/\sim_{\Gamma};f_{\Gamma},\ldots,P_{ \Gamma},\ldots,c_{\Gamma},\ldots\rangle\) is isomorphic to \(\mathbb{A}\). Note, that we interpret a constant \(c\in L(\mathbb{A})\) in the structure \(\Gamma(\mathbb{B})\) by the \(\sim_{\Gamma}\)-equivalence class of some tuple \(\bar{b}_{c}\in A_{\Gamma}\) defined in \(\mathbb{B}\) by the formula \(Q_{c}\). We write \(\mathbb{A}\simeq\Gamma(\mathbb{B})\) if \(\Gamma\) interprets \(\mathbb{A}\) in \(\mathbb{B}\) as described above and refer to \(\Gamma\) as an _interpretation code_ or just _code_. The number \(n\) is called the dimension of \(\Gamma\), denoted \(n=dim\Gamma\). By \(\mu_{\Gamma}\) we denote a a surjective map \(A_{\Gamma}\to\mathbb{A}\) (here \(\mathbb{A}=\langle A;L(\mathbb{A})\rangle\)) that gives rise to an isomorphism \(\bar{\mu}_{\Gamma}:\Gamma(\mathbb{B})\to\mathbb{A}\). We refer to this map \(\mu_{\Gamma}\) as the _the coordinate map_ of the interpretation \(\Gamma\). Sometimes we cal the relation \(\sim_{\Gamma}\) the _kernel_ of the coordinate map \(\mu_{\Gamma}\) and denote it by \(\ker(\mu_{\Gamma})\). Finally, notation \(\mu:\mathbb{B}\rightsquigarrow\mathbb{A}\) means that \(\mathbb{A}\) is interpretable in \(\mathbb{B}\) with the coordinate map \(\mu\). We use this notation throughout the paper. More generally, the formulas that interpret \(\mathbb{A}\) in \(\mathbb{B}\) may contain elements from \(\mathbb{B}\) that are not in the language \(L(\mathbb{B})\), i.e., some parameters, say \(p_{1},\ldots,p_{k}\in B\). In this case we assume that all the formulas from the code \(\Gamma\) have a tuple of extra variables \(\bar{y}=(y_{1},\ldots,y_{k})\) for parameters in \(\mathbb{B}\): \[\Gamma=\{U_{\Gamma}(\bar{x},\bar{y}),E_{\Gamma}(\bar{x}_{1},\bar{x}_{2},\bar{y }),Q_{\Gamma}(\bar{x}_{1},\ldots,\bar{x}_{t_{Q}},\bar{y})\mid Q\in L(\mathbb{ A})\} \tag{1}\] so that after the assignment \(y_{1}\to p_{1},\ldots,y_{k}\to p_{k}\) the code interprets \(\mathbb{A}\) in \(\mathbb{B}\). In this event we write \(\mathbb{A}\simeq\Gamma(\mathbb{B},\bar{p})\) (here \(\bar{p}=(p_{1},\ldots,p_{k})\)), and say that \(\mathbb{A}\) is interpretable in \(\mathbb{B}\) by the code \(\Gamma\) with parameters \(\bar{p}\). In the case when \(\bar{p}=\emptyset\) one gets again the absolute interpretability. We will say that a subset \(D\subseteq A_{\Gamma}/\sim_{\Gamma}\) is definable in \(\mathbb{B}\) if its full preimage in \(A_{\Gamma}\) is definable in \(\mathbb{B}\). More generally, a subset \(D\subseteq(A_{\Gamma}/\sim_{\Gamma})^{m}\) is definable in \(\mathbb{B}\) if its full preimage in \(A_{\Gamma}^{m}\) under the natural projection \(A_{\Gamma}^{m}\to(A_{\Gamma}/\sim_{\Gamma})^{m}\) is definable in \(\mathbb{B}\). We say that a structure \(\mathbb{A}\) is interpreted in a given structure \(\mathbb{B}\)_uniformly_ with respect to a subset \(D\subseteq B^{k}\) if there is a code \(\Gamma\) such that \(\mathbb{A}\simeq\Gamma(\mathbb{B},\bar{p})\) for every tuple of parameters \(\bar{p}\in D\). 
If \(\mathbb{A}\) is interpreted in \(\mathbb{B}\) uniformly with respect to a \(0\)-definable subset \(D\subseteq B^{k}\) then we say that \(\mathbb{A}\) is _regularly interpretable_ in \(\mathbb{B}\) and write in this case \(\mathbb{A}\simeq\Gamma(\mathbb{B},\phi)\), provided \(D\) is defined by \(\phi\) in \(\mathbb{B}\). Note that the absolute interpretability is a particular case of the regular interpretability where the set \(D\) is empty. Now we discuss a very strong version of mutual interpretability of two structures, so-called _bi-interpretability_. **Definition 2**.: _Two algebraic structures \(\mathbb{A}\) and \(\mathbb{B}\) are called bi-interpretable (with parameters) in each other if the following conditions hold:_ 1. \(\mathbb{A}\) _and_ \(\mathbb{B}\) _are interpretable (with parameters) in each other, so_ \(\mathbb{A}\simeq\Gamma(\mathbb{B},p)\) _and_ \(\mathbb{B}\simeq\Delta(\mathbb{A},q)\) _for some codes_ \(\Gamma\) _and_ \(\Delta\) _and tuples of parameters_ \(p,q\)_. By transitivity_ \(\mathbb{A}\)_, as well as_ \(\mathbb{B}\)_, is interpretable (with parameters) in itself, so_ \(\mathbb{A}\simeq(\Gamma\circ\Delta)(\mathbb{A},p^{*})\) _and_ \(\mathbb{B}\simeq(\Delta\circ\Gamma)(\mathbb{B},q^{*})\)_, where_ \(\circ\) _denotes composition of interpretations and_ \(p^{*}\)_,_ \(q^{*}\) _the corresponding parameters._ 2. _There is a formula_ \(\theta_{\mathbb{A}}(\bar{u},x,\bar{s})\) _in the language_ \(L(\mathbb{A})\) _such that_ \(\theta_{\mathbb{A}}(\bar{u},x,p^{*})\) _defines in_ \(\mathbb{A}\) _the isomorphism_ \(\bar{\mu}_{\Gamma\circ\Delta}:(\Gamma\circ\Delta)(\mathbb{A},p^{*})\to\mathbb{A}\) _(more precisely, it defines the coordinate map_ \(\mu_{\Gamma\circ\Delta}:A_{\Gamma\circ\Delta}\to A\)_). Similarly, there is a formula_ \(\theta_{\mathbb{B}}(\bar{v},x,\bar{t})\) _in the language_ \(L(\mathbb{B})\) _such that_ \(\theta_{\mathbb{B}}(\bar{v},x,q^{*})\) _defines in_ \(\mathbb{B}\) _the isomorphism_ \(\bar{\mu}_{\Delta\circ\Gamma}:(\Delta\circ\Gamma)(\mathbb{B},q^{*})\) _(more precisely, it defines the coordinate map_ \(\mu_{\Delta\circ\Gamma}:B_{(\Delta\circ\Gamma)}\to B\)_)._ Algebraic structures \(\mathbb{A}\) and \(\mathbb{B}\) are called _0-bi-interpretable_ or _absolulutely hyperpretable_ in each other if in the definition above the tuples of parameters \(p\) and \(q\) are empty. Unfortunately, 0-bi-interpretability is rather rare. Indeed, the following result puts quite strong restrictions on such structures. **Lemma 1**.: _[_3_]_ _If \(\mathbb{A}\) and \(\mathbb{B}\) are 0-bi-interpretable in each other then their groups of automorphisms are isomorphic._ Fortunately, there is a notion of regular bi-interpretability, which is less restrictive, occurs more often, and which enjoys many properties of 0-bi-interpretability. **Definition 3**.: _Two algebraic structures \(\mathbb{A}\) and \(\mathbb{B}\) are called regularly bi-interpretable in each other if the following conditions hold:_ 1. \(\mathbb{A}\) _and_ \(\mathbb{B}\) _are regularly interpretable in each other, so_ \(\mathbb{A}\simeq\Gamma(\mathbb{B},\phi)\) _and_ \(\mathbb{B}\simeq\Delta(\mathbb{A},\psi)\) _for some codes_ \(\Gamma\) _and_ \(\Delta\) _and the corresponding formulas_ \(\phi,\psi\) _(without parameters). 
By transitivity_ \(\mathbb{A}\)_, as well as_ \(\mathbb{B}\)_, is regularly interpretable in itself, so_ \(\mathbb{A}\simeq(\Gamma\circ\Delta)(\mathbb{A},\phi^{*})\) _and_ \(\mathbb{B}\simeq(\Delta\circ\Gamma)(\mathbb{B},\psi^{*})\)_, where_ \(\circ\) _denotes composition of interpretations and_ \(\phi^{*},\psi^{*}\) _the corresponding formulas._ 2. _There is a formula_ \(\theta(\bar{y},x,\bar{z})\) _in the language of_ \(\mathbb{A}\) _such that for every tuple_ \(p^{*}\) _satisfying_ \(\phi^{*}(\bar{z})\) _in_ \(\mathbb{A}\) _the formula_ \(\theta(\bar{y},x,p^{*})\) _defines in_ \(\mathbb{A}\) _the isomorphism_ \(\bar{\mu}_{\Gamma\circ\Delta}:(\Gamma\circ\Delta)(\mathbb{A},p^{*})\to\mathbb{A}\) _and there is a formula_ \(\sigma(\bar{u},x,\bar{v})\) _in the language of_ \(\mathbb{B}\) _such that for every tuple_ \(q^{*}\) _satisfying_ \(\psi^{*}(\bar{v})\) _in_ \(\mathbb{B}\) _the formula_ \(\sigma(\bar{u},x,q^{*})\) _defines in_ \(\mathbb{B}\) _the isomorphism_ \(\bar{\mu}_{\Delta\circ\Gamma}:(\Delta\circ\Gamma)(\mathbb{B},q^{*})\to\mathbb{B}\)_._ **Definition 4**.: _Suppose structures \(\mathbb{A}\) and \(\mathbb{B}\) are regularly interpretable in each other, so \(\mathbb{A}\simeq\Gamma(\mathbb{B},\phi)\) and \(\mathbb{B}\simeq\Delta(\mathbb{A},\psi)\) for some codes \(\Gamma\) and \(\Delta\) and the corresponding formulas \(\phi,\psi\) without parameters. Suppose \(\mathbb{A}\simeq(\Gamma\circ\Delta)(\mathbb{A},\bar{s})\) for any tuple \(\bar{s}\) such that \(\mathbb{A}\models\phi^{*}(\bar{s})\). We say that \(\mathbb{A}\) is invertibly interpretable in \(\mathbb{B}\) if there is a formula \(\theta(\bar{y},x,\bar{s})\) in the language of \(\mathbb{A}\) such that for every tuple \(p^{*}\) satisfying \(\phi^{*}(\bar{s})\) in \(\mathbb{A}\) the formula \(\theta(\bar{y},x,p^{*})\) defines in \(\mathbb{A}\) an isomorphism \(\Gamma\circ\Delta(\mathbb{A},p^{*})\to\mathbb{A}\)._ **Theorem 1**.: _Let \(\mathbb{A}\) be invertibly interpretable in \(\mathbb{B}\), \(\mathbb{A}\simeq\Gamma(\mathbb{B})\). Then a structure \(\mathbb{A}^{\prime}\) is elementarily equivalent to \(\mathbb{A}\) if and only if there is a structure \(\mathbb{B}^{\prime}\) elementarily equivalent to \(\mathbb{B}\) such that \(\mathbb{A}^{\prime}\simeq\Gamma(\mathbb{B}^{\prime}).\)_ Proof.: The "if" part is [5, Lemma 4.4 (2)]. If \(\mathbb{A}\equiv\mathbb{A}^{\prime}\), then \(\mathbb{B}^{\prime}\simeq\Delta(\mathbb{A}^{\prime})\equiv\mathbb{B}\) by the "if" part. By the definition of invertible interpretability, \(\Gamma(\mathbb{B}^{\prime})\simeq\Gamma\circ\Delta(\mathbb{A}^{\prime})\). ## 2 Bi-interpretation of a free metabelian group with \(\mathbb{Z}\) We proved in [5] that a free metabelian group with \(n\) generators, \(n>1\), is prime, atomic, homogeneous and QFA. This was done by proving the existence of a bi-interpretation with \(\mathbb{Z}\). Here we will describe a bi-interpretation from [5] of a free metabelian group \(G\) of finite rank \(n\geq 2\) with \(\mathbb{Z}=(\mathbb{Z},+,\cdot,0,1)\). We will use it in the next section. Throughout this section we denote by \(G\) a free metabelian group of rank \(n\geq 2\) with basis \(X=\{x_{1},\ldots,x_{n}\}\). **Theorem 2**.: _Every free metabelian group of finite rank \(\geq 2\) is regularly bi-interpretable with \(\mathbb{Z}\)._ ### Preliminaries for metabelian groups In this section we introduce notation and describe some results that we need in the sequel.
Some notation: \(G^{\prime}=[G,G]\) is the commutant of \(G\), \(G_{m}\) is the \(m\)'th term of the lower central series of \(G\), \(\langle A\rangle\) is the subgroup generated by \(A\subseteq G\), \(C_{G}(A)\) is the centralizer of a subset \(A\subseteq G\), if \(x,y\in G\) then \([x,y]=x^{-1}y^{-1}xy\) is the commutator of \(x\) and \(y\), and \(x^{y}=y^{-1}xy\) is the conjugate of \(x\) by \(y\). The maximal root of an element \(x\in G\) is an element \(x_{0}\in G\) such that \(x_{0}\) is not a proper power in \(G\) and \(x\in\langle x_{0}\rangle\). We term an element \(x\in G\) a _root_ if it is not a proper power in \(G\). In [8] Mal'cev obtained a complete description of centralizers of elements in \(G\). Namely, the following holds. Let \(x\in G\). Then 1. if \(x\in G^{\prime}\) then \(C_{G}(x)=G^{\prime}\) 2. if \(x\not\in G^{\prime}\) then \(C_{G}(x)=\langle x_{0}\rangle\), where \(x_{0}\) is the unique maximal root of \(x\), i.e., \(x=x_{0}^{k}\) for some \(k\in\mathbb{N}\) and \(x_{0}\) is not a proper power. It follows, in particular, that the maximal roots of elements in \(G\) exist and they are unique. Let \(v\in G\smallsetminus G^{\prime}\). Define a map \(\lambda_{v}:G^{\prime}\to G^{\prime}\) such that \(\lambda_{v}(c)=[v,c]\) for \(c\in G^{\prime}\). Then the map \(\lambda_{v}\) is a homomorphism. Indeed, using the commutator identity \[[x,yz]=[x,z][x,y]^{z},\] which holds for all elements \(x,y,z\) of an arbitrary group, one has for \(c_{1},c_{2}\in G^{\prime}\) \[[v,c_{1}c_{2}]=[v,c_{2}][v,c_{1}]^{c_{2}}=[v,c_{2}][v,c_{1}]=[v,c_{1}][v,c_{2}], \tag{2}\] as claimed. Similarly, in the notation above, the map \(\mu_{v}:c\rightarrow[c,v]\) is a homomorphism \(\mu_{v}:G^{\prime}\to G^{\prime}\). Let \(v\in G\smallsetminus G^{\prime}\) and \(d\in G^{\prime}\). Then for any \(k\in\mathbb{Z}\) there exists \(c\in G^{\prime}\) such that: \[(vd)^{k}=v^{k}d^{k}[c,v].\] We prove first, by induction on \(k\), that \[d^{k}v=vd^{k}[c,v]\] for some \(c\in G^{\prime}\). Indeed, for \(k=1\) one has the standard equality \(dv=vd[d,v]\). Now \[d^{k+1}v=d^{k}dv=d^{k}vd[d,v]=vd^{k}[c_{1},v]d[d,v]=vd^{k+1}[c_{1},v][d,v]=vd^{k+1}[c_{1}d,v],\] the last equality comes from the property (2), that the map \(\mu_{v}\) is a homomorphism on \(G^{\prime}\). Now one can finish the claim by induction on \(k\) as follows (here the elements \(c_{i}\in G^{\prime}\) appear as the result of applying the induction step and the claim above): \[(vd)^{k+1}=(vd)^{k}vd=v^{k}d^{k}[c_{2},v]vd=v^{k}d^{k}v[c_{2},v][[c_{2},v],v]d=\] \[=v^{k}vd^{k}[c_{3},v][c_{2},v][[c_{2},v],v]d=v^{k+1}d^{k+1}[c_{3}c_{2}[c_{2},v],v]=v^{k+1}d^{k+1}[c,v]\] where the second to last equality comes again from the property (2), and \(c=c_{3}c_{2}[c_{2},v]\). This proves the claim. **Lemma 2**.: _[_5_, Lemma 4.23]_ _Let \(d\in G^{\prime}\). If for any \(v\in G\smallsetminus G^{\prime}\) there exists \(c\in G^{\prime}\) such that \(d=[c,v]\), then \(d=1\)._ The group \(G\) acts by conjugation on \(G^{\prime}\), which gives an action of the abelianization \(\bar{G}=G/G^{\prime}\) on \(G^{\prime}\). This action extends by linearity to an action of the group ring \(\mathbb{Z}\bar{G}\) on \(G^{\prime}\) and turns \(G^{\prime}\) into a \(\mathbb{Z}\bar{G}\)-module. Denote by \(a_{i}\) the image of \(x_{i}\) in \(\bar{G}\), \(i=1,\ldots,n\).
The group \(\bar{G}\) is a free abelian group with basis \(a_{1},\ldots,a_{n}\), so the group ring \(\mathbb{Z}\bar{G}\) can be viewed as the Laurent polynomial ring \(A=\mathbb{Z}[a_{1},a_{1}^{-1},\ldots,a_{n},a_{n}^{-1}]\). For the action of \(A\) on \(G^{\prime}\) we use the exponential notation, namely, for \(u\in G^{\prime}\) and \(a\in A\) we denote by \(u^{a}\) the result of the action of \(a\) on \(u\). Let \(Y=\{[x_{i},x_{j}]\mid 1\leq j<i\leq n\}\). Then the set of commutators \(Y\) generates \(G^{\prime}\) as an \(A\)-module. Note, that \(G\) (as a metabelian group) satisfies the Jacobi identity, i.e, for every \(u,v,w\in G\) the following equality holds \[[u,v,w][v,w,u][w,u,v]=1.\] In particular, for \(u=x_{i},v=x_{j},w=x_{k}\) one gets (in the module notation) \[[x_{i},x_{j}]^{a_{k}-1}[x_{j},x_{k}]^{a_{i}-1}[x_{k},x_{i}]^{a_{j}-1}=1.\] It shows also that commutators from \(Y\) satisfy the following identity \[[x_{i},x_{j}]^{a_{k}-1}[x_{j},x_{k}]^{a_{i}-1}=[x_{i},x_{k}]^{a_{j}-1},\] so \(Y\) is not a free generating set of the module \(G^{\prime}\). However, there are nice normal forms of elements of the module \(G^{\prime}\) described, for example, in [12], [5]. **Proposition 1**.: _Every element \(u\in G^{\prime}\) can be uniquely presented as the following product_ \[u=\Pi_{1\leq j<i\leq n}[x_{i},x_{j}]^{\beta_{ij}(a_{1},\ldots,a_{i})},\] _where \(\beta_{ij}(a_{1},\ldots,a_{i})\in\mathbb{Z}[a_{1},a_{1}^{-1},\ldots,a_{i},a_{i }^{-1}]\leq\mathbb{Z}\bar{G}\)._ The following statement follows from normal forms. **Proposition 2**.: _[_5_, Proposition 4.4]__\(G^{\prime}\) is a free module over \(\mathbb{Z}[a_{1},a_{1}^{-1},a_{2},a_{2}^{-1}]\) with the basis \(\{[x_{i},x_{j}]^{a_{3}^{\delta_{3}}\ldots a_{j}^{\delta_{j}}}\}\) for all \(1\leq i<j\leq n\), \(\delta_{3},\ldots,\delta_{j}\in\mathbb{Z}^{j-2}.\)_ Let \(F_{n}\) be the free group of rank \(n\) with the basis \(\{z_{1},\ldots,z_{n}\}\), let \(\pi:F_{n}\to\bar{g}\). A partial _Fox derivative_ associated with \(z_{i}\) is the linear map \(D_{i}:\mathbb{Z}(F_{n})\to\mathbb{Z}(F_{n})\) satisfying the conditions \[D_{i}(z_{i})=1,D_{i}(z_{j})=0,i\neq j\] and \[D_{i}(uv)=D_{i}(u)+uD_{i}(v)\] for all \(u,v\in F_{n}\). The main identity is \(D_{1}(w)(z_{1}-1)+\ldots+D_{n}(w)(z_{n}-1)=w-1.\)\(D_{i}\) induces a linear map \(d_{i}:\mathbb{Z}G\to\mathbb{Z}\bar{G}.\) We briefly explain the details. One can compute \[D_{i}([u^{-1},v^{-1}])=(1-uvu^{-1})D_{i}(u^{-1})+(u-uvu^{-1}v^{-1})D_{i}(v)\] for all \(u,v\in F_{n}.\) It follows that for \(\pi^{\prime}:\mathbb{Z}F_{n}\to\mathbb{Z}\bar{G}\), for all \(w\in{F_{n}}^{\prime\prime}\), \(w\in\ker\pi^{\prime}\). Hence \(D_{i}\) induces a linear map \(d_{i}:\mathbb{Z}G\to\mathbb{Z}\bar{G}\) (that we will also call Fox derivative). From the definition we have \[d_{i}(x_{i})=1,\ d_{i}(x_{j})=0,i\neq j,\] \[d_{i}(uv)=d_{i}(u)+(u\pi)d_{i}(v)\] for all \(u,v\in G\). The main identity is \(d_{1}(w)(a_{1}-1)+\ldots+d_{n}(w)(a_{n}-1)=w\pi-1,\) where \(w\) is an arbitrary element of \(G\). 
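As a quick worked check of these rules (a direct computation, added here for illustration), take the basic commutator \(w=[x_{1},x_{2}]=x_{1}^{-1}x_{2}^{-1}x_{1}x_{2}\). One computes \[d_{1}(w)=-a_{1}^{-1}+a_{1}^{-1}a_{2}^{-1}=a_{1}^{-1}(a_{2}^{-1}-1),\qquad d_{2}(w)=a_{1}^{-1}a_{2}^{-1}(a_{1}-1)=a_{2}^{-1}(1-a_{1}^{-1}),\] so that \[d_{1}(w)(a_{1}-1)+d_{2}(w)(a_{2}-1)=(1-a_{1}^{-1})(a_{2}^{-1}-1)+(1-a_{1}^{-1})(1-a_{2}^{-1})=0=w\pi-1,\] as the main identity requires, since \(w\in G^{\prime}\) maps to \(1\) in \(\bar{G}\).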
It can be verified that for \(w\in G^{\prime}\) and \(a\in\bar{G}\), \(d_{i}(w^{a})=a^{-1}d_{i}(w).\) Also note that for \(w\in G^{\prime}\) and \(u\in G\) we have \[d_{i}(wu)=d_{i}(w)+d_{i}(u).\] If \(a=\alpha_{1}a_{1}+\ldots+\alpha_{k}a_{k}\in\mathbb{Z}\bar{G}\) with \(\alpha_{i}\in\mathbb{Z},a_{i}\in\bar{G}\), we denote by \(a_{inv}=\alpha_{1}{a_{1}}^{-1}+\ldots+\alpha_{k}{a_{k}}^{-1}.\) Then \[d_{i}(w^{a})=a_{inv}d_{i}(w).\] For \(u\in G,\alpha\in\mathbb{Z}\), \[d_{i}(u^{\alpha})=\frac{u^{\alpha}-1}{u-1}d_{i}(u).\] ### \(\mathbb{Z}\) is absolutely interpretable in \(G\) The description in 2.1 of the centralizers of elements in \(G\) implies that the formula \[\phi(x)=\forall y\forall z([x,[y,z]]=1)\] defines the commutant \(G^{\prime}\) in \(G\). Indeed, \(\phi(g)\) holds in \(G\) for an element \(g\in G\) if and only if \(C_{G}(g)\geq G^{\prime}\), which happens only if \(g\in G^{\prime}\). The free nilpotent group \(G/G_{3}\) of class 2 and rank n is 0-interpretable in \(G\). Indeed, the verbal subgroup \(G_{3}\) has finite width in \(G\)[13], hence it is 0-definable in \(G\). Therefore the quotient group \(G/G_{3}\) is 0-interpretable in \(G\). It was shown in [9] that the ring \(\mathbb{Z}\) and its action by exponents on free abelian groups \(G/G^{\prime}\) and \(G^{\prime}/G_{3}\) are 0-interpretable in \(G/G_{3}\), hence, by transitivity of 0-interpretations, it is 0-interpretable in \(G\). We denote this interpretation of \(\mathbb{Z}\) in \(G\) by \(\mathbb{Z}^{*}\). Now, we may use in our formulas expressions of the type \(y=x^{m}modG^{\prime}\) for \(x,y\in G\smallsetminus G^{\prime}\) and \(m\in\mathbb{Z}^{*}\) viewing them as notation for the corresponding formulas of group theory language which are coming from the interpretations of \(\mathbb{Z}^{*}\) and its actions on \(G/G^{\prime}\). Similarly, for \(G^{\prime}/G_{3}\). More precisely, the interpretation \(\mathbb{Z}^{*}\) is given by a definable in \(G\) subset \(A\subseteq G^{k}\) together with a definable in \(G\) equivalence relation \(\sim\) on \(A\) and formulas \(\psi_{+}(\bar{x},\bar{y},\bar{z}),\psi_{\circ}(\bar{x},\bar{y},\bar{z})\) with \(k\)-tuples of variables \(\bar{x},\bar{y},\bar{z}\), such that the formulas \(\psi_{+}\) and \(\psi_{\circ}\) define binary operations on the factor set \(A/\sim\) (denoted by \(+\) and \(\circ\)) and the structure \(\langle A/\sim;+,\circ\rangle\) is a ring isomorphic to \(\mathbb{Z}\). Furthermore, the exponentiation by \(\mathbb{Z}^{*}\) on \(G/G^{\prime}\) and on \(G^{\prime}/G_{3}\) is also 0-interpretable, which means that there are formulas in the group language, say \(expnil_{1}(u,v,\bar{x})\) and \(expnil_{2}(u,v,\bar{x})\), such that for \(g,h\in G\) and \(m\in\mathbb{Z}^{*}\), where \(m\) is the equivalence class of some tuple \(\bar{a}\in A\) (we write in this case \(m=[\bar{a}]\)), one has \(g^{m}=h(mod\ G^{\prime})\) if and only if \(expnil_{1}(g,h,\bar{a})\) holds in \(G\) and also for elements \(p,q\in G^{\prime}\ p^{m}=q(mod\ G_{3})\) if and only if \(expnil_{2}(p,q,\bar{a})\) holds in \(G\). 
### Interpretation of \(\mathbb{Z}\)-exponentiation in \(G\) Now, in the notation above, we construct a formula \(exp(u,v,\bar{x})\) of the group language, where \(\bar{x}\) is a \(k\)-tuple of variables, such that for \(g,h\in G\) and \(m\in\mathbb{Z}^{*}\), where \(m\) is the equivalence class \([\bar{a}]\) of some \(\bar{a}\in A\), the following holds \[g=h^{m}\Longleftrightarrow G\models exp(g,h,\bar{a}),\ (\text{here}\ m=[\bar{a}]).\] To construct the formula \(exp(u,v,\bar{x})\) we consider two cases, for each of them build the corresponding formula \(exp_{i}(u,v,\bar{x})\), and then use these to build \(exp(u,v,\bar{x})\). Case 1. Let \(g\in G\smallsetminus G^{\prime}\). In Section 2.2 we described a formula \(expnil_{1}(u,v,\bar{x})\) of group language such that for \(g,h\in G\) and \(m=[\bar{a}]\in\mathbb{Z}^{*}\) one has \[g^{m}=h(mod\ G^{\prime})\Longleftrightarrow G\models\ expnil_{1}(g,h,\bar{a}).\] Now put \[exp_{1}(u,v,\bar{x})=([u,v]=1\wedge expnil_{1}(u,v,\bar{x})).\] Then the formula \(exp_{1}(u,v,\bar{x})\) holds in \(G\) on elements \(g,h\in G\) and \(m=[\bar{a}]\) if and only if \(h=g^{m}(mod\ G^{\prime})\) and \(h\in C_{G}(g)\). Since the centralizer \(C_{G}(g)\) is cyclic there is only one such \(h\) and in this case \(h=g^{m}\). Case 2. Let \(g\in G^{\prime}\). Then for any \(w\in G\smallsetminus G^{\prime}\) and every \(m\in\mathbb{Z}\) there exists \(c\in G^{\prime}\) such that the following equality holds \[(wg)^{m}=w^{m}g^{m}[c,w].\] Hence the elements \(g\), \(g^{m}\) and \(m=[\bar{a}]\in\mathbb{Z}^{*}\) satisfy the following formula \[\exp_{2}(u,v,\bar{x})=\forall w(w\in G\smallsetminus G^{\prime}\to\exists c(c\in G^{\prime}\wedge(wu)^{m}=w^{m}v[c,w])).\] Here, of course, we use the formula \(exp_{1}(u,v,\bar{x})\) to express the powers \((wu)^{m}\) and \(w^{m}\) occurring in the condition \((wu)^{m}=w^{m}v[c,w]\). We claim that for given \(g\in G^{\prime}\) and \(m=[\bar{a}]\in\mathbb{Z}^{*}\) the formula \(\exp_{2}(g,v,\bar{a})\) holds in \(G\) on only one element, namely \(g^{m}\). Indeed, let \(h\in G\) be such that for a given \(m\in\mathbb{Z}\) for any \(w\in G\smallsetminus G^{\prime}\) there exists \(c_{1}\in G^{\prime}\) such that \[(wg)^{m}=w^{m}h[c_{1},w].\] Then \(w^{m}g^{m}[c,w]=w^{m}h[c_{1},w]\), so \[h^{-1}g^{m}=[c,w][c_{1},w]^{-1}=[c,w][c_{1}^{-1},w]=[cc_{1}^{-1},w].\] Now by Lemma 2 one gets \(h^{-1}g^{m}=1\), so \(h=g^{m}\), as claimed. This shows that the formula \(\exp_{2}(u,v,\bar{x})\) defines the exponentiation on \(G^{\prime}\). Finally, the formula \[\exp(u,v,\bar{x})=(u\notin G^{\prime}\to exp_{1}(u,v,\bar{x}))\wedge(u\in G^{\prime}\to exp_{2}(u,v,\bar{x}))\] defines \(\mathbb{Z}\)-exponentiation on the whole group \(G\). ### Interpretation of \(\mathbb{Z}\bar{G}\)-module \(G^{\prime}\) in \(G\) In this section we interpret in \(G\) the action of the ring \(\mathbb{Z}\bar{G}\) on \(G^{\prime}\). We first show how to interpret the action of \(\mathbb{Z}[a_{1},\ldots,a_{n}]\) on \(G^{\prime}\) and then the action of the whole ring \(\mathbb{Z}[a_{1},a_{1}^{-1},\ldots,a_{n},a_{n}^{-1}]\) on \(G^{\prime}\). But first we need two preliminary results. For a tuple \(\bar{\alpha}=(\alpha_{1},\ldots,\alpha_{m})\in\mathbb{Z}^{m}\), \(m\leq n\), denote by \(\lambda_{\bar{\alpha}}\) the homomorphism \(\lambda_{\bar{\alpha}}:\mathbb{Z}[a_{1},\ldots,a_{n}]\to\mathbb{Z}[a_{m+1},\ldots,a_{n}]\) such that \(a_{i}\to\alpha_{i},i=1,\ldots,m\).
The kernel \(I_{\bar{\alpha}}\) of \(\lambda_{\bar{\alpha}}\) is the ideal generated in \(\mathbb{Z}[a_{1},\ldots,a_{n}]\) by \(\{a_{1}-\alpha_{1},\ldots,a_{m}-\alpha_{m}\}\). Notice, that for every polynomial \(P=P(a_{1},\ldots,a_{m})\in\mathbb{Z}[a_{1},\ldots,a_{n}]\) one has \(\lambda_{\bar{\alpha}}(P)=P(\alpha_{1},\ldots,\alpha_{n})\), so \[P(a_{1},\ldots,a_{m})=P(\alpha_{1},\ldots,\alpha_{m})+\Sigma_{i=1}^{m}(a_{i}- \alpha_{i})f_{i},\] for some \(f_{i}\in\mathbb{Z}[a_{1},\ldots,a_{n}]\). Let \(A\) and \(B\) be rings and \(\Lambda\) a set of homomorphisms from \(A\) into \(B\). Recall that \(A\) is discriminated into \(B\) by a set \(\Lambda\) if for any finite subset \(A_{0}\subseteq A\) there is a homomorphism \(\lambda\in\Lambda\) which is injective on \(A_{0}\). The following result is known, but we need the argument from the proof in the sequel. **Claim 1.** The ring \(\mathbb{Z}[a_{1},\ldots,a_{n}]\) is discriminated into \(\mathbb{Z}\) by the set of homomorphisms \(\lambda_{\bar{\alpha}},\bar{\alpha}\in\mathbb{Z}^{n}\). Proof.: Since \(\mathbb{Z}[a_{1},\ldots,a_{n}]\) is an integral domain it suffices to show \(\Lambda\) separates \(\mathbb{Z}[a_{1},\ldots,a_{n}]\) into \(\mathbb{Z}\), i.e., for every non-zero polynomial \(Q\in\mathbb{Z}[a_{1},\ldots,a_{n}]\) there exists \(\lambda\in\Lambda\) such that \(\lambda(Q)\neq 0\). Indeed, let \(A_{0}=\{P_{1},\ldots,P_{t}\}\) with \(P_{i}\neq P_{j}\) for \(1\leq j<i\leq t\). Put \(Q_{ij}=P_{i}-P_{j}\) and \(Q=\Pi_{1\leq j<i\leq t}Q_{ij}\). Then \(Q\neq 0\). If for some \(\lambda\in\Lambda\)\(\lambda(Q)\neq 0\) then \(\lambda\) is injective on \(A_{0}\). Now we prove by induction on \(n\) that \(\Lambda\) separates \(\mathbb{Z}[a_{1},\ldots,a_{n}]\) into \(Z\). If \(P\in\mathbb{Z}[a_{1}]\) then \(\lambda_{\alpha_{1}}\) for each sufficiently large \(\alpha_{1}\) separates \(P\) into \(\mathbb{Z}\). If \(P\in\mathbb{Z}[a_{1},\ldots,a_{n}]\) then for some \(m\in\mathbb{N}\) \[P=Q_{m}a_{n}^{m}+Q_{m-1}a_{n}^{m-1}+\ldots+Q_{1}a_{n}+Q_{0},\] where \(Q_{i}\in\mathbb{Z}[a_{1},\ldots,a_{n-1}]\) and \(Q_{m}\neq 0\). By induction there is \(\bar{\beta}=(\beta_{1},\ldots,\beta_{n-1})\in\mathbb{Z}^{n-1}\) such that the homomorphism \(\lambda_{\bar{\beta}}\) discriminates \(Q_{m}\) into \(\mathbb{Z}\). Then \[\lambda_{\bar{\beta}}(Q_{m})a_{n}^{m}+\lambda_{\bar{\beta}}(Q_{m-1})a_{n}^{m- 1}+\ldots+\lambda_{\bar{\beta}}(Q_{1})a_{n}+\lambda_{\bar{\beta}}(Q_{0})\] is a non-zero polynomial in \(\mathbb{Z}[a_{n}]\). Now one can separate this polynomial into \(\mathbb{Z}\) by sending \(a_{n}\) to a large enough integer \(\alpha_{n}\), as above. This proves the claim. Denote by \((G^{\prime})^{I_{\alpha}}\) the submodule of the module \(G^{\prime}\) obtained from \(G^{\prime}\) by the action of the ideal \(I_{\bar{\alpha}}\). \((G^{\prime})^{I_{\alpha}}\) is an abelian subgroup of \(G\) generated by the set \(\{g^{Q}\mid g\in G^{\prime},Q\in I_{\bar{\alpha}}\}\), hence by the set \(\{g^{a_{i}-\alpha_{i}}\mid g\in G^{\prime},i=1,\ldots,n\}\). **Claim 2.** For any basis \((x_{1},\ldots,x_{n})\) of \(G\) and any tuple \((\alpha_{1},\ldots,\alpha_{m})\in\mathbb{Z}^{m}\) the subgroup \((G^{\prime})^{I_{\bar{\alpha}}}\leq G^{\prime}\) is definable in \(G\) uniformly in \((x_{1},\ldots,x_{n})\) and \((\alpha_{1},\ldots,\alpha_{m})\). More precisely, let \(\mathbb{Z}^{*}\) be \(0\)-interpretation of \(\mathbb{Z}\) in \(G\) from Section 2.2. 
Then there is a formula \(\phi(y,y_{1},\ldots,y_{n},\bar{z}_{1},\ldots,\bar{z}_{m})\) of group theory such that for any basis \((x_{1},\ldots,x_{n})\) of \(G\) and any tuple \((\bar{k}_{1},\ldots,\bar{k}_{m})\in(\mathbb{Z}^{*})^{n}\) the formula \(\phi(y,x_{1},\ldots,x_{n},\bar{k}_{1},\ldots,\bar{k}_{m})\) defines in \(G\) the subgroup \((G^{\prime})^{I_{\bar{\alpha}}}\), where \(\alpha_{i}=\bar{k}_{i}\in\mathbb{Z}^{*},i=1,\ldots,m\). Indeed, let \((x_{1},\ldots,x_{n})\) be a basis of \(G\) and \((\alpha_{1},\ldots,\alpha_{m})\in\mathbb{Z}^{m}\). The abelian subgroup \((G^{\prime})^{I_{\alpha}}\) of \(G\) is generated by the set \(\{g^{a_{i}-\alpha_{i}}\mid g\in G^{\prime},i=1,\ldots,m\}\). It follows that every element \(u\in(G^{\prime})^{I_{\alpha}}\) can be presented as a product \[u=g_{1}^{a_{1}-\alpha_{1}}\ldots g_{m}^{a_{m}-\alpha_{m}},\] for some \(g_{1},\ldots,g_{m}\in G^{\prime}\), or, equivalently, in the form \[u=g_{1}^{x_{1}}g_{1}^{-\alpha_{1}}\ldots g_{m}^{x_{m}}g_{m}^{-\alpha_{m}} \tag{3}\] where \(g_{i}^{x_{i}}\) is a conjugation of \(g_{i}\) by \(x_{i}\), and \(g_{i}^{-\alpha_{i}}\) is the standard exponentiation of \(g_{i}\) by the integer \(-\alpha_{i}\), \(i=1,\ldots,n\). It was shown that there exists a formula \(\exp_{2}(u,v,\bar{z})\) such that for any \(g,h\in G^{\prime}\) and \(\alpha=\bar{m}\in\mathbb{Z}^{*}\) the formula \(\exp_{2}(g,h,\bar{m})\) holds in \(G\) if and only if \(g=h^{\alpha}\). Using formula \(\exp_{2}(u,v,\bar{z})\) and definability of the commutant \(G^{\prime}\) in \(G\) (see Section 2.2) one can write down the condition (3) by a group theory formula uniformly in \((x_{1},\ldots,x_{n})\) and \((\alpha_{1},\ldots,\alpha_{m})\), as claimed. **Lemma 3**.: _[_5_, Lemma 4.24]_ _Let \(g,h\in G^{\prime}\) and \(P\in\mathbb{Z}[a_{1},\ldots,a_{m}]\), \(m\leq n\). Then \(g^{P}=h\) if and only if the following condition holds:_ \[\forall\alpha_{1},\ldots\alpha_{m}\in\mathbb{Z}(g^{P(\alpha_{1},\ldots,\alpha _{m})}=h\ mod\ (G^{\prime})^{I_{\alpha}}). \tag{4}\] Every element \(Q\in\mathbb{Z}[a_{1},a_{1}^{-1},\ldots,a_{n},a_{n}^{-1}]\) can be written as \[Q=P(a_{1},...,a_{m})(a_{1}^{k_{1}}\ldots a_{m}^{k_{m}})^{-1}\] for some \(k_{1},\ldots,k_{m}\geq 0\). Therefore \(g^{Q}=h\) if and only if \(g^{P(a_{1},\ldots,a_{m})}=h^{(a_{1}^{k_{1}}\ldots a_{m}^{k_{m}})}.\) This gives the interpretation of the action of \(\mathbb{Z}[a_{1},a_{1}^{-1},\ldots,a_{n},a_{n}^{-1}]\) on \(G^{\prime}\). ### Proof of Theorem 2 In this section we will prove Theorem 2. Using the Magnus embedding \[x_{i}\rightarrow\begin{pmatrix}a_{i}&0\\ t_{i}&1\end{pmatrix},\] where \(a_{1},\ldots,a_{n}\) are the generators of the free abelian group and \(\{t_{1},\ldots,t_{n}\}\) the base of the free \(\mathbb{Z}\bar{G}\) module \(T\). Then \(G^{\prime}\) has the structure of the the submodule of \(T\) generated by \(t_{i}(x_{j}-1)-t_{j}(x_{i}-1),i\neq j.\) Denote this submodule by \(G^{\prime}_{\mathbb{Z}\bar{G}}\). The module \(G^{\prime}_{\mathbb{Z}\bar{G}}\) is a two sorted structure \((\mathbb{Z}\bar{G},G^{\prime},\delta)\) where \(\delta\) is the predicate describing the action of \(\mathbb{Z}\bar{G}\) on \(G^{\prime}\). We will show that \(G^{\prime}_{\mathbb{Z}\bar{G}}\) is bi-interpretable with \(\mathbb{Z}.\) By the result in [2] every infinite f.g. integral domain is bi-interpretable with \(\mathbb{Z}.\) Therefore \(\mathbb{Z}\bar{G}\) which is isomorphic to the ring of Laurent polynomials, is bi-interpretable with \(\mathbb{Z}\). 
A \(0\)-interpretation of the ring of Laurent polynomials \(\mathbb{Z}\bar{G}=\mathbb{Z}[a_{1},a_{1}^{-1},\ldots,a_{n},a_{n}^{-1}]\) in \(\mathbb{Z}\) is described, for example, in [4, Theorem 3]. The following result is established in [11]. **Proposition 3**.: _The subset \(\mathbb{Z}\) is regularly definable in \(\mathbb{Z}\bar{G}\) and there is a regular bi-interpretation of \(\mathbb{Z}\) and \(\mathbb{Z}\bar{G}\)._ Every f.g. free \(\mathbb{Z}\bar{G}\)-module is then regularly bi-interpretable with \(\mathbb{Z}\bar{G}\) (with some \(\mathbb{Z}\bar{G}\)-basis as constants), and, therefore, it is bi-interpretable with \(\mathbb{Z}\). Since \(\mathbb{Z}\) is bi-interpretable with \(T\) and with the submodule \(B\) generated by \([u_{1},u_{2}]\), we can define a map \(T\rightarrow_{\phi}\mathbb{Z}\rightarrow_{\psi}B\leq T\). The graph of the map \(\psi\circ\phi\) is definable in \(T\) (with parameters, regularly). The restriction of the map \(\phi\) to \(G^{\prime}_{\mathbb{Z}\bar{G}}\), \(\phi|_{G^{\prime}_{\mathbb{Z}\bar{G}}}\), is the interpretation of \(G^{\prime}_{\mathbb{Z}\bar{G}}\) in \(\mathbb{Z}\) and \(\psi\) is the interpretation of \(\mathbb{Z}\) in \(G^{\prime}_{\mathbb{Z}\bar{G}}\). The graph of \(\psi\circ\phi|_{G^{\prime}_{\mathbb{Z}\bar{G}}}\) is definable and therefore \(G^{\prime}_{\mathbb{Z}\bar{G}}\) is regularly bi-interpretable with \(\mathbb{Z}\). Denote by \(\alpha\) the bijection from \(G^{\prime}_{\mathbb{Z}\bar{G}}\) to \(\mathbb{Z}\). We have the interpretation of \(G\) on the set \(\mathbb{Z}^{n+1}\) and, therefore, in \(\mathbb{Z}\) defined by \[\tau(u_{1}^{m_{1}}\ldots u_{n}^{m_{n}}h)=(m_{1},\ldots,m_{n},\alpha(h)).\] We have already shown that the exponentiation and the module structure of \(G^{\prime}_{\mathbb{Z}\bar{G}}\) are interpretable in \(G\). Now, given an \(n+1\) tuple \((m_{1},...,m_{n},k)\) of integers we reconstruct the element of the group as \(u_{1}^{m_{1}}\ldots u_{n}^{m_{n}}\alpha^{-1}(k)\). So we have a sequence of regular bi-interpretations: \[G\longleftrightarrow(\mathbb{Z}\bar{G},G^{\prime}_{\mathbb{Z}\bar{G}})\longleftrightarrow\mathbb{Z}^{n+1}\longleftrightarrow\mathbb{Z}.\] This proves the theorem. ## 3 Groups elementarily equivalent to a free metabelian group In this section we will describe groups elementarily equivalent to the free metabelian group \(G\) with \(n\) generators (denote them by \(x_{1},\ldots,x_{n}\)). Since \(G\) is regularly bi-interpretable with \(\mathbb{Z}\), we can use Theorem 1 with \(G=\mathbb{A}\) and \(\mathbb{Z}=\mathbb{B}.\) Then \(G=\Gamma(\mathbb{Z}),\mathbb{Z}=\Delta(G,\bar{g}).\) If \(H\equiv G\), then the same formulas give the interpretation \(H=\Gamma(\tilde{\mathbb{Z}})\) for some \(\tilde{\mathbb{Z}}\equiv\mathbb{Z}\). We call a structure \(\bar{\mathbb{N}}\) elementarily equivalent to \(\mathbb{N}\) a model of arithmetic. Notice that \(\mathbb{N}\) and \(\mathbb{Z}\) are absolutely bi-interpretable. A ring \(\tilde{\mathbb{Z}}\) (resp. \(\bar{\mathbb{N}}\)) is called a non-standard model if it is not isomorphic to \(\mathbb{Z}\) (resp. \(\mathbb{N}\)), see [7] and [11]. First, we notice that \(H\) is a so-called exponential group with exponents in \(\tilde{\mathbb{Z}}\). Let us recall the four axioms of exponential groups from [10]. Let \(A\) be an arbitrary associative ring with identity and \(\Gamma\) a group. Fix an action of the ring \(A\) on \(\Gamma\), i.e. a map \(\Gamma\times A\rightarrow\Gamma\). The result of the action of \(\alpha\in A\) on \(g\in\Gamma\) is written as \(g^{\alpha}\).
Consider the following axioms: 1. \(g^{1}=g\), \(g^{0}=1\), \(1^{\alpha}=1\) ; 2. \(g^{\alpha+\beta}=g^{\alpha}\cdot g^{\beta}\), \(g^{\alpha\beta}=(g^{\alpha})^{\beta}\); 3. \((h^{-1}gh)^{\alpha}=h^{-1}g^{\alpha}h\); 4. \([g,h]=1\Longrightarrow(gh)^{\alpha}=g^{\alpha}h^{\alpha}\). **Definition 5**.: _Groups with \(A\)-actions satisfying axioms 1)-4) are called \(A\)-exponential groups._ These axioms can be written by first-order formulas in \(G\) and \(H\). This implies the following lemma. **Lemma 4**.: \(H\) _is \(\tilde{Z}\)-exponential group._ Our main goal now is to describe the structure of \(H\). We know that \(G\) can be represented as a pair \(\mathbb{Z}\bar{G}\) and a module \(G^{\prime}_{\mathbb{Z}\bar{G}}\) with the action of \(\mathbb{Z}\bar{G}\) on \(G^{\prime}_{\mathbb{Z}\bar{G}}\) interpretable in \(G\) by Section 2.4. We have \(G\to_{\Gamma}\mathbb{Z}\to_{\Delta}G\), where the interpretation \(\Gamma(\mathbb{Z})\) is via normal forms, therefore \(H\to_{\Gamma}\tilde{\mathbb{Z}}\to_{\Delta}H.\) The element \(g=x_{1}^{\gamma_{1}}\ldots x_{n}^{\gamma_{n}}u\in G\), where \[u=\Pi_{1\leq j<i\leq n}[x_{i},x_{j}]^{\beta_{ij}(a_{1},\ldots,a_{i})},\] where \(\beta_{ij}(a_{1},\ldots,a_{i})\in\mathbb{Z}[a_{1},a_{1}^{-1},\ldots,a_{i},a_{ i}^{-1}]\leq\mathbb{Z}\bar{G},\) is interpreted as a tuple of elements in \(\mathbb{Z}\), \(g\to(\gamma_{i},\ldots,\gamma_{n},\bar{\beta}_{11},\ldots,\bar{\beta}_{n-1,n})\), where \(\beta_{ij}\) are tuples. For \(H\) instead of \(\mathbb{Z}\bar{G}\) we will have a non standard Laurent polynomial ring \(\tilde{\mathbb{Z}}\bar{G}_{NS}\) described in [11]. This is a ring elementarily equivalent to \(\mathbb{Z}\bar{G}\). More precisely, regular bi-interpretation of \(G\) with \(\mathbb{Z}\) induces the regular bi-interpretation of \(\mathbb{Z}\bar{G}\) with \(\mathbb{Z}\), \(\mathbb{Z}\bar{G}=\Gamma_{1}(\mathbb{Z})\). Then \(\tilde{\mathbb{Z}}\bar{G}_{NS}=\Gamma_{1}(\tilde{\mathbb{Z}}).\) The same formula as in the standard case says that for \(h\in H\), \[h=x_{1}^{\tilde{\gamma}_{1}}\ldots x_{n}^{\tilde{\gamma}_{n}}u,\quad u=\Pi_{1 \leq j<i\leq n}[x_{i},x_{j}]^{\beta_{ij}(a_{1},\ldots,a_{i})}, \tag{5}\] where \(\tilde{\gamma}_{i}\in\tilde{\mathbb{Z}}\), \(\beta_{ij}(a_{1},\ldots,a_{i})\in\tilde{\mathbb{Z}}\bar{G}_{NS}\). It is interpreted as a non-standard tuple of tuples of elements in \(\tilde{\mathbb{Z}}\), \[g\to(\tilde{\gamma}_{i},\ldots,\tilde{\gamma}_{n},\bar{\beta}_{11},\ldots,\bar {\beta}_{n-1,n}).\] There is a formula connecting \(P[a_{1},\ldots,a_{n}]\in\tilde{\mathbb{Z}}\bar{G}_{NS}\) and its evaluation \(P[\alpha_{1},\ldots,\alpha_{n}]\), where \(\alpha_{1},\ldots,\alpha_{n}\in\tilde{\mathbb{Z}}\). Lemma 3 gives the interpretation of the action of the standard Laurent polynomial ring \(\mathbb{Z}\bar{G}\) on \(G^{\prime}\) and, therefore, the interpretation of the action of the non-standard Laurent polynomial ring \(\tilde{\mathbb{Z}}\bar{G}_{NS}\) on \(H^{\prime}\), where \(H^{\prime}\) is the \(\tilde{\mathbb{Z}}\bar{G}_{NS}\) module generated by \(\{[x_{i},x_{j}]\}\). Denote by \(a\) the image of \(x\in H\) in \(\tilde{\mathbb{Z}}\bar{G}_{NS}\). The following three lemmas show how an arbitrary element in \(H\) can be written in the normal form. 
**Lemma 5**.: _For any \(z\in H\) and \(\delta\in\tilde{\mathbb{Z}}\),_ \[[z,x^{\delta}]=[z,x]^{(a^{\delta}-1)/(a-1)},\] _with \((a^{\delta}-1)/(a-1)\in\tilde{\mathbb{Z}}\bar{G}_{NS}\)._ Proof.: There is an identity for integer \(\delta\): \([z,x^{\delta},x]=[z,x,x^{\delta}]\) that implies \[[z,x^{\delta}]=[z,x]^{(a^{\delta}-1)/(a-1)}.\] Here \((a^{\delta}-1)/(a-1)\in\mathbb{Z}\bar{G}=\mathbb{Z}[a_{1},a_{1}^{-1},\ldots,a_{n},a_{n}^{-1}]\) in the standard case and, therefore, \((a^{\delta}-1)/(a-1)\) is in \(\tilde{\mathbb{Z}}\bar{G}_{NS}\) in the non-standard case. Notice that one can rewrite \[[x_{i},x_{j}]^{a_{k}^{\delta}-1},\] where \(j<i<k\), using the Jacobi identity as \[[x_{i},x_{k}^{\delta}]^{-(a_{j}-1)}[x_{j},x_{k}^{\delta}]^{a_{i}-1}.\] **Lemma 6**.: _If \(x,y\in H\smallsetminus H^{\prime}\) and \(b\) is the image of \(y\) in \(\tilde{\mathbb{Z}}\bar{G}_{NS}\), then we have for the \(\delta\)-commutator_ \[y^{-\delta}x^{-\delta}(xy)^{\delta}=[x,y]^{-f(a,b)},\] _where \(f(a,b)\) is a non-standard polynomial such that_ \[(a-1)f(a,b)=\] \[(a^{\delta}b^{\delta}-1)/(ab-1)+(1-b^{\delta})/(b-1)=b^{\delta-1}(a^{\delta-1}-1)+b^{\delta-2}(a^{\delta-2}-1)+\ldots+b(a-1).\] Proof.: We can write that \(y^{-\delta}x^{-\delta}(xy)^{\delta}\) is in the module generated by the commutator \([x,y]\), and we wish to see what the non-standard polynomial \(f(a,b)\) is. We have \([x,y^{-\delta}x^{-\delta}(xy)^{\delta}]=[x,y]^{f(a,b)(a-1)}.\) At the same time \[[x,y^{-\delta}x^{-\delta}(xy)^{\delta}]=[x,(xy)^{\delta}][x,y^{-\delta}x^{-\delta}]^{(ab)^{\delta}}=[x,xy]^{((ab)^{\delta}-1)/(ab-1)}[x,y^{-\delta}]^{a^{-\delta}(ab)^{\delta}}=\] \[[x,y]^{(a^{\delta}b^{\delta}-1)/(ab-1)+(1-b^{\delta})/(b-1)}.\] The polynomial \[(a^{\delta}b^{\delta}-1)/(ab-1)+(1-b^{\delta})/(b-1)=b^{\delta-1}(a^{\delta-1}-1)+b^{\delta-2}(a^{\delta-2}-1)+\ldots+b(a-1)\] is divisible by \((a-1)\) in the standard ring of Laurent polynomials, therefore it is divisible by \((a-1)\) in the non-standard ring. Since the rings are integral domains, \(f(a,b)\) is the result of this division. The following lemma is immediate. **Lemma 7**.: \(x_{1}^{\gamma_{1}}\ldots x_{n}^{\gamma_{n}}x_{1}^{\delta_{1}}\ldots x_{n}^{\delta_{n}}=x_{1}^{\gamma_{1}+\delta_{1}}\ldots x_{n}^{\gamma_{n}+\delta_{n}}\overline{\Pi},\) _where \(\overline{\Pi}\in H^{\prime}\)._ We summarise this now. **Theorem 3**.: _Let \(H\) be a group elementarily equivalent to \(G\). Then it has the following structure._ 1. _Elements_ \(h\in H\) _have the normal form_ \[h=x_{1}^{\tilde{\gamma}_{1}}\ldots x_{n}^{\tilde{\gamma}_{n}}u,\quad u=\Pi_{1\leq j<i\leq n}[x_{i},x_{j}]^{\beta_{ij}(a_{1},\ldots,a_{i})},\] _where_ \(\tilde{\gamma}_{i}\in\tilde{\mathbb{Z}}\)_,_ \(\beta_{ij}(a_{1},\ldots,a_{i})\in\tilde{\mathbb{Z}}\bar{G}_{NS}\)_._ 2. \(H^{\prime}\) _is a module over_ \(\tilde{\mathbb{Z}}\bar{G}_{NS}\) _with generators_ \(\{[x_{i},x_{j}]\}\)_._ 3. _Multiplication in_ \(H\) _is defined as_ \[x_{1}^{\gamma_{1}}\ldots x_{n}^{\gamma_{n}}\Pi_{1\leq j<i\leq n}[x_{i},x_{j}]^{\beta_{ij}(a_{1},\ldots,a_{i})}x_{1}^{\delta_{1}}\ldots x_{n}^{\delta_{n}}\Pi_{1\leq j<i\leq n}[x_{i},x_{j}]^{\nu_{ij}(a_{1},\ldots,a_{i})}=\] \[x_{1}^{\gamma_{1}+\delta_{1}}\ldots x_{n}^{\gamma_{n}+\delta_{n}}\overline{\Pi}\ \Pi_{1\leq j<i\leq n}[x_{i},x_{j}]^{a_{1}^{\delta_{1}}\ldots a_{n}^{\delta_{n}}\beta_{ij}(a_{1},\ldots,a_{i})+\nu_{ij}(a_{1},\ldots,a_{i})},\] _where_ \(\bar{\Pi}\) _is defined in Lemma_ 7_._
## 4 \(A\)-metabelian groups The question about varieties of exponential groups was discussed in [1]. Let \(\Gamma\) be an arbitrary exponential group with exponents in \(A\). We set \[(\Gamma,\Gamma)_{A}=\langle(g,h)_{\alpha}=h^{-\alpha}g^{-\alpha}(gh)^{\alpha},g,h\in\Gamma,\alpha\in A\rangle_{A}.\] The \(A\)-subgroup \((\Gamma,\Gamma)_{A}\) is called the \(A\)-commutant of the group \(\Gamma\).
By [1, Theorem 3.4], a free abelian \(A\)-group with base \(X\) is a free \(A\)-module and is \(A\)-isomorphic to the factor-group of the free \(A\)-group with base \(X\) by its \(A\)-commutant. The \(A\)-commutant is called the first \(A\) commutant and denoted by \(\Gamma^{(1,A)}\). The \(A\)-commutant of \(\Gamma^{(1,A)}\) is the second \(A\) commutant \(\Gamma^{(2,A)}.\) Then \(\Gamma\) is called in [1]\(n\)-step \(A\)-solvable group if \(\Gamma^{(n,A)}=1.\) Clearly \(n\)-step \(A\)-solvable group is \(n\)-solvable \(A\)-group. If \(\Gamma^{(2,A)}=1\) we call \(\Gamma\) an \(A\)-metabelian group. Notice, that \(H\) that is elementarily equivalent to a free metabelian group is \(\tilde{\mathbb{Z}}\)-metabelian because \(\alpha\)-commutators belong to \(H^{\prime}\) and commute in \(H\). A discretely ordered ring is an ordered ring in which there is no element between \(0\) and \(1\). Let \(A\) be a discretely ordered ring and \(K\) be a multiplicative \(A\)-module with generators \(a_{1},\ldots,a_{n}\). Consider a group algebra \(A(K)\). Let \(R\) be the \(A\) algebra generated by \(A(K)\) and for all positive \(\delta\in A\), by series \((a^{\delta}-1)/(a-1)=\Sigma_{0\leq\alpha<\delta}a^{\alpha}\), and \(\Sigma_{0\leq\alpha<\delta}b^{\alpha}\frac{a^{\alpha}-1}{a-1}\), where \(a,b\in K\). We define an \(A\)-metabelian exponential group \(M\) with generators \(x_{1},\ldots,x_{n}\) by the following axioms: 1) \(M\) is an \(A\)-exponential group. Let \(M^{\prime}\) consist of elements that commute with any \(\alpha\)-commutator. 2) For any \(\delta\in A\), \(y,x\in M\), \(y^{-\delta}x^{-\delta}(xy)^{\delta}\) is in \(M^{\prime}\). Moreover, \(M^{(2,A)}=1\) (therefore \(M\) is \(A\)-metabelian in the Amaglobeli's definition above). 3) For any \([z_{1},z_{2}]\) and any \(\sigma\in R\) there is \([z_{1},z_{2}]^{\sigma}\). 4) For any \(z,x\in M\) and \(\delta\in A\), \([z,x^{\delta}]=[z,x]^{(a^{\delta}-1)/(a-1)}\). and \(y^{-\delta}x^{-\delta}(xy)^{\delta}=[x,y]^{f(a,b)}\), where \[f(a,b)=[(a^{\delta}b^{\delta}-1)/(ab-1)+(1-b^{\delta})/(b-1)]/(1-a)\in R.\] Therefore \(A\)-commutant is an \(R\)-module. Let now \(M\) be a free group with generators \(x_{1},\ldots,x_{n}\) in the category of \(A\)-metabelian exponential groups. **Lemma 8**.: \(M^{\prime}\) _is an \(R\)-module generated by elements \([x_{i},x_{j}]\). If \(u\) belongs to \(M^{\prime}\), then it can be uniquely written as_ \[u=\Pi_{1\leq j<i\leq n}[x_{i},x_{j}]^{\beta_{ij}(a_{1},\ldots,a_{i})},\] _where \(\beta_{ij}(a_{1},\ldots,a_{i})\in R\)._ Proof.: Consider a \(\delta\)-commutator \(y^{-\delta}x^{-\delta}(xy)^{\delta}\). We have, using identities 1) and \[[x,y^{-\delta}x^{-\delta}(xy)^{\delta}]=[x,(xy)^{\delta}][x,y^{-\delta}x^{- \delta}]^{(ab)^{\delta}}=[x,xy]^{((ab)^{\delta}-1)/(ab-1))}[x,y^{-\delta}]^{a ^{-\delta}(ab)^{\delta}}=\] \[[x,y]^{(a^{\delta}b^{\delta}-1)/(ab-1)+(1-b^{\delta})/(b-1)}.\] Every commutator can be represented as \([x_{i}^{\beta},(x_{i_{1}}\dots x_{i_{t}})^{\alpha},x_{j_{1}}^{\alpha_{1}},\dots,x _{j_{k}}^{\alpha_{k}}]\). We can assume that \(i\geq j_{1}\geq j_{2}\dots\geq j_{k}\) otherwise use the Jacobi identity. If \(i\) is greater than or equal to all \(i_{1},\dots,i_{t}\), then the representation from Lemma 5 gives elements \([x_{i},x_{i_{1}}\dots x_{i_{t}}]^{f(a_{i_{1}},\dots,a_{i_{t}})}\). Bringing the commutator \([x_{i},x_{i_{1}}\dots x_{i_{t}}]\) to normal form and acting by \(f(a_{i_{1}},\dots,a_{i_{t}})\) gives elements in normal form. 
Suppose now some of \(i_{1},\dots,i_{t}\) is greater than \(i\). Consider a general example \[[x_{1},(x_{2}x_{3})^{\delta}]=[x_{1},(x_{2}x_{3})]^{((a_{2}a_{3})^{\delta}-1)/ (a_{2}a_{3}-1)}=[x_{1},x_{2}x_{3}]^{\delta}\Pi_{0\leq\alpha<\delta}[x_{1},x_{2 }x_{3},(x_{2}x_{3})^{\alpha}]=\] \[[x_{1},x_{2}x_{3}]^{\delta}\Pi_{0\leq\alpha<\delta}[x_{1},x_{2}x_{3},x_{3}^{ \alpha}]^{x_{2}^{\alpha}}\Pi_{0\leq\alpha<\delta}[x_{1},x_{2}x_{3},x_{2}^{ \alpha}]=\] \[[x_{1},x_{2}x_{3}]^{\delta}[x_{1},x_{2}x_{3},x_{3}]^{\Sigma_{0\leq\alpha< \delta}a_{2}^{\alpha}(a_{3}^{\alpha}-1)/(a_{3}-1)}[x_{1},x_{2}x_{3},x_{2}]^{ \Sigma_{0\leq\alpha<\delta}(a_{2}^{\alpha}-1)/(a_{2}-1)}.\] Using the Jacobi identity we rewrite \([x_{1},x_{2}x_{3},x_{3}]=[x_{3},x_{2}x_{3}]^{a_{1}-1}[x_{3},x_{1}]^{-a_{2}a_{3}}\). This finally gives a representation of the commutator \([x_{1},(x_{2}x_{3})^{\delta}]\) in the normal form. A general case can be treated similarly. To prove uniqueness of normal forms we need the analogue of Fox derivatives from \(A(M)\) to \(R\). We define a partial Fox derivative as a linear mapping \(d_{i}:A(M)\to R\) satisfying the properties of \(d_{i}\) from Section 2.1 and \[d_{i}(g^{\delta})=\frac{g^{\delta}-1}{g-1}d_{i}(g)=d_{i}(g)\Sigma_{0\leq\alpha <\delta}g^{\alpha}. \tag{6}\] A consequence of these is an equality: \[Dg^{-\delta}=-g^{-\delta}Dg^{\delta}.\] One can also compute for \(f(a,b)\in R\) \[d_{i}([x,y]^{f(a,b)})=f(a,b)_{inv}d_{i}([x,y]). \tag{7}\] The uniqueness of the normal form can be proved by using Fox derivatives and the homomorphisms \(\varepsilon_{I}\), where \(I\subseteq\{1,\dots,n\}\) and \(x_{i}\varepsilon_{j}=x_{i}\) if \(i\in I\) and \(x_{i}\varepsilon_{j}=1\) if \(i\not\in I\) (as it is done in [12] for normal forms in a free metabelian group). For example, we have \(u\varepsilon_{\{1,2\}}=[x_{2},x_{1}]^{\beta_{12}(a_{1},a_{2})}\), hence \(\beta_{12}(a_{2},a_{1})\) is defined uniquely. Multiply \(u\) by \([x_{2},x_{1}]^{-\beta_{21}(a_{1},a_{2})}\) to get \(u^{\prime}\). Then \(u^{\prime}\varepsilon_{\{1,2,3\}}=[x_{3},x_{1}]^{\beta_{31}(a_{1},a_{2},a_{3}) }[x_{3},x_{2}]^{\beta_{32}(a_{1},a_{2},a_{3})}\). Then \(d_{1}(u^{\prime}\varepsilon_{\{1,2,3\}})=\beta_{31(inv)}(a_{1},a_{2},a_{3})(a_ {3}-1)a_{1}^{-1}a_{3}^{-1}\), \(d_{2}(u^{\prime}\varepsilon_{\{1,2,3\}})=\beta_{32(inv)}(a_{1},a_{2},a_{3})(a_ {3}-1)a_{2}^{-1}a_{3}^{-1}\). This allows to compute uniquely \(\beta_{31}(a_{1},a_{2},a_{3})\) and \(\beta_{32}(a_{1},a_{2},a_{3})\), respectively, and so on. **Theorem 4**.: _If \(H\equiv G\), where \(G\) is a free metabelian group, then \(H\) contains a free \(\tilde{\mathbb{Z}}\)-metabelian exponential group as a subgroup._ Proof.: We know that \(H\) is \(\tilde{\mathbb{Z}}\)-exponential group for some \(\tilde{\mathbb{Z}}\equiv\mathbb{Z}\).. And \(H^{\prime}\) is a \(\tilde{\mathbb{Z}}\tilde{G}_{NS}\)-module. Let \(M\) be a free \(\tilde{\mathbb{Z}}\)-metabelian exponential group defined above. Then normal forms of elements in \(M\) are exactly normal forms (5) of elements in \(H\), therefore \(M\) is a subgroup of \(H\)
2309.16865
A Bayesian study of quark models in view of recent astrophysical constraints
In this work, we perform a comparative analysis between the density-dependent quark model and the vector MIT bag model using Bayesian analysis. We use the equations of state generated by these two models to describe quark stars. We impose four recent observational astrophysical constraints on both models to determine their model-dependent parameters in an optimized manner assuming that the compact objects observed are composed entirely of self-bound quarks. The restrictions are aimed at producing stars with maximum masses $2 - 2.35$ M$_\odot$ and a mass-radii diagram compatible with the observed pulsars: PSR J0740+6620, PSR J0952-0607, PSR J0030+0451 and the compact object XMMU J173203.3-344518. With this analysis, the parameter dependence of the nuclear equation of state (EoS) of both models is restricted.
Franciele M. da Silva, Adamu Issifu, Luiz L. Lopes, Luis C. N. Santos, Débora P. Menezes
2023-09-28T21:34:21Z
http://arxiv.org/abs/2309.16865v4
# A Bayesian study of quark models in view of recent astrophysical constraints ###### Abstract In this work, we perform a comparative analysis between the density-dependent quark model and the vector MIT bag model using Bayesian analysis. We impose four recent observational astrophysical constraints on both models to determine their model-dependent parameters in an optimized manner. The restrictions are aimed at producing stars with maximum masses \(2-2.35\) M\({}_{\odot}\) and a mass-radii diagram compatible with the observed pulsars: PSR J0740+6620, PSR J0952-0607, PSR J0030+0451 and the compact object XMMU J173203.3-344518. With this analysis, the parameter dependence of the nuclear equation of state (EoS) of both models is restricted. ## I Introduction The study of the dense deconfined quark matter phase in nuclear astrophysics is an open problem. There are several theoretical models developed to explore this matter and related phenomena [1; 2; 3]. One of the phenomena that is widely being considered these days is the probing of the inner core of the neutron star (NS). The recently operating Neutron Star Interior Composition Explorer (NICER) [4; 5; 6] and the gravitational wave laser interferometer (Advanced LIGO, Virgo, and KAGRA) [7; 8; 9; 10; 11] have started providing results that constrain some known properties of the NSs including their deformability, maximum masses and radii. The NSs are remnants of gravitationally collapsed supernovae that lead to the formation of the densest and smallest observed compact objects in the universe. Their densities are several times higher than the density of ordinary nuclear matter [12; 13]. The hadronic matter forming the star can undergo a phase transition to quark matter in the core of the star where matter is expected to be highly dense creating a hybrid star [14; 15; 16; 17]. Hypothetically, NSs could also be formed through self-bound deconfined quarks making up the entire star, which is effectively a quark star, also known as strange star [18; 19; 20; 21]. Indeed, the physical consequences of deconfined quark matter in NS/protoneutron stars and core-collapse supernovas have a long-standing history in observational astrophysics [22; 23; 24; 25; 26]. The EoS mainly consists of the pressure as a function of the energy density and carries information on the inner dynamics and composition of the NS. Through the EoS we can determine the macroscopic nature of the star via the Tolman-Oppenheimer-Volkoff (TOV) equations [27]. The EoS also serves as the basis for several astrophysical simulations of compact objects so, several investigations are done to improve its accuracy [28; 29]. Additionally, experimental nuclear physics provides valuable data required to benchmark theories of dense matter EoS in NS [30]. The asymptotic freedom behavior of the quantum chromodynamics (QCD) theory that characterizes the phase transition of strongly interacting matter from hadron phase to deconfined quark matter phase is an important subject in the study of the QCD phase diagram. The deconfined quark matter phenomena are expected to also occur at the higher baryon density region of NS matter [31; 32]. Proper understanding of the EoSs for high-density, cold quark matter plays a significant role in constraining the characteristics of strongly interacting matter believed to exist in the core of NSs. The EoS at high density for cold quark matter is considered a robust constraint [33; 34] when constructing the NS EoS at low densities [35; 36; 37; 38] as well. 
Aside from its phenomenological applications, the high-density cold quark matter EoS has enormous theoretical applications due to its associated rich physics, including dynamical screening of long-wavelength chromoelectric and chromomagnetic fields. Such screening behaviors are also expected to occur in a high-temperature quark-gluon plasma regime [39; 40]. Ever since Bodmer and Witten hypothesized that stable quark matter should necessarily contain strange quark matter (SQM) [19; 20], it has been assumed that quark stars consist of SQM [18]. The Bodmer-Witten conjecture states that the energy per baryon of SQM at zero pressure must be less than the one observed in the infinite baryonic matter: \[\varepsilon(p=0)_{SQM}=E/A\leq 930\ \text{MeV}. \tag{1}\] At the same time, the non-strange quark matter (NSQM) still needs to have an energy per baryon higher than the nonstrange infinite baryonic matter; otherwise, protons and neutrons would decay into \(u\) and \(d\) quarks: \[\varepsilon(p=0)_{NSQM}=E/A>930\ \text{MeV}. \tag{2}\] Therefore, Equations (1) and (2) must be simultaneously satisfied. In this work, we perform a comparative analysis of the density-dependent quark model (DDQM) [41; 42; 43; 44; 45] and the vector MIT bag model [46; 47; 48; 3; 48] using Bayesian analysis. The Bayesian approach is a very useful tool for inferring the probability distribution of a set of model parameters based on a set of measured data. In the context of nuclear physics and astrophysics, the Bayesian analysis can be used to optimize a set of EoS parameters in view of astrophysical observations [49; 50; 51], as well as nuclear matter properties [52; 53; 54; 55]. We rely on recent observational astrophysical data that leads to robust constraints on the NS EoS and its internal composition. The constraints include measurable global properties of the NS such as the mass and radii of PSR J0740+6620 [56], PSR J0952-0607 [57], PSR J0030+0451 [58] and XMMU J173203.3-344518 [59]. With regards to the DDQM, we determine optimal values of the low-density parameter, \(D\), which is related to the linear confining properties of the quarks, and the higher-density parameter, \(C\), which in turn is associated with the leading order perturbative interaction term in the QCD theory. These are model-dependent parameters that are usually determined in the model framework. We determine these parameters using Bayesian analysis that conforms with each observed star listed above. On the other hand, with the vector MIT bag model, we determine the model-dependent parameters such as the bag constant, \(B\), related to the vacuum pressure, the strength of the vector coupling, \(G_{V}\), responsible for the quark-quark repulsion, and the self-coupling channel, \(b_{4}\), which mimics the Dirac sea contribution and it is important to soften the EoS at very high densities. We determine each of these constants that satisfy a particular observed star for each of the models and compare their properties. The NS properties that we consider for comparison and analysis are the EoS, sound velocity, \(c_{s}\), tidal deformability, \(\Lambda\), mass, \(M\), and radius, \(R\) and the adiabatic index, \(\Gamma\). This work is organized as follows: In Section II we introduce the DDQM and discuss the fundamental relations that govern it. In Section III we introduce the vector MIT bag model and discuss its fundamental properties. A brief discussion of the mass-radius constraints and their associated expressions was presented in Section IV. 
In Section V we present the general overview of the Bayesian analysis and the corner diagrams for the four different cases (Figures 1, 2, 3 and 4) considered in this work for both models. The analysis of the stability window of the DDQM was also discussed briefly in Subsection V.1. The results and analysis of the study are presented in Section VI and the final findings in Section VII. ## II Density-dependent quark model The density-dependent quark model (DDQM) is a model that describes the SQM and incorporates the interaction between the quarks through a dependency of the mass on the density. In this work, we are using the model given in ref. [60]: \[m_{i}=m_{i0}+\frac{D}{n^{1/3}}+Cn^{1/3}, \tag{3}\] where \(m_{i0}(i=u,\,d,\,s)\) is the current quark mass, \(n\) is the baryon number density, and \(C\) and \(D\) are the parameters of this model. A possible problem with the introduction of a density dependency is that it can lead to thermodynamic inconsistencies. A way to avoid these inconsistencies is the inclusion of an effective chemical potential \(\mu_{i}^{*}\), and in this way, we can describe the system by a free-energy density \(f\) of a free particle system with masses \(m_{i}(n)\) and effective chemical potentials \(\mu_{i}^{*}\) \[f=\Omega_{0}\left(\{\mu_{i}^{*}\},\{m_{i}\}\right)+\sum_{i}\mu_{i}^{*}n_{i}, \tag{4}\] where \(\Omega_{0}\) is the thermodynamic potential density of the free quarks, given by the following expression \[\Omega_{0}=-\sum_{i}\frac{\gamma_{i}}{24\pi^{2}}\left[\mu_{i}^{*}\nu_{i}\left( \nu_{i}^{2}-\frac{3}{2}m_{i}^{2}\right)+\frac{3}{2}m_{i}^{4}\ln\frac{\mu_{i}^{ *}+\nu_{i}}{m_{i}}\right], \tag{5}\] with \(\gamma_{i}=6\) (\(3\ \mathrm{colors}\times 2\) spins) is the degeneracy factor. The Fermi momenta is given in terms of the effective chemical potentials \(\mu_{i}^{*}\): \[\nu_{i}=\sqrt{{\mu_{i}^{*}}^{2}-m_{i}^{2}}, \tag{6}\] so that the particle number density \(n_{i}\) can be written as \[n_{i}=\frac{\gamma_{i}}{6\pi^{2}}({\mu_{i}^{*}}^{2}-m_{i}^{2})^{3/2}=\frac{ \gamma_{i}\nu_{i}^{3}}{6\pi^{2}} \tag{7}\] and the chemical potential \(\mu_{i}\) and the effective chemical potential are related through the relation \[\mu_{i}=\mu_{i}^{*}-\mu_{I}. \tag{8}\] The \(\beta\)-equilibrium condition can be rewritten in terms of \(\mu_{i}^{*}\) as: \[\mu_{u}^{*}+\mu_{e}=\mu_{d}^{*}=\mu_{s}^{*}. \tag{9}\] To construct the EoS we also take into consideration the usual charge neutrality condition \[\frac{2}{3}n_{u}-\frac{1}{3}n_{d}-\frac{1}{3}n_{s}-n_{e}=0, \tag{10}\] and the baryon number conservation \[n=\frac{1}{3}(n_{u}+n_{d}+n_{s}). \tag{11}\] This way, the energy density \(\varepsilon\) of the system is given by \[\varepsilon=\Omega_{0}-\sum_{i}\mu_{i}^{*}\frac{\partial\Omega_{0}}{\partial \mu_{i}^{*}}, \tag{12}\] and the pressure \(p\) by \[p=-\Omega_{0}+\sum_{i,j}\frac{\partial\Omega_{0}}{\partial m_{j}}n_{i}\frac{ \partial m_{j}}{\partial n_{i}}. \tag{13}\] ## III Vector MIT bag model The vector MIT bag model is an extension of the original MIT bag model [3] that incorporates some features of the quantum Hadrodynamics (QHD) [61]. In its original form, the MIT bag model considers that each baryon is composed of three non-interacting quarks inside a bag. The bag, in turn, corresponds to an infinite potential that confines the quarks. As a consequence, the quarks are free inside the bag and are forbidden to reach its exterior. All the information about the strong force relies on the bag pressure value, which mimics the vacuum pressure. 
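Before detailing the vector extension, it may help to make the DDQM relations of Section II concrete. The short Python sketch below evaluates the density-dependent masses of Eq. (3) and the free-quark relations of Eqs. (5)–(7) for given effective chemical potentials. It is purely illustrative: the parameter values, function names, and unit conventions are our own simplified assumptions, not the optimized parameters obtained from the Bayesian analysis discussed in this work.

```python
import numpy as np

HBARC = 197.327  # MeV fm, to convert densities in fm^-3 to MeV^3 (natural units)

# Illustrative inputs only: current quark masses (MeV) and assumed trial values
# of the model parameters C and sqrt(D); NOT the optimized values of this work.
M0 = {"u": 2.16, "d": 4.67, "s": 93.4}
C, SQRT_D = 0.5, 160.0  # dimensionless and MeV, respectively

def quark_mass(flavor, n):
    """Density-dependent mass of Eq. (3); n is the baryon density in MeV^3
    (e.g. n = 0.16 fm^-3 corresponds to 0.16 * HBARC**3 in MeV^3)."""
    D = SQRT_D ** 2
    return M0[flavor] + D / n ** (1.0 / 3.0) + C * n ** (1.0 / 3.0)

def fermi_momentum(mu_star, m):
    """Eq. (6): nu_i = sqrt(mu_i*^2 - m_i^2), set to zero below threshold."""
    return np.sqrt(max(mu_star ** 2 - m ** 2, 0.0))

def number_density(mu_star, m, gamma=6.0):
    """Eq. (7): n_i = gamma * nu_i^3 / (6 pi^2)."""
    return gamma * fermi_momentum(mu_star, m) ** 3 / (6.0 * np.pi ** 2)

def omega0_flavor(mu_star, m, gamma=6.0):
    """One flavor's contribution to the free thermodynamic potential, Eq. (5)."""
    nu = fermi_momentum(mu_star, m)
    if nu == 0.0:
        return 0.0
    return -gamma / (24.0 * np.pi ** 2) * (
        mu_star * nu * (nu ** 2 - 1.5 * m ** 2)
        + 1.5 * m ** 4 * np.log((mu_star + nu) / m)
    )

# Example use: strange-quark mass and density at n = 0.3 fm^-3, trial mu*_s = 400 MeV
# n_nat = 0.3 * HBARC ** 3
# m_s = quark_mass("s", n_nat); n_s = number_density(400.0, m_s)
```

A full EoS point additionally requires solving the \(\beta\)-equilibrium, charge-neutrality, and baryon-number conditions of Eqs. (9)–(11) self-consistently, which is omitted in this sketch.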
In the vector MIT bag model, the quarks are still confined inside the bag, but now they interact with each other through a vector meson exchange. This vector meson plays a role analogous to the \(\omega\) meson of the QHD [61]. Moreover, the contribution of the Dirac sea can be taken into account through a self-interaction of the vector meson [62]. The Lagrangian of the vector MIT bag model, therefore, consists of the Lagrangian of the original MIT, plus the Yukawa-type Lagrangian of the vector field exchange, plus the Dirac sea contribution. We must also add the mesonic mass term to maintain the thermodynamic consistency. It then reads [47; 48]: \[\mathcal{L}=\mathcal{L}_{MIT}+\mathcal{L}_{V}+\mathcal{L}_{DIRAC}, \tag{14}\] where \[\mathcal{L}_{MIT}=\sum_{i}\{\bar{\psi}_{i}[i\gamma^{\mu}\partial_{\mu}-m_{i}]\psi_{i}-B\}\Theta(\bar{\psi}_{i}\psi_{i}), \tag{15}\] \[\mathcal{L}_{V}=\sum_{i}\{\bar{\psi}_{i}g_{iV}\gamma^{\mu}V_{\mu}\psi_{i}-\frac{1}{2}m_{V}^{2}V^{\mu}V_{\mu}\}\Theta(\bar{\psi}_{i}\psi_{i}), \tag{16}\] \[\mathcal{L}_{DIRAC}=b_{4}\frac{(g^{2}V_{\mu}V^{\mu})^{2}}{4}, \tag{17}\] where \(\psi_{i}\) is the Dirac quark field, \(B\) is the constant vacuum pressure, \(m_{V}\) is the mass of the \(V_{0}\) mesonic field, \(g_{iV}\) is the coupling constant of the quark \(i\) with the meson \(V_{0}\), and \(g=g_{uV}\). The \(\Theta(\bar{\psi}_{i}\psi_{i})\) is the Heaviside step function included to ensure that the quarks exist only inside the bag. Applying the mean-field approximation (MFA) [61] and the Euler-Lagrange equations, we obtain the energy eigenvalue for the quark fields and the equation of motion for the mesonic \(V_{0}\) field: \[E_{i}=\sqrt{m_{i}^{2}+\nu_{i}^{2}}+g_{iV}V_{0} \tag{18}\] \[gV_{0}+\bigg(\frac{g}{m_{v}}\bigg)^{2}\bigg(b_{4}(gV_{0})^{3}\bigg)=\bigg(\frac{g}{m_{v}}\bigg)\sum_{i}\bigg(\frac{g_{iV}}{m_{v}}\bigg)n_{i}.\] To construct an EoS in the MFA, we now consider the Fermi-Dirac distribution of the quarks, the Hamiltonian of the vector field, and the bag pressure value, with \(\mathcal{H}=-\langle\mathcal{L}\rangle\). We obtain: \[\varepsilon_{i}=\frac{\gamma_{i}}{2\pi^{2}}\int_{0}^{\nu_{f}}E_{i}\ \nu^{2}d\nu, \tag{19}\] \[\varepsilon=\sum_{i}\varepsilon_{i}+B-\frac{1}{2}m_{V}^{2}V_{0}^{2}-b_{4}\frac{(g^{2}V_{0}^{2})^{2}}{4}. \tag{20}\] Now we define \(G_{V}\ \equiv\ (g/m_{V})^{2}\) and \(X_{V}\ \equiv\ (g_{sV}/g_{uV})\). The ratio \(X_{V}\) is taken as \(X_{V}=0.4\), since its value was calculated based on symmetry group arguments (see reference [47] for additional details). Finally, the pressure is easily obtained by thermodynamic relations: \(p=\sum_{i}\mu_{i}n_{i}-\varepsilon\). ## IV Constraints on mass-radius relations In this Section, we discuss the recent astrophysical observations and their connections with equilibrium properties associated with a specific EoS. By considering a spherically symmetric body, we are interested in determining the associated mass and radius. Due to the high-density matter of compact objects such as quark stars, it is assumed that the correct equilibrium properties can be obtained by solving the TOV equations [27]: \[\frac{dp(r)}{dr} =-[\varepsilon(r)+p(r)]\frac{M(r)+4\pi r^{3}p(r)}{r^{2}-2M(r)r}, \tag{21}\] \[\frac{dM(r)}{dr} =4\pi r^{2}\varepsilon(r), \tag{22}\] where \(M(r)\) is the gravitational mass associated with a spherically symmetric compact object with radius \(R\); \(p(r)\) and \(\varepsilon(r)\) are the pressure and the energy density, respectively, and we have used natural units such that \(G=c=1\).
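For concreteness, a minimal sketch of how Eqs. (21) and (22) can be integrated for a tabulated EoS is given below. It uses standard SciPy routines in geometrized units (\(G=c=1\)); the tolerances, stopping criterion, and function names are our own illustrative choices rather than the actual numerical setup used for the results reported here.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

def solve_tov(eps_of_p, p_central, r_max=30.0):
    """Integrate Eqs. (21)-(22) outward from the center.

    eps_of_p  : callable returning the energy density for a given pressure
                (both in geometrized units, e.g. km^-2)
    p_central : central pressure in the same units
    Returns (R, M): the radius and gravitational mass where p drops to ~0.
    """
    def rhs(r, y):
        p, m = y
        eps = eps_of_p(p)
        dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r**2 - 2.0 * m * r)
        dmdr = 4.0 * np.pi * r**2 * eps
        return [dpdr, dmdr]

    # Stop when the pressure has fallen to a tiny fraction of the central value.
    def surface(r, y):
        return y[0] - 1e-10 * p_central
    surface.terminal = True
    surface.direction = -1

    sol = solve_ivp(rhs, (1e-6, r_max), [p_central, 0.0],
                    events=surface, rtol=1e-8, atol=1e-12)
    if sol.t_events[0].size:
        return sol.t_events[0][0], sol.y_events[0][0][1]
    return sol.t[-1], sol.y[1, -1]   # fallback if the surface was not reached

# Example usage with a toy EoS table (in practice p_table and eps_table would
# come from the DDQM or vector MIT bag model of Sections II and III):
# eps_of_p = interp1d(p_table, eps_table, fill_value="extrapolate")
# R, M = solve_tov(eps_of_p, p_central=1e-4)
```

Scanning the central pressure over a range of values then traces out the mass-radius curve against which the observations of Table 1 are compared.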
For realistic models of compact objects, Equations (21) and (22) are usually solved by using numerical techniques. In this regard, we consider a compact star with central energy density \(\varepsilon(r=0)=\varepsilon_{c}\) and total mass \(M\) obtained using the boundary condition \(p(R)=0\), where \(R\) is the radius of the star. The next step in solving the hydrostatic equilibrium equations consists of determining the connection between energy density and pressure. This relation depends on the model we are using to construct the EoS associated with the astrophysical object. In this study, we utilize the models introduced in Sections II and III in order to obtain mass-radius relations and use recent observational constraints to restrict the model-dependent parameters. Recent measurements by the NICER mission are advancing our knowledge of the constraints that should be considered for a given EoS of dense matter by placing strict limits on the radius. With information only about the mass and radius of a neutron star, its exact internal structure remains uncertain. It is expected that multi-messenger astronomy, such as gravitational-wave observations, can provide signatures of the composition of the star. An example of an effect that can be detected is the signature of some quasi-normal modes (QNMs), associated with tidal forces between merging neutron stars, in the emitted gravitational waves [63]. Regarding the analysis carried out in this paper, the millisecond pulsar PSR J0740+6620 is an interesting system that orbits with a binary companion. Due to a favorable inclination, the Shapiro time delay was used to measure the mass of this source with remarkable precision, making this one of the most well-constrained massive neutron stars known [56; 64; 65]. The timing of PSR J0740+6620 made with data from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) in combination with observations from the Green Bank Telescope led to a mass estimate of \(2.14^{+0.10}_{-0.09}\) M\({}_{\odot}\) (68.3% credibility interval) [64]. Continued timing observations of this massive millisecond pulsar allowed the improvement of this estimate, leading to a mass of \(2.08^{+0.07}_{-0.07}\) M\({}_{\odot}\) (68.3% credibility interval) [65]. ## V Bayesian analysis In this work, we have optimized the EoS parameters for the constraints investigated by means of a Bayesian analysis. In this approach, given an EoS model \(\mathcal{M}\) and a set of EoS parameters \(\mathbf{\theta}\), we have an estimate of the region in which the values of the parameters lie. This information is given probabilistically by the prior distribution \(\mathcal{P}(\mathbf{\theta},\mathcal{M})\). We can increase our information about \(\mathbf{\theta}\) if we know the chance of measuring a specific data set \(\mathfrak{D}\) for a given value of \(\mathbf{\theta}\); this increase of information is accounted for by the likelihood function \(\mathcal{L}(\mathfrak{D}|\mathbf{\theta},\mathcal{M})\). Then, with Bayes' theorem, we can obtain a posterior probability \(\mathcal{P}(\mathbf{\theta}|\mathfrak{D},\mathcal{M})\) for \(\mathbf{\theta}\) given a data set \(\mathfrak{D}\) \[\mathcal{P}(\mathbf{\theta}|\mathfrak{D},\mathcal{M})\propto\mathcal{L}(\mathfrak{D}|\mathbf{\theta},\mathcal{M})\mathcal{P}(\mathbf{\theta},\mathcal{M}). \tag{23}\] As already explained, we test two models \(\mathcal{M}\): the DDQM and the vector MIT bag model.
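Each evaluation of the Bayesian posterior below requires mapping a parameter set \(\mathbf{\theta}\) to a mass-radius curve by integrating Equations (21) and (22) from a central density down to the surface \(p(R)=0\). A minimal Euler-step sketch (ours), assuming a tabulated, monotonic EoS such as the ones sketched above, reads:

```python
import numpy as np

MSUN_KM = 1.4766                 # G*M_sun/c^2 in km
MEVFM3_TO_KM2 = 1.3234e-6        # 1 MeV/fm^3 expressed in km^-2 (G = c = 1)

def tov_mass_radius(eps_c, eps_table, p_table, dr=1e-3):
    """Euler integration of Eqs. (21)-(22); eps_c and the (ascending) tables in MeV/fm^3.
    Returns (R in km, M in solar masses)."""
    e_tab = np.asarray(eps_table) * MEVFM3_TO_KM2
    p_tab = np.asarray(p_table) * MEVFM3_TO_KM2
    p = np.interp(eps_c * MEVFM3_TO_KM2, e_tab, p_tab)   # central pressure
    r, m = dr, 0.0
    while p > p_tab[0]:                                  # stop at the surface p(R) ~ 0
        e = np.interp(p, p_tab, e_tab)
        dpdr = -(e + p) * (m + 4.0 * np.pi * r**3 * p) / (r**2 - 2.0 * m * r)   # Eq. (21)
        dmdr = 4.0 * np.pi * r**2 * e                                           # Eq. (22)
        p, m, r = p + dpdr * dr, m + dmdr * dr, r + dr
    return r, m / MSUN_KM

# Sweeping eps_c over a grid of central densities traces the mass-radius curve,
# whose maximum defines M_max for a given parameter set theta.
```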
For the DDQM model \(\mathbf{\theta}=\{C,\ \sqrt{D}\}\), for the vector MIT bag model \(\mathbf{\theta}=\{B^{1/4},\ G_{V},\ b_{4}\}\), and the data set \(\mathfrak{D}\) consists of the masses and radii of the four compact stars listed in Table 1. The posteriors \(\mathcal{P}(\mathbf{\theta}|\mathfrak{D},\mathcal{M})\) were obtained using the _emcee_ package [66], which uses Goodman and Weare's affine-invariant Markov chain Monte Carlo (MCMC) [67] method for sampling the posterior probability density. In all the cases analyzed we have used a uniform prior, and the domain of our priors was based on previous works on the EoSs we are testing. For the DDQM model, the prior domain was taken from references [44; 45], and the ranges over which the parameters \(C\) and \(\sqrt{D}\) were tested were \(C=\{-2,2\}\) and \(\sqrt{D}=\{0\ \mathrm{MeV},300\ \mathrm{MeV}\}\). As for the vector MIT bag model, we based our choices on [47; 68], and the ranges over which the parameters \(B^{1/4}\), \(G_{V}\) and \(b_{4}\) were tested were \(B^{1/4}=\{130\ \mathrm{MeV},170\ \mathrm{MeV}\}\), \(G_{V}=\{0\ \mathrm{fm}^{2},2\ \mathrm{fm}^{2}\}\) and \(b_{4}=\{-2,2\}\). In Table 1 we show the compact stars that we chose as constraints in this work and their respective masses and radii. The first star on the list is the "black widow" PSR J0952-0607, which is the fastest known spinning pulsar in the Milky Way, with a frequency of \(707\ \mathrm{Hz}\), and also has the largest mass measured with good accuracy so far [57]. The second star on our table is the massive millisecond pulsar PSR J0740+6620, which had its mass and radius estimated from data of the NICER collaboration [56]. Our third star is the millisecond pulsar PSR J0030+0451, which also had its mass and radius estimated from NICER [58], and can be used to constrain the radius of the canonical neutron star (\(1.4\ \mathrm{M}_{\odot}\)). The last star in our table is the very low-mass compact star XMMU J173203.3-344518, which is inside the supernova remnant HESS J1731-347 and, due to its low mass, was suggested to be a quark star [69; 70]. However, we would like to point out that the mass-radius measurement of this object has been contested by some groups, see for example [71]. \begin{table} \begin{tabular}{l c c} **Star** & **Mass** & **Radius** \\ \hline PSR J0952-0607 [57] & \(2.35\pm 0.17\ \mathrm{M}_{\odot}\) & — \\ PSR J0740+6620 [56] & \(2.072^{+0.067}_{-0.066}\ \mathrm{M}_{\odot}\) & \(12.39^{+1.30}_{-0.98}\ \mathrm{km}\) \\ PSR J0030+0451 [58] & \(1.34^{+0.15}_{-0.16}\ \mathrm{M}_{\odot}\) & \(12.71^{+1.14}_{-1.19}\ \mathrm{km}\) \\ XMMU J173203.3-344518 [59] & \(0.77^{+0.20}_{-0.17}\ \mathrm{M}_{\odot}\) & \(10.4^{+0.86}_{-0.78}\ \mathrm{km}\) \\ \end{tabular} \end{table} Table 1: Mass and radius of the compact stars used as constraints in this work. For all the data present in Table 1 we have assumed a Gaussian likelihood function \[\mathcal{L}(\mathfrak{D}|\boldsymbol{\theta},\mathcal{M})=\prod_{i}\frac{1}{\sqrt{2\pi\sigma_{i}^{2}}}e^{-\frac{1}{2}\left(\frac{d_{i}-m_{i}(\boldsymbol{\theta})}{\sigma_{i}}\right)^{2}} \tag{24}\] where \(d_{i}\) and \(m_{i}\) are the data and the corresponding model values, and the uncertainty \(\sigma_{i}\) for each case is given by the largest data uncertainty; for example, for \(r_{i}=10.4^{+0.86}_{-0.78}\) km we take \(\sigma_{i}=0.86\) km.
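Putting Eq. (23), the Gaussian likelihood (24) and the uniform priors above together, one possible implementation for the vector MIT bag parameters is sketched below. This is our illustration, not the authors' code: `mass_radius_curve` is a placeholder for the EoS plus TOV pipeline, the mass-only constraint from PSR J0952-0607 is implemented through \(\mathrm{M}_{max}\) as one reasonable reading of the text, and the Case-dependent cuts on \(\mathrm{M}_{max}\) described below are omitted for brevity.

```python
import numpy as np
import emcee

# (M, sigma_M, R, sigma_R); R = None for the mass-only constraint (PSR J0952-0607).
DATA = [(2.35, 0.17, None, None),          # PSR J0952-0607
        (2.072, 0.067, 12.39, 1.30),       # PSR J0740+6620
        (1.34, 0.16, 12.71, 1.19),         # PSR J0030+0451
        (0.77, 0.20, 10.4, 0.86)]          # XMMU J173203.3-344518

PRIOR = {"B14": (130.0, 170.0), "GV": (0.0, 2.0), "b4": (-2.0, 2.0)}   # uniform, as above

def log_prior(theta):
    b14, gv, b4 = theta
    inside = (PRIOR["B14"][0] < b14 < PRIOR["B14"][1]
              and PRIOR["GV"][0] < gv < PRIOR["GV"][1]
              and PRIOR["b4"][0] < b4 < PRIOR["b4"][1])
    return 0.0 if inside else -np.inf

def log_likelihood(theta, mass_radius_curve):
    # mass_radius_curve(theta) must return the stable branch (masses, radii),
    # ordered by increasing mass, obtained from the EoS + TOV sketches above.
    masses, radii = mass_radius_curve(theta)
    if masses.max() > 3.2:                        # causal mass limit adopted in the text
        return -np.inf
    logl = 0.0
    for m_obs, sig_m, r_obs, sig_r in DATA:
        if r_obs is None:                         # mass-only star: compare with M_max
            logl += -0.5 * ((m_obs - masses.max()) / sig_m) ** 2
            continue
        if m_obs > masses.max():                  # the curve never reaches this star
            return -np.inf
        r_model = np.interp(m_obs, masses, radii) # model radius at the observed mass
        logl += -0.5 * ((r_obs - r_model) / sig_r) ** 2
    return logl

def log_posterior(theta, mass_radius_curve):
    lp = log_prior(theta)
    return lp + log_likelihood(theta, mass_radius_curve) if np.isfinite(lp) else -np.inf

# sampler = emcee.EnsembleSampler(nwalkers=32, ndim=3, log_prob_fn=log_posterior,
#                                 args=(mass_radius_curve,))
# sampler.run_mcmc(initial_state, 5000, progress=True)
```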
In all cases analyzed we assumed causal limits for the mass and radius of the compact stars, in the case of the mass we considered the limit M\({}_{max}<3.2\) M\({}_{\odot}\)[72] and, for the radius we used the limit \(R>3M\). In this way, if an EoS produces stars outside these limits we attribute \(\mathcal{L}(\mathfrak{D}|\boldsymbol{\theta},\mathcal{M})=0\) for the parameters \(\boldsymbol{\theta}\) that lead to this result. We performed the stability window calculations for both models using the conditions from Equations (1) and (2), and considering the following quark masses: \(m_{u}=2.16\) MeV, \(m_{d}=4.67\) Mev and \(m_{s}=93.4\) MeV [73]. For the analysis of the vector MIT bag model, we were able to take the stability window into account in the inference, this way, all the points shown in Figures 1 to 4 for this model are inside the stability window. Values previously obtained can be found in Table 4 of ref. [47]. As for the DDQM model, we show its stability window analysis in the next subsection. * **CASE I:** In Case I we have tested if the quark matter EoS studied here can describe two compact stars with high masses. One of them is the pulsar PSR J0952-0607 [57] and the other one is the pulsar PSR J0740+6620 [56]. We want to find EoSs that lead to mass-radius curves that have the masses and the radii of PSR J0952-0607 and PSR J0740+6620 in some point of the curve. We do not demand a specific value for M\({}_{max}\), only that EoSs that lead to M\({}_{max}\leq 2.18\) M\({}_{\odot}\), which corresponds to the mass of PSR J0952-0607 less its error margin, are forbidden. This way, if the maximum mass is outside the region \(2.18\) M\({}_{\odot}\leq M_{max}\leq 3.2\) M\({}_{\odot}\) we associate \(\mathcal{L}(\mathfrak{D}|\boldsymbol{\theta},\mathcal{M})=0\) to the parameters that lead to this result. In Figure 1, we show the corner plots [74] that resulted from the Bayesian analysis for the DDQM model on the left panel and for the vector MIT bag model on the right panel. Based on the result for the MIT case we selected the point \(\{B^{1/4}=139.79\) MeV, \(G_{V}=0.159\) fm\({}^{2},b_{4}=1.69\}\) to be analyzed. As for the DDQM model, we discuss the points chosen to be analyzed in Subsection V.1. * **CASE II:** In Case II we have checked if the EoS studied here can describe data from NICER. In this case, we consider a canonical star (PSR J0030+0451) and another star with a mass around 2 M\({}_{\odot}\) (PSR J0740+6620) Figure 1: Corner plot showing the posterior distributions of the parameters of DDQM model on the left, and the parameters of the vector MIT bag model on the right for Case I. The dark to light contours represent the \(1\sigma\), \(2\sigma\) and \(3\sigma\), respectively. The dashed vertical lines in the histograms represent the \(0.16\), \(0.5\), and \(0.84\) quantiles. which, according to data from NICER have approximately the same radius. Hence, we want EoSs that lead to mass-radius diagrams with the masses and radii of PSR J0030+0451 and PSR J0740+6620 at some point of their curves and, we disregard EoSs that lead to maximum mass outside the region \(2.005~{}\mathrm{M}_{\odot}\leq M_{max}\leq 3.2~{}\mathrm{M}_{\odot}\). In Figure 2, we can see the plots for this case and, based on these results we selected the point \(\{B^{1/4}=140.90\) MeV, \(G_{V}=0.116~{}\mathrm{fm}^{2},b_{4}=0.72\}\) for the MIT bag model to be analyzed. * **CASE III:** In Case III we have investigated if the EoS we are studying can describe two compact stars with small masses. 
One of these stars is XMMU J173203.3-344518 [69] and the other one is the canonical star from the previous case. In this case, we assume that \(2~{}\mathrm{M}_{\odot}\leq M_{max}\leq 3.2~{}\mathrm{M}_{\odot}\). From Figure 3, which was obtained from the Bayesian analysis for this case, we selected the point \(\{B^{1/4}=135.28\) MeV, \(G_{V}=0.366~{}\mathrm{fm}^{2},b_{4}=1.90\}\) of the MIT bag case to be analyzed. * **Case IV:** Lastly, we want to verify if the EoSs we are studying can describe all of the stars of previous cases at the same time. For Case IV, we assume the same limit for \(\mathrm{M}_{max}\) as for Case I and, based on the right panel of Figure 4, we chose the point \(\{B^{1/4}=137.96\) MeV, \(G_{V}=0.235~{}\mathrm{fm}^{2},b_{4}=1.63\}\) to be analyzed. ### DDQM stability window In Figure 5, we show the stability window for the DDQM EoS. We have the values of the parameter \(C\) in the x-axis and the values of \(\sqrt{D}\) in MeV in the y-axis. The lower region with green dots is a forbidden region because it represents the region where the quark matter (QM), composed of quarks \(u\) and \(d\), would be stable. The upper region with blue dots is the region where the SQM is unstable and hence, the matter with EoS obtained from values of \(C\) and \(\sqrt{D}\) in this region cannot form quark stars. Lastly, the region where we have at the same time purple dots, which represent unstable \(u-d\) quark matter, and red squares, which represent stable SQM, is the region we are interested in since the EoSs obtained from values of \(C\) and \(\sqrt{D}\) in this region fulfill the requirements to form strange stars. To follow the discussion below, please, refer to Figures 1, 2, 3 and 4 explained in Section V. The shaded areas in Figure 5 represent each of the probability distributions we can observe in Figures 1 to 4 for the DDQM EoS. We can observe that most parts of the distributions lie in the regions where quark stars are not possible. This way for the DDQM model, although we have chosen points to study based on Bayesian analysis, it is not possible to associate each of the points with Cases I to IV, as we did with the vector MIT bag model. Based on Figure 5 we have chosen 4 points to Figure 2: Corner plot showing the posterior distributions of the parameters of DDQM model on the left, and the parameters of the vector MIT bag model on the right for Case II. The dark to light contours represent the \(1\sigma\), \(2\sigma\) and \(3\sigma\), respectively. The dashed vertical lines in the histograms represent the \(0.16\), \(0.5\), and \(0.84\) quantiles. analyze, one of them \(\{0.5,137.5\) MeV\(\}\) is outside the shaded regions and, as can be seen in Figure 7 does not satisfy the requirement M\({}_{max}\geq 2\)M\({}_{\odot}\). Some of the main properties of each point can be seen in Table 2. ## VI Results and Analysis Since the models currently under consideration are associated with various constants, see Tables 2 and 3, we resort to Bayesian analysis, shown in Figures 1, 2, 3 and 4, to determine the values of these constants that are most consistent with the constraints of some selected stars presented on Table 1. For instance, the vector MIT bag model is associated with three free parameters, although they are not fully independent from each other due to the stability window [47], whilst the DDQM, is associated with two free parameters. 
In Table 2 we display the results for \(C\) and \(\sqrt{D}\) related to the DDQM and in Table 3 we display the results for \(B^{1/4}\), \(G_{V}\) and \(b_{4}\) determined from the MIT bag model. Even though we impose the same constraints on the two models, we observe that the results obtained for the vector MIT bag model lead to stiffer EoSs than for the DDQM model. This, in turn, leads to higher maximum masses and smaller radii in the vector MIT bag case, as well as lower tidal deformability values when compared to the DDQM case, as will be evident in the discussions that follow. In Figure 6, we present the EoSs for both the DDQM model and the vector MIT bag model composed of three flavor quarks \((u,d,s)\) and electrons calculated in \(\beta\)-equilibrium. The stiffness of the EoS enables us to determine the maximum mass of the star. From Figure 6, right panel, we observe that an increase in \(B^{1/4}\) and/or a decrease in \(G_{V}\) softens the EoS. In the same sense, an increase of \(b_{4}\) leads to the softening of the EoS at high densities. Looking at Table 3 and Figure 6 one might be led to believe that increasing \(b_{4}\) leads to a stiffer EoS but, if we keep the other parameters fixed and vary only \(b_{4}\), we find that its effect is softening the EoS, as can be seen in [47]. What happens in the present case is that \(B^{1/4}\) and \(G_{V}\) have a greater influence on the EoS than \(b_{4}\). \begin{table} \begin{tabular}{c c c c c c c} \(C\) & \(\sqrt{D}\)[MeV] & M\({}_{max}\)[M\({}_{\odot}\)] & \(R\)[km] & \(n\)[fm\({}^{-3}\)] & \(R_{1.4}\)[km] & \(\Lambda_{1.4}\) \\ \hline 0.50 & 137.5 & 1.91 & 11.78 & 0.88 & 12.46 & 534 \\ 0.65 & 132.2 & 2.04 & 12.82 & 0.73 & 13.40 & 1398 \\ 0.70 & 130.6 & 2.10 & 13.25 & 0.70 & 13.86 & 1717 \\ 0.80 & 127.4 & 2.18 & 13.86 & 0.64 & 14.41 & 2163 \\ \end{tabular} \end{table} Table 2: Neutron star properties for the different DDQM models analyzed. Figure 3: Corner plot showing the posterior distributions of the parameters of DDQM model on the left, and the parameters of the vector MIT bag model on the right for Case III. The dark to light contours represent the \(1\sigma\), \(2\sigma\) and \(3\sigma\), respectively. The dashed vertical lines in the histograms represent the 0.16, 0.5, and 0.84 quantiles. On the other hand, on the left panel, we observe that an increasing \(C\) and an associated decreasing \(\sqrt{D}\) lead to the stiffening of the EoS, resulting in increasing maximum masses in that order, as can be seen in Table 2. These properties have a direct impact on the velocity of sound \(c_{s}\), the adiabatic index \(\Gamma\), and the tidal deformability \(\Lambda\), which enable us to study the inner composition of the star. As it is well known, to satisfy the 2 M\({}_{\odot}\) constraint of NSs, stiffer EoSs are the ones preferred. The results also show that the EoS for the vector MIT bag model is significantly stiffer when compared to the DDQM. Since a stiffer EoS means higher maximum masses, this is reflected in the maximum masses calculated in the framework of the models. Figure 4: Corner plot showing the posterior distributions of the parameters of DDQM model on the left, and the parameters of the vector MIT bag model on the right for Case IV. The dark to light contours represent the \(1\sigma\), \(2\sigma\) and \(3\sigma\), respectively. The dashed vertical lines in the histograms represent the \(0.16\), \(0.5\), and \(0.84\) quantiles. Figure 5: The stability window for the DDQM model.
While all the curves for the vector MIT bag model satisfy the constraint of having a maximum mass compatible with the estimated mass of PSR J0952-0607, only one of the curves - dash double-dot curve - satisfies this constraint for the DDQM model. The two other curves - dashed curve and dash-dotted curve - still present a maximum mass compatible with the estimated mass for PSR J0740+6620. In Figure 7, we show the mass-radius diagram for the DDQM model on the left panel and for the vector MIT bag model on the right panel. Starting with the left panel for the DDQM model, we can observe that the softer EoS, solid curve, only satisfies the constraints for the XMMU J173203.3-344518 object and the PSR J0030 + 0451 pulsar and does not achieve M\({}_{max}=2\) M\({}_{\odot}\). The two curves that came from the EoSs with intermediate stiffness for this model satisfy the constraints from NICER and the one from XMMU J173203.3-344518. As for the curve with stiffer EoS, it is the only one for the DDQM case that satisfies the mass of the "black widow" star with a radius of 13.86 km. For this model, we are not able to find a set of values for the parameters \(C\) and \(\sqrt{D}\) that would satisfy all the constraints at the same time. Now, analyzing the second panel of Figure 7 for the vector MIT bag model, we can see that all the curves can satisfy all the mass-radius constraints at the same time. Despite that, the dashed double-dotted curve which is associated with the softest EoS of this model, only satisfies the constraints for the canonical and the "black widow" stars slightly. We can also see that the curve with stiffer EoS for the MIT bag case, the solid curve, has a maximum mass above the estimated range of mass for PSR J0952-0607, M\({}_{max}=2.54\) M\({}_{\odot}\). In general, comparing the two models we can infer that, for the cases we chose to study, i.e., after the parameters are restricted to the ones suggested by the Bayesian calculation, the DDQM model leads to smaller maximum masses and higher radii than the vector MIT bag model. The QCD theory shows different properties in the perturbative and nonperturbative regions. Quark matter is expected to be in a deconfined state and approximately symmetric under conformal transformation. On the other hand, the hadronic matter is not symmetric under conformal transformation due to the manifestation of chiral symmetry breaking. These two extreme characteristics can be quantitatively differentiated through the determination of the speed of sound, \(c_{s}^{2}=dp/d\varepsilon\) in the stellar matter. It has been established that \(c_{s}^{2}\) is constant and attains a value of \(1/3\) in exactly conformal matter and approaches this value from below at high-density quark matter region, \(n>40\ n_{0}\)[75]. Hence, we can determine the interior dynamics and composition of the star by analyzing the \(c_{s}\), which depends on the EoS of the corresponding star. The \(c_{s}\) is determined to be \(c_{s}^{2}\ll 1/3\) below the saturation density in chiral effective theory and can grow up to \(c_{s}^{2}\gtrsim 0.5\) in hadronic matter at high densities [76; 77]. Causality requires that \(c_{s}^{2}\leq 1\) and thermodynamic stability also requires that \(c_{s}^{2}>0\). However, if the interaction between the particles is perturbative, \(c_{s}^{2}\leq 1/3\). This is applicable to the case of QCD at asymptotically high density or temperature where perturbative treatment of the theory is valid. 
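In practice these diagnostics are read off directly from a tabulated EoS; a small sketch (ours) that evaluates \(c_{s}^{2}=dp/d\varepsilon\) by finite differences, checks causality and the conformal limit, and also returns the closely related adiabatic index used in the next paragraphs:

```python
import numpy as np

def sound_speed_and_gamma(eps, p):
    """eps, p: EoS arrays in the same units, ordered by increasing density."""
    cs2 = np.gradient(p, eps)                    # c_s^2 = dp/d(eps)
    gamma = (p + eps) / p * cs2                  # adiabatic index, Eq. (25) below
    return cs2, gamma

def classify(cs2):
    causal = np.all(cs2 > 0.0) and np.all(cs2 <= 1.0)
    below_conformal = np.all(cs2 < 1.0 / 3.0)    # DDQM-like behaviour in Fig. 8
    return causal, below_conformal
```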
\begin{table} \begin{tabular}{c c c c c c c c} \(B^{1/4}\)[MeV] & \(G_{V}\)[fm\({}^{2}\)] & \(b_{4}\) & M\({}_{max}\)[M\({}_{\odot}\)] & \(R\)[km] & \(n\)[fm\({}^{-3}\)] & \(R_{1.4}\)[km] & \(\Lambda_{1.4}\) \\ \hline 135.28 & 0.366 & 1.90 & 2.54 & 13.15 & 0.74 & 12.38 & 1078 \\ 137.96 & 0.235 & 1.63 & 2.40 & 12.46 & 0.82 & 11.94 & 850 \\ 139.79 & 0.159 & 1.69 & 2.28 & 11.96 & 0.89 & 11.63 & 712 \\ 140.90 & 0.116 & 0.72 & 2.21 & 11.66 & 0.94 & 11.43 & 633 \\ \end{tabular} \end{table} Table 3: Neutron star properties for the different vector MIT bag model analyzed Figure 6: In the figures above we compare the EoSs from the DDQM (left panel) with the EoSs for the vector MIT bag model (right panel). Comparing the graphs in Figure 8, in the left panel for the DDQM model, all the curves lie below the conformal limit \(c_{s}^{2}<1/3\). This implies that, for the possible values of \(C\) and \(\sqrt{D}\) obtained for the constraints considered in this work, the stars can possibly be formed through self-bound free quarks in a deconfined state. On the other hand, the vector MIT bag model on the right panel shows characteristics similar to the chiral effective theory, that is, the curves show \(c_{s}^{2}>1/3\) even at very low densities. Aside from that, the curves with higher values of \(b_{4}\), the solid curve and dashed one, show different characteristics from the other two, they rise steadily at low density and start falling at intermediate to high densities. Similar characteristics are observed among hybrid neutron stars, where deconfined quark matter is assumed to be in the core of the stars [78; 79; 15; 80]. In general, in the framework of the vector MIT bag model, the \(c_{s}^{2}\) behaves analogous to the hadronic matter at the lower to intermediate densities and then drops close to the conformal limit in the core of the star where the quark core is assumed to have formed. The other two curves, the dash-dot curve and the dash double-dot one, follow the usual characteristics of the \(c_{s}^{2}\) in the hadronic matter, where \(c_{s}^{2}\) rises steadily with \(n\) but exceeds the conformal limit \(c_{s}^{2}=1/3\)[37]. The decrease of \(c_{s}^{2}\) with the density shows the importance of the self-coupling that mimics the Dirac sea contribution, as the conformal limit must be satisfied at very high densities. In Figure 9, we analyze the stability of the stars using the adiabatic index as a benchmark. Following the seminal works of Chandrasekhar [81; 82], the dynamic stability of a star can be analyzed based on variational methods. An expression for the adiabatic index is given by \[\Gamma=\frac{p+\varepsilon}{p}\left(\frac{dp}{d\varepsilon}\right)_{S}, \tag{25}\] where \(dp/d\varepsilon\) is precisely the speed of sound and \(S\) is the specific entropy at which \(\Gamma\) is evaluated. Generally, the \(\Gamma\) is dimensionless and its behavior depends on the stiffness of the EoS for spherical relativistic fluid. For a stable star, the adiabatic index is required to be \(\Gamma>\Gamma_{cr}=4/3\) in the core of the star. Meanwhile, collapse of the star is expected to begin when \(\Gamma\) falls below \(4/3\), \(\Gamma_{cr}\) is the critical adiabatic index [83; 84; 85; 86]. The case of \(\Gamma=4/3\) is the starting Figure 8: Here we compare the square velocity of sound \(c_{s}^{2}\) resulting from the two models as a function of baryon density \(n\). Figure 7: Comparing the mass-radius diagram of the DDQM (left) and the vector MIT bag model(right). point of instability of the star. 
In Figure 9, we demarcate the instability threshold \(\Gamma_{cr}\) with a dotted horizontal gray line. We observe that the \(\Gamma\) decreases with increasing \(n\) but does not cross the instability threshold for both models under consideration. Additionally, we observe that more massive stars with stiffer EoSs approach \(\Gamma_{cr}\) line faster than relatively lighter stars with softer EoSs. The NS macroscopic properties such as the masses and radii have long been used as constraints to understanding the microscopic properties of these stars. Despite the extensive probe of the NSs, some of its key properties and interior compositions at extreme conditions of density and isospin asymmetry still remain uncertain. Here, we analyze another astrophysical observable property, the tidal deformability presented in Figure 10, that can also be used to probe the interior composition of the NS. The NSs like any other external objects with a defined structure can tidally deform when subject to the influence of an external tidal field. During the event of coalescences of NSs that led to gravitational wave emission, the deformation was quantified through a dimensionless parameter, called the tidal deformability \(\Lambda\). The \(\Lambda\) is given by the expression [87; 68; 88; 89]: \[\Lambda=\frac{2}{3}k_{2}\frac{R^{5}}{M^{5}}, \tag{26}\] where \(k_{2}\) is the gravitational Love number, \(M\) is the mass of the star and \(R\) is its radius. As expected, a relatively larger value of \(\Lambda\) implies the star is large, less compact, and can easily be deformed. On the contrary, a smaller \(\Lambda\) means, a smaller-sized star, highly compact, and hard to deform. Moreover, as pointed in references [89; 68], the value of \(y_{R}\) must be corrected, since strange stars are self-bound and present a discontinuity at the surface. Therefore we must have: \[y_{R}\to y_{R}-\frac{4\pi R^{3}\Delta\varepsilon_{S}}{M}, \tag{27}\] Figure 10: Tidal deformability \(\Lambda\) as a function of the mass \(M[M_{\odot}]\) for the DDQM model (left panel) and vector MIT bag model (right panel). Figure 9: We compare the adiabatic indices \(\Gamma\) of DDQM (left panel) and the vector MIT bag model (right panel) as a function of baryon density \(n\). where \(R\) and \(M\) are the star radius and mass respectively, and \(\Delta\varepsilon_{S}\) is the difference between the energy density at the surface (\(p=0\)) and the exterior of the star (which implies \(\varepsilon=0\)). We considered two events that satisfy some of our results in Figure 10 for both the DDQM and the vector MIT bag model. We first analyze event GW170817, which is arguably the most authoritative confirmed observed binary neutron star merger with an emitted gravitational wave as of now [7; 10]. The value of \(\Lambda_{1.4}\) estimated for this event is \(\Lambda_{1.4}=190^{+390}_{-120}\) at 90% confidence level [7]. If the DDQM model is used, this constraint is satisfied only by the solid curve, which corresponds to the softest DDMQ EoS which does not satisfy the constraint \(\mathrm{M}_{max}\geq 2\;\mathrm{M}_{\odot}\). As for the MIT bag model, none of its curves satisfy this constraint. We also demarcated the binary coalescence event GW190814, believed to consist of a black hole of mass within \(22.2-24.3\;\mathrm{M}_{\odot}\) and a compact object with a mass within \(2.50-2.67\,\mathrm{M}_{\odot}\)[90]. 
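Once the TOV profile is known, evaluating Eqs. (26) and (27) is straightforward; the sketch below (ours) uses the standard closed-form expression for \(k_{2}\) in terms of the compactness and \(y_{R}\) found in the references cited for Eq. (26), and assumes that \(y_{R}\) has been obtained by integrating the usual metric-perturbation equation along with the TOV equations (not repeated here). Geometrized units with \(G=c=1\) are assumed.

```python
import numpy as np

def love_number_k2(C, y):
    """Standard closed-form k_2(C, y_R) with compactness C = M/R (G = c = 1)."""
    num = (8.0 / 5.0) * C**5 * (1 - 2 * C) ** 2 * (2 + 2 * C * (y - 1) - y)
    den = (2 * C * (6 - 3 * y + 3 * C * (5 * y - 8))
           + 4 * C**3 * (13 - 11 * y + C * (3 * y - 2) + 2 * C**2 * (1 + y))
           + 3 * (1 - 2 * C) ** 2 * (2 - y + 2 * C * (y - 1)) * np.log(1 - 2 * C))
    return num / den

def tidal_deformability(M_km, R_km, y_R, delta_eps_surf_km2):
    """Eq. (26) with the self-bound-star correction of Eq. (27) applied to y_R."""
    y_corr = y_R - 4.0 * np.pi * R_km**3 * delta_eps_surf_km2 / M_km   # Eq. (27)
    C = M_km / R_km
    return (2.0 / 3.0) * love_number_k2(C, y_corr) / C**5              # (2/3) k2 (R/M)^5
```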
Analysis of the data associated with GW190814 shows that the possible nature of the compact object can be classified as a neutron star only if its EoS is very stiff [91] or it is a rapidly rotating compact object below the mass shedding frequency [92; 93; 94]. Its value has been estimated to be \(\Lambda_{1.4}=616^{+273}_{-158}\)[90]. This constraint is also satisfied only by the solid curve in the DDQM model. In the MIT bag model case, the three curves associated with the three softest EoSs satisfy this constraint. Hierarchically, curves with lower masses are the first to satisfy the constraints on tidal deformability before the massive ones in both model frameworks. The absolute values of \(\Lambda_{1.4}\) determined in the two separate models analyzed here are presented on Tables 2 and 3, where we can see that the value of \(\Lambda_{1.4}\) decreases with the increasing of the parameter \(C\) and decreasing of \(\sqrt{D}\) for the DDQM model. As for the MIT bag model, the absolute value of \(\Lambda_{1.4}\) decreases with increasing \(B^{1/4}\) and decreasing \(G_{V}\) as can be seen in Table 3. From Table 3, it seems that increasing \(b_{4}\) leads to increasing \(\Lambda_{1.4}\) as well but, since we know that increasing \(b_{4}\) softens the EoS [47], then increasing \(b_{4}\) will decrease \(\Lambda_{1.4}\). In general, the values of the tidal deformability for the MIT bag case are smaller than the values found for the DDQM case. ## VII Conclusions In this paper, we perform a comparative analysis between the DDQM and the vector MIT bag model and their applications to the study of quark stars that satisfy the constraints of some observed pulsars and compact objects listed in Table 1 using Bayesian analysis. We show the corner plot for the distribution of the various parameters determined at various confidence levels in Figures 1, 2, 3 and 4 for Cases I, II III and IV respectively. We imposed four different mass and radius constraints (corresponding to four different compact objects) on the EoSs for each model and determined the model parameters that satisfy these constraints. The parameters determined through this analysis for the vector MIT bag and the DDQM models are presented in Tables 3 and 2 respectively. In the case of the vector MIT bag model, we were able to include the stability window analysis in the Bayesian inference, so that, all points in the corner plots for this model are inside the stability window. In the case of the DDQM, we performed a stability window analysis separately, as can be seen in Figure 5, to clearly show the regions of \(C\) (associated with the leading-order perturbative term in QCD) and \(\sqrt{D}\) (associated with linear confinement) within which the corresponding EoS will lead to the determination of a stable quark star, according to the Bodmer-Witten conjecture. After obtaining the model parameters from the analysis, we applied them to study the properties of quark stars assuming they have the same constraints as NSs. Consequently, we calculated the EoSs for the two models under investigation for each set of values of the parameters that were chosen to be analyzed and determined their speed of sound (\(c_{s}^{2}\)), adiabatic index (\(\Gamma\)), tidal deformability (\(\Lambda\)) and mass-radius diagram (M-R) and compared their similarities and differences. 
* We find that the EoSs determined from the vector MIT bag model are stiffer than the ones determined from the DDQM even though we imposed similar constraints on both models, see the result in Figure 6. In this case, we can infer that different models can show different dynamics of EoS even if the same constraints are used in calculating them. Therefore, we expect stars with similar constraints to show different characteristics in each model framework. * In Figure 7, we present the mass-radius diagram obtained from both models. In the framework of the vector MIT model, all the stars satisfied the 2 \(\mathrm{M}_{\odot}\) mass constraint imposed on NSs. As mentioned above, the EoS determined in this model are stiffer compared to DDQM, hence the higher masses are expected. On the other hand, the maximum masses of the stars within the DDQM also satisfy the 2 \(\mathrm{M}_{\odot}\) constraint except one, the solid line curve with \(\mathrm{M}_{\mathrm{max}}=1.91\;\mathrm{M}_{\odot}\). * In Figure 8, we present the \(c_{s}^{2}\) as a function of \(n\) for both models to help determine the inner composition of the star. Here, the DDQM showed a characteristic similar to deconfined quark matter with \(c^{2}<1/3\). On the other hand, the vector MIT bag model crossed the conformal limit at \(c^{2}=1/3\) even at the low-density regions showing a behavior similar to hadron matter. Also, the curves that have higher maximum masses for this model (solid and dashed line curves in the right panel) showed a steady rise in \(c_{s}\) with \(n\) at low-density regions and started falling at intermediate to higher-density regions. * In Figure 9, we analyze the stability of the stars in both models through their adiabatic indices. Generally, all the stars analyzed are well within the stability threshold \(\Gamma\,>\,\Gamma_{cr}\). However, we observed that curves associated with more massive stars approach the \(\Gamma_{cr}\) line (the gray line) faster than the curves related to smaller M\({}_{max}\) in both models. * Additionally, we studied the tidal deformability that complements the study of the interior dynamics of the NSs. Generally, we observed that curves with higher maximum mass, which at the same time have canonical stars with higher radius, have higher values of \(\Lambda_{1.4}\) than the curves with lower M\({}_{max}\) and more compact canonical stars, for the same model. The stars determined in the framework of the vector MIT bag model have larger maximum masses, smaller radii, and smaller values of \(\Lambda_{1.4}\) compared to the ones determined from the DDQM with relatively low masses and higher radii. As expected, less compact stars are more likely to be deformed than the more compact ones in the same model framework. Only one of the curves (solid line) in the DDQM satisfies the constraints imposed by both GW170817 and GW 190814 and three of the curves (dash double-dotted, dash-dotted, and dashed lines) in the MIT bag model satisfy the GW190814, as can be seen in Figure 10. Consequently, the optimized model parameters for the MIT bag and the DDQM models determined through the analysis qualitatively reproduce some known NS properties such as the one listed above. Most of the results obtained conform with the 2 M\({}_{\odot}\) maximum constraint imposed on NSs [56]. Some of the results obtained for the \(\Lambda_{1.4}\) satisfy the GW170817 [7; 10] (DDQM) and GW190814 [90; 91] (MIT bag model) signal ranges. ## Acknowledgements This work is a part of the project INCT-FNA Proc. No. 464898/2014-5. 
D.P.M. was partially supported by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq/Brazil) under grant 303490-2021-7 and A.I. under grant 168546/2021-3. F.M.S. would like to thank FAPESC/CNPq for financial support under grant 150721/2023-4. L.C.N.S would like to thank CNPq for partial financial support through the research Project No. 200215/2023-0. LLL was partially supported by CNPq Universal Grant No. 409029/2021-1.
2309.08497
Statistical anisotropy of primordial gravitational waves from generalized $δN$ formalism
In this letter, we demonstrate how to use the generalized $\delta N$ formalism, which enables us to compute the evolution of all the large scale fluctuations, including gravitational waves, solely by solving the evolution of the background homogeneous Universe. Using the Noether charge density, we derive an analytic formula which describes the mapping between the fluctuations at the horizon crossing and the sourced gravitational waves at the end of inflation. This formula can apply also to an inflation model with an anisotropic background. Using this formula, we discuss the condition for the non-vanishing linear polarization and the qualitative difference between single- and multi-gauge field models.
Takahiro Tanaka, Yuko Urakawa
2023-09-15T16:01:41Z
http://arxiv.org/abs/2309.08497v2
# Statistical anisotropy of primordial gravitational waves from generalized \(\delta N\) formalism ###### Abstract In this letter, we demonstrate how to use the generalized \(\delta N\) formalism, which enables us to compute all the large scale fluctuations, including the gravitational waves, solely by solving the evolution of the background homogeneous Universe. Using the Noether charge density, we derive an analytic formula which describes the mapping between the fluctuations at the horizon crossing and the sourced gravitational waves at the end of inflation. This formula can apply also to an inflation model with an anisotropic background. _Introduction.-_ The \(\delta N\) formalism [1; 2; 3], which is based on the separate universe approach [4; 5], has played the central role to connect the prediction of each inflation model with various observations, enabling a simple computation of the superhorizon dynamics and also providing an intuitive understanding on the evolution of the primordial fluctuations. However, their application was limited to a system which contains only scalar fields and it could not be used even to compute the gravitational waves. In Ref. [6], we have shown that the separate universe approach can be applied under a rather general setup. Based on this, we can generalize the \(\delta N\) formalism to compute the adiabatic perturbation and the gravitational waves sourced by non-zero spin fields. In this letter, we demonstrate this computation, considering a model with \(U(1)\) gauge fields. _g\(\delta N\) formalism.-_ The \(\delta N\) formalism computes the evolution of fluctuations at the leading order of the gradient expansion [4; 7], which is an expansion scheme with respect to the spatial gradient. The gradient expansion starts with smoothing the small scale fluctuations. As a consequence of the smoothing, operating the spatial gradient gives rise to the suppression by a small quantity \(\epsilon\), which is usually characterized by the spatial variation of \(\varphi^{a}\) within each causally connected patch. At the leading order of the gradient expansion, we simply send \(\epsilon\) to \(0\). As shown in [6], the separate universe approach and the \(\delta N\) formalism generically applies to a theory which satisfies the spatial diffeomorphism invariance and the locality. Under these two conditions, one can compute the time evolution of the inhomogeneous Universe simply by solving a set of the corresponding ordinary differential equations for different initial conditions specified at around the horizon crossing. The conventional \(\delta N\) formalism can be applied only to a system with scalar fields, while the generalized \(\delta N\) formalism (g\(\delta N\) formalism) can be applied broadly to a general model that satisfies the above-mentioned two conditions [6]. _Preliminaries.-_ In this letter, we consider a system with \(D\) scalar fields \(\phi^{I}\) and \(D^{\prime}\) U(1) gauge fields \(A^{\mu}_{(\alpha)}\) whose Lagrangian density is given by \[\mathcal{L}_{\rm mat}=\mathcal{P}(X^{IJ},\,\phi^{I})-\frac{f_{(\alpha)}^{2}( \phi^{I})}{4}F_{\mu\nu(\alpha)}F^{\mu\nu}_{(\alpha)}\,, \tag{1}\] where \(F_{\mu\nu(\alpha)}\) denotes the field strengths of the gauge fields \(A^{\mu}_{(\alpha)}\). Here and hereafter, we do not explicitly write the summation over \(\alpha\), which counts different gauge fields. In this letter, we express the 4-dimensional line element as \[ds^{2}=-N^{2}dt^{2}+g_{ij}(dx^{i}+N^{i}dt)(dx^{j}+N^{j}dt)\,, \tag{2}\] with \(i,j=1,\cdots,3\). 
We express the spatial metric as \(g_{ij}=e^{2\psi}\,\gamma_{ij}\), where \(\gamma_{ij}\) satisfies \(\det[\gamma]=1\). Using \(\psi\), the determinant of \(g_{ij}\) is given by \(g=e^{6\psi}\). In the separate universe evolution, we need to employ local gauge conditions [6]. Here, we adopt \(N_{i}=0\) and \(A_{0}=0\). In \(N_{i}=0\) gauge, the expansion \(K\) and the shear \(A^{i}_{\ j}\), given by the trace part and the traceless part of the extrinsic curvature, read \(K=3\dot{\psi}/N\) and \(A^{i}_{\ j}=\gamma^{im}\dot{\gamma}_{mj}/2N\), respectively, with a dot being the time derivative with respect to \(t\). We determine the time slicing at the horizon crossing by requiring \(\delta\psi(t_{*},\,\mathbf{x})=0\) and the one at \(t=t_{\rm f}\) by requiring \(\delta K(t_{f},\,\mathbf{x})=0\). The residual gauge degrees of freedom are eliminated by imposing additional gauge conditions on the initial condition at the horizon crossing [8]. We introduce \(\delta\gamma^{i}_{\ j}\) and \(\delta\gamma_{ij}(t,\,\mathbf{x})\) as \(\gamma_{ij}(t,\,\mathbf{x})\equiv\bar{\gamma}_{ik}(t)\left[e^{\delta\gamma(t,\,\mathbf{x})}\right]^{k}_{\ j}\) and \(\delta\gamma_{ij}(t,\,\mathbf{x})\equiv\bar{\gamma}_{ik}(t)\delta\gamma^{k}_{\ j}(t,\,\mathbf{x})\). Here and hereafter, we put a bar on background variables. The Maxwell equations are given by \(\partial_{t}\pi^{i}_{(\alpha)}=\mathcal{O}(\epsilon^{2})\) with the conjugate momenta being \[\pi^{i}_{(\alpha)}\equiv\frac{\partial(N\sqrt{g}\mathcal{L}_{\text{matter}})}{\partial\dot{A}_{i(\alpha)}}=\sqrt{g}f^{2}_{(\alpha)}(\phi^{I})g^{ij}\frac{\dot{A}_{j(\alpha)}}{N}\,,\] which correspond to the Noether charge densities at the leading order of the gradient expansion [6]. The energy densities of the gauge fields are given by \[\rho_{A(\alpha)}=\frac{\gamma_{ij}\pi^{i}_{(\alpha)}\pi^{j}_{(\alpha)}}{2f^{2}_{(\alpha)}e^{4\psi}}+\mathcal{O}(\epsilon^{2})\,, \tag{3}\] which indicates that \(\rho_{A(\alpha)}\) grows when \(f_{(\alpha)}\) decreases faster than \(e^{-2\psi}\). _Primordial gravitational waves.-_ In this letter, we assume the near de Sitter evolution (but not the slow-roll approximation) and that the shear is much smaller than the expansion, i.e., \[\text{(A1)}\quad-\frac{3\dot{K}}{NK^{2}}\ll 1\,,\quad\frac{A^{i}{}_{j}A^{j}{}_{i}}{K^{2}}\ll 1\,.
\tag{4}\] Using the solution of the Maxwell equations and discarding trivial decaying solutions, we can formally solve the traceless part of \((i,\,j)\) component of the Einstein equations as \[\gamma_{ij}(t,\,\mathbf{x}) =\gamma_{ij*}(\mathbf{x})-2\left[\gamma_{il*}(\mathbf{x})\gamma_{jm*}(\bm {x})\right]^{\text{TL}}\] \[\quad\times\hat{\pi}^{l}_{(\alpha)}(\mathbf{x})\hat{\pi}^{m}_{(\alpha )}(\mathbf{x})\,\hat{\mathcal{I}}_{(\alpha)}(t;\{\varphi^{a^{\prime}}_{*}\}^{ \prime})+\mathcal{O}(\epsilon^{2})\,, \tag{5}\] with \(\hat{\mathcal{I}}_{(\alpha)}\) and \(\hat{\pi}^{i}_{(\alpha)}\) being \[\hat{\mathcal{I}}_{(\alpha)}(t;\{\varphi^{a^{\prime}}_{*}\}^{ \prime})\] \[\equiv\frac{\gamma_{ij*}\pi^{i}_{(\alpha)}\pi^{j}_{(\alpha)}}{M_{ \text{pl}}^{2}}\!\int_{t_{*}}^{t}\!\frac{dt^{\prime}N}{e^{3\psi(t^{\prime})}} \!\int_{t_{*}}^{t^{\prime}}\!\frac{dt^{\prime\prime}N}{e^{\psi(t^{\prime\prime })}f^{2}_{(\alpha)}(\phi^{I}(t^{\prime\prime}))}\,,\] and \[\hat{\pi}^{i}_{(\alpha)}\equiv\frac{\pi^{i}_{(\alpha)}}{\sqrt{\gamma_{ij*} \pi^{i}_{(\alpha)}\pi^{j}_{(\alpha)}}}=\frac{\gamma^{ij}_{*}\dot{A}_{j( \alpha)*}}{\sqrt{\gamma^{kl}_{*}\dot{A}_{k(\alpha)*}\dot{A}_{l(\alpha)*}}}\,, \tag{6}\] where \(\{\varphi^{a^{\prime}}_{*}\}^{\prime}\) denotes the set of the fields by which we provide the initial condition at the horizon crossing (a detailed argument can be found in [8]). Here, TL denotes the traceless part about the \((i,j)\) indices defined by using the spatial metric \(\gamma_{ij*}\). The formal solution (5) can be more easily obtained by using the Noether charge density [6; 8] than by solving the Einstein equations directly. The second assumption in (A1) can be verified when the energy density of the gauge fields, \(\rho_{A}=\sum_{\alpha}\rho_{A(\alpha)}\), remains much smaller than the total one. Here we only take into account the leading contribution of the gauge fields to \(\gamma_{ij}\). This equation also gives the longitudinal part of \(\gamma_{ij}\), which is necessary to compute \(\zeta\) in the present gauge. Since \(\hat{\pi}^{i}_{(\alpha)}\) is constant regardless the evolution of \(f_{(\alpha)}\), the scalar fields contribute to \(\gamma_{ij}\) only through \(\hat{\mathcal{I}}_{(\alpha)}\). Ignoring the time variation of the expansion under the assumption (4), we can further rewrite \(\hat{\mathcal{I}}_{(\alpha)f}\equiv\hat{\mathcal{I}}_{(\alpha)}(t_{f})\) with \(t_{f}\) being the reheating surface as \[\hat{\mathcal{I}}_{(\alpha)f}\simeq\frac{1}{\rho_{*}}\int_{\psi_{*}}^{\psi_{f}} \frac{d\psi^{\prime\prime}\gamma_{ij}\pi^{i}_{(\alpha)}\pi^{j}_{(\alpha)}}{e^ {4\psi^{\prime\prime}}f^{2}_{(\alpha)}(\phi^{I}(\psi^{\prime\prime}))} \tag{7}\] with \(\rho_{*}\) being the total energy density at \(t_{*}\). Here, we inserted \(\gamma_{ij*}\pi^{i}\pi^{j}\) in the integral, since the time variation of \(\gamma_{ij}\) becomes higher order under the assumption (A1). Here, we have integrated over \(t^{\prime}\) after exchanging the order of the integrals and ignored the exponentially suppressed term. Using Eq. (3), one can find that the integrand of \(\hat{\mathcal{I}}_{(\alpha)f}\) corresponds to the energy density of each gauge field. Therefore, when \(\rho_{A(\alpha)}\) takes a maximum value \(\rho_{A(\alpha)}^{\text{max}}\) during \(\Delta\psi_{(\alpha)}\) after the horizon crossing, \(\hat{\mathcal{I}}_{(\alpha)f}\) is roughly given by \(\hat{\mathcal{I}}_{(\alpha)f}\sim 2\Delta\psi_{(\alpha)}\rho_{A(\alpha)}^{\text{max}}/ \rho_{*}\). 
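The estimate above can be checked with a toy numerical example (ours; the profile of \(\rho_{A}(\psi)/\rho_{*}\) below is an assumed illustration, not the output of a concrete model): using Eq. (3), the integrand of Eq. (7) is \(2\rho_{A}(\psi)\), so \(\hat{\mathcal{I}}_{(\alpha)f}\) reduces to \(2/\rho_{*}\) times the area under \(\rho_{A}(\psi)\).

```python
import numpy as np

psi = np.linspace(0.0, 60.0, 6001)               # e-folds after horizon crossing
# assumed toy history: rho_A grows while f decreases faster than e^{-2 psi}, sits at
# rho_A^max/rho_* = 1e-4 for Delta_psi = 10 e-folds, then decays
rho_over_rho_star = np.piecewise(
    psi, [psi < 20, (psi >= 20) & (psi < 30), psi >= 30],
    [lambda x: 1e-4 * np.exp(0.5 * (x - 20.0)), 1e-4, lambda x: 1e-4 * np.exp(-(x - 30.0))])

I_f = 2.0 * np.trapz(rho_over_rho_star, psi)     # Eq. (7) with the integrand rewritten via Eq. (3)
print(I_f, 2.0 * 10.0 * 1e-4)                    # comparable to 2 * Delta_psi * rho_A^max / rho_*
```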
We further assume \[\text{(A2)}\quad\left|\frac{\delta\hat{\mathcal{I}}_{(\alpha)f}}{\hat{ \mathcal{I}}_{(\alpha)f}}\right|\ll|\delta\hat{\pi}^{i}_{(\alpha)}|\,. \tag{8}\] For example, when \(\rho_{A(\alpha)}^{\text{max}}\) is determined by the potential gradient \(V_{I}/V\) as a consequence of the balance with the friction, the fluctuation of \(\rho_{A(\alpha)}^{\text{max}}\) is suppressed by the slow-roll parameter [9; 10]. Meanwhile, in the model discussed in Ref. [11], since \(\rho_{A(\alpha)}^{\text{max}}\) becomes constant, there is no fluctuation. Then, the left hand side of (A2) becomes \(|\delta\Delta\psi_{(\alpha)}|/\Delta\psi_{(\alpha)}\), which typically becomes smaller than \(|\delta\hat{\pi}^{i}_{(\alpha)}|\!\sim\!|\delta\rho_{A(\alpha)*}/\rho_{A( \alpha)*}|\) by a factor of \(1/\Delta\psi_{(\alpha)}\). _Perturbative expansion in \(g\delta N\) formalism.-_ In the conventional \(\delta N\) formalism, the adiabatic curvature perturbation \(\zeta\) is given by computing the \(e\)-folding number under different initial conditions. Similarly, we can compute the gravitational waves simply by computing the anisotropic expansion. Under the assumptions listed above, perturbing Eq. (5), the dominant contribution at the linear perturbation is given by \[\delta\gamma_{ij}(t_{f},\,\mathbf{x})\] \[\simeq\delta\gamma_{ij*}(\mathbf{x})-4\left[\bar{\gamma}_{il*}\bar{ \gamma}_{jm*}\right]^{\text{TL}}\bar{\pi}^{l}_{(\alpha)}\delta\hat{\pi}^{m}_{( \alpha)}(\mathbf{x})\,\bar{\tilde{\mathcal{I}}}_{f(\alpha)}\,. \tag{9}\] The first term in the second line is the usual vacuum contribution. The second term describes the shear fluctuation sourced by the fluctuations in the directions of \(\pi^{i}_{(\alpha)}\). The fluctuation of \(\gamma_{ij*}\) also perturbs the second term of Eq. (5), yielding several terms whose amplitudes amount to \(\tilde{\mathcal{I}}_{f(\alpha)}(H_{*}/M_{\text{pl}})\sim(\bar{\rho}^{\text{ max}}_{A(\alpha)}/\bar{\rho}_{*})\Delta\psi_{(\alpha)}(H_{*}/M_{\text{pl}})\). They turn out to be smaller than the second term of Eq. (9) by \(\sqrt{\bar{\rho}^{\text{max}}_{A(\alpha)}/\bar{\rho}_{*}}<1\). This suppression originates from the difference between the amplitudes of \(\delta\gamma_{ij*}\) and \(\delta\hat{\pi}^{i}_{(\alpha)*}\). Perturbing \(\hat{\pi}^{i}_{(\alpha)}\), we obtain \[\delta\hat{\pi}^{i}_{(\alpha)}=\frac{\bar{\gamma}^{ij}_{*}\dot{\delta}\dot{A}_{j( \alpha)*}}{\dot{\tilde{A}}_{(\alpha)*}}-\frac{\bar{\gamma}^{kl}_{*}\dot{\tilde{A} }_{k(\alpha)*}\delta\dot{A}_{l(\alpha)*}\bar{\gamma}^{ij}_{*}\dot{\tilde{A}}_{j( \alpha)*}}{\dot{\tilde{A}}^{3}_{(\alpha)*}}+\cdots\,,\] where we have introduced \(\dot{\hat{A}}_{(\alpha)*}\equiv\sqrt{\dot{\gamma}_{*}^{k}\dot{\hat{A}}_{k(\alpha)* }\dot{\hat{A}}_{l(\alpha)*}}\). Here we abbreviated the terms with the fluctuation of the spatial metric, which only give sub-dominant contributions. Without these abbreviated terms, we find \(\bar{\gamma}_{ij*}\delta\hat{\pi}^{i}_{(\alpha)}\bar{\pi}^{j}_{(\alpha)}=0\). _Polarization bases._-In what follows, we set the background spatial metric at the reheating surface \(t_{f}\) as \(\bar{\gamma}_{ij}(t_{f})=\delta_{ij}\). 
Then, at \(t=t_{f}\), we can define the gravitational waves as usual by using the polarization tensors \(e^{(\lambda_{\rm gw})}_{ij}\) with \(\lambda_{\rm gw}=+,\,\times\), which satisfy \(\hat{k}^{i}e^{(\lambda_{\rm gw})}_{ij}(\hat{\mathbf{k}})=0\), \(e^{(\lambda_{\rm gw})}_{ii}(\hat{\mathbf{k}})=0\), and \(e^{(\lambda_{\rm gw})}_{ij}(\hat{\mathbf{k}})e^{(\lambda^{\prime}_{\rm gw})}_{ij}(\hat{\mathbf{k}})=\delta^{\lambda_{\rm gw}\lambda^{\prime}_{\rm gw}}\). The adiabatic curvature perturbation is also given by the usual definition as \(\zeta(t_{f})=\delta\psi(t_{f})-\frac{1}{4}\hat{k}_{i}\hat{k}_{j}\delta\gamma_{ij}(t_{f})\). Here, \(\hat{\mathbf{k}}\) denotes the unit wavenumber \(\mathbf{k}/k\). Here and hereafter, we lower and raise the indices of \(\hat{k}^{i}\) and \(e^{(\lambda_{\rm gw})}_{ij}\) by using \(\bar{\gamma}_{ij}(t_{f})=\delta_{ij}\) and \(\bar{\gamma}^{ij}(t_{f})=\delta^{ij}\). (When the background shear had already become negligible at \(t=t_{f}\), the background spatial metric remains \(\bar{\gamma}_{ij}(t)=\delta_{ij}\) all the time after \(t_{f}\), ensuring the linear decomposition among the scalar, vector, and tensor type perturbations.) Using the two polarization bases of the gauge fields, \(e^{(A)}_{i}(\hat{\mathbf{k}})\) with \(A=1,\,2\), we define \(e^{(\lambda_{\rm gw})}_{ij}(\hat{\mathbf{k}})\) as \(e^{(+)}_{ij}\equiv(e^{(1)}_{i}e^{(1)}_{j}-e^{(2)}_{i}e^{(2)}_{j})/\sqrt{2}\) and \(e^{(\times)}_{ij}\equiv(e^{(1)}_{i}e^{(2)}_{j}+e^{(2)}_{i}e^{(1)}_{j})/\sqrt{2}\). For an arbitrary \(\hat{k}^{i}\), we can choose \(e^{(2)}_{i}\) such that it is orthogonal to one of the background gauge fields, i.e., \(e^{(2)}_{i}\bar{\pi}^{i}_{(1)}=0\). In fact, we can choose \(e^{(1)}_{i}\) in the 2D plane spanned by \(\hat{k}_{i}\) and \(\bar{\pi}^{i}_{(1)}\), ensuring \(e^{(2)}_{i}\bar{\pi}^{i}_{(1)}=0\). When we define \(\Theta_{\alpha}\) as \(\cos\Theta_{\alpha}=\hat{\mathbf{k}}\cdot\bar{\bar{\pi}}_{(\alpha)}\), under this choice, we obtain \(\mathbf{e}^{(1)}\cdot\bar{\bar{\mathbf{\pi}}}_{(1)}=\cos(\pi/2+\Theta_{1})=-\sin\Theta_{1}\). _Linear polarization._-For \(D^{\prime}=1\), choosing the direction of \(e^{(A)}_{i}\) to satisfy \(e^{(2)}_{i}\bar{\pi}^{i}=0\) and operating \(e^{(\lambda_{\rm gw})}_{ij}\) on Eq. (9), we obtain \[\gamma^{(\lambda_{\rm gw})}(t_{f},\,\mathbf{k})\] \[=\gamma^{(\lambda_{\rm gw})}_{*}(\mathbf{k})+2\sqrt{2}\sin\Theta\,\bar{\bar{\mathcal{I}}}_{f}\begin{cases}\cos^{2}\Theta\frac{e^{(1)}_{i}\delta\hat{A}_{i*}(\mathbf{k})}{\hat{A}_{*}}&(\lambda_{\rm gw}=+)\\ \frac{e^{(2)}_{i}\delta\hat{A}_{i*}(\mathbf{k})}{\hat{A}_{*}}&(\lambda_{\rm gw}=\times)\end{cases},\] where we have abbreviated the index \(\alpha=1\). This indicates that the gauge field contributes differently to the two linear polarization modes even if \(e^{i}_{(1)}\delta\hat{A}_{i*}\) and \(e^{i}_{(2)}\delta\hat{A}_{i*}\) have the same amplitude [12]. This is because the fluctuation of the amplitude of \(\pi^{i}\), which corresponds to the second term in \(\delta\hat{\pi}^{i}\), includes only the \(\lambda=1\) mode under the choice of \(e^{(2)}_{i}\bar{\pi}^{i}_{(1)}=0\), and only the \(\lambda=1\,(2)\) mode of the gauge field contributes to \(\lambda_{\rm gw}=+\,(\times)\). We can similarly compute the sourced gravitational waves for \(D^{\prime}\geq 2\) by using Eq. (9).
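Before moving on, the \(D^{\prime}=1\) formula above can be evaluated directly once \(\Theta\), \(\bar{\bar{\mathcal{I}}}_{f}\) and the two projections of the gauge-field fluctuation are specified; a small sketch (ours, with placeholder numbers):

```python
import numpy as np

def sourced_polarizations(Theta, I_f, dA1_over_A, dA2_over_A):
    """Sourced parts of gamma^+ and gamma^x for D'=1 (vacuum contribution omitted)."""
    pref = 2.0 * np.sqrt(2.0) * np.sin(Theta) * I_f
    return pref * np.cos(Theta) ** 2 * dA1_over_A, pref * dA2_over_A

# illustrative numbers only: even for equal projections of the gauge-field
# fluctuation, the + mode is suppressed by cos^2(Theta) relative to the x mode
print(sourced_polarizations(Theta=np.pi / 3, I_f=2e-3, dA1_over_A=1e-3, dA2_over_A=1e-3))
```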
Also for \(D^{\prime}\geq 2\) we find that when the background component of a gauge field is along the direction of \(\mathbf{k}\) (corresponding to \(\Theta_{\alpha}=0\)), this gauge field does not contribute to the gravitational waves, \(\gamma^{(\lambda_{\rm gw})}\). _Power spectrum of the gravitational waves._- So far, we have computed the mapping between the horizon crossing \(t_{*}\) and the reheating \(t_{f}\) simply by assuming (A1) and (A2). In what follows, assuming further that the background anisotropy was still very small at \(t=t_{*}\), we compute the power spectrum of \(\delta\gamma_{ij*}\) and \(\delta\hat{A}_{i*}\) by adopting the FLRW background approximation. The amplitudes of the mode functions of the gauge fields, \(\hat{A}^{\lambda}_{(\alpha)}\), are given by \[\frac{|\hat{A}^{\lambda}_{(\alpha)}(t_{*},\,k)|^{2}}{\hat{A}^{2}_{*(\alpha)}}=\frac{1}{12}\left(1+\left(\frac{\dot{\bar{f}}_{*}}{H_{*}}\right)^{2}\right)\frac{1}{k^{3}}\frac{\bar{\rho}_{*}}{\bar{\rho}_{A(\alpha)*}}\left(\frac{H_{*}}{M_{\rm pl}}\right)^{2},\] where \(H\) denotes the Hubble parameter. Combining the expressions given above, we obtain the power spectrum of the primordial gravitational waves for \(D^{\prime}=1\) as \[\langle\gamma^{(\lambda_{\rm gw})}(t_{f},\,\mathbf{k})\gamma^{(\lambda_{\rm gw})}(t_{f},\,\mathbf{p})\rangle\] \[=\delta(\mathbf{k}+\mathbf{p})\frac{2}{k^{3}}\left(\frac{H_{*}}{M_{\rm pl}}\right)^{2}(1+g^{\lambda_{\rm gw}}_{t}\sin^{2}\Theta)\,. \tag{10}\] with \[g^{\times}_{t}=\frac{\bar{\rho}_{*}}{3\bar{\rho}_{A*}}\bar{\bar{\mathcal{I}}}_{f}^{2}\left(1+\left(\frac{\dot{\bar{f}}_{*}}{H_{*}}\right)^{2}\right),\quad g^{+}_{t}=g^{\times}_{t}\cos^{4}\Theta\,. \tag{11}\] Using \(\bar{\bar{\mathcal{I}}}_{f}\sim 2\Delta\psi\bar{\rho}^{\rm max}_{A}/\bar{\rho}_{*}\), we find \(g^{\lambda_{\rm gw}}_{t}\sim(\bar{\rho}^{\rm max}_{A}/\bar{\rho}_{*})(\bar{\rho}^{\rm max}_{A}/\bar{\rho}_{A*})\Delta\psi^{2}\). This indicates that even if the energy density of the gauge field remains much smaller than the total energy density at all times, satisfying \((\bar{\rho}^{\rm max}_{A}/\bar{\rho}_{*})\Delta\psi^{2}\ll 1\), the anisotropic component of the primordial gravitational waves can be as large as or even larger than the isotropic one [13]. This is because of the enhancement by \(\bar{\rho}^{\rm max}_{A}/\bar{\rho}_{A*}\), which becomes much larger than 1 when the mode \(k\) crosses the horizon before \(\rho_{A}\) reaches the maximum value. This is because the conversion from the fluctuation of the gauge field to the gravitational waves takes place mainly when \(\rho_{A}\) reaches the maximum value, while the (normalized) power spectrum of the gauge field at the horizon crossing is inversely proportional to \(\bar{\rho}_{A*}\). Because of this enhancement, even if the gauge field is sourced only by spectator fields, which occupy only a small fraction of the total energy density, we can obtain a large statistical anisotropy [12]. Similarly, we can generalize the discussion to \(D^{\prime}\geq 1\), obtaining \[\langle\gamma^{(\lambda_{\rm gw})}(t_{f},\,\mathbf{k})\gamma^{(\lambda_{\rm gw})}(t_{f},\,\mathbf{p})\rangle\] \[=\delta(\mathbf{k}+\mathbf{p})\frac{2}{k^{3}}\left(\frac{H_{*}}{M_{\rm pl}}\right)^{2}\!\left(1+\sum_{\alpha=1}^{D^{\prime}}g^{\lambda_{\rm gw}}_{t(\alpha)}\sin^{2}\Theta_{\alpha}\right)\,.
\tag{12}\] with \(g^{\lambda_{\rm gw}}_{t(\alpha)}\sim(\bar{\rho}^{\rm max}_{A(\alpha)}/\bar{\rho}_{*})(\bar{\rho}^{\rm max}_{A(\alpha)}/\bar{\rho}_{A(\alpha)*})\Delta\psi^{2}_{(\alpha)}\). _Adiabatic curvature perturbation.-_ In the present gauge, \(\zeta(t_{f})\) consists of the fluctuation of the \(e\)-folding number and the longitudinal part of \(\delta\gamma_{ij}\) at \(t=t_{f}\), where the background spatial metric is set to \(\delta_{ij}\). Under the assumptions (A1) and (A2), we obtain the dominant contribution of the latter as \[\frac{1}{4}\hat{k}_{i}\hat{k}_{j}\delta\gamma_{ij}(t_{f},\,\mathbf{k})=\sum_{\alpha=1}^{D^{\prime}}(\hat{k}_{i}\hat{\pi}^{i}_{(\alpha)})^{2}\bar{\tilde{\mathcal{I}}}_{f(\alpha)}\hat{\pi}^{j}_{(\alpha)}\frac{\delta\dot{A}_{j\ast(\alpha)}(\mathbf{k})}{\tilde{A}_{\ast}}\,,\] while the former depends significantly on the details of the models. For example, when a canonical inflaton field \(\phi\) which dominates the energy density of the Universe does not interact directly with other scalar fields \(\sigma_{I}\) with \(I=1,\,\cdots,D-1\) nor with the gauge field (\(D^{\prime}=1\)), under the slow-roll approximation, we obtain \[\frac{d\phi}{d\psi}\simeq-M_{\rm pl}^{2}\frac{V_{\phi}}{V+\rho_{\sigma}+\rho_{A}}\,,\] where \(V\) and \(V_{\phi}\) denote the scalar potential of \(\phi\) and its derivative, and \(\rho_{\sigma}\) denotes the energy density of the \(\sigma_{I}\)s. Assuming that when the total energy density of the gauge field takes the maximum value (\(\psi_{\rm max}\leq\psi\leq\psi_{\rm max}+\Delta\psi\)) both \(\rho_{A}\) and \(\rho_{\sigma}\) remain almost constant until \(\psi_{f}\), the above equation can be solved as \[\psi_{f}-\psi_{\ast}\simeq(\psi_{f}-\psi_{\ast})_{\phi}+\frac{\rho_{\sigma}^{\rm max}+\rho_{A}^{\rm max}}{V}\Delta\psi\,,\] where \(\rho_{\sigma}^{\rm max}\) denotes \(\rho_{\sigma}\) during \(\Delta\psi\) but not the maximum value of \(\rho_{\sigma}\), and the first term denotes the \(e\)-folding number which is determined only by the dynamics of \(\phi\). The other scalar fields can also provide angular-independent sub-dominant contributions to \(\psi_{\rm max}-\psi_{\ast}\), which can be addressed by using the conventional \(\delta N\) formalism. Here and hereafter, we ignore them, focusing on the leading angular-dependent contribution. Perturbing the above expression, we find that the dominant angular-dependent contribution appears from the fluctuation of \(\rho_{\sigma}^{\rm max}\) under the assumptions (A2) and \(\Delta\psi\gg 1\). Expressing the time evolution of \(\rho_{A}\) during \(\psi_{\ast}\leq\psi\leq\psi_{\rm max}\) by using a transfer function \(\mathcal{T}\) as \(\rho_{A}^{\rm max}=\mathcal{T}(\psi_{\rm max}-\psi_{\ast})\times\rho_{A\ast}\), we obtain \[\delta(\psi_{\rm max}-\psi_{\ast})\simeq-\frac{1}{(\ln\mathcal{T})^{\prime}}\frac{\delta\rho_{A\ast}}{\bar{\rho}_{A\ast}}\,,\] since we can ignore \(\delta\rho_{A}^{\rm max}\) under (A2). Here, the prime denotes the derivative with respect to the argument of \(\mathcal{T}\), \(\psi_{\rm max}-\psi_{\ast}\).
Using this expression in the fluctuation of \(\rho_{\sigma}^{\rm max}\simeq\rho_{\sigma\ast}+(d\rho_{\sigma}/d\psi)_{\ast}( \psi_{\rm max}-\psi_{\ast})\), we obtain \[\zeta(t_{f},\,\mathbf{k})\simeq-\frac{V_{\ast}}{M_{\rm pl}^{2}V_{\phi \ast}}\delta\phi_{\ast}(\mathbf{k})-2\frac{\rho_{A}^{\rm max}}{\rho_{\ast}}\Delta \psi\frac{\hat{\pi}^{i}\delta\dot{A}_{i\ast}(\mathbf{k})}{\tilde{A}_{\ast}}\] \[\times\bigg{(}\frac{d\rho_{\sigma}}{d\psi}|_{\ast}\frac{1}{(\ln \mathcal{T})^{\prime}\rho_{A}^{\rm max}}+\cos^{2}\Theta\bigg{)}.\] The second term in the parenthesis is the model independent contribution which comes from the longitudinal mode of \(\delta\gamma_{ij}\) and the first one comes from the fluctuation of the \(e\)-folding number. The different \(\rho_{A\ast}\) results in the difference in \(\rho_{\sigma}^{\rm max}\), since the \(e\)-folding number spent until \(\psi_{\rm max}\) differs, generating the fluctuation of the \(e\)-folding number, \(\delta(\psi_{f}-\psi_{\ast})\). For the specific choice of \(f\) and the potential of \(\sigma_{I}\) with \(D=2\), the above formula of \(\zeta\) reproduces the result obtained in Ref. [12]. Since the statistical anisotropy in the power spectrum of \(\zeta\) is suppressed by the slow-roll parameter \(\varepsilon=(M_{\rm pl}V_{\phi}/V)^{2}/2\) compared to that of the gravitational waves \(g_{t}^{\rm max}\), the statistical anisotropy in the power spectrum of the gravitational waves can be as large as \(\mathcal{O}(1)\), keeping the scalar spectrum consistent with the present observations [12]. This remains the same as long as the amplitude of the first term in the parenthesis of \(\zeta\) does not exceed \(\mathcal{O}(1)\). Using these formulae, we can also compute the cross-correlation of \(\zeta\) and \(\gamma^{+}\), which takes a non-vanishing value. _Primordial non-Gaussianity._- Using the g\(\delta N\) formalism, the local type non-Gaussianity can be computed easily. Here, let us only provide the order estimation, leaving a detailed computation for elsewhere. We obtain the local type non-Gaussianity of \(\gamma^{(\lambda_{\rm sw})}\) sourced by the gauge fields as \[f_{\rm NL}^{\gamma,A}\sim\sum_{\alpha=1}^{D^{\prime}}\Bigl{(}\frac{\bar{\rho}_ {\ast}}{\bar{\rho}_{A(\alpha)\ast}}\Bigr{)}^{2}\bar{\tilde{\mathcal{I}}}_{f( \alpha)}^{3}\sim\sum_{\alpha=1}^{D^{\prime}}\Bigl{(}\frac{\bar{\rho}_{A(\alpha )}^{\rm max}}{\bar{\rho}_{A(\alpha)\ast}}\Bigr{)}^{2}\frac{\bar{\rho}_{A( \alpha)}^{\rm max}}{\bar{\rho}_{\ast}}\Delta\psi_{(\alpha)}^{3}\,,\] which can be also enhanced by the square of \(\bar{\rho}_{A(\alpha)}^{\rm max}/\bar{\rho}_{A(\alpha)\ast}\), which can be very large when the corresponding gauge field reaches the maximum value after the horizon crossing. Here, we have ignored the angular dependence which requires a more detailed computation. The local type non-Gaussianity of \(\zeta\) which stems from the longitudinal part of \(\delta\gamma_{ij}\) is suppressed by \(\varepsilon^{2}\) compared to \(f_{\rm NL}^{\gamma,A}\), while there is also model dependent contribution in the fluctuation of the \(e\)-folding number. _Summary._- In this letter, we showed the g\(\delta N\) formalism can largely facilitate the computation of \(\zeta\) and the gravitational waves by considering an inflation model with U(1) gauge fields. Since the g\(\delta N\) formalism generically applies to a model with the locality and the spatial diffeomorphism invariance, various applications will be possible. ###### Acknowledgements. T. T. 
is supported by Grant-in-Aid for Scientific Research (A) and (C) under Contract Nos. JP23H00110 and JP20K03928, respectively. Y. U. is supported by Grant-in-Aid for Scientific Research (B) under Contract Nos. JP19H01894 and JP23H01177 and Fostering Joint International Research (B) under Contract No. 21KK0050.
2309.16871
Nucleonic Shells and Nuclear Masses
The binding energy of an isotope is a sensitive indicator of the underlying shell structure as it reflects the net energy content of a nucleus. Since magic nuclei are significantly lighter, or more bound, compared to their neighbors, the presence of nucleonic shell structure makes an imprint on nuclear masses. In this work, using a carefully designed binding-energy indicator, we catalog the appearance of spherical and deformed shell and subshell closures throughout the nuclear landscape. After presenting experimental evidence for shell and subshell closures as seen through the lens of nuclear masses, we study the ability of global nuclear mass models to predict local binding-energy variations related to shell effects.
Landon Buskirk, Kyle Godbey, Witold Nazarewicz, Wojciech Satula
2023-09-28T21:59:46Z
http://arxiv.org/abs/2309.16871v2
# Nucleonic shells and nuclear masses ###### Abstract The binding energy of an isotope is a sensitive indicator of the underlying shell structure as it reflects the net energy content of a nucleus. Since magic nuclei are significantly lighter, or more bound, compared to their neighbors, the presence of nucleonic shell structure makes an imprint on nuclear masses. In this work, using a carefully designed binding-energy indicator, we study the appearance of spherical and deformed shell and subshell closures throughout the nuclear landscape. After presenting experimental evidence for shell and subshell closures as seen through the lens of nuclear masses, we study the ability of global nuclear mass models to predict local binding variations related to shell effects. ## I Introduction Nuclei with 2, 8, 20, 28, 50, 82, and 126 nucleons have been found to be special by having an exceptionally high natural abundance or being locally lighter than their neighbors [1]. These _magic_ nucleon numbers were explained by the nuclear shell model [2; 3] in terms of completely filled nucleon shells. The nuclei with such numbers of nucleons are referred to as magic, like doubly-magic \({}^{48}_{20}\)Ca\({}_{28}\) or semi-magic \({}^{120}_{50}\)Sn\({}_{70}\). Experimentally, there are numerous signatures of magic gaps of shell closures. They include: enhanced binding energies, rapid changes of separation energies, low-lying collective excitations, kinks in charge radii, and spectroscopic factors, among other things [4; 5; 6]. The quantal stability of the atomic nucleus is determined by the behavior of the single-particle level density \(\rho(e)\) of the mean-field (intrinsic) Hamiltonian. As the ground state for many-fermion systems should correspond to the lowest possible degeneracy, the nucleus is expected to be more bound if the nucleonic level density near the Fermi level is low. Exceptionally stable systems (doubly magic nuclei) are indeed those with the least degenerate single-particle level density around the Fermi level. Quantitatively, the extra stability due to the presence of shell gaps can be encapsulated in the microscopic shell energy \(E^{\rm shell}\)[7; 8; 9] that fluctuates with particle number and reflects the non-uniformities of the single-particle level distribution. Formally, the shell energy can be approximated by: \[E^{\rm shell}=\sum_{i=1}^{A}e_{i}-\int e\tilde{\rho}(e)de, \tag{1}\] where \(e_{i}\)'s are single-particle (Hartree-Fock) energies and \(\tilde{\rho}(e)\) is the smoothed single-particle density that smooths out single-particle energies within large energy interval of the order of the energy difference between major shells. The total binding energy of a nucleus can be roughly given by [7; 8] \[B=B^{\rm macr}+E^{\rm shell}, \tag{2}\] where \(B^{\rm macr}\) is the "macroscopic" energy that gradually depends on the number of nucleons and thus associated with the smooth distribution of single-particle levels given by \(\tilde{\rho}(e)\). The behavior of \(E^{\rm shell}\) changes periodically with particle number. The lowest shell energy is expected in the regions of low single-particle level density, e.g., for the spherical magic numbers 8, 20, 28, 50, 82, and 126. However, below and above these magic numbers, the level density becomes large [(2\(j\)+1)-fold degeneracy of spherical orbitals] and a Jahn-Teller transition takes place towards deformed shapes [10; 11]. 
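As a minimal illustration of the filled-shell picture invoked above, the short sketch below fills a textbook spherical shell-model level ordering (with spin-orbit splitting included) and recovers the magic numbers from the cumulative orbital occupancies. The ordering is standard textbook input and is not the output of any of the mass models discussed later.

```python
# Recover the spherical magic numbers by filling shell-model orbitals in a
# textbook ordering; each orbital holds 2j+1 identical nucleons.
ORBITALS = [
    ("1s1/2", 2),
    ("1p3/2", 4), ("1p1/2", 2),
    ("1d5/2", 6), ("2s1/2", 2), ("1d3/2", 4),
    ("1f7/2", 8),
    ("2p3/2", 4), ("1f5/2", 6), ("2p1/2", 2), ("1g9/2", 10),
    ("1g7/2", 8), ("2d5/2", 6), ("2d3/2", 4), ("3s1/2", 2), ("1h11/2", 12),
    ("1h9/2", 10), ("2f7/2", 8), ("2f5/2", 6), ("3p3/2", 4), ("3p1/2", 2), ("1i13/2", 14),
]

# Orbitals after which a large energy gap (major shell closure) occurs.
MAJOR_SHELL_ENDS = {"1s1/2", "1p1/2", "1d3/2", "1f7/2", "1g9/2", "1h11/2", "1i13/2"}

filled = 0
for name, degeneracy in ORBITALS:
    filled += degeneracy
    if name in MAJOR_SHELL_ENDS:
        print(f"gap after {name}: magic number {filled}")
# prints the magic numbers 2, 8, 20, 28, 50, 82, 126
```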
The stabilisation of deformed nuclei can be associated with energy gaps in deformed single-particle levels, i.e., deformed sub-shell closures [8; 9; 12]. ## II Binding-energy indicators Empirical information on the magnitude of nucleonic correlations is often extracted from experimental data using binding-energy relations (filters, indicators) based on measured masses of neighboring nuclei [13; 14]. Usually, the binding-energy indicators are the finite-difference expressions representing various derivatives of (positive) nuclear binding energy \(B(N,Z)\) with respect to \(N\) and \(Z\). Their role is to isolate some specific parts of the correlation energy by filtering out that part of the binding energy which behaves as a polynomial of a required order in \(N\) and \(Z\). The commonly used mass differences are one-nucleon separation energies \(S_{\tau}\) (\(\tau=n,p\)). For neutrons: \[S_{n}(N,Z)=B(N,Z)-B(N-1,Z). \tag{3}\] The two-neutron separation energy is \[S_{2n}(N,Z)=B(N,Z)-B(N-2,Z). \tag{4}\] The neutron chemical potential \(\lambda_{n}\) can be expressed through two-neutron separation energies [15; 16; 17]: \[\lambda_{n}(N-1,Z)\approx-\frac{1}{2}S_{2n}(N=2k,Z), \tag{5}\] where \(2k\) indicates an even number. We note that \(\lambda_{n}\) is negative for bound systems. In addition, \[S_{n}(N=2k,Z)\approx -\lambda_{n}(N-1,Z)-\frac{1}{2}\frac{\partial\lambda_{n}(N-1,Z)}{ \partial N}\] \[+\Delta_{n}(N-1,Z), \tag{6}\] where \(\Delta_{n}(N-1,Z)\) is the average neutron pairing gap [15; 16]. The single-particle (s.p.) neutron energy splitting at the Fermi level, \(\Delta e_{n}\), can thus be defined in terms of one-nucleon separation energy differences [18; 19]: \[\Delta e_{n}(N=2k,Z)=S_{n}(N,Z)-S_{n}(N+2,Z). \tag{7}\] As demonstrated in Refs. [18; 19], if variations of the mean field and pairing are smooth along isotopic or isotonic chains, the filter \(\Delta e_{\tau}\) represents the energy difference between the lowest particle level and the highest hole (occupied) level. For instance: \[\Delta e_{n}(N=2k,Z)\approx e_{k+1}^{n}-e_{k}^{n}. \tag{8}\] Similar relations to Eqs. (3 - 8) hold for protons. It directly follows from Eqs. (6) and (7) that \(\Delta e_{\tau}\) is proportional to the derivative of \(\lambda_{\tau}\) with respect to the particle number \(N_{\tau}\) (\(N_{\tau}=Z\) or \(N\) for \(\tau=p\) or \(n\)), i.e., it is inversely proportional to the level density [20]. The indicator \(\Delta\bar{e}_{\tau}\) is thus sensitive to small changes of the level density at the Fermi level. Indeed, the regions of the low level density are expected to correspond to increased values of \(\Delta\bar{e}_{\tau}\). Since for the smoothly varying mean-field potentials the chemical potential gradually _increases_ with particle number, \(\Delta e_{\tau}\) should be positive in general. The deviations from the monotonic behavior of \(\lambda_{\tau}(N_{\tau})\) do occur, and are usually associated with the rapid change of nuclear mean fields due to configuration changes. In some cases, usually associated with shape transitions, \(\Delta e_{\tau}<0\); this corresponds to a backbending in the gauge space of \(N_{\tau}(\lambda_{\tau})\)[21; 17; 20]. As an illustrative example, Fig. 1 shows \(\Delta\bar{e}_{n}\) for the Zr isotopic chain. The local maxima in \(\Delta\bar{e}_{n}\) can be associated with spherical and deformed s.p. gaps discussed in Sec. V. 
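A minimal sketch of how the mass filters defined above (Eqs. 3, 4, 5 and 7) can be evaluated from a table of binding energies follows. The `binding` mapping is a placeholder to be filled from AME2020; no actual mass values are embedded in the sketch.

```python
# Finite-difference binding-energy indicators (neutron versions; the proton
# analogues follow by exchanging the roles of N and Z).
binding = {}  # placeholder: (N, Z) -> B(N, Z) in MeV, to be filled from AME2020

def s_n(n, z):
    """Eq. (3): one-neutron separation energy."""
    return binding[(n, z)] - binding[(n - 1, z)]

def s_2n(n, z):
    """Eq. (4): two-neutron separation energy."""
    return binding[(n, z)] - binding[(n - 2, z)]

def lambda_n(n, z):
    """Eq. (5): neutron chemical potential estimate, lambda_n(N-1, Z) ~ -S_2n(N, Z)/2."""
    return -0.5 * s_2n(n, z)

def delta_e_n(n, z):
    """Eq. (7): neutron single-particle splitting at the Fermi level, for even N = 2k."""
    if n % 2:
        raise ValueError("Delta e_n is defined for even neutron numbers")
    return s_n(n, z) - s_n(n + 2, z)
```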
The negative value of \(\Delta\bar{e}_{n}\) at \(N=58\) reflects the well-known spherical-to-deformed shape transition around \({}^{98}\)Zr [22; 23]. While the goal of our work is to demonstrate that \(\Delta e_{\tau}\) is a superb measure of spherical and deformed shell closures, this indicator can be used to study mean level spacing, or mean level density, at the Fermi energy. Indeed, beyond the regions of low level density associated with gaps, \(\Delta e_{\tau}\) represents mean level splitting at the Fermi energy. In the simplest scenario assuming Kramers and isospin degeneracy, the mean level spacing equals \(\bar{\varepsilon}=4/\tilde{\rho}(\lambda)\), where \(\tilde{\rho}(\lambda)=6a/\pi^{2}\) and \(a\) stands for the level density parameter, the value of which is uncertain. In the simplest isoscalar scenario assuming dominant volume-like \(A\)-dependence the estimates for \(a\) vary from \(A/10\) (which is the harmonic oscillator limit [24]) to \(A/8\) MeV\({}^{-1}\)[25; 26; 27]. This, in turn, gives \(\bar{\varepsilon}\approx(60\pm 6)/A\) MeV. Note, that for the Zr isotopic chain presented in Fig. 1 it varies from 0.750(75) MeV for \(A=80\) to 0.600(60) MeV for \(A=100\). The estimates agree relatively well with the data shown in Fig. 1 outside the regions of low level density associated with deformed and spherical energy gaps. ## III Datasets and models In our analysis we use the most recent measured values of nuclear binding energies from the AME2020 dataset [28]. In this analysis, we do not consider experimental errors and theoretical uncertainties. A full error analysis of \(\Delta e_{\tau}\) will be a subject of forthcoming study. As for prediction, we consider seven theoretical models based on the energy density functional theory (EDF) which are capable of describing the whole nuclear chart: SkM\({}^{*}\)[29], SkP [30], SLy4 [31], SV-min [32], UNEDF0 [33], UNEDF1 [34], and UNEDF2 [35]. The above set of EDF models was augmented by a well-fitted mass model FRDM-2012 [36] that has significantly more parameters than the (less phenomenological) DFT models, resulting in a better fit to measured masses. For \(\Delta e_{\tau}\) extraction from the data, the Wigner energy has to be removed from experimental binding energies. In Ref. [37], the Wigner term has been parameterized as \[E_{W}(2)=a_{W}|N-Z|/A, \tag{9}\] where \(a_{W}=47\) MeV. However, this expression notably underestimates the Wigner energy for \({}^{80}\)Zr and \({}^{56}\)Ni, two locations of shell closures that are later discussed. For this reason, we supplement \(E_{W}(2)\) with the model of Ref. Figure 1: Experimental values and model predictions of \(\Delta e_{n}\) across zirconium isotopes. Extrapolated values from experimental data are marked with stars. Strong peaks appear for the deformed gap at N=40, the magic gap at N=50, and the spherical gap at N=56. See text for details. [38]: \[E_{W}(1)=V_{W}e^{-\lambda_{W}(\frac{N-Z}{A})^{2}}+V^{\prime}_{W}|N-Z|e^{-\left( \frac{A}{A_{0}}\right)^{2}} \tag{10}\] where \(V_{W}=1.8\) MeV, \(\lambda_{W}=380\), \(V^{\prime}_{W}\) = -0.84 MeV, and \(A_{0}=26\). In our analysis, the average of \(E_{W}(1)\) and \(E_{W}(2)\) has been subtracted from all experimental binding energies. The effect of such subtraction is illustrated in Fig. 1 for \(\Delta e_{n}\) the Zr chain (see Ref. [39] for the discussion of the \({}^{80}\)Zr case). 
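The Wigner correction described above can be applied with a short helper. The parameter values are those quoted in Eqs. (9) and (10); the input binding energies are again placeholders to be taken from AME2020.

```python
import math

def wigner_e2(n, z):
    """Eq. (9): E_W(2) = a_W |N-Z| / A, with a_W = 47 MeV."""
    a = n + z
    return 47.0 * abs(n - z) / a

def wigner_e1(n, z):
    """Eq. (10): E_W(1) = V_W exp(-lambda_W ((N-Z)/A)^2) + V'_W |N-Z| exp(-(A/A_0)^2),
    with V_W = 1.8 MeV, lambda_W = 380, V'_W = -0.84 MeV, A_0 = 26."""
    a = n + z
    return 1.8 * math.exp(-380.0 * ((n - z) / a) ** 2) \
        - 0.84 * abs(n - z) * math.exp(-((a / 26.0) ** 2))

def remove_wigner(b_exp, n, z):
    """Subtract the average of E_W(1) and E_W(2) from an experimental binding energy,
    as done before evaluating Delta e_tau."""
    return b_exp - 0.5 * (wigner_e1(n, z) + wigner_e2(n, z))
```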
## IV Infrastructure The exploration of the experimental and theoretical data was performed using the Bayesian Mass Explorer (BMEX) [40] web application and the associated database. An evolution of the Mass Explorer project [41], BMEX contains a suite of online plotting and comparison tools that were used to produce the draft figures in the current work. The BMEX database and software are hosted in a cloud computing environment and do not require any downloads or installation by the end user to access the tool. To save the user's sessions, plot exporting and link sharing is also included without the need for any user accounts or logins. A screenshot of the application can be found in the Supplemental Material [42]. ## V Systematic trends In order to remove the average mass and isospin dependence of shell gaps, we scale \(\Delta e_{\tau}\) by the average oscillator frequency [43]: \[\hbar\omega_{0}=41A^{-1/3}(1\pm\frac{N-Z}{3A})\ \text{MeV}, \tag{11}\] where the plus sign holds for neutrons and the minus sign for protons. In the following, we discuss the dimensionless splittings \[\Delta\tilde{e}_{\tau}\equiv\Delta e_{\tau}/\hbar\omega_{0}. \tag{12}\] When interpreting the patterns of shell gaps in the \((N,Z)\) plane, it is important to recall that nuclei close to the spherical magic gaps at \(Z=20\), 28, 50, 82, and 126 are nearly spherical and that the quadrupole collectivity primarily depends on the distance of \(Z\) and \(N\) to the closest magic proton and neutron number [44; 45]. That is, the largest quadrupole deformations are expected in the regions between spherical magic gaps. ### Experimental single-nucleon shell gaps Figure 2 shows the proton shell gaps \(\Delta\tilde{e}_{p}\) extracted from experimental binding energies. The experimental neutron shell gaps \(\Delta\tilde{e}_{n}\) are displayed in Fig. 3. The spherical magic gaps are clearly seen for both protons and neutrons. In addition, isotopic and isotonic bands of locally enhanced values of \(\Delta\tilde{e}_{\tau}\) are present; they can be associated with local subshell closures, both spherical and deformed. They are discussed in the following. _Spherical magic gaps--_ In the protons, the pronounced \(Z=50\) gap extends across the nuclear landscape. The \(Z=82\) gap is large for \(N\geq 126\) but it seems to gradually fade away in neutron deficient Pb isotopes. This is consistent with the presence of shape coexistence effects in these nuclei, in which spherical, prolate, and oblate structures coexist (and interact) at low energies [46; 22]. While the \(Z=28\) proton shell gap is generally pronounced, the \(Z=20\) gap becomes fairly diluted below \(N=24\). The neutron magic gaps \(N=50,82\), and 126 are well pronounced. The \(N=28\) gap deteriorates in the lightest isotones, and a similar situation is seen at \(N=20\). The disappearance of \(N=20\) and 28 magic gaps in neutron-rich nuclei is supported by an appreciable experimental evidence for deformed structures below \({}^{44}\)S and \({}^{32}\)Mg [22; 4]. _Spherical subshell closures--_ Several local spherical shell gaps can be identified in Figs. 2 and 3. They include: \(Z=14\) subshell closure in the Si isotopes [47]; \(Z=64\) subshell closure in \({}^{146}\)Gd [17]; \(N=16\) subshell closure in \({}^{36}\)Ca [48] and \({}^{24}\)O [49]; \(N=32\) subshell closure in \({}^{52}\)Ca [50]; \(N=56\) subshell closure in \({}^{96}\)Zr [51]; and \(N=64\) subshell closure in Sn [52]. 
The single \(2p_{1/2}\) orbital separates the \(N=126\) magic gap from the \(N=124\) spherical subshell [53]. Consequently, these two shell closures overlap in Fig. 3. _Deformed subshell closures--_ In the regions between spherical magic gaps, the indicator \(\Delta\tilde{e}_{\tau}\) provides important information about deformed shell gaps. The region of deformed nuclei around \({}^{64}\)Cr [54] can be associated with the deformed subshell closures \(Z=24\) and \(N=40\)[55]. In Fig. 2, the proton shell gap \(\Delta\tilde{e}_{p}\) is well pronounced for neutron-rich Cr isotopes. Of particular interest are deformed shell closures at \(Z=38,40\) that are responsible for very large ground-state deformations around \({}^{76}\)Sr [56], \({}^{80}\)Zr [39], and \({}^{102}\)Zr [22] The \(Z=80\) oblate gap is responsible for weakly deformed ground states of the Hg isotopes [53]. It is separated from the \(Z=82\) magic gap by a single \(2s_{1/2}\) orbit so these two shell closures overlap in Fig. 2. The deformed neutron gaps in the rare-earth nuclei seen in Fig. 3 include: \(N=98\) gap in the Gd-Dy region [57; 20]; \(N=104\) gap around \({}^{174}\)Yb [17]; and \(N=108\) gap known around \({}^{182}\)W [20]. In the actinide and transfermium regions, the most pronounced deformed neutron closures are \(N=152\)[58; 59] and \(Z=162\)[17]. In the protons, the deformed shell gap at \(Z=108\) is particularly pronounced [60; 61; 62]. These subshells are essential for the stabilization of nuclear binding in the transactinides. In addition to the above list of shell and subshell closures that can be straightforwardly identified, there are other regions in Figs. 2 and 3 with moderately enhanced values of \(\Delta\tilde{e}_{\tau}\). For instance, the \(N=92\) shell effect around \({}^{152}\)Nd can probably be attributed to octupole cor relations. _Shape transitions--_ Negative values of \(\Delta e_{\tau}\) are associated with shape transition. Several regions of shape-transitional behavior are seen in Fig. 3. They include the region of shape coexistence around \({}^{98}\)Zr and the transition regions to well deformed prolate shapes around \(N=88\)[20; 22]. It is interesting to notice that rapid shape transitions are clearly seen in \(\Delta\tilde{e}_{n}\) in Fig. 3 but not in \(\Delta\tilde{e}_{p}\). Indeed, no regions of \(\Delta\tilde{e}_{p}<0\) can be seen in Fig. 2, which indicates that the proton chemical potential \(\lambda_{p}\) increases monotonically with \(Z\) throughout the nuclear landscape. Figure 2: Experimental values of \(\Delta\tilde{e}_{p}\) throughout the nuclear landscape. The nuclei for which the expression (8) involves binding energies extrapolated from systematic trends in [28] are marked by circles. The nuclei with negative values of \(\Delta\tilde{e}_{p}\) are marked by an asterisk. Shell closures corresponding to the bands of locally elevated values of \(\Delta\tilde{e}_{p}\) are clearly seen. ### Model predictions Figure 4 illustrates the performance of the representative UNEDF1 mass model with respect to \(\Delta\tilde{e}_{p}\). For a complementary \(\Delta\tilde{e}_{n}\) landscape obtained with UNEDF1, see Fig. S2 of Supplemental Material [42]. The predictions extend beyond the region of nuclei with experimentally-known masses, and hence provide useful guidance for the future experiments at radioactive ion beam facilities. For instance, it is seen that the magic gaps \(Z=50\) and \(Z=82\) are significantly weakened around \(N=106\) and \(N=150\), respectively. 
The overall performance of the mass models with respect to \(\Delta\tilde{e}_{\tau}\) is illustrated in Table 1. As expected, FRDM-2012 performs fairly well overall. Several deformed subshell closures are robustly predicted in almost all models: \(Z=70,80,92,108\) and \(N=92,104\), and \(162\). The same holds for spherical subshell closure \(N=56\). Other shells are predict by a subset of models. In some cases, the "theoretically-fragile" gaps have been discussed discussed in literature. See, e.g., Ref. [63] for the \(N=152\) gap predictions. Interestingly, the models consistently predict deformed proton shell gaps at \(Z=46\) around \(N=70\) and \(Z=56\) around \(N=72\), and the deformed neutron gap \(N=72\) around \(Z=62\). These features are not clearly seen in the experimental data. In general, the predictive power of the mass models used in this study with respect to \(\Delta\tilde{e}_{\tau}\) is quite reasonable. Moreover, the experimental finding that \(\Delta\tilde{e}_{p}\) is usually positive is nicely confirmed by theory, see Fig. 4. The predicted regions of \(\Delta\tilde{e}_{n}<0\) in Fig. S2 are broader than in experiment. This is to be expected as the shape transitions predicted by mean-field models are too abrupt due to the missing dynamical (zero-point) correlations. shell closures in spherical and deformed nuclei. In particular, this quantity can be very useful when studying the appearance and disappearance of nucleonic shell gaps in exotic nuclei. In general, EDF-based mean-field models perform well when it comes to \(\Delta e_{\tau}\) as the concept of intrinsic s.p. orbits and energies is naturally present there. In some cases, such as the deformed \(A\sim 80\) and \(A\sim 100\) regions, theory sometimes poorly predicts the spherical-to-deformed shape transition due to missing zero-point correlations [23]. This deficiency of current models will need to be addressed. Additionally, this work highlights the potential for user-focused scientific software to aid discovery and provide guidance for future experimental campaigns. To this end, the BMEX tool used in this work will be continually updated to include new experimental data and extended to a broader set of nuclear models. A broader set of uncertainty estimates for both experimental and theoretical data will also be added to the tool, including a Bayesian model mixing module that will combine the knowledge from multiple models [64, 65]. ## Acknowledgements This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Awards Nos. DE-SC0023688 and DOE-DE-SC0013365, the National Science Foundation under award number 2004601 (CSSI program, BAND collaboration), and by the Polish National Science Centre (NCN) under Contract No 2018/31/B/ST2/02220.
2304.00037
A 1.3% distance to M33 from HST Cepheid photometry
We present a low-dispersion period-luminosity relation (PL) based on 154 Cepheids in Messier 33 (M33) with Hubble Space Telescope (HST) photometry from the PHATTER survey. Using high-quality ground-based light curves, we recover Cepheid phases and amplitudes for multi-epoch HST data and we perform template fitting to derive intensity-averaged mean magnitudes. HST observations in the SH0ES near-infrared Wesenheit system significantly reduce the effect of crowding relative to ground-based data, as seen in the final PL scatter of $\sigma$ = 0.11 mag. We adopt the absolute calibration of the PL based on HST observations in the Large Magellanic Cloud (LMC) and a distance derived using late-type detached eclipsing binaries to obtain a distance modulus for M33 of $\mu$ = 24.622 $\pm$ 0.030 mag (d = 840 $\pm$ 11 kpc), a best-to-date precision of 1.3%. We find very good agreement with past Cepheid-based measurements. Several TRGB estimates bracket our result while disagreeing with each other. Finally, we show that the flux contribution from star clusters hosting Cepheids in M33 does not impact the distance measurement and we find only 3.7% of the sample is located in (or nearby) young clusters. M33 offers one of the best sites for the cross-calibration of many primary distance indicators. Thus, a precise independent geometric determination of its distance would provide a valuable new anchor to measure the Hubble constant.
Louise Breuval, Adam G. Riess, Lucas M. Macri, Siyang Li, Wenlong Yuan, Stefano Casertano, Tarini Konchady, Boris Trahin, Meredith J. Durbin, Benjamin F. Williams
2023-03-31T18:00:04Z
http://arxiv.org/abs/2304.00037v1
# A 1.3% distance to M33 from HST Cepheid photometry ###### Abstract We present a low-dispersion period-luminosity relation (PL) based on 154 Cepheids in Messier 33 with _Hubble_ Space Telescope (HST) photometry from the PHATTER survey. Using high-quality ground-based light curves, we recover Cepheid phases and amplitudes for multi-epoch HST data and we perform template fitting to derive intensity-averaged mean magnitudes. HST observations in the SH0ES near-infrared Wesenheit system significantly reduce the effect of crowding relative to ground-based data, as seen in the final PL scatter of \(\sigma=0.11\) mag. We adopt the absolute calibration of the PL based on HST observations in the Large Magellanic Cloud (LMC) and a distance derived using late-type detached eclipsing binaries to obtain a distance modulus for M33 of \(\mu=24.622\pm 0.030\) mag (\(d=840\pm 11\) kpc), a best-to-date precision of 1.3%. We find very good agreement with past Cepheid-based measurements. Several TRGB estimates bracket our result while dissagreeing with each other. Finally, we show that the flux contribution from star clusters hosting Cepheids in M33 does not impact the distance measurement and we find only \(\sim 3.7\%\) of the sample is located in (or nearby) young clusters. M33 offers one of the best sites for the cross-calibration of many primary distance indicators. Thus, a precise independent geometric determination of its distance would provide a valuable new anchor to measure the Hubble constant. 0000-0002-8181-888]Louise Breuval 0000-0002-2885-7885]Adam G. Riess 0000-0002-4883-0885]Lucas M. Macri 0000-0002-1883-0885]Siyang Li 0000-0002-4883-0885]Wenlong Yuan 0000-0002-1888-0885]Stefano Casertano 0000-0002-1888-0885]Tarini Konchady 0000-0002-1888-0885]Boris Trahin 0000-0002-1888-0885]Meredith J. Durbin 0000-0002-1888-0885]Benjamin F. Williams ## 1 Introduction Cepheid variables are the best-calibrated primary distance indicators and are commonly used to form the first rung of the empirical distance ladder (e.g., Riess et al., 2022). Their Period-Luminosity (PL) relation, also known as the "Leavitt Law" (Leavitt & Pickering, 1912), is calibrated geometrically in the Milky Way (MW) from _Gaia_ DR3 parallaxes (Riess et al., 2021), in the Large Magellanic Cloud (LMC) from detached eclipsing binaries (Riess et al., 2019; Pietrzynski et al., 2019), and in NGC 4258 with water masers (Reid et al., 2019). Cepheid distances are used to calibrate the second rung of the distance ladder, type Ia supernovae (SNe Ia), which allows us to measure the distance to further galaxies in the Hubble flow and to derive the value of the Hubble constant, \(H_{0}\). Messier 33 (hereafter M33) is a nearby type Sc II-III spiral galaxy and the third largest member of the Local Group. As early as 1926, Edwin Hubble used this galaxy as one of the _spiral nebulae_ to learn about the structure of the Universe and observed 35 Cepheid variables to measure its distance (Hubble, 1926). Since then, it has been extensively studied, and is still a crucial object for the distance scale (Freedman et al., 1991; Lee et al., 2022): M33 has intermediate inclination (\(i=57\pm 4^{\circ}\), Kourkchi et al., 2020)1, which limits the effects of reddening and of geometry that can produce additional scatter in the PL relation. 
Additionally, M33 is known for its steep metallicity gradient, which was measured using red giant branch (RGB) stars (Tiede et al., 2004), planetary nebulae (Magrini et al., 2009), and H ii regions (Bresolin, 2011; Toribio San Cipriano et al., 2016; Rogers et al., 2022). Footnote 1: [http://edd.ifa.hawaii.edu](http://edd.ifa.hawaii.edu), Table “CF4 Initial Candidates” Cepheids are numerous in M33, and large samples have been obtained by various programs (Macri et al., 2001; Hartman et al., 2006; Pellerin & Macri, 2011). Recently, the PHATTER collaboration (PI: J. Dalcanton) published a detailed catalog2 of UV to NIR photometry for 22 million stars in the central disk of M33 (Williams et al., 2021) using the _Hubble_ Space Telescope (HST). Although they are not time-series observations, serendipitous overlaps between the PHATTER fields of view in a given filter provide multiple data points randomly spread across the phase of M33 Cepheids. Out of 250 Cepheids in the HST sample (defined in SS2.2), 225 variables have more than one epoch in \(F475W\) and \(F814W\), and 66 objects have more than one epoch in \(F160W\) (due to smaller overlaps of the WFC3/IR fields). Knowledge of the date and time of observation for each HST exposure, combined with periods previously measured from other surveys, enable the correction of these random-phase observations to mean magnitude. Finally, past studies (e.g. Macri et al., 2001; Riess et al., 2012; Wagner-Kaiser et al., 2015; Kodric et al., 2018) have revealed the advantages of space-based observations such as HST in limiting crowding effects and their impact for the PL dispersion, as well as providing homogeneous photometry including in the near-infrared. In this paper we aim to take advantage of the recently published high-quality PHATTER catalog in order to provide a new PL calibration for M33 Cepheids in HST filters and to improve the M33 distance measurement. Footnote 2: [https://archive.stsci.edu/hlsp/phatter](https://archive.stsci.edu/hlsp/phatter) Figure 1: Map of M33: the template (i.e. ground-based) sample is shown in red while the HST (i.e. PHATTER) sample is shown in blue. Dark blue and light blue markers are Cepheids from the gold and silver sample respectively. Empty blue markers are excluded Cepheids (see §3.2). Note the HST sample is a subset of the template sample. The outline of this paper is the following. In SS2 we present the samples of M33 Cepheids used in this study. In SS3 we describe the construction of template light curves from ground-based data and the procedure to recover mean magnitudes from random-epoch photometry. In SS4 we calibrate the Cepheid PL relation and determine the M33 distance modulus. Lastly, in SS5 we investigate the effects of Cepheids located in star clusters, we estimate their occurrence rate in M33 and compare it with that of other Local Group galaxies. ## 2 Photometric Data In order to recover intensity-averaged mean magnitudes from random-phase HST data (hereafter the HST sample), we use templates obtained by compiling a large number of well-sampled ground-based light curves of M33 Cepheids (hereafter the _template_ sample). Both samples are described below. ### The template sample We used a sample of 609 previously-known Cepheids (Macri et al., 2001; Pellerin & Macri, 2011) with homogeneous _gri_ light curves obtained by Konchady et al. (in prep.) using archival CFHT/MegaCam observations (proposal ID 04BF26, PI Beaulieu; proposal ID 04BH98, PI Hodapp). They are represented in red in Fig. 
1 and constitute the _template_ sample. The majority of the original CFHT observations (associated with proposal ID 04BF26) are extensively described in Hartman et al. (2006); they span roughly one-and-a-half years (2003 August to 2005 January) and were obtained on 27 separate nights. We supplemented these with an additional four nights of \(i\) observations obtained in 2004 August and September (associated with proposal ID 04HB98). Konchady et al. (in prep.) performed an independent analysis of these images, carrying out time-series PSF photometry that was calibrated against Pan-STARRS DR1 (Chambers et al., 2016). The periods and phases of the Cepheids were redetermined by simultaneously fitting the CFHT _gri_ photometry and the WIYN _BVI_ photometry of Pellerin & Macri (2011) using the Yoachim et al. (2009) templates. We solved for a common period and phase across the six bands, and independent mean magnitudes and light curve amplitudes in each band. Cepheid light curves of the _template_ sample have two purposes: they are used to build templates thanks to their complete phase coverage (see SS3.1) and to recover the amplitudes and phases of the HST light curves (see SS3.2). For this reason their periods must be known precisely. From the period uncertainty, we estimate the uncertainty in the phase-shift between the mid-date of the ground observations (MJD = 52170) and of PHATTER observations (MJD = 57989), and we flag Cepheids for which this uncertainty \(\sigma(\phi)\) is larger than 0.05 (or \(\log\sigma(\phi)>-1.3\), dashed horizontal line in Fig. 2). They constitute the "silver" sample (see Sect. 3.2). Additionally we only keep Cepheids which have optimal ground-based light curves (Table 3 of Pellerin & Macri, 2011). This leaves a total of 420 Cepheids, for which we perform a visual inspection of each light curve's quality. Cepheids have an average of 45, 31 and 44 data points per light curve in \(g\), \(r\), and \(i\), respectively. We note that our ground-based sample is minimally affected by blending given the relatively high image quality of the CFHT and WIYN observations and the rejection of outliers by Pellerin & Macri (2011). ### The HST sample The PHATTER survey (Williams et al., 2021) contains photometric measurements for 22 million stars in M33 with 6 UV to NIR filters (Advanced Camera for Surveys and Wide Field Camera 3) on the _Hubble_ Space Telescope (HST). The survey focuses on the inner disk of the galaxy and covers \(\sim 300\)\(\rm{O}\)\({}^{\prime}\) (equivalent to a de-projected area of \(\sim 38\,\rm{kpc}^{2}\)), extending up to \(\sim 14^{\prime}\) from the center (equivalent to a distance of \(\sim 3.5\,\rm{kpc}\)). The observations were taken between 2017 February 21 Figure 2: Distribution of phase uncertainties for Cepheids of the _template_ sample at the epoch of HST observations, estimated from the period uncertainty and the interval between the midpoint of the HST and of the ground observations. The dashed horizontal line represents our threshold for the gold sample (see below): we only keep Cepheids with \(\sigma(\phi)<0.05\). and 2018 February 25. The catalog reaches 26 to 28 mag in \(V\) depending on crowding. It is the largest and most complete catalog to date for stellar populations in M33. We identified 250 Cepheids from the template sample in the PHATTER catalog: they are represented in blue in Fig. 1 and are hereafter referred to as the HST sample. 
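A small sketch of the phase-uncertainty criterion of §2.1, used to separate the gold and silver samples, is given below. The first-order propagation formula is an assumption of this sketch (it is not written out explicitly in the text), and the example numbers are illustrative only.

```python
# Flag Cepheids whose phase at the PHATTER epoch cannot be propagated reliably
# from the ground-based ephemeris (sigma_phi > 0.05 -> "silver").
MJD_GROUND = 52170.0  # mid-date of the ground-based observations
MJD_HST = 57989.0     # mid-date of the PHATTER observations

def phase_shift_uncertainty(period_days, sigma_period_days):
    """Assumed first-order propagation for phi = (MJD_HST - MJD_GROUND) / P:
    sigma_phi = dT * sigma_P / P**2."""
    dt = MJD_HST - MJD_GROUND
    return dt * sigma_period_days / period_days**2

def classify(period_days, sigma_period_days, threshold=0.05):
    """Gold if the accumulated phase uncertainty stays below the threshold."""
    return "gold" if phase_shift_uncertainty(period_days, sigma_period_days) <= threshold else "silver"

# Illustrative numbers: a 10-day Cepheid with a 1e-4 day period error accumulates
# sigma_phi ~ 5819 * 1e-4 / 100 ~ 0.006, comfortably within the gold sample.
print(classify(10.0, 1e-4))
```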
We matched the Cepheid coordinates to the full-frame PHATTER catalogs using an initial search radius of 0.1 arcsec, and found that all had matches within \(<0.5\) mas with expected magnitudes (\(19<F475W<23\)). We then used the pixel coordinates from these catalogs to retrieve the exposure-level photometry from the original DOLPHOT.phot outputs. These files contain columns with photometry from each individual input frame in addition to the combined measurements. The PHATTER survey does not provide time-series observations, which might make it _in principle_ poorly suited to study variable stars such as Cepheids. However, in the optical \(F475W\) and \(F814W\) filters, successive PHATTER pointings show a significant overlap, and therefore up to 4 epochs can be available for a given Cepheid. In the NIR, the pointings have smaller overlaps, which gives one to two epochs per Cepheid. Each epoch can be decomposed into 4 or 5 separate dithers/exposures and the phase-coverage of each epoch is random. We note that the first exposure of each \(F475W\) and \(F814W\) visit sequence is significantly shallower than the rest, as these are short exposures targeting the brightest stars (M. Durbin, 2023, private communication). They were therefore excluded as they are not useful for Cepheids. The date and time of a given HST observation provide the relative phase of the corresponding measurement. Then, mean magnitudes can be recovered from sparse data by applying a template-fitting procedure (SS3). ## 3 Template Fitting In this section we describe the construction of template light curves from ground-based data (SS3.1) and the procedure to recover mean magnitudes from PHATTER photometry in HST filters (SS3.2). The HST and _template_ samples were observed in different filters. The HST \(F475W\) filter is very similar to the \(g\) one from CFHT/MegaCam, and \(F814W\) corresponds to the \(i\) filter (see Fig. 3). Finally, the template sample does not cover the NIR up to the \(F160W\) filter, therefore we use the 2MASS \(H\)-band templates by Inno et al. (2015), based on a large sample of LMC Cepheid light curves. We adopt \(g\)-band and \(i\)-band templates to derive HST mean magnitudes in \(F475W\) and \(F814W\) respectively, and \(H\)-band templates to derive \(F160W\) mean magnitudes. ### Building template light curves We use the well-sampled light curves from the _template_ sample to build template light curves in the \(g\), \(r\) and \(i\) filters of CFHT/MegaCam. These ground-based light curves are ideal to build templates and to recover the mean magnitudes from HST random phase observations: they are representative of Cepheids from the HST sample as they belong to the same host galaxy and have a very similar period distribution (Fig. 4). Other templates from the literature (e.g. Yoachim et al., 2009, from LMC Cepheids) could have been used instead of creating new ones. However, adopting templates built from a population similar to the HST sample avoids possible differences in light curve shapes for Cepheids from different galaxies (possibly due to metallicity effects, Antonello et al., 2000). In order to account for changes in light curve shape as a function of period (Hertzsprung, 1926), we split the sample into four different period bins. They are described in Table 1. The number of bins was determined by the size and by the distribution of our calibrating sample: having a larger sample would have allowed us to use more bins. 
While the reference phase of a Cepheid is often defined by the epoch of maximum brightness, this quantity can Figure 3: Wavelength coverage of HST filters used in this analysis (top panel), CFHT filters adopted to build optical template light curves (middle panel) and 2MASS filters used for NIR templates (Inno et al., 2015, bottom panel). We adopt \(g\), \(i\) and \(H\) templates to fit light curves in \(F475W\), \(F814W\) and \(F160W\), respectively. be biased by the presence of a bump in the light curve that varies in shape and phase as a function of period along the Hertzsprung (1926) progression. This bump coincides with maximum light for Cepheids with periods around 10 days. To overcome this issue, Inno et al. (2015) adopted another feature to determine the phase of a Cepheid light curve: the mean magnitude along the rising branch (MRB). As mean magnitudes are known with great precision for our template sample, this approach is more reliable than using the maximum to set the phase (see more details in Inno et al., 2015) and we adopt it in our analysis: \[\phi_{obs}=\mathrm{mod}\left(\frac{\mathrm{JD_{obs}-JD_{MRB}}}{\mathrm{P}}\right) \tag{1}\] For a filter \(\lambda\), we normalize the magnitude values \(m_{i}\) by deriving the quantity: \[T_{\lambda}=\frac{m_{i}-\langle m_{i}\rangle}{A_{\lambda}} \tag{2}\] where \(\langle m_{i}\rangle\) is the mean magnitude and \(A_{\lambda}\) is the amplitude. Finally, we merge all phased and normalized light curves into a single template for each period bin. The final templates and compiled light curves are shown in Fig. 5 in the \(g\), \(r\) and \(i\) filters and for each of the four period bins. We follow Inno et al. (2015) and fit the merged light curves with a seventh order Fourier series of the form: \[F_{7}(\phi)=A_{0}+\sum_{i=1}^{7}A_{i}\,cos(2\pi i\phi+\Phi_{i}) \tag{3}\] The resulting coefficients are listed in Table 2. ### Template fitting procedure Before performing the fit, we set the first-guess \(V\)-band (\(F475W\)) amplitude \(A_{V}\) to that of the ground-based \(g\)-band light curve. Then, we fix the amplitude ratios to \(A_{I}=0.58\,A_{V}\) from Yoachim et al. (2009) and to: \[A_{H}=\left\{\begin{array}{ll}0.34\,A_{V}&\mathrm{if}\ P\leq 20\ \mathrm{d}\\ 0.40\,A_{V}&\mathrm{if}\ P>20\ \mathrm{d}\end{array}\right. \tag{4}\] from Inno et al. (2015). The first-guess phase in \(F475W\) is set to the phase in \(g\). By comparing CFHT \(g\) and \(i\) light curves, we derive a small phase lag between \(V\) and \(I\) of: \(\phi_{I}=\phi_{V}+0.027\), and we adopt the \(H\)-band phase lag from Inno et al. (2015) for \(F160W\): \(\phi_{H}=\phi_{V}+0.080-0.002\,\log P.\) We note that Soszynski et al. (2005) derived a different phase lag of about 0.3 between \(H\) and \(V\), but that is largely due to the choice of a different reference phase (maximum brightness). In SS4.5 we discuss the sensitivity of our results to the phase lag. We fit the templates to the HST measurements in the three filters simultaneously by performing a grid-search on \(A_{V}\) and \(\phi_{V}\), where \(\phi_{V}\) has a narrow, informative prior from the template sample. 
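A compact sketch of Eqs. (1)–(3) follows: phasing an observation relative to the MRB epoch, amplitude-normalizing a light curve, and evaluating the seventh-order Fourier template whose coefficients are listed in Table 2. Function and argument names are illustrative.

```python
import numpy as np

def observed_phase(jd_obs, jd_mrb, period):
    """Eq. (1): phase of an observation relative to the mean-magnitude-on-the-
    rising-branch (MRB) epoch."""
    return np.mod((jd_obs - jd_mrb) / period, 1.0)

def normalize(mags, mean_mag, amplitude):
    """Eq. (2): amplitude-scaled, zero-centred light curve."""
    return (np.asarray(mags) - mean_mag) / amplitude

def fourier_template(phi, a, phases):
    """Eq. (3): F7(phi) = A0 + sum_i A_i cos(2 pi i phi + Phi_i), i = 1..7.
    `a` holds A0..A7 and `phases` holds Phi_1..Phi_7 (one period bin of Table 2)."""
    phi = np.asarray(phi)
    result = np.full_like(phi, a[0], dtype=float)
    for i in range(1, 8):
        result += a[i] * np.cos(2.0 * np.pi * i * phi + phases[i - 1])
    return result

# The template prediction for an individual Cepheid is then
#   m(phi) = <m> + A_lambda * fourier_template(phi, a, phases).
```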
The amplitude ratios are fixed throughout the procedure and we retain as final parameters the solution that minimizes the quantity: \[Z=\chi^{2}_{\mathrm{tot}}+Q(A_{V}) \tag{5}\] where \(\chi^{2}_{\mathrm{tot}}=\chi^{2}_{H}+\chi^{2}_{V}+\chi^{2}_{I}\) and each \(\chi^{2}\) is defined as: \[\chi^{2}=\sum_{i}\frac{(O_{i}-C_{i})^{2}}{\sigma_{i}^{2}} \tag{6}\] with \(O_{i}\) the data points, \(C_{i}\) the fit values and \(\sigma_{i}\) the error of each data point. The quantity \(Q(A_{V})\) is a penalty function that prevents the fitted HST amplitudes to diverge too far from the expected values (i.e. the \(g\)-band amplitude of each Cepheid). It is defined as: \[Q(A_{V})=\frac{(A_{V,\,\mathrm{fitted}}-A_{V,\,\mathrm{ground}})^{2}}{\sigma_{ A}^{2}} \tag{7}\] The dispersion in the difference in amplitudes is set to \(\sigma_{A}=0.030\,\mathrm{mag}\) from the ground based sample. Finally, \begin{table} \begin{tabular}{c c|c c c} \hline \hline Bin & \(\log P\) & \(N_{g}\) & \(N_{r}\) & \(N_{i}\) \\ \hline 1 & \(0.3-0.9\) & 148 & 136 & 143 \\ 2 & \(0.9-1.2\) & 91 & 84 & 81 \\ 3 & \(1.2-1.5\) & 46 & 47 & 44 \\ 4 & \(1.5-2.0\) & 20 & 19 & 16 \\ \hline \end{tabular} \end{table} Table 1: Number of Cepheids in each period bin for the template sample. Figure 4: Period distribution of Cepheids from the template sample and from the HST sample. the errors on each apparent magnitude are estimated from a \(\chi^{2}\) distribution assuming \(\chi^{2}<\chi^{2}_{\rm min}+1\)(Press et al., 1992). Figure 6 shows a few examples of light curves obtained from the template fitting procedure. In the following we consider two different subsamples. The **gold sample** includes Cepheids for which the phase is known with good confidence from ground-based light curves: these Cepheids have at least a valid \(g\)-band light curve or a valid \(i\)-band light curve (or both ideally). In the case where only one light curve is available among \(g\) and \(i\), amplitudes and phases in the missing band can be easily recovered from the relations adopted above. Cepheids of the gold sample must also have a phase uncertainty of \(\sigma_{\phi}<0.05\) to allow for a precise rephasing of HST observations. 
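A schematic implementation of the minimized objective (Eqs. 5–7) and of the grid search is sketched below. The `predict` callable, which returns the template magnitudes for a trial amplitude and phase with the amplitude ratios and phase lags fixed, is a placeholder interface rather than the paper's actual code.

```python
import numpy as np

SIGMA_A = 0.030  # mag, dispersion adopted for the amplitude penalty, Eq. (7)

def chi2(obs, model, err):
    """Eq. (6): sum over data points of (O - C)^2 / sigma^2."""
    obs, model, err = map(np.asarray, (obs, model, err))
    return np.sum((obs - model) ** 2 / err ** 2)

def objective(a_v, phi_v, a_v_ground, bands, predict):
    """Eq. (5): Z = chi2_H + chi2_V + chi2_I + Q(A_V).
    `bands` maps a band name to (phases, observed mags, errors)."""
    z = (a_v - a_v_ground) ** 2 / SIGMA_A ** 2  # penalty Q(A_V), Eq. (7)
    for band, (phases, mags, errs) in bands.items():
        z += chi2(mags, predict(band, phases, a_v, phi_v), errs)
    return z

def grid_search(a_v_ground, phi_v_ground, bands, predict, gold=True, n=81):
    """Minimize Z over the (A_V, phi_V) grid: narrow ranges for the gold sample,
    the full phase range and a broad amplitude range for the silver sample."""
    if gold:
        amps = np.linspace(a_v_ground - 0.4, a_v_ground + 0.4, n)
        phis = np.linspace(phi_v_ground - 0.05, phi_v_ground + 0.05, n)
    else:
        amps = np.linspace(0.3, 1.3, n)
        phis = np.linspace(0.0, 1.0, n)
    best = min(
        ((objective(a, p, a_v_ground, bands, predict), a, p) for a in amps for p in phis),
        key=lambda t: t[0],
    )
    return {"Z": best[0], "A_V": best[1], "phi_V": best[2]}
```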
As their phase and amplitude are assumed to be known from the ground, \begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Bin & \(A_{0}\) & \(A_{1}\) & \(A_{2}\) & \(A_{3}\) & \(A_{4}\) & \(A_{5}\) & \(A_{6}\) & \(A_{7}\) & \(\Phi_{1}\) & \(\Phi_{2}\) & \(\Phi_{3}\) & \(\Phi_{4}\) & \(\Phi_{5}\) & \(\Phi_{6}\) & \(\Phi_{7}\) \\ \hline & & & & & & & & & & & & & & \\ \hline 1 & 0.004 & 0.431 & 0.161 & 0.062 & 0.022 & 0.007 & 0.004 & -0.005 & 1.665 & 1.502 & 1.132 & 0.918 & 0.043 & 0.036 & 1.649 \\ 2 & 0.007 & 0.450 & -0.073 & 0.019 & 0.020 & 0.008 & 0.005 & -0.004 & 1.615 & -1.554 & 0.747 & -0.567 & -1.423 & -13.477 & 1.027 \\ 3 & 0.004 & 0.424 & 0.134 & 0.074 & 0.052 & 0.032 & 0.025 & 0.014 & 1.802 & 1.615 & 1.113 & 0.924 & 0.714 & 0.511 & 0.289 \\ 4 & -0.001 & 0.415 & 0.169 & 0.082 & -0.040 & -0.023 & 0.014 & 0.008 & 1.656 & 1.587 & 1.555 & -1.682 & -1.854 & 0.464 & 0.749 \\ \hline & & & & & & & & & & & & & & \\ \hline & & & & & & & & & & & & & & & \\ \hline 1 & 0.003 & 0.429 & -0.162 & 0.064 & 0.023 & -0.007 & 0.005 & -0.005 & 1.642 & -7.820 & 1.409 & 1.078 & 3.979 & -0.036 & -4.029 \\ 2 & 0.005 & 0.451 & -0.076 & 0.019 & 0.022 & 0.009 & 0.007 & 0.008 & 1.616 & -1.273 & 1.251 & 0.046 & -0.583 & -0.073 & -0.397 \\ 3 & 0.002 & 0.431 & -0.135 & 0.081 & 0.057 & 0.039 & 0.024 & 0.013 & 1.694 & -1.447 & 1.322 & 1.181 & 1.090 & 1.003 & 0.952 \\ 4 & -0.001 & 0.422 & -0.173 & -0.082 & -0.037 & 0.022 & -0.017 & -0.009 & 1.477 & 4.767 & -7.740 & -7.727 & 1.217 & -1.784 & -1.636 \\ \hline & & & & & & & & & & & & & & & \\ \hline 1 & 0.005 & 0.429 & -0.156 & -0.065 & 0.019 & 0.008 & 0.008 & -0.003 & 1.595 & -1.378 & -1.505 & 1.399 & 1.025 & 0.346 & -2.680 \\ 2 & 0.007 & 0.462 & -0.080 & 0.024 & 0.022 & 0.005 & -0.005 & 0.002 & 1.599 & -1.124 & 1.579 & 0.527 & -4.821 & 16.604 & 14.585 \\ 3 & 0.004 & 0.448 & -0.131 & 0.082 & 0.058 & 0.039 & 0.029 & 0.012 & 1.601 & -1.400 & 1.422 & 1.424 & 1.365 & 1.382 & -11.207 \\ 4 & -0.002 & 0.440 & -0.171 & -0.082 & -0.037 & 0.023 & -0.015 & 0.010 & 1.382 & -1.433 & -1.285 & -1.166 & 1.450 & -1.479 & 1.365 \\ \hline \end{tabular} \end{table} Table 2: Fourier parameters for template light curves obtained with the calibrating sample of Cepheids. Figure 5: Merged light curves of M33 Cepheids (amplitude-scaled) used for the templates in the \(g\), \(r\) and \(i\) band (1\({}^{\rm st}\), 2\({}^{\rm nd}\) and 3\({}^{\rm rd}\) lines respectively). The four columns correspond to the four period bins listed in Table 1. The point in the bottom left corner shows the typical error bar, multiplied by a factor of 3 for better visibility. the grid-search is performed on a limited range of parameters: across \([A_{V}-0.4;A_{V}+0.4]\) in amplitude and \([\phi_{V}-0.05;\,\phi_{V}+0.05]\) in phase. The **silver sample** includes Cepheids with no \(g\) and no \(i\) light curves or with a larger phase uncertainty \(\sigma_{\phi}>0.05\). For these stars, the phasing is considered unknown and we perform the grid-search over [0; 1] in phase. The expected amplitude of these silver sample Cepheids is also unknown: as the mean V-band peak-to-peak amplitude of our sample is 0.8 mag, we perform the grid search within [0.3, 1.3] in amplitude, which corresponds to \(0.8\pm 0.5\) mag. After a visual inspection of each light curve, we find 9 Cepheids from the gold sample which appear to have an incorrect phasing (i.e. 
reaching the boundaries of the grid-search in \(\phi_{V}\)): for these stars we allow the search to cover [0; 1] in phase (but generally the final phase stays within \(\pm 0.1\) from the first guess) and we find much lower \(\chi^{2}\) and better fit quality. These nine stars are moved to the silver sample. Out of the 250 initial PHATTER Cepheids, we only keep the 220 which have optimal ground-based light curves (Table 3 from Pellerin and Macri, 2011). We rejected 29 stars with only one epoch per filter or with multiple but very close epochs (\(\Delta\phi<0.01\)), which did not allow the fit to converge successfully. We also excluded 26 stars for which the fit was not satisfactory and 11 stars which yielded a fitted \(V\)-band amplitude different by more than 0.5 mag from the expected amplitude from ground-based light curves. This leaves a total of 154 Cepheids which constitute the "gold+silver" sample. The final intensity-averaged mean magnitudes obtained for our sample of Cepheids in \(F160W\), \(F475W\) and \(F814W\) are listed in Table 9 (Appendix A). ## 4 Period-luminosity relation and distance to M33 ### Photometric transformations to WFC3 In order to derive the distance to M33, we will compare its PL relation with that established in the LMC, which has the most precise and Cepheid-independent distance measurement (Pietrzynski et al., 2019). The LMC PL relation (Riess et al., 2019) is calibrated in the SH0ES photometric system based on HST/WFC3 filters (\(F160W\), \(F555W\) and \(F814W\)). On the other hand, the mean magnitudes obtained in the present work from PHATTER photometry were measured with the WFC3/IR camera for the \(F160W\) filter and with ACS/WFC for the optical \(F475W\) and \(F814W\) filters. We transform the PHATTER color (\(F475W-F814W\))\({}_{\rm ACS}\) into the SH0ES color (\(F555W-F814W\))\({}_{\rm WFC3}\) using synthetic populations based on PARSEC isochrones generated by the CMD tool3 developed by Bressan et al. (2012) (version v3.7) for HST/WFC3 and HST/ACS bandpasses. We consider a population of Cepheid-like stars with ages of \(1-500\,\rm Myr\), \(\log g<2\), masses of \(3-7M_{\odot}\) and temperatures of \(4800-6500\,\rm K\). We derive the following transformation Figure 6: Example of light curves obtained from the template fitting procedure. Their quality is representative of that of the entire sample. The shaded area represents the mean magnitude error in each filter. For a given star, the three filters are shown with the same scale in magnitude. with a scatter of 0.003 mag: \[(F555W-F814W)_{\rm WFC3}=0.065\\ +0.658\,(F475W-F814W)_{\rm ACS} \tag{8}\] The mean Cepheid color of the sample is \((F475W-F814W)_{\rm ACS}=1.41\) mag (sample standard deviation = 0.29 mag), for which Eq. 8 gives \((F555W-F814W)_{\rm WFC3}=0.99\) mag (sample standard deviation = 0.19 mag). In the following we adopt the near-infrared HST/WFC3 Wesenheit index defined in Riess et al. (2022) assuming the reddening law from Fitzpatrick (1999) with reddening paramneter \(R_{V}=3.3\): \[m_{H}^{W}=F160W-0.386\,(F555W-F814W)_{\rm WFC3} \tag{9}\] We also derive optical Wesenheit magnitudes defined in Riess et al. (2019) as: \[m_{I}^{W}=F814W-1.3\,(F555W-F814W)_{\rm WFC3} \tag{10}\] ### Count-rate Nonlinearity Correction The WFC3-IR instrument, which is used in the SH0ES distance ladder to measure nearby bright Cepheids as well as distant stars in supernovae host galaxies, is affected by count-rate nonlinearity (CRNL, or reciprocity failure). 
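The photometric transformations of Eqs. (8)–(10) above reduce to a few one-line helpers; the sketch below also checks the quoted mean sample color. Function names are illustrative.

```python
def acs_to_wfc3_color(f475w_minus_f814w_acs):
    """Eq. (8): (F555W - F814W)_WFC3 from the ACS (F475W - F814W) color."""
    return 0.065 + 0.658 * f475w_minus_f814w_acs

def wesenheit_nir(f160w, f555w_minus_f814w):
    """Eq. (9): near-infrared HST Wesenheit index m_H^W (Fitzpatrick 1999, R_V = 3.3)."""
    return f160w - 0.386 * f555w_minus_f814w

def wesenheit_optical(f814w, f555w_minus_f814w):
    """Eq. (10): optical Wesenheit index m_I^W."""
    return f814w - 1.3 * f555w_minus_f814w

# Consistency check with the mean sample color quoted in the text:
# (F475W - F814W)_ACS = 1.41 mag  ->  (F555W - F814W)_WFC3 = 0.99 mag.
print(round(acs_to_wfc3_color(1.41), 2))
```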
This effect dims faint sources relative to bright ones due to a decreased photon collection efficiency. Its most recent calibration gives a correction of 0.0077 mag/dex (Riess et al., 2019). In order to derive the distance to M33, \(m_{H}^{W}\) magnitudes in the LMC and in M33 must be corrected for the CRNL consistently. The PL intercept of 15.898 mag calibrated in the LMC by Riess et al. (2019) does not include the CRNL term (despite its note to the contrary, see footnote to table 5 of Yuan et al., 2020). Finally, M33 Cepheids are fainter than LMC Cepheids by about 2 dex (Li et al., 2021), so we add \(0.015\pm 0.005\) mag to the LMC intercept from Riess et al. (2019) to account for this difference (which is equivalent to subtracting 0.015 mag to our \(m_{H}^{W}\) apparent magnitudes in M33). ### Geometric correction We take into account the position of each Cepheid relative to the center of M33 (\(\alpha=23.4625^{\circ}\), \(\delta=30.6602^{\circ}\) from van der Marel et al., 2019) by applying a geometric correction. Our HST sample is located very near the center of M33 and the galaxy has a moderate inclination, which limits the effects of projection and of reddening. We adopt an inclination angle of \(i=57\pm 4^{\circ}\) and a position angle of PA = 22.5\({}^{\circ}\)(both from Kourkchi et al., 2020), obtaining a mean correction of 0.0007 mag with a dispersion of 0.003 mag, with values ranging between -0.005 mag and +0.008 mag. A positive geometric correction corresponds to a Cepheid farther than the center of M33. Figure 7: Period-luminosity relation in \(m_{H}^{W}\) for M33 Cepheids. The dark solid line is the PL fit of the gold + silver sample assuming a free slope, and the red dashed line shows the same fit when the slope is fixed to \(-3.26\) mag/dex (Riess et al., 2019). ### Period-luminosity relation in M33 In this section, we adopt the apparent Wesenheit \(m_{H}^{W}\) mean magnitudes obtained from template fitting for our sample of M33 Cepheids. We include an additional \(0.07\,\mathrm{mag}\) in quadrature to all magnitude errors to account for the finite width of the instability strip (Riess et al., 2019). The PL relation is then calibrated for the two subsamples defined in SS3 (gold and silver samples). For the gold sample we obtain a slope of \(-3.207\pm 0.039\,\mathrm{mag/dex}\) in \(m_{H}^{W}\), which agrees well with that derived by Riess et al. (2019) in the LMC. The PL scatter is \(0.110\,\mathrm{mag}\) for a total of 99 stars. Including the silver sample yields a slightly shallower slope of \(-3.193\pm 0.032\,\mathrm{mag/dex}\), still in good agreement with Riess et al. (2019), and slightly raises the scatter to \(0.113\,\mathrm{mag}\). In the optical Wesenheit index \(m_{I}^{W}\), we adopt the slope of \(-3.31\,\mathrm{mag/dex}\) as well as the LMC PL intercept of \(15.935\) from Riess et al. (2019) and we obtain a PL dispersion of \(0.13\,\mathrm{mag}\) for the gold + silver sample, and \(0.14\) for the gold sample. We note that Li et al. (2021) obtained a PL scatter of \(0.13\,\mathrm{mag}\) in M31 for their gold sample with 42 Cepheids, which shows the great precision of our PL calibration. Our PL coefficients are listed in Table 3 and the PL relation is shown in Fig. 7. ### Distance to M33 To obtain the distance modulus for M33 (\(\mu_{\mathrm{M33}}\)), we compare the intercept of our \(m_{H}^{W}\) PL relation in M33 with that of the LMC obtained by Riess et al. (2019), \(m_{H}^{W}=15.898-3.26\log P\). 
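A minimal sketch of the fixed-slope intercept fit behind Table 3: with the slope held at the LMC value, the intercept reduces to an error-weighted mean of \(m-\) slope \(\times\log P\), with the 0.07 mag intrinsic-width term added in quadrature as described above. The estimator below is only the simplest consistent choice; the fitting machinery actually used may differ in detail.

```python
import numpy as np

def pl_intercept_fixed_slope(log_p, mag, mag_err, slope=-3.26, sigma_is=0.07):
    """Intercept beta of m = slope * log P + beta with the slope held fixed,
    estimated as the error-weighted mean of (m - slope * log P).
    sigma_is is the intrinsic instability-strip width added in quadrature."""
    log_p, mag, mag_err = map(np.asarray, (log_p, mag, mag_err))
    w = 1.0 / (mag_err**2 + sigma_is**2)
    shifted = mag - slope * log_p
    beta = np.sum(w * shifted) / np.sum(w)
    beta_err = np.sqrt(1.0 / np.sum(w))
    scatter = np.std(shifted - beta)  # PL dispersion about the fit
    return beta, beta_err, scatter
```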
We add the CRNL term of \(0.015\,\mathrm{mag}\) to the LMC intercept to account for the difference in brightness between LMC and M33 Cepheids (see SS4.2). We fix our PL slope to \(-3.26\) for consistency with the LMC and we derive: \[(\mu_{\mathrm{M33}}-\mu_{\mathrm{LMC}})=(\beta_{\mathrm{M33}}-\beta_{\mathrm{ LMC}})+\Delta m \tag{11}\] where \(\mu_{\mathrm{LMC}}=18.477\pm 0.026\,\mathrm{mag}\) is the most direct and precise geometric distance to the LMC available (Pietrzynski et al., 2019). The term \(\Delta m\) is the correction for the difference in metallicity between M33 and LMC Cepheids: \[\Delta m=-\gamma\left(\mathrm{[O/H]_{M33}}-\mathrm{[O/H]_{LMC}}\right) \tag{12}\] Romaniello et al. (2022) gives \(\mathrm{[O/H]_{LMC}}=-0.32\pm 0.01\,\mathrm{dex}\) from a sample of 89 Cepheids. In M33 we use the metallicity gradient by Bresolin (2011) which gives: \[12+\log(\mathrm{O/H})=8.50_{\pm 0.02}-0.045_{\pm 0.006}\,\mathrm{R_{kpc}} \tag{13}\] relative to 8.69 for solar (Asplund et al., 2009). For our HST sample, [O/H] metallicities range from -0.20 dex to -0.36 dex. We adopt the mean metallicity of \(\mathrm{[O/H]_{M33}}=-0.27\pm 0.03\,\mathrm{dex}\). Using the metallicity correction of \(\gamma=-0.217\pm 0.046\,\mathrm{mag}\)/dex from Riess et al. (2022), we obtain a correction of \(\Delta m=0.011\pm 0.007\,\mathrm{mag}\). Using the metallicity correction from Breuval et al. (2022) returns \(\Delta m=0.014\,\mathrm{mag}\) which is very similar, but we adopt the former as it is more suited for measurements in the Wesenheit \(m_{H}^{W}\) index. From Eq. 11 we obtain a final distance modulus of \(24.622\pm 0.030\,\mathrm{mag}\) based on the gold and silver samples combined in the NIR Wesenheit index \(m_{H}^{W}\). Using only the pure gold sample results in a very similar value (see Table 3). Finally, the optical Wesenheit index yields a very consistent distance of \(24.617\pm 0.032\) and \(24.624\pm 0.030\,\mathrm{mag}\) from the gold sample only and gold + silver samples combined, respectively. We retain as final distance modulus the one from the gold + silver sample in the \(m_{H}^{W}\) filter, \(24.622\pm 0.030\,\mathrm{mag}\), as it is based on the most precise PL intercept. The full error budget is detailed in Table 4. In SS3.2, we adopted the phase lag from Inno et al. (2015) between \(H\) and \(V\) light curves (\(\sim 0.08\), with a scatter of 0.03 in the relation), derived by using the mean magnitude along the rising branch as a reference for the phase. We made this choice for consistency, as we also use the NIR templates by Inno et al. (2015). On the other hand, Soszynski et al. (2005) found a larger phase lag of \(\sim 0.3\) (with a larger scatter of about 0.1 mag) by using the phase at maximum brightness as a reference. Inno et al. 
(2015) show that these two methods to phase \begin{table} \begin{tabular}{c c c c c c c|c} \hline \hline Band & \(\alpha\) & \(\beta_{\mathrm{free}}\) & \(\beta_{\mathrm{fixed}}\) & \(\sigma\) & \(\chi_{\mathrm{dof}}^{2}\) & N\({}_{\mathrm{stars}}\) & Sample & \(\mu_{\mathrm{M33}}\) (mag) \\ \hline \(m_{H}^{W}\) & \(-3.207\pm 0.039\) & \(21.993\pm 0.041\) & \(22.046\pm 0.010\) & 0.110 & 0.96 & 99 & Gold & 24.619 \(\pm\) 0.030 \\ \(m_{H}^{W}\) & \(-3.193\pm 0.032\) & \(21.980\pm 0.034\) & \(22.048\pm 0.008\) & 0.113 & 0.72 & 154 & Gold + Silver & **24.622 \(\pm\) 0.030** \\ \hline \(m_{I}^{W}\) & \(-3.167\pm 0.046\) & \(21.909\pm 0.051\) & \(22.065\pm 0.014\) & 0.141 & 1.32 & 99 & Gold & 24.617 \(\pm\) 0.032 \\ \(m_{I}^{W}\) & \(-3.179\pm 0.037\) & \(21.933\pm 0.041\) & \(22.072\pm 0.010\) & 0.130 & 1.01 & 154 & Gold + Silver & 24.624 \(\pm\) 0.030 \\ \hline \end{tabular} \end{table} Table 3: Calibration of the PL relation in M33 (\(m=\alpha\log P+\beta\)) and resulting distance modulus. The second column gives the fitted PL slope \(\alpha\). The third and fourth columns give the PL intercept \(\beta\) when the slope is a free parameter and when the slope is fixed to the LMC value (Riess et al., 2019) respectively. the data are very different; using the Soszynski et al. (2005) phase lag with the Inno et al. (2015) template is formally inconsistent, but it leads to a difference of only 0.006 mag in the distance modulus. We attempted to independently constrain the value of the metallicity effect of the PL relation, \(\gamma\). However, the very narrow range of abundances spanned by these Cepheids yields uncertainties \(\sigma(\gamma)\) of \(0.24-0.42\) mag/dex which do not improve upon previous measurements (Breuval et al., 2021, 2022; Riess et al., 2022). ### Comparison with the literature #### 4.6.1 Cepheids, RR Lyrae, Miras Figure 8 shows our final distance modulus for M33 and compares it with other values from the literature based on various indicators (listed in Table 5, all corrected to a common LMC distance modulus of 18.477 mag (Pietrzynski et al., 2019). Our distance agrees very well with other estimates based on Cepheids, especially with Freedman et al. (1991), Macri (2001), Scowcroft et al. (2009) and Bhardwaj et al. (2016). In particular, Cepheids appear to provide the most consistent distance measurements among all other distance indicators. The error of the Pellerin & Macri (2011) distance was revised to 0.05 mag to include the systematic uncertainties from the photometric comparison with Massey et al. (2006) in their section 3. The Cepheid distance to M33 by Lee et al. (2022) is larger than our value by 0.10 mag (1.7\(\sigma\)). This difference matches the size and direction of the metallicity dependence of Cepheids, \(\sim\) -0.2 mag/dex (Breuval et al., 2022), where metal poor Cepheids are fainter. (It is the wrong direction for crowding which, if uncorrected, would make ground-based observations of Cepheids appear too bright). The Lee et al. (2022) sample is in the outer regions of M33, around 5 kpc from the center, which corresponds to a metallicity of about [O/H] \(\sim-0.4\) dex, somewhat metal poor. They derive their distance to M33 relatively to the absolute PL calibration by Monson et al. (2012), based on Milky Way Cepheids which are metal rich with [O/H] \(\sim 0.1\) dex. This 0.5 dex difference in metallicity produces an expected difference of \(\sim 0.10\) mag, bringing it into agreement with the study here. 
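As a numerical illustration of Eqs. (11)-(13) and of the metallicity argument above, a short sketch follows. The helper functions are ours and the inputs are the values quoted in the text, so this is a cross-check rather than the analysis code actually used here.

```python
def metallicity_term(oh_target, oh_ref, gamma=-0.217):
    """Delta m = -gamma * ([O/H]_target - [O/H]_ref), Eq. (12)."""
    return -gamma * (oh_target - oh_ref)

def distance_modulus(beta_target, beta_ref, mu_ref, delta_m):
    """mu_target = mu_ref + (beta_target - beta_ref) + Delta m, Eq. (11)."""
    return mu_ref + (beta_target - beta_ref) + delta_m

# Values quoted in the text (LMC intercept from Riess et al. 2019 plus the
# 0.015 mag CRNL term; LMC distance from Pietrzynski et al. 2019):
beta_lmc = 15.898 + 0.015
beta_m33 = 22.048            # fixed-slope intercept, gold + silver sample
mu_lmc = 18.477

dm = metallicity_term(oh_target=-0.27, oh_ref=-0.32)   # ~ +0.011 mag
print(round(dm, 3), round(distance_modulus(beta_m33, beta_lmc, mu_lmc, dm), 3))
# ~ 0.011, ~ 24.62

# Size of the shift expected between the outer-disk Lee et al. (2022) sample
# ([O/H] ~ -0.4) and its metal-rich Milky Way calibrators ([O/H] ~ +0.1):
print(round(abs(metallicity_term(oh_target=-0.4, oh_ref=0.1)), 2))  # ~ 0.1 mag
```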
An alternative to correcting for metallicity is to use a reference with a similar metallicity as M33, i.e., the metal poor Cepheids in the LMC, \(\sim\) -0.3 dex, and the geometric DEB distance as a reference for the Lee et al. (2022) sample which we find yields 24.65 mag in good agreement with our result. We also find good agreement with the RR Lyrae distance by Sarajedini et al. (2006). Finally, the two Mira-based distances (Yuan et al., 2018; Ou et al., 2023) differ by 0.13 mag, which can be attributed to the use of different data sets and methodologies, differences in periods and possible calibration systematics between ground and space-based data. #### 4.6.2 Tip of the Red Giant Branch Distance moduli based on TRGB show a larger dispersion than Cepheid-based measurements with differences as large as 0.34 mag between different studies. For example, McConnachie et al. (2004) derive a distance modulus of \(24.50\pm 0.06\) mag based on an annulus region between \(0.5-0.8^{\circ}\) in the outer disk of M33. They investigate the impact of crowding and conclude that this effect is negligible in regions farther than \(0.5^{\circ}\) from \begin{table} \begin{tabular}{l c c} \hline \hline Error & Value & Source \\ \hline LMC DEBs & 1.20 \% & Pietrzyński et al. (2019) \\ LMC PLR mean & 0.41 \% & Riess et al. (2019) \\ M33 PLR mean & 0.38 \% & Measured here \\ Metallicity correction & 0.33 \% & Riess et al. (2022) \\ CRNL across 2 dex & 0.23 \% & Riess et al. (2019) \\ \hline **Total** & **1.38** \% & \\ \hline \end{tabular} \end{table} Table 4: Error budget for the distance to M33. Figure 8: M33 distance modulus from the present work compared with values from the literature. All values shown here are rescaled to the recent LMC distance modulus of 18.477 from Pietrzyński et al. (2019) (see their original adopted LMC distance in Table 5). the center of M33. They also rule out the possibility of contamination from AGB stars. More recently, Lee et al. (2022) selected a TRGB field in the southern part of M33 at a distance of about \(0.25-0.45^{\circ}\) from the galactic center, and therefore no less likely to be affected by blending than the McConnachie et al. (2004) sample, yet obtained a much higher distance modulus of \(24.72\pm 0.06\) mag. We speculate some of this difference could be attributed in part to a difference in sample color, as the redder color cut from Lee et al. (2022) extends further into the region where the TRGB is fainter and may benefit from a color correction (see Jang and Lee, 2017). We note that although the mean color of the sample from Lee et al. (2022) is located at the boundary where Jang and Lee (2017) states a color correction is not necessary, Jang and Lee (2017) do not calibrate their color relation based on the mean color of their entire sample. It is also possible the different TRGB-based measurements are due to population differences as recently seen in Hoyt (2023), Anderson et al. (2023) and Wu et al. (2022) which all identify significant, intrinsic variations in the TRGB brightness with location, sub-RGB-population or the apparent ratio of RGB to AGB stars (that may relate to age or metallicity). Durbin et al. (2020) also identify additional systematics associated with the TRGB by comparing different calibration approaches. The \(\chi^{2}\) of the TRGB measurements around their weighted mean value is 3.67, compared to 1.42 for Cepheid-based distances (see Table 6). 
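The statistic quoted above, the \(\chi^{2}\) of independent distance moduli around their weighted mean, can be computed as in the sketch below. The example numbers are the TRGB entries of Table 5, reused purely for illustration; small differences with respect to Table 6 may remain depending on rounding and on the exact weights adopted.

```python
import numpy as np

def weighted_mean_and_chi2(values, errors, reference=None):
    """Inverse-variance weighted mean and reduced chi^2 of `values`.

    If `reference` is None the chi^2 is computed around the weighted mean
    (N - 1 degrees of freedom); otherwise around `reference` (N dof).
    """
    values, errors = np.asarray(values, float), np.asarray(errors, float)
    w = 1.0 / errors**2
    mean = np.sum(w * values) / np.sum(w)
    center, dof = (mean, values.size - 1) if reference is None else (reference, values.size)
    chi2 = np.sum(((values - center) / errors) ** 2) / dof
    return mean, chi2

# TRGB distance moduli to M33 listed in Table 5 (McConnachie 2004, Rizzi 2007,
# U et al. 2009, Conn 2012, Lee 2022):
mu = [24.50, 24.71, 24.84, 24.57, 24.72]
err = [0.06, 0.04, 0.10, 0.05, 0.07]
print(weighted_mean_and_chi2(mu, err))                    # around the weighted mean
print(weighted_mean_and_chi2(mu, err, reference=24.622))  # around the value derived here
```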
Finally, both measurements based on the JAGB method (Zgirski et al., 2021; Lee et al., 2022) agree to 1\(\sigma\) with our value.

#### 4.6.3 Eclipsing binaries

A distance to M33 based on detached eclipsing binaries (DEBs) was published by Bonanos et al. (2006). However, unlike the well-established LMC and SMC distances by Pietrzynski et al. (2019) and Graczyk et al. (2020) respectively, which are based on late-type DEBs (a purely empirical method calibrated geometrically through red giant interferometry), the M33 distance by Bonanos et al. (2006) relies on early-type DEBs which depend on surface flux calculated from non-local thermodynamic equilibrium models and is strongly affected by model uncertainties. Therefore we will not compare our result with this measurement as we limit our comparisons to empirical measures. In the future, the ability to measure many primary distance indicators in the same host offers the best chance to identify and rectify differences between distance indicators. M33 offers one of the best such opportunities.

\begin{table} \begin{tabular}{c l l l} \hline \hline \multicolumn{1}{c}{\(\mu_{\rm M33}\)} & Reference & Method & \(\mu_{\rm LMC}\) \\ \hline \(24.64\pm 0.09\) & Freedman et al. (1991) & Cepheids & 18.50 \\ \(24.56\pm 0.10\) & Freedman et al. (2001) & Cepheids & 18.50 \\ \(24.65\pm 0.12\) & Macri (2001) & Cepheids & 18.50 \\ \(24.50\pm 0.06\) & McConnachie et al. (2004) & TRGB & - \\ \(24.67\pm 0.08\) & Sarajedini et al. (2006) & RR Lyrae & 18.50 \\ \(24.71\pm 0.04\) & Rizzi et al. (2007) & TRGB & - \\ \(24.53\pm 0.11\) & Scowcroft et al. (2009) & Cepheids & 18.40 \\ \(24.84\pm 0.10\) & U et al. (2009) & TRGB & 18.50 \\ \(24.76\pm 0.05\) & Pellerin and Macri (2011) & Cepheids & 18.50 \\ \(24.57\pm 0.05\) & Conn et al. (2012) & TRGB & - \\ \(24.62\pm 0.07\) & Gieren et al. (2013) & Cepheids & 18.50 \\ \(24.62\pm 0.06\) & Bhardwaj et al. (2016) & Cepheids & 18.47 \\ \(24.80\pm 0.06\) & Yuan et al. (2018) & Miras & 18.493 \\ \(24.57\pm 0.06\) & Zgirski et al. (2021) & JAGB & 18.477 \\ \(24.67\pm 0.05\) & Lee et al. (2022) & JAGB & - \\ \(24.72\pm 0.07\) & Lee et al. (2022) & TRGB & - \\ \(24.71\pm 0.04\) & Lee et al. (2022) & Cepheids & - \\ \(24.67\pm 0.06\) & Ou et al. (2023) & Miras & 18.49 \\ \(\mathbf{24.622\ \pm\ 0.030}\) & \multicolumn{1}{c}{**Present work**} & \multicolumn{1}{c}{**Cepheids**} & \multicolumn{1}{c}{**18.477**} \\ \hline \end{tabular} \end{table} Table 5: M33 distance modulus from the literature. The last column gives the LMC distance modulus adopted to obtain the distances given in the first column.

## 5 Photometric Bias from Cluster Cepheids

### Motivation

Contamination from crowded backgrounds such as star clusters can bias photometric measurements of Cepheids in nearby galaxies. Photometric measurements of extragalactic Cepheids are usually corrected for crowding effects by injecting artificial stars in the vicinity of Cepheids and by remeasuring their contribution (Riess et al., 2009). However, this test may not properly reproduce the impact of stars physically associated with Cepheids, which might be unresolved and whose light properties might differ from those of the background field stars. Anderson and Riess (2018) found that blending due to cluster Cepheids was responsible for a 0.23% overestimate of \(H_{0}\), although cluster Cepheids are a relatively rare phenomenon. They concluded that chance superposition of Cepheids with clusters was not a limit for a 1% measurement of the Hubble constant.
In this section, we estimate the blending contribution from cluster Cepheids in M33: we measure the occurrence rate of Cepheids in clusters in M33 and we derive the typical flux contribution of the clusters in order to determine by how much they affect Cepheid photometry and our M33 distance modulus. ### Crossmatch of Cepheid and cluster catalogs First, we estimate the number of M33 Cepheids located in or near clusters. Anderson and Riess (2018) reported a fraction of 2.4% cluster Cepheids in the M31 galaxy, lower than in the Milky Way, LMC and SMC (with 8.5%, 11% and 6% respectively). In order to obtain the fraction of cluster Cepheids in M33, we crossmatch an initial sample of 609 fundamental mode Cepheids from Pellerin and Macri (2011) with the catalog of 2137 star clusters in M33 from Johnson et al. (2022). We adopt a separation of \(\theta_{\rm sep}<1.2\,r_{\rm ap}\) as membership criteria (Senchyna et al., 2015), with \(r_{ap}\) the mean cluster radius provided in Johnson et al. (2022). From this crossmatch we find a total of 10 cluster Cepheids, listed in Table 7. ### Creation of stamp images We identified these 10 cluster Cepheids in the PHATTER mosaics and we produced stamp images (cutouts centered on each Cepheid) in each filter. The HST \begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{Indicator} & Mean \(\mu_{\rm M33}\) & \(\chi^{2}_{mean,\nu}\) & \(\chi^{2}_{B23,\nu}\) & N \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) \\ \hline Cepheids & 24.644 & 1.08 & 1.39 & 9 \\ TRGB & 24.639 & 3.67 & 2.96 & 5 \\ JAGB & 24.625 & 1.65 & 0.84 & 2 \\ Miras & 24.721 & 2.24 & 3.84 & 2 \\ \hline \end{tabular} \end{table} Table 6: Statistics of the M33 distances from the literature based on various distance indicators. Column (2) is the weighted mean distance modulus for each indicator. Columns (3) and (4) respectively give the \(\chi^{2}\) of the measurements from the literature around the weighted mean distance modulus and around the distance modulus of the present paper. Column (5) gives the number of estimates from the literature considered for each distance indicator. \begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{1}{c}{ Cepheid} & Cluster & \(r_{\rm ap}\) (\({}^{\prime\prime}\) ) & \(\theta_{\rm sep}\) (\({}^{\prime\prime}\) ) \\ \hline 01334331+3043559 & J22-241 & 1.17 & 0.85 \\ 01340959+3036215 & J22-477 & 1.55 & 1.46 \\ 01340060+3050079 & J22-521 & 1.83 & 1.77 \\ 01332060+3034584 & J22-665 & 1.49 & 0.12 \\ 01342512+3034381 & J22-722 & 1.41 & 0.27 \\ 01335311+3048343 & J22-836 & 1.43 & 0.49 \\ 01335809+3045568 & J22-900 & 1.31 & 1.51 \\ 01332768+3034238 & J22-1464 & 1.32 & 0.26 \\ 01334212+3032109 & J22-1492 & 1.60 & 1.74 \\ 01342784+3041012 & J22-1762 & 1.35 & 1.04 \\ \hline \end{tabular} \end{table} Table 7: Cluster Cepheids found by crossmatching Cepheids from Pellerin and Macri (2011) and star clusters from Johnson et al. (2022) with \(\theta_{\rm sep}<1.2\,r_{\rm ap}\), where \(r_{\rm ap}\) is the average cluster radius. Figure 9: Average cluster contribution (curve of growth) for the 10 confirmed M33 cluster Cepheids (Table 7) in the \(m_{H}^{W}\), \(F160W\), \(F475W\), \(F814W\) filters. stamp images of the 10 crossmatched cluster Cepheids are shown in Fig. 11 (Appendix B), where the clusters are easy to identify by eye. 
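The crossmatch step of the previous subsection (membership when the Cepheid-cluster separation is below \(1.2\,r_{\rm ap}\)) can be sketched with astropy as follows; the function, column names, and toy coordinates are placeholders, not those of the Pellerin and Macri (2011) or Johnson et al. (2022) catalogs.

```python
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

def crossmatch_cluster_cepheids(cep_ra, cep_dec, clu_ra, clu_dec,
                                clu_rap_arcsec, factor=1.2):
    """Return Cepheids whose nearest cluster lies within factor * r_ap,
    following the membership criterion used above."""
    cepheids = SkyCoord(ra=cep_ra * u.deg, dec=cep_dec * u.deg)
    clusters = SkyCoord(ra=clu_ra * u.deg, dec=clu_dec * u.deg)
    idx, sep2d, _ = cepheids.match_to_catalog_sky(clusters)
    in_cluster = sep2d.arcsec < factor * np.asarray(clu_rap_arcsec)[idx]
    return np.where(in_cluster)[0], idx[in_cluster], sep2d.arcsec[in_cluster]

# Toy usage with invented positions and cluster radii:
cep_idx, clu_idx, sep = crossmatch_cluster_cepheids(
    cep_ra=np.array([23.430, 23.500]), cep_dec=np.array([30.62, 30.70]),
    clu_ra=np.array([23.4301, 23.60]), clu_dec=np.array([30.6201, 30.80]),
    clu_rap_arcsec=np.array([1.4, 1.2]))
print(cep_idx, clu_idx, sep)   # e.g. Cepheid 0 matched to cluster 0
```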
In particular, UV filters (\(F275W\) and \(F336W\)) are well suited for detecting hot blue cluster stars, while background red giant stars with luminosity similar to that of the Cepheid may contribute more in the infrared. We note that some stamps are blank because the Cepheid is located outside the limit of the PHATTER fields in NIR and UV. ### Visual inspection of stamp images In order to make sure that the cluster count is complete, we inspected each Cepheid stamp image for the presence of any additional undetected clusters. We report an additional 13 suspect cluster Cepheids, listed in Table 8 and shown in Fig. 13 (Appendix B). Three of them (01343182+3043050, 01343169+3043002, and 01340910+3036296) are listed in the Johnson et al. (2022) catalog but at a distance greater than \(1.2r_{\rm ap}\) from the Cepheid (on average at \(\sim 2r_{\rm ap}\)), therefore they were not found by the crossmatch procedure. Two of them are also listed in the Sarajedini & Mancone (2007) catalog at about \(1^{\prime\prime}\) and \(4.5^{\prime\prime}\) from the Cepheid (this catalog does not provide the cluster radii). ### Flux contribution from the clusters We followed the approach used in Anderson & Riess (2018) (see their SS3.2.1) to separate three contributions: the flux from the Cepheid, the flux of the cluster, and the background contribution. The average cumulative light contribution from clusters \(\Delta m\) (or curve of growth) is obtained using a series of apertures of increasing radius, starting from the Cepheid in the center (\(r=1\) pixel) to a radius of about \(2^{\prime\prime}\). We note that \(\Delta m\) can be negative (if a light contribution from the cluster is detected) or positive (if the cluster flux is low or if its location is statistically sparser than the nearby environment of the Cepheid). At the distance to M33 found in SS4.5, a separation of \(1^{\prime\prime}\) corresponds to 4.1 pc along the major axis and 3.7 pc along the minor axis as projected on the plane of the disk. Anderson & Riess (2018) find that the cluster light contribution in M31 flattens off at a separation of about 3.8 pc, which corresponds to approximately twice the average cluster radius. Fig. 9 shows the average light contribution from the 10 confirmed M33 clusters (SS5.2, Table 7). Similarly to Anderson & Riess (2018), the contribution in the optical is significant, with \(\Delta m\) around \(-0.5\) mag in \(F475W\) at a separation of 4 pc, and becomes lower towards the near infrared with \(\Delta m=-0.20\) mag and \(-0.17\) mag in \(F814W\) and \(m_{H}^{W}\) respectively. These values are about a factor of 2 smaller than those found by Anderson & Riess (2018) in M31 at the same separation. The contribution of the 13 additional cluster Cepheids obtained by visual inspection of the stamp images (SS5.4, Table 8) is represented in Fig. 10. In all filters, it is lower than the contribution from the 10 confirmed clusters (Fig. 9) with only \(\Delta m=-0.02\) mag at 4 pc in \(m_{H}^{W}\). 
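The curve-of-growth measurement illustrated in Figs. 9 and 10 can be sketched schematically with plain numpy aperture sums. The \(\Delta m\) convention below (cumulative excess flux expressed relative to a unit Cepheid flux) and the toy stamp are assumptions for illustration, not the exact procedure of Anderson and Riess (2018).

```python
import numpy as np

def curve_of_growth(stamp, center, radii_pix, sky_per_pix=0.0):
    """Cumulative extra light Delta m(r) around a Cepheid.

    `stamp` is a background-subtracted cutout with the Cepheid flux removed
    (or measured separately). Negative Delta m means a cluster-like excess
    is detected; positive means the surroundings are sparser than the
    assumed local background, as described in the text.
    """
    y, x = np.indices(stamp.shape)
    r = np.hypot(x - center[0], y - center[1])
    dm = []
    for rad in radii_pix:
        inside = r <= rad
        extra = stamp[inside].sum() - sky_per_pix * inside.sum()
        # excess flux converted to magnitudes, relative to a Cepheid flux of 1
        dm.append(-2.5 * np.log10(1.0 + max(extra, -0.99)))
    return np.array(dm)

# Toy example: a faint, extended excess gives a slowly growing |Delta m|
rng = np.random.default_rng(1)
stamp = rng.normal(0.0, 0.001, (41, 41))
yy, xx = np.indices(stamp.shape)
stamp += 0.0004 * np.exp(-((xx - 20)**2 + (yy - 20)**2) / (2 * 6.0**2))
print(curve_of_growth(stamp, center=(20, 20), radii_pix=[2, 5, 10, 15, 20]))
```

Multiplying a contribution of this size (about \(-0.17\) mag in \(m_{H}^{W}\) at 4 pc for the confirmed clusters) by their roughly 1.6% occurrence rate reproduces the 0.003 mag bias quoted below.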
This suggests that the 13 additional possible cluster Cepheids do not contribute to the contamination of the Cepheid flux.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ Cepheid} & Cluster & \(r_{\rm ap}\) (\({}^{\prime\prime}\)) & \(\theta_{\rm sep}\) (\({}^{\prime\prime}\)) \\ \hline 01333896+3034140 & \(-\) & \(-\) & \(-\) \\ 01343182+3043050 & J22-40 & 1.60 & 3.73 \\ 0133015+3038039 & \(-\) & \(-\) & \(-\) \\ 01333438+3035307 & \(-\) & \(-\) & \(-\) \\ 01333433+3034270 & \(-\) & \(-\) & \(-\) \\ 01343169+3043002 & J22-40 & 1.60 & 2.96 \\ 01340910+3036296 & J22-27, S07-325 & 2.23 & 4.71 \\ 01341217+3036362 & \(-\) & \(-\) & \(-\) \\ 01340474+3049181 & S07-292 & \(-\) & 1.12 \\ 0133348+3033210 & \(-\) & \(-\) & \(-\) \\ 01334821+3038001 & \(-\) & \(-\) & \(-\) \\ 01342988+3047541 & \(-\) & \(-\) & \(-\) \\ 01340084+3049551 & \(-\) & \(-\) & \(-\) \\ \hline \end{tabular} \end{table} Table 8: Cluster Cepheids found by visually inspecting the image cutouts from PHATTER mosaics centered on each Cepheid. The second column indicates if the Cepheid is found near a cluster listed in J22 or in S07, and "\(-\)" indicates that no known cluster was found around this Cepheid in the literature.

Figure 10: Average cluster contribution (curve of growth) for the 13 additional cluster Cepheids found by visual inspection (Table 8) in the \(m_{H}^{W}\), \(F160W\), \(F475W\), \(F814W\) filters.

Following Anderson and Riess (2018), we estimate the average photometric bias produced by the 10 confirmed cluster Cepheids from Table 7 by multiplying their occurrence rate (1.6 %) by their flux contribution in \(m_{H}^{W}\), which gives a bias of 0.003 mag. This shows that cluster Cepheids do not impact the distance measurement of M33. If we conservatively assume that all 13 additional Cepheids from Table 8 are associated with clusters, we obtain a maximum occurrence rate of 3.7%, still lower than the fraction of cluster Cepheids in the Milky Way, LMC and SMC, and a correspondingly lower bias per cluster Cepheid (since the additional 13 do not produce a significant difference).

## 6 Summary

We take advantage of the recently published high-quality PHATTER photometric survey of the M33 galaxy and we construct the Cepheid PL relation in the SH0ES near-infrared Wesenheit system (\(m_{H}^{W}\)). We use well-sampled ground-based light curves for the same Cepheid sample to recover the phases and amplitudes and we correct the random-epoch PHATTER measurements to mean magnitude. We also present new optical template light curves based on the same population of M33 Cepheids. These can be directly applied to fit sparsely-sampled light curves. We improve the uncertainty in the Cepheid distance to M33 to the 1.3% level and we present the tightest PL relation to date in this galaxy, with a scatter of only 0.11 mag. In particular, the use of HST photometry allows us to significantly reduce the effect of crowding compared with past studies based on ground-based observations. This new Cepheid distance provides groundwork for including M33 as an anchor galaxy in the empirical distance scale (Riess et al., 2022), with a role similar to that of the Milky Way, the LMC and NGC 4258. In order to consider M33 as an independent anchor of the distance scale, a precise geometric calibration of its distance is required, using for example a large sample of late-type detached eclipsing binaries. Future facilities such as the ELTs and the _Roman_ Space Telescope could enable significant improvements in that matter.
We discuss differences between past measurements of the distance to M33, especially based on the TRGB method, and we identify factors that explain these discrepancies such as the possible effect of blending and the choice of color cut when defining the red-giant branch in the color-magnitude diagram. Finally, we investigate the bias from cluster Cepheids and estimate that at most 3.7% of M33 Cepheids are in these systems, resulting in a negligible contamination of 0.003 mag to our distance measurement. Our result, compared with other distance measurements from the literature, highlights the unprecedented reliability and precision of Cepheids as standard candles. ## Acknowledgements We thank Abigail Lee for discussions about the TRGB and Cepheid distances to M33. L.B. is deeply grateful to Arshia Jacob for her constant support and kindness during the preparation of this paper. This research has made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2018), as well as of the SVO Filter Profile Service4. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the STScI. Footnote 4: [http://svo2.cab.inta-](http://svo2.cab.inta-) csic.es/theory/fps/
2303.00090
Uncertainties in experiments on strongly-coupled vacuum field modification of superconductivity and ferromagnetism in solids
We discuss recent experiments in which fine particles of the organic superconductor Rb$_3$C$_{60}$ or the cuprate superconductor YBa$_2$Cu$_3$O$_{6+x}$ are held in a polystyrene film that is spin-coated on to a silicon substrate with or without an intervening gold, or another inert metallic layer. From SQUID magnetisation data for Rb$_3$C$_{60}$ there appears to be a striking and completely unexpected increase in the superconducting transition temperature from $30$ to $45$~K, which is ascribed to coupling between the electrons in the superconductor and vacuum fluctuations in the electromagnetic field just above the metallic film. We argue that this could be a non-intrinsic effect associated with the presence of solid oxygen in the Pyrex sample tube. We suggest that the ferromagnetic SQUID signal observed for YBa$_2$Cu$_3$O$_{6+x}$ particles in polystyrene could be attributed to ferromagnetic particles or magnetic clusters of unknown origin.
J. R. Cooper, L. Forró, A. Jánossy
2023-02-28T21:28:37Z
http://arxiv.org/abs/2303.00090v1
Uncertainties in experiments on strongly-coupled vacuum field modification of superconductivity and ferromagnetism in solids ###### Abstract We discuss recent experiments in which fine particles of the organic superconductor Rb\({}_{3}\)C\({}_{60}\) or the cuprate superconductor YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6+x}\) are held in a polystyrene film that is spin-coated on to a silicon substrate with or without an intervening gold, or another inert metallic layer. From SQUID magnetisation data for Rb\({}_{3}\)C\({}_{60}\) there appears to be a striking and completely unexpected increase in the superconducting transition temperature from 30 to 45 K, which is ascribed to coupling between the electrons in the superconductor and vacuum fluctuations in the electromagnetic field just above the metallic film. We argue that this could be a non-intrinsic effect associated with the presence of solid oxygen in the Pyrex sample tube. We suggest that the ferromagnetic SQUID signal observed for YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6+x}\) particles in polystyrene could be attributed to ferromagnetic particles or magnetic clusters of unknown origin. ## I Introduction For over a decade there has been much work on how strong coupling to light influences material properties and chemical reactions [1; 2; 3]. In most of the experiments the materials are placed in cavities or other resonators. The cavity increases the light intensity from an external source at certain optimal frequencies needed to reach the light-matter strong coupling regime. Here we focus on more recent claims that the electronic properties of some superconducting compounds can be strongly altered by coupling to vacuum field fluctuations near suitable polarizable materials [4; 5; 6]. These papers report a substantial increase of the superconducting transition temperature (\(T_{\rm c}\)) of Rb\({}_{3}\)C\({}_{60}\) and the appearance of a previously unknown ferromagnetic phase persisting even above ambient temperature in the superconductor YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6+x}\) (YBCO\({}_{6+x}\)). In order to observe these extraordinary changes, the materials were embedded in polymeric polystyrene (PS) and spin-coated on a gold (Au) layer. No external light source was applied to induce the large changes in electronic properties claimed. Earlier measurements of the infrared spectrum using resonators to enhance light intensity established that PS infrared vibrations can strongly couple to light from an external source. The new experiments in Refs. [4] and [5] show a slight difference in the coupling to an external light source of PS with or without embedded superconductors. The claims of large changes in the electronic properties of the superconductors attributed to vacuum field fluctuations are based on magnetometry of samples layered on to highly conducting Au or insulating silicon in the absence of any external light. The electronic properties of both superconductors in these studies have been thoroughly investigated since they were discovered more than three decades ago [7; 8; 9; 10; 11]. Thus reports of unexpected and unexplained large changes in their physical properties should be verified with utmost care. The works [4; 5] have been cited in over a hundred, mainly theoretical papers, but the primary article on superconductivity enhancement in Rb\({}_{3}\)C\({}_{60}\) has only appeared on the cond-mat archive [4] that is cited in a review [6] but not in a regular refereed scientific journal. 
Here we wish to draw attention to possible uncertainties in the assessment of the experiments, which may be useful for any researchers, particularly younger people, who are tasked with verifying the experimental data reported for Rb\({}_{3}\)C\({}_{60}\) and YBCO\({}_{6+x}\). We detail difficulties encountered in measuring the magnetization of small samples of chemically reactive materials. We do not comment on aspects of vacuum fluctuations that are not directly related to superconductivity, for example the significance of the optical measurements on the PS films [4] or low temperature quantum Hall experiments [12] or the effect of high intensity light on superconductivity [13]. II Uncertainty in assessment of experiments on enhancement of superconductivity in Rb\({}_{3}\)C\({}_{60}\) The superconducting transition temperature was determined from the temperature dependence of the magnetic moment measured by SQUID magnetometry. All measurements were performed while heating the sample from low temperatures in a 100 G magnetic field after cooling in 100 G or in zero field. Below a well-defined temperature, defined as the superconducting (s/c) transition temperature, a difference was observed between the zero field (ZFC) and the 100 G (10 mT) field-cooled (FC) data. The difference is attributed to hysteresis in the Meissner effect. This kind of hysteresis is well-established
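For researchers re-examining such data, the operational criterion described above (the temperature below which the ZFC and FC moments separate beyond the noise) can be implemented in a few lines; the threshold, array names, and synthetic data below are our own choices, for illustration only.

```python
import numpy as np

def splitting_temperature(temps, m_zfc, m_fc, n_sigma=3.0):
    """Highest temperature below which |m_fc - m_zfc| exceeds the noise.

    `temps` must be sorted in increasing order; the noise level is taken
    from the high-temperature (normal-state) part of the difference curve.
    """
    temps = np.asarray(temps)
    diff = np.asarray(m_fc) - np.asarray(m_zfc)
    noise = diff[temps > temps.max() - 10.0].std()      # normal-state scatter
    split = np.abs(diff) > n_sigma * max(noise, 1e-12)
    below = temps[split]
    return below.max() if below.size else None

# Synthetic example with an irreversibility onset near 30 K (emu units arbitrary):
T = np.linspace(5, 60, 111)
m_zfc = np.where(T < 30, -1e-6 * (30 - T) / 25, 0.0) + np.random.default_rng(2).normal(0, 1e-9, T.size)
m_fc = np.where(T < 30, -2e-7 * (30 - T) / 25, 0.0) + np.random.default_rng(3).normal(0, 1e-9, T.size)
print(splitting_temperature(T, m_zfc, m_fc))   # ~ 30 K
```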
2309.13909
Chinese herb medicine in augmented reality
Augmented reality is gradually becoming popular in education, providing a contextual and adaptive learning experience. Here, we develop a Chinese herb medicine AR platform based on 3dsMax and Unity that allows users to visualize and interact with herb models and learn the related information. Users scan a 2D herb picture with their mobile camera to trigger the presentation of the 3D AR model and corresponding text information on the screen in real time. The system shows good performance and high accuracy in identifying herbal medicines under interference and occlusion tests. Users can interact with the herb AR model through rotation, scaling, and viewing transformations, which effectively enhances learners' interest in Chinese herb medicine.
Qianyun Zhu, Yifeng Xie, Fangyang Ye, Zhenyuan Gao, Binjie Che, Zhenglin Chen, Dongmei Yu
2023-09-25T07:12:58Z
http://arxiv.org/abs/2309.13909v1
# Chinese Herb medicine in Augmented Reality ###### Abstract Augmented reality becomes popular in education gradually, which provides a contextual and adaptive learning experience. Here, we develop a Chinese herb medicine AR platform based the 3dsMax and the Unity that allows users to visualize and interact with the herb model and learn the related information. The users use their mobile camera to scan the 2D herb picture to trigger the presentation of 3D AR model and corresponding text information on the screen in real-time. The system shows good performance and has high accuracy for the identification of herbal medicine after interference test and occlusion test. Users can interact with the herb AR model by rotating, scaling, and viewing transformation, which effectively enhances learners' interest in Chinese herb medicine. Qianyun Zhu\({}^{1}\). Yijeng Xie\({}^{1,2}\) \(\cdot\)Fangyang Ye\({}^{1,3}\) \(\cdot\) Zhenyuan Gao\({}^{1,5}\) \(\cdot\) Binjie Che\({}^{5}\) \(\cdot\) Zhenglin Chen\({}^{1}\) \(\cdot\) Dongmei Yu\({}^{1}\) 1 School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, Shandong, China 2 Center of Precision Medicine and Healthcare, Tsinghua-Berkeley Shenzhen Institute, Shenzhen, Guangdong, China 3 Department of Automation, Hangzhou Dianzi University, Hangzhou, Zhejiang, China 4 Department of Software Engineering, Northeastern University, Liaoning, Shenyang, China 5 Hanhai Xingyun Digital Technology Co., Ltd, Tianjian, China Augmented reality, Chinese herbal medicine, herb recognition, Vuforia SDK **INTRODUCTION** Plants are always the key source of drug or treatment strategy in different traditional medicine systems. With the advent of the industrial revolution and the introduction of modern drugs, the use of herb was abandoned for a specific period of time[1]. However, the obstacles on the way of natural compounds studies have recently diminished mostly through using modern techniques. This has resulted in higher interest in using natural compounds in pharmaceutics[2, 3], and many people tend to choose plant-based medicines or products to improve their health conditions or as curative substances either alone or in combination with others. Herb medicine includes herbs, herbal materials (like plant parts) or preparations, processed and finished herbal products, active ingredient[4]. In recent years, the use of herbal product has beginning to revive hugely due to the of modern drugs, failure of modern therapies against chronic diseases, and microbial resistance[5-7]. Herb medicines are being used by 75-80% of the world population, especially those living in developing countries[8, 9]. In India, approximately 70% of the modern drugs are discovered from natural resources and the number of other synthetic analogs have been prepared from prototype compounds isolated from plants[10, 11]. The Thai government promotes and advocates the use of traditional and alternative health care modalities through scientific research and product development[12-14]. Chinese herb medicine (CHM) is accepted by people widely and develop with modern technologies. It was estimated that 11% of the total 252 drugs found in the essential medicine list of WHO are exclusive of plant origin[15, 16]. Multiple surveys reported that people with cancer commonly use herbs or herbal products to slow down the metastatic transition, supporting the immunity system and reducing stress[17]. The number of medals with herbal medicines is increasing[18]. 
Global pharmaceutical companies and researchers equipped with modern scientific knowledge, technology, idea and started to rediscover medicinal plants as a source of new drug candidates based on traditional knowledge[19, 20]. People study the effects of herbs on animals and the relationship between herbs and prescription drugs so that they can be used in clinical treatments[21-23]. However, the learning and popularization of herbs are limited by their great variety and high similarity, which makes it difficult for beginners to remember and to develop an interest in learning them. As never before, the characteristics of Augmented reality (AR) suggest that it can solve this kind of learning difficulty[24]. In addition to the 2D and 3D objects, digital assets such as audio and video files, textual information, and even olfactory or tactile information can be incorporated into users' perceptions of the real world. Collectively, these augmentations can serve to aid and enhance individuals' knowledge and understanding of what is going on around them[24-26]. AR deriving from Virtual Reality (VR) enriches and renders the real world with digital information and media such as 3D models, which overlays in the camera view of a smartphone or connected glasses in real-time. AR is known to be a virtual object that is generated by a computer through the real environment seen by a mobile phone, tablet, or AR glasses[27]. It integrates digital or virtual information such as images, audio, video, and haptic sensations with the natural environment seamlessly. AR is regarded as the blend or the'middle ground' between synthetic and real world[28]. The implementation of AR is defined by three characteristics: (a) the combination of real-world and virtual elements, (b) which are interactive in real-time, and which (c) are registered in 3D (i.e., the display of virtual objects or information is intrinsically tied to real-world loci and orientation)[29-31]. Human-computer interaction proceeds through smartphones or AR browsers as an emerging novel technology[32, 33]. AR has a broad spectrum of applications such as education, military, and medicine, which displays the contents informatively in any media such as video clips, animation, 3D models, etc[9, 34, 35]. AR technology mainly works through the identification of the target object, and then tracking the identified objects, after that imposed virtual images onto the tacked object which then present it by the display device[36, 37]. We apply AR to CHM with the goal of improving the learning and recognition of diverse herbs. There are thousands of Chinese herb plants, and we choose 88 types of plants as the initial learning objective, which are commonly used in medical treating and daily body caring. Those 88 plants cure a wide range of diseases including influenza, the coldness of body, renal dysfunction, and so on. They are highly representative in Chinese medicine because they have a great effect on body caring, which is the main topic in Chinese medicine. Herb in AR stimulates cognitive learning through the mobile application. The android application includes herb species information, morphological characteristics, ecological habits, and AR models. We can include more herb models in the future as a comprehensive learning platform for CHM. ## Results **Herbal Model Creation**. We collect photographs from all angles of the herbs in order to get a complete knowledge of their structure. 
Using 3ds Max to build 3D models of the actual shape of the herbs, and then we use V-Ray to render the models so that they are closer to the real material. The supplementary Figure.1 is a rendering and classification of all the herbs (Supplementary Figure.1). **System and Platform.** CHM recognize system has the function of accurately identifying herbs and displaying the 3D structure and information of herbs. We use the mobile tablet terminal of the android system to test the display effect and usability of the herb learning platform. Tablet is more convenient than desktop, which has larger calculation capability than the mobile phone. The tablet is portable for location-independent learning and students could visualize 3D herb models to explore herb structural detail any time. **Function test.** The application interface is user-friendly including the initial view, the recognition interface, and herb information (Figure.1). In the test phase, we use an external camera (TianTianQuan, JW-02, 1920*1080) to capture images. In fact, when users use a mobile device officially, it's the device's built-in camera that gets the image. When the camera scans the picture of CHM, the corresponding 3D AR herb model and its information will pop up on the screen. The herb learning application has good interactivity that exposes knowledge about CHM to participants. Users can learn herb details by selecting the function buttons including herb species, morphological characteristics, and ecological habits (Figure.1c). The species information contains the herb's Chinese and English name, source area, and usage. Morphological characteristics describe the anatomy features including the roots, stems, leaves, and seeds. Ecological habits introduce the suitable growth environment and life cycle of the herb. All herb background knowledge coexist with the 3D herb model. **Confusion test.** To test the effectiveness of the herb AR display module in the presence of visual interference, we conduct three controlled trials: (i) Camera rotator We rotate the camera angles by 45, 90, and 180 degrees respectively to verify the system performance after camera rotation (Figure.2a). The results show that the system is robust to camera rotation and the same 3D herb model appears without perturbation. (ii) Occlusion test We cover the picture with blank paper to show whether the system could identify the herb after losing 50% of the information (Figure.2b). The results are encouraging, where the partial picture is able to trigger the display of the whole 3D model. It's worth noting that when select one herb with a small area and less characteristic the system may not identify it (supplementary Figure.2b). (iii) Interference test We test the accuracy of the recognition module by placing other plants next to the correct herb picture. As shown in Figure.2e, the application can identify the correct herbal information. (iv) Color test We adjust the herb image to grayscale to verify that the system can still work after the image loses color information. As shown in Figure.2d, the test result is exactly as we expected. All the experiments show that the module can overcome different visual interference. The system is robust to Figure.1: The user interfaces of the application. **a** The initial design. **b** Recognition interface. **c** Herbal information. variation in the 2D image that increases the generalization and applicability of the platform. 
## Discussion In recent years, AR technology becomes mature and make great achievements in various applications. With the maturity of educational information technology, AR technology has been gradually integrated into the field of education[38]. We develop an AR CHM teaching system, which helps the user to enhance his vision and interest in the herbs. In our current work, we have only added 88 herbs commonly found in ten regions of China. In fact, there are about 12,000 kinds of medicinal plants in China, among which more than 5,000 kinds of Chinese herbs have been used[39]. For future works, we will add more herb medicine to our database and design more human-computer interaction features. Meanwhile, we will increase the information about the effectiveness of herbs and simplify all information to make it more readable and easier to remember. In addition, we also hope to increase in the database rather than herbal medicines, including animal drugs and mineral drugs. After system optimization, we will apply this herb-learning platform to real class to create a new learning environment that is interactive, interesting, imaginative, and intelligent. ## Method AR is a collection of real-world and digital information. Users can visualize the virtual models superimposed with the real world and interact with the virtual environment. ### Model creation and system interface design. We build the AR models with 3dsMax 2014 and Unity 2017 in the Android environment configured for the application export. The AR operations implement with the object-oriented programming C# package, The AR system consists of an autofocus camera and mid-range computer with a 3.6 GHz quad-core Intel i9-9900K processor and NVIDIA GeForce RTX 2080 graphics card. We construct and render 88 3D models for commonly CHM contains Lingzhi, Aive, Bajitian, and other 85 herb plants. Model building requires the tools of Edit Mesh and UVW Mapping in 3ds Max. The final representative model is demonstrated in Figure.3. The user interface (UI) scenarios and functionality of the AR application are realized in Unity. We create the cameras, canvas, lights, and objects as the scenes and associate the target object with scripts including text, picture, and button for user interaction through the Unity Engine class library from C#. ### The development of the application. The display module for AR herb is the Vuforia SDK[40]. The Vuforia QCAR algorithm calculates the similarity between scanned and the target image from the database to trigger the display of the corresponding 3D models. The realization of the herb AR display module involves two stages (Figure.3). Specifically, in the first stage (Figure.4a), we upload herb pictures that modeled by ourselves to the Vuforia target database and generate a unity package file. Second, we pack the total herb model by binding the file with the corresponding 3D virtual herb model in Unity. Besides, we unify all the herb information into a "txt." file, which includes herb species, morphological characteristics, and ecological habits. Then we add the "txt" file in the model as supplementary information. Third, we launch the herb model app on the android platform. In the next stage (Figure.4b), when we show the herb image to the app, the app calculates the similarity between scanned image and target image from the Vuforia target database to trigger the corresponding 3D model display. And users can feel the herb visual effect in the real work after the bound 3D model rendered by Unity. 
Users can learn herb details such as herb species, morphological characteristics, and ecological habits in the app, where they interact with the 3D model by rotating and scaling it freely. Figure.2: Four kinds of visual interference tests. **a** Camera rotation by 45, 90, and 180 degrees. **b** Occlusion test. **c** Interference test. **d** Color test.
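The recognition stage outlined in the Method section relies on Vuforia's proprietary image-target matching. As a rough, non-equivalent illustration of the underlying idea (matching local features of a camera frame against a registered target picture), a sketch with OpenCV ORB features is given below; all names and thresholds are our own assumptions, and this is not the Vuforia QCAR algorithm.

```python
import cv2

def match_score(frame_gray, target_gray, min_matches=25):
    """Crude similarity between a camera frame and a registered target image.

    Returns (is_recognized, number_of_good_matches) using ORB features and
    brute-force Hamming matching with a cross-check.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    _, des1 = orb.detectAndCompute(frame_gray, None)
    _, des2 = orb.detectAndCompute(target_gray, None)
    if des1 is None or des2 is None:
        return False, 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]   # ad-hoc distance cut
    return len(good) >= min_matches, len(good)

# Usage sketch: score every registered herb picture and keep the best hit.
# frame = cv2.cvtColor(cv2.imread("camera_frame.jpg"), cv2.COLOR_BGR2GRAY)
# targets = {"Lingzhi": cv2.imread("lingzhi.jpg", cv2.IMREAD_GRAYSCALE)}
# best = max(targets, key=lambda name: match_score(frame, targets[name])[1])
```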
2309.14900
Pre-training-free Image Manipulation Localization through Non-Mutually Exclusive Contrastive Learning
Deep Image Manipulation Localization (IML) models suffer from training data insufficiency and thus heavily rely on pre-training. We argue that contrastive learning is more suitable to tackle the data insufficiency problem for IML. Crafting mutually exclusive positives and negatives is the prerequisite for contrastive learning. However, when adopting contrastive learning in IML, we encounter three categories of image patches: tampered, authentic, and contour patches. Tampered and authentic patches are naturally mutually exclusive, but contour patches containing both tampered and authentic pixels are non-mutually exclusive to them. Simply abnegating these contour patches results in a drastic performance loss since contour patches are decisive to the learning outcomes. Hence, we propose the Non-mutually exclusive Contrastive Learning (NCL) framework to rescue conventional contrastive learning from the above dilemma. In NCL, to cope with the non-mutually exclusivity, we first establish a pivot structure with dual branches to constantly switch the role of contour patches between positives and negatives while training. Then, we devise a pivot-consistent loss to avoid spatial corruption caused by the role-switching process. In this manner, NCL both inherits the self-supervised merits to address the data insufficiency and retains a high manipulation localization accuracy. Extensive experiments verify that our NCL achieves state-of-the-art performance on all five benchmarks without any pre-training and is more robust on unseen real-life samples. The code is available at: https://github.com/Knightzjz/NCL-IML.
Jizhe Zhou, Xiaochen Ma, Xia Du, Ahmed Y. Alhammadi, Wentao Feng
2023-09-26T12:58:44Z
http://arxiv.org/abs/2309.14900v2
Pre-training-free Image Manipulation Localization through Non-Mutually Exclusive Contrastive Learning ###### Abstract Deep Image Manipulation Localization (IML) models suffer from training data insufficiency and thus heavily rely on pre-training. We argue that contrastive learning is more suitable to tackle the data insufficiency problem for IML. Crafting mutually exclusive positives and negatives is the prerequisite for contrastive learning. However, when adopting contrastive learning in IML, we encounter three categories of image patches: tampered, authentic, and contour patches. Tampered and authentic patches are naturally mutually exclusive, but contour patches containing both tampered and authentic pixels are non-mutually exclusive to them. Simply abnegating these contour patches results in a drastic performance loss since contour patches are decisive to the learning outcomes. Hence, we propose the Non-mutually exclusive Contrastive Learning (NCL) framework to rescue conventional contrastive learning from the above dilemma. In NCL, to cope with the non-mutually exclusivity, we first establish a pivot structure with dual branches to constantly switch the role of contour patches between positives and negatives while training. Then, we devise a pivot-consistent loss to avoid spatial corruption caused by the role-switching process. In this manner, NCL both inherits the self-supervised merits to address the data insufficiency and retains a high manipulation localization accuracy. Extensive experiments verify that our NCL achieves state-of-the-art performance on all five benchmarks without any pre-training and is more robust on unseen real-life samples. [https://github.com/Knightzj/NCL-IML](https://github.com/Knightzj/NCL-IML). ## 1 Introduction Thrilling advances in media techniques grant us easier and easier access to manipulate images. Image Manipulation Localization (IML) is then indispensable for defensive information forensics and is heavily invested by the information security industry. Today, data insufficiency is the most prominent issue in building deep IML models. As dense annotations and expertise for tamper identification are exorbitant, public datasets for IML are all tiny-sized (with a few hundred to a few thousand images) and severely insufficient for training deep CNNs. Consequently, major deep IML methods carry out pre-training on additional large-scale datasets. In general, pre-training of IML models relies on synthesized datasets. On the one hand, synthesized datasets vanish the high labeling costs and pre-training on synthesized datasets refrains from overfitting. On the other hand, employing synthesized datasets to conduct pre-training impedes fair comparisons among models and even jeopardizes the model generalizability. Pre-training is crucial to the model performance, and for fair comparisons, models of the same task commonly practice their pre-training on the same dataset. However, synthesized pre-training datasets for IML models are strikingly different in annotation quantity and quality. For instance, ManTra-Net [34] grounds on a self-collected, pixel-wise labeled dataset of 102,028 Figure 1: Three categories of patches in a manipulated image and the non-mutually exclusive relations among them. There are three categories of patches in a manipulated image: tampered, authentic, and contour patches. In the middle picture, we depict them accordingly with red, blue, and purple squares. Only tampered and authentic patches are mutually exclusive. 
When the decisive contour patches are involved, non-mutually exclusivity occurs in contrastive learning. Best viewed in color. images and 385 manipulation types for pre-training; RGB-N [38] employs a randomly synthesized dataset of more than 42,000 images; BusterNet [33] entails a synthesized dataset of 100,000 copy-moved images for pre-training; MVSS [9] adopts a synthesized dataset of 8,4000 images. Faithful evaluations for models pre-trained on different synthesized datasets become impossible. Moreover, unlike real tampered images, these naively synthesized images severely lack elaborate post-processing to cover their manipulation traces or artifacts [5, 29, 9]. In other words, the sampling process of synthesized datasets is biased from the sampling process of manual build datasets [36, 37]. A model learned on such a dataset with sampling bias is short in generalizability, and measuring this mode on tiny-sized, non-homologous benchmarks cannot fully disclose its poor performance under real cases. To address the insufficient data problem without introducing such a tricky pre-training strategy, we advocate adopting contrastive learning in IML. On the one hand, self-supervised contrastive learning can yield massive contrastive pairs from real tampered images. These contrastive pairs boost the training sample number by at least one or two orders of magnitude without causing sampling bias or unfaithful evaluations. On the other hand, manipulation leaves artifacts in images, and artifacts cause feature discrepancies between tampered and authentic regions. This is the essential clue for identifying tampered areas by human experts. The contrastive learning objective explicitly follows this clue and reveals the vital feature discrepancies by encouraging the compactness between positive pairs and the margin between negative pairs. Although recent researches suggest pixel-level contrastive learning for pixel predictions [35], patch-wise contrastive learning is still more suitable for IML. Because manipulations rarely happen pixel-by-pixel, the patch-level features are proven to be outstanding in characterizing manipulation traces or artifacts [22]. Thus, in our method, positives and negatives are naturally the tampered and authentic image patches of pure tampered or authentic pixels. Image patches are in a fixed size, but the manipulated regions are arbitrarily shaped and sized. As shown in the middle picture of Figure 1, when sampling along the contour of manipulated regions, tampered and authentic pixels are inevitably mingled within one image patch. Then, we have the third patch, contour patches. Apparently, contour patches are neither mutually exclusive to tampered patches nor authentic patches. Conventional contrastive learning designed to handle the mutually exclusive relation between binary sets will then malfunction under such a trilateral, non-mutually exclusive circumstance. However, simply discarding the contour patches and merely employing the tampered and authentic patches to conduct contrastive learning is not feasible. Previous studies [23, 28, 21, 22] show that artifacts assemble along the borders of tampered areas. Therefore, discarding contour patches means throwing away samples with the richest artifacts' information. Besides, contour patches are the hard positives or negatives in contrastive learning since they contain both tampered and authentic pixels at the same time. Hard samples are decisive to the contrastive learning outcomes. 
Discarding contour patches also eliminates most of the hard samples in contrastive learning. In short, we are facing such a dilemma: the existing contrastive learning paradigm is incompatible with the non-mutually exclusive contour patches, but learning without contour patches results in a significant performance gap, and learning without the contrastive paradigm leads to model generalization and evaluation issues. Therefore, a brand-new learning framework that follows the contrastive learning paradigm and copes with non-mutual exclusivity is the key to saving IML models from this dilemma. Hence, we propose the _Non-mutually exclusive Contrastive Learning_ (NCL) framework. Every contour patch is partial-tampered and partial-authentic. Therefore, we can regard a contour patch as a hard positive in contrastive learning if we only count its tampered part. Likewise, this counter patch can be simultaneously regarded as a hard negative if only its authentic parts are counted. That is, a contour patch can be transferred into a hard positive or a hard negative referring to its partial information. Following this role-switching characteristic, we constructed a pivot structure with dual branches on the shallow layers of the backbone to squeeze the positive and negative parts accordingly from the contour patches. The name of the pivot indicates that it switches contour patches between the role of hard positives and hard negatives to constitute contrastive pairs. Thus, the trilateral, non-mutually exclusive contrast among tampered patches (positives), authentic patches (negatives), and the contour patches is then disentangled into three binary, mutually exclusive, contrastive pairs of {_positive, negative_}, {_positive, hard negative_}, {_negative, hard positive_}. The NCL loss is the sum of the three pair-wise contrastive losses. In addition, the pivot structure corrupts the spatial correlation among contour patches. Therefore, on the decoder side, we devise the _pivot-consistent loss_ with auxiliary classifiers to ensure the pixel-wise spatial relations are captured and preserved by the deeper layers of the encoder. We train our NCL-based method from scratch without additional datasets or pre-training stages. With only 5-10% of the total training data compared with pre-training-based methods, our model outperforms current pre-training-based approaches on all five public IML benchmarks. Despite this, deep CNNs are prone to overfitting on such small public benchmarks. Therefore, we further use non-homogeneous training and testing datasets to examine model generalization ability. The results verify that NCL endows our IML model with better localization accuracy and robustness. Last but not least, similar to contrastive learning, NCL also holds the plug-in merit. Regardless of backbone architecture, NCL functions well. In summary, our main contributions are quad-folded: * **Free of Additional Data**. To the best of our knowledge, we are the first work bringing contrastive learning in IML to address the insufficiency of training data and drawbacks caused by pre-training. * **Non-Mutually Exclusive Contrast**. As far as we know, we are also the first to handle non-mutual exclusive, trilateral relations through contrastive learning. Our Non-mutually exclusive Contrastive Learning (NCL) framework can serve other tasks like semantic segmentation or fine-grained object detection. * **Top Benchmark Performance**. 
Our method uses less and inferior training data but achieves state-of-the-art performances as well as the top model generalization ability on all five public benchmarks. * **Plug-in Merit**. Our method functions under both CNN and Transformer-styled backbones. Backbone selection will not break the integrity of NCL. ## 2 Related Work **Image Manipulation Localization**. Prior IML methods seek for pre-training strategy, hand-crafted features, and the self-adversarial paradigm to solve the data insufficiency issue. As discussed in the previous section, methods [29, 9, 8, 36] involving hand-crafted features or pre-training mechanisms are not the proper solution for the insufficient data issue. We here view the other Generative-Adversarial Network (GAN) based methods in IML. GAN-based solutions [19, 18, 39] also reach state-of-the-art performances without additional datasets. However, primary GAN-based methods are sensitive to the manipulation types. [18] only works on copy-moved images; [19] is practical merely for splicing manipulation. Our most related study is the self-adversarial GAN [39]. They also noticed the drawbacks of pre-training and built a self-adversarial training strategy in a dual-attention GAN to localize forged regions precisely. However, GAN-based methods do not explicitly follow the clue of image manipulation, which is the discrepancies between tampered and authentic regions, thereby undermining the model interpretability. Moreover, the generated training samples are still different from the real ones, thereby undermining model performances on real-life images. Our proposed NCL reveals the essential tamper-caused feature differences as well as boosts the number of real training samples. **Contrastive Learning**. Contrastive learning [6] is emerging and fast developing in self-supervised and unsupervised visual representation areas. Conventional contrastive learning is commonly applied in tasks whose problem space is bisected. Binary and mutually exclusive relations are the fundamental assumption to apply existing contrastive learning. This is why existing contrastive IML models [17, 25, 32] only conduct comparisons on images rather than image patches. As far as we know, current studies can only handle binary (similar or dissimilar) contrasts [14, 27]. Our NCL extends the contrastive learning paradigm into non-mutually exclusive relations among trilateral sets, thereby retains the information-rich contour patches and gains surpassing performances in the IML task. ## 3 Method ### Basic Encoder-Decoder Structure We adopt DeepLabV3+ [4] as the basic encoder-decoder structure of our IML model since it has been adopted by many other IML models as the baseline [13, 9]. Do notice, the base mode selection or the backbone selection will affect the efficacy of our NCL. Thus, the encoder backbone in Figure 2 is ResNet101 [15] blocks with atrous convolution in the last few blocks. The Atrous Spatial Pyramid Pooling (ASPP) block is likewise applied. Afterward, the encoded feature of size (64\(\times\) 64) is passed to the decoder. The decoder adopts two upsampling modules. The encoder output is twice upsampled by a factor of 4. In short, our basic encoder-decoder applies the same network structure and training settings as the DeepLabV3+ model. ### Non-Mutual Exclusive Contrastive Learning **Problem Formulation**. For conventional contrastive learning, define the problem domain as the universal set \(\mathbb{U}\). 
As the conventional contrastive learning part of Figure 1 shows, we have the set of positives \(\mathbb{P}\) and the set of negatives \(\mathbb{N}\), where: \[\begin{split}\mathbb{P}\cup\mathbb{N}&=\mathbb{U}\\ \mathbb{P}\cap\mathbb{N}&=\emptyset\end{split} \tag{1}\] \(\emptyset\) indicates the mutual exclusivity between positives and negatives. Let \(p\) denote a tampered image patch, i.e., an element of \(\mathbb{P}\). For any \(p\in\mathbb{P}\), we further denote \(p_{j}\in\mathbb{P},p_{j}\neq p\); and \(n_{i}\in\mathbb{N}\). Then, the conventional contrastive learning objective is: \[\arg\max_{f}\{\sum_{i,j}\phi(f(p),f(n_{i}))-\phi(f(p),f(p_{j}))\} \tag{2}\] \(f(\cdot)\) is the learned feature representation of an image patch. \(f(p_{j})\) and \(f(n_{i})\) are the red and blue cubes in the IML feature map in Figure 2. \(\phi(\cdot,\cdot)\) represents the measured distance, namely the similarity, between two feature vectors. Notation is unified throughout this paper: sets of image patches are denoted by upper-case letters, image patches are represented by lower-case letters, and \(f(\cdot)\) is the learned feature representation of an image patch. However, for the NCL illustrated in Figure 1, we have: \[\begin{split}&\mathbb{N}\cup\mathbb{P}\cup\mathbb{C}=\mathbb{U}\\ &\mathbb{P}\cap\mathbb{N}=\emptyset;\mathbb{C}\cap\mathbb{N}= \mathbb{C}^{-};\mathbb{C}\cap\mathbb{P}=\mathbb{C}^{+}\end{split} \tag{3}\] \(\mathbb{C}\) is the set of all contour patches. \(\mathbb{C}^{+}\) and \(\mathbb{C}^{-}\) denote \(\mathbb{C}\)'s intersections with the positive and negative sets, i.e., the positive and negative pixels mingled within the contour patches. For contrastive learning, positive pairs can easily be formed by picking another element of the same set, while (1) and (2) show that it is the empty intersection that tells us how to form the important negative pairs. Therefore, we first rewrite (3) in the same form as (1): \[\begin{split}&\mathbb{N}\cup\mathbb{P}\cup\mathbb{C}=\mathbb{U}\\ &\mathbb{P}\cap\mathbb{N}=\mathbb{C}^{+}\cap\mathbb{N}=\mathbb{C}^ {-}\cap\mathbb{P}=\emptyset\end{split} \tag{4}\] Then, according to (1), we can transfer the non-mutually exclusive contrast written in (3) into three binary contrasts between the pairs \((\mathbb{P},\mathbb{N})\), \((\mathbb{C}^{+},\mathbb{N})\), and \((\mathbb{C}^{-},\mathbb{P})\). To carry out the three pair-wise comparisons, we first need to find \(\mathbb{C}^{+}\) and \(\mathbb{C}^{-}\) defined in (3). However, the elements of \(\mathbb{C}^{+}\) and \(\mathbb{C}^{-}\) are patch fragments or pixels, and the basic encoder network cannot yield features for patch fragments. So, we design the pivot network to directly use contour patches as the input and generate feature representations for \(\mathbb{C}^{+}\) and \(\mathbb{C}^{-}\). That is, the pivot network switches the role of contour patches by learning two mapping functions between \((\mathbb{C},\mathbb{C}^{+})\) and \((\mathbb{C},\mathbb{C}^{-})\). Naturally, the pivot network should have two similar branches with the same input.
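To make the patch taxonomy of (3) concrete, the following minimal Python sketch partitions an image grid into tampered, authentic, and contour patches from a binary ground-truth forgery mask. It is an illustration of the set definitions only, not the implementation used here; the patch size and toy mask are assumptions.

```python
import numpy as np

def partition_patches(mask: np.ndarray, patch: int):
    """Split an image into tampered (P), authentic (N) and contour (C) patch index sets.

    mask : binary ground-truth forgery mask of shape (H, W), 1 = tampered pixel.
    patch: side length of the square patches; H and W are assumed divisible by it.
    """
    H, W = mask.shape
    P, N, C = [], [], []
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            ratio = mask[y:y + patch, x:x + patch].mean()  # fraction of tampered pixels
            if ratio == 1.0:
                P.append((y, x))      # fully tampered  -> positive
            elif ratio == 0.0:
                N.append((y, x))      # fully authentic -> negative
            else:
                C.append((y, x))      # mixed -> contour patch (non-mutually exclusive)
    return P, N, C

if __name__ == "__main__":
    toy_mask = np.zeros((64, 64), dtype=np.uint8)
    toy_mask[20:45, 10:30] = 1        # a rectangular "tampered" region
    P, N, C = partition_patches(toy_mask, patch=8)
    print(len(P), "tampered,", len(N), "authentic,", len(C), "contour patches")
```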
**Pivot Network**. Before building the detailed layout of the pivot network, we need to further consider its input. Training the pivot network also requires an adequate number of contour patches, but selecting a small patch size to generate more contour patches also leads to a small number of pixels in one image patch. Then, some elements in \(\mathbb{C}^{+}\) or \(\mathbb{C}^{-}\) may contain only a trivial number of pixels and are improper for training the pivot network. Hence, in a single image, we concatenate all contour patch features into one entire embedding \(\mathfrak{p}\), and send \(\mathfrak{p}\) as the input of the pivot network to ensure the learning outcomes are significant enough for comparison. In Figure 2, this concatenation assembles the purple cubes into a strip of size \((k\times C\times W\times H)\), where \(k=card(\mathbb{C})\) and \(C,W,H\) are the channel, width, and height of one contour feature; \(card(\cdot)\) denotes the cardinality, i.e., the number of elements, of the set \(\mathbb{C}\). On the one hand, with \(k=card(\mathbb{C})\), we concatenate the contour patch features into a single vector \((k\times C\times W\times H)\). This vector aggregates the contour patch features of an entire image to address the model inefficiency when only a few contour patches exist. On the other hand, the pivot network flattens this \((k\times C\times W\times H)\) vector into a fixed-size \((1\times C\times W\times H)\) vector, which further helps to deal with the varying size of \(k\) in feature processing. The detailed structure of the pivot network is depicted in Figure 2 (b) through pink rectangles and green arrows.

Figure 2: (a): General network structure of our NCL framework. (b): Detailed Pivot network structure. Green-colored arrows signify the flow of conducting non-mutually exclusive contrasts through the Pivot network to generate the Non-mutually Exclusive Contrastive Learning (NCL) loss. Ocher-colored arrows indicate the flow that generates the Pivot-Consistent (PC) loss. The feature map output by the first encoder block is point-wise classified into tampered (red), authentic (blue), and contour (purple) features according to the ground truth. Forgery masks in yellow rectangles are the ground truth in different sizes. Feature sizes are enclosed in brackets.

Then, we design two symmetrical branches for our pivot network. These branches share the same input and have the same structure. \(\mathfrak{p}\) is first processed by a \((1\times 1)\) convolution. This \((1\times 1)\) convolution kernel flattens \(\mathfrak{p}\) into the shape of \((1\times C\times W\times H)\). Moreover, this \((1\times 1)\) kernel projects \(\mathfrak{p}\) into a latent Hilbert space \(\mathcal{H}:\mathbb{R}^{C\times W\times H}\), where \(f(p_{j})\) and \(f(n_{i})\) also reside and the similarity between features can be uniformly measured by \(\phi(\cdot,\cdot)\). BN and ReLU are the batch normalization and ReLU activation layers. The pivot network constructs the mapping \(f(\cdot)\) between the input set \(\mathbb{C},(c\in\mathbb{C})\) and the output sets \(\mathbb{C}^{+},(c^{+}\in\mathbb{C}^{+})\) and \(\mathbb{C}^{-},(c^{-}\in\mathbb{C}^{-})\). So, \(f(\cdot)\) is expected to satisfy: (1) \(\mathbb{C}^{+}\) and \(\mathbb{C}^{-}\) benefit the IML accuracy; (2) \(\mathbb{C}^{+}\) and \(\mathbb{C}^{-}\) are smooth manifolds to ensure the backpropagation of the NCL loss; since \(\mathbb{C}\) is a smooth manifold (a limited Euclidean space), \(f(\cdot)\) should be a bijection; (3) no information is lost after the mapping, meaning we can assemble \(c^{+}\) and \(c^{-}\) back into \(c\) through some binary operation \((\cdot)\): \(c^{+}\cdot c^{-}=c\), \(c^{+}\cdot c=c\), \(c^{-}\cdot c=c\). Thus, we have a group \((G,\cdot)\), where \(G=\mathbb{C}^{+}\cup\mathbb{C}^{-}\). \(G\) is a Lie group because: \(\square\): The group inverse \(G\to G\) is smooth according to (2).
\(\square\): The group product \(G\times G\to G\) is smooth due to (3). Therefore, the outputs of the pivot network (\(c^{+}\) and \(c^{-}\)) are Lie group elements. We then take the pivot network as a smooth mapping function and borrow the \(\mathfrak{se}\) notation from Lie group theory, writing the outputs of the two branches as \(\mathfrak{se}^{+}(\mathfrak{p})\) and \(\mathfrak{se}^{-}(\mathfrak{p})\). \(\mathfrak{se}^{+}(\cdot)\) and \(\mathfrak{se}^{-}(\cdot)\) merely signify the feature transformation functions learned by the pivot network; we cannot guarantee that they are differentiable manifolds. \(\mathfrak{se}^{+}(\mathfrak{p})\) and \(\mathfrak{se}^{-}(\mathfrak{p})\) are the light red and light blue cubes in Figure 2 (b), and their sets are the desired feature representations of \(\mathbb{C}^{+}\) and \(\mathbb{C}^{-}\). An intuitive explanation of \(\mathfrak{se}^{+}(\mathfrak{p})\) and \(\mathfrak{se}^{-}(\mathfrak{p})\) is that they are special positive and negative features squeezed by the pivot network from the aggregated feature \(\mathfrak{p}\), while common positive and negative features are generated by the backbone network from physically existing image patches. From this point of view, the pivot network swings the role of the pivot between positives and negatives like a pendulum. Based on \(f(\cdot)\) and \(\phi(\cdot,\cdot)\) in \(\mathcal{H}\), \(\mathfrak{se}^{+}(\cdot)\) and \(\mathfrak{se}^{-}(\cdot)\), we formulate the NCL learning objective as: \[\arg\max_{f,\mathfrak{se}^{+},\mathfrak{se}^{-}}\{\sum_{i,j}\phi(f(p),f(n_{i})) -\phi(f(p),f(p_{j}))\}+ \tag{5}\] \[\{\sum_{i,j}\phi(\mathfrak{se}^{+}(\mathfrak{p}),\mathfrak{se}^{ -}(\mathfrak{p}))-\phi(\mathfrak{se}^{+}(\mathfrak{p}),f(p_{j}))\}+\] \[\{\sum_{i,j}\phi(\mathfrak{se}^{+}(\mathfrak{p}),\mathfrak{se}^{ -}(\mathfrak{p}))-\phi(\mathfrak{se}^{-}(\mathfrak{p}),f(n_{i}))\}\] **Non-Mutually Exclusive Contrast Loss**. We could indeed construct the NCL loss function according to (5). But, as the pivot network yields one \(\mathfrak{se}^{+}(\mathfrak{p})\) and one \(\mathfrak{se}^{-}(\mathfrak{p})\) for each manipulated image, \(\phi(\mathfrak{se}^{+}(\mathfrak{p}),\mathfrak{se}^{-}(\mathfrak{p}))\) is independent of the summation indices \(i,j\) and becomes a constant in the loss accumulation process. Such a constant undermines the diversity of contrastive pairs. Hence, we make minor substitutions in the construction of positive pairs and further refine (5) as: \[\arg\max_{f,\mathfrak{se}^{+},\mathfrak{se}^{-}}\{\sum_{i,j}\phi(f (p),f(n_{i}))-\phi(f(p),f(p_{j}))\}+ \tag{6}\] \[\{\sum_{i,j}\phi(\mathfrak{se}^{+}(\mathfrak{p}),f(n_{i}))-\phi( \mathfrak{se}^{+}(\mathfrak{p}),f(p_{j}))\}+\] \[\{\sum_{i,j}\phi(\mathfrak{se}^{-}(\mathfrak{p}),f(p_{j}))-\phi( \mathfrak{se}^{-}(\mathfrak{p}),f(n_{i}))\}\] Through our pivot network, in (6), NCL reforms the non-mutually exclusive relation among trilateral image patches into three mutually exclusive, pair-wise, binary comparisons connected by \(``+"\). This is drawn as the NCL supervision in Figure 2.
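For illustration, the two symmetrical pivot branches (each a \(1\times 1\) convolution followed by BN and ReLU) can be sketched in PyTorch as below. This is a sketch, not the implementation used here: in particular, how the \((k\times C\times W\times H)\) strip is reduced to a \((1\times C\times W\times H)\) embedding is not fully specified above, so mean pooling over the \(k\) contour patches is used as a stand-in assumption.

```python
import torch
import torch.nn as nn

class PivotBranch(nn.Module):
    """One pivot branch: 1x1 convolution -> BatchNorm -> ReLU."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.proj(x)

class PivotNetwork(nn.Module):
    """Dual-branch pivot: squeezes a hard-positive view se+(p) and a hard-negative
    view se-(p) from the concatenated contour-patch features p of shape (k, C, W, H)."""
    def __init__(self, channels: int):
        super().__init__()
        self.pos_branch = PivotBranch(channels)   # yields se+(p)
        self.neg_branch = PivotBranch(channels)   # yields se-(p)

    def forward(self, contour_feats):
        # Aggregate the k contour features into a single (1, C, W, H) embedding.
        # Mean pooling over k is an assumption standing in for the flattening step.
        p = contour_feats.mean(dim=0, keepdim=True)
        return self.pos_branch(p), self.neg_branch(p)

if __name__ == "__main__":
    k, C, W, H = 7, 256, 16, 16
    feats = torch.randn(k, C, W, H)               # features of k contour patches
    se_pos, se_neg = PivotNetwork(C)(feats)
    print(se_pos.shape, se_neg.shape)             # both (1, 256, 16, 16)
```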
For simplification, we give \(p\) a subscript by letting \(p=p_{m}\), and mark \(e_{x}^{y}=\exp(\phi(f(x),f(y))/\tau)\), \(e_{x}^{-}=\exp(\phi(\mathfrak{se}^{-}(\mathfrak{p}),f(x))/\tau)\), and \(e_{x}^{+}=\exp(\phi(\mathfrak{se}^{+}(\mathfrak{p}),f(x))/\tau)\), where \(\tau\) is the temperature parameter. Referring to (6), the NCL loss function is: \[L_{NCL}=\frac{1}{m\times j}\sum_{m}\sum_{j}\log\frac{e_{p_{m}}^{p_{j}}}{e_{p_{m}}^{p_{j}}+\sum_{i}e_{p_{m}}^{n_{i}}}+ \tag{7}\] \[\frac{1}{j}\sum_{j}\log\frac{e_{p_{j}}^{+}}{e_{p_{j}}^{+}+\sum_{i}e_{n_{i}}^{+}}+\frac{1}{i}\sum_{i}\log\frac{e_{n_{i}}^{-}}{e_{n_{i}}^{-}+\sum_{j}e_{p_{j}}^{-}}\] Last but not least, we explored where exactly to place the pivot network. Some previous works [3] truncate deep CNNs at different layers and reveal that earlier-truncated networks provide better features for forgery detection. Besides, an early-truncated network has a shallow layout, small receptive fields, and a large feature map, which ideally meets the requirement of a small patch size in NCL. We therefore divide ResNet101 into convolution blocks as in the original paper [15] and explore the feature maps yielded by each ResNet101 block. As expected, the experimental results verify that the feature map after the first block is the most suitable one. In the Experiments section, we provide more detailed information about the selection of the image patch size for NCL.

### Pivot-consistent Loss

The pivot network applies convolution on concatenated contour patches, which corrupts the spatial correlations within and among contour patches. [16] has shown that spatial information is vital in IML. Therefore, we develop a Pivot-Consistent (PC) loss on the decoder side to ensure that the spatial correlation of contour patches remains after the pivot network. The PC loss assigns extra weight \(\mu\) to contour pixels in the basic pixel-wise BCE loss to enforce the spatial connection among contour pixels. However, the number of contour pixels is far smaller than that of manipulated or authentic pixels. To avoid overfitting, as depicted by the ocher arrows on the decoder side in Figure 2 (a), we employ auxiliary classifiers [7] to accumulate the PC loss gradually through each upsampling step. After each upsampling, we shrink the ground truth to the same size as the feature map; pixel-wise IML supervision can then be imposed through the shrunken forgery masks. We slightly abuse the lower-case notation here. Denote by \(t\) the pixels in an image, by \(\hat{t}\) the contour pixels, and by \(\mu\) the extra weight. \(\gamma(\cdot)\) is the ground-truth label of a pixel and \(\theta(\cdot)\) is the label predicted by our network for a pixel; both output binary values. Then, our PC loss is: \[L_{PC}=\frac{\mu}{\hat{t}}\sum_{\hat{t}}(\gamma(\hat{t})\log( \theta(\hat{t}))+(1-\gamma(\hat{t}))\log(1-\theta(\hat{t}))) \tag{8}\] \[+\frac{(1-\mu)}{t}\sum_{t}(\gamma(t)\log(\theta(t))+(1-\gamma(t)) \log(1-\theta(t)))\] We find that a larger \(\mu\) benefits the final IML accuracy. The assessment of \(\mu\) is detailed in the Experiments section.

### Total Loss Function

To sum up, NCL for IML has a hybrid total loss: \[L_{total}=\omega\times L_{NCL}+L_{PC} \tag{9}\] \(\omega\) is the weight parameter for the non-mutually exclusive contrastive learning on the shallow encoder layers. More details on \(\omega\) can be found in the Experiments section.
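The following Python sketch spells out one way to implement the objective above with cosine similarity as \(\phi\). Signs and normalizations follow the usual InfoNCE convention for a loss to be minimized, so the bookkeeping may differ slightly from Equations (7)-(9); the helper names and the assumption of at least two tampered and one authentic patch per image are ours.

```python
import torch
import torch.nn.functional as F

def sim(a, b, tau=0.1):
    """exp(cosine similarity / tau) between one feature a (D,) and a batch b (M, D)."""
    return torch.exp(F.cosine_similarity(a.unsqueeze(0), b, dim=1) / tau)

def ncl_loss(f_pos, f_neg, se_pos, se_neg, tau=0.1):
    """Three pair-wise contrasts: {positive, negative}, {hard positive, negative},
    {hard negative, positive}.  f_pos: (P, D) tampered-patch features, f_neg: (N, D)
    authentic-patch features, se_pos / se_neg: (D,) pivot outputs."""
    terms = []
    # tampered patches pulled together, pushed away from authentic patches
    for m in range(f_pos.shape[0]):
        others = torch.cat([f_pos[:m], f_pos[m + 1:]])
        pos = sim(f_pos[m], others, tau)
        terms.append(-torch.log(pos / (pos + sim(f_pos[m], f_neg, tau).sum())).mean())
    # se+(p) acts as an extra (hard) positive anchor
    pos = sim(se_pos, f_pos, tau)
    terms.append(-torch.log(pos / (pos + sim(se_pos, f_neg, tau).sum())).mean())
    # se-(p) acts as an extra (hard) negative anchor
    neg = sim(se_neg, f_neg, tau)
    terms.append(-torch.log(neg / (neg + sim(se_neg, f_pos, tau).sum())).mean())
    return torch.stack(terms).mean()

def pc_loss(pred, gt, contour_mask, mu=0.9):
    """Pixel-wise BCE with extra weight mu on contour pixels (cf. Eq. 8).
    pred, gt, contour_mask: float tensors of identical shape, values in [0, 1]."""
    bce = F.binary_cross_entropy(pred, gt, reduction="none")
    weights = mu * contour_mask + (1.0 - mu) * (1.0 - contour_mask)
    return (weights * bce).mean()

def total_loss(pred, gt, contour_mask, f_pos, f_neg, se_pos, se_neg, omega=0.01):
    """Hybrid objective of Eq. (9): omega * NCL + PC."""
    return omega * ncl_loss(f_pos, f_neg, se_pos, se_neg) + pc_loss(pred, gt, contour_mask)
```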
## 4 Experiments and Discussions

**Datasets**. Unlike existing baseline models, our proposed NCL utilizes only the public benchmark datasets themselves for training and evaluation. **No other datasets are involved in our training process**. We train our NCL model on the training split of a dataset and then test it on the corresponding test split. To distinguish this from pre-training, we term the training process conducted only on a benchmark training split _benchmark-training_. Unless otherwise stated, our model applies benchmark training in the experiments. The five public datasets for benchmark training and evaluation are: (1) **CASIA** [10]; (2) **NIST16** [1]; (3) **Columbia** [26]; (4) **Coverage** [30]; (5) **Defacto** [24]. Training and testing splits of the datasets follow the widely accepted practice in [34]. For Defacto, Defacto-84K is used for training and Defacto-12K for testing. In particular, since our method does not engage additional datasets, we follow the standard split of Coverage, where 75 samples are used for training and the rest for testing.

**Implementation Details**. As demonstrated in Figure 2 (a), we follow the standard settings of DeepLabV3+ to build the basic encoder-decoder. We adopt an ASPP block with atrous rates of 1, 12, 24, and 36. The \(output\ stride\) is set to 8. The decoder expands the encoded features by a factor of 4 until reaching the same size as the input image. We also follow the training protocol in [4] to train our proposed model. In detail, we set the batch size to 4 on each dataset and the crop size to 512 \(\times\) 512. We adopt the Stochastic Gradient Descent (SGD) optimizer with the "poly" learning rate schedule (initial learning rate 0.007, momentum 0.9, and weight decay 5e-4). Our proposed model is trained end-to-end without staged pre-training of each component, and the total loss is backpropagated as a whole. The weight of the NCL loss (\(\omega\) in Equation (9)) is 0.01, and the weight in the PC loss (\(\mu\) in Equation (8)) is 0.9. These parameters are kept fixed in the evaluation.

**Evaluation Metrics**. Following widely accepted practice, we adopt the pixel-level \(F_{1}\) score and the Area Under the receiver operating characteristic Curve (AUC) as our evaluation metrics. \(F_{1}\) and AUC measure the binary classification accuracy for every pixel. Both metrics range in \([0,100]\) per cent, and higher scores indicate better performance. According to our observation, \(F_{1}\) is more faithful in reflecting model performance, since the numbers of tampered and authentic pixels are extremely unbalanced: AUC is inflated by the huge number of true negatives, and the optimized AUC threshold overestimates the model performance.
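As a concrete reference, the pixel-level metrics can be computed as in the short sketch below (using scikit-learn). The fixed 0.5 threshold for \(F_{1}\) is our choice for illustration, and the image is assumed to contain both tampered and authentic pixels so that AUC is defined.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

def pixel_f1_auc(pred_prob: np.ndarray, gt_mask: np.ndarray, threshold: float = 0.5):
    """Pixel-level F1 and AUC (in percent) for one image.

    pred_prob: predicted tampering probability per pixel, shape (H, W), values in [0, 1].
    gt_mask  : binary ground-truth forgery mask, shape (H, W).
    """
    y_true = gt_mask.reshape(-1).astype(int)
    y_score = pred_prob.reshape(-1)
    f1 = f1_score(y_true, y_score >= threshold)   # fixed threshold, not the "optimal" one
    auc = roc_auc_score(y_true, y_score)          # threshold-free, inflated by true negatives
    return 100 * f1, 100 * auc
```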
### Quantitative Analysis on Benchmarks

We compare our model with current SoTA methods, including ELA [20], NOI [23], CFA [12], J-LSTM [2], RGB-N [38], ManTra-Net [34], SPAN [16], OSN [31], ObjectFormer [29], MVSS [5], and MVSS++ [9] on the five standard datasets. ELA, NOI, and CFA are traditional methods based on hand-crafted features; the rest are end-to-end models. The results measured by the \(F_{1}\) score and AUC are listed in Table 1. Except for our NCL, all the other end-to-end methods use a large amount of additional, synthesized images for pre-training and the benchmark training splits for fine-tuning. In general, our method achieves state-of-the-art performance compared with existing methods, and notably our model outperforms the others in \(F_{1}\) score. Compared with AUC, the \(F_{1}\) score is more faithful in measuring the real performance of an IML model. Prior studies uniformly adopt the optimal threshold for the AUC metric, which adjusts the AUC threshold per model and per test. This threshold adjustment is impractical in daily scenarios and commonly overestimates model behavior. Therefore, recent studies have turned to the more persuasive \(F_{1}\) score or apply a fixed threshold when measuring AUC [9]. Most existing studies do not publish their performance measured by fixed-threshold AUC, and we cannot re-train these models with their pre-training datasets. Hence, in Table 1, we adopt the optimal-threshold AUC but explicitly report our \(F_{1}\) score to fully demonstrate the surpassing performance of our NCL. Besides, our model shows a much smaller gap between the \(F_{1}\) score and the AUC value, which indicates higher robustness to some degree.

### Generalizability and Robustness

As we conduct benchmark training and benchmark testing, although we achieve state-of-the-art performance and stop training early at epoch 70, the generalizability of our model is yet to be verified. In other words, we need to answer: "Does NCL overfit the training data?" To address this foremost generalizability concern, we conduct experiments by training our model on one dataset and then testing it on another, non-homogeneous dataset. The results are shown in Table 2. We first train our NCL model on the relatively large benchmarks, CASIAv2 and Defacto, then test the trained model on the other benchmarks. Since MVSS-Net adopts the same datasets as pre-training datasets, we employ MVSS-Net for comparison. In the first four rows of Table 2, under the same settings, our NCL exceeds the pre-training-based MVSS-Net on almost every dataset, yet NCL requires neither fine-tuning on these datasets nor auxiliary hand-crafted features. Therefore, it is clear that NCL does not overfit the training data. Then, to further investigate the generalizability of NCL, we use the two smallest benchmarks, Coverage and Columbia, for training and test NCL on the larger benchmarks. Table 2 indicates NCL manages to cope with this harsh situation. Besides, we also merge all the benchmark training datasets into a single training set and train NCL on it to probe its edge performance. As shown in the last row of Table 2, trained on this large dataset, NCL gains surpassing performance on almost every single testing dataset with respect to existing models. However, compared to NCL with benchmark training, the AUC score is slightly lower on the Coverage and Columbia datasets but sharply increased on the other three datasets. Considering the small size of Coverage and Columbia, NCL exchanges sensitivity for specificity, thereby achieving more balanced performance across all testing cases. We also conduct robustness tests. Typical robustness experiments are conducted through attacks: built-in functions are used to attack the images, and IML methods are then applied to identify the tampered areas on the attacked images. The results measured by pixel-wise AUC are shown in Table 3. Our model achieves satisfying robustness against common attacks. Therefore, in short, our NCL-based IML method retains satisfying generalizability and is robust and resistant to attacks.
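For reference, the perturbations used in such robustness tests can be reproduced with standard image operations, for example with OpenCV as in the sketch below. The resize factors, kernel sizes, noise levels, and JPEG qualities mirror Table 3, but implementation details such as interpolation mode and noise seeding are assumptions, not the exact setup used here.

```python
import cv2
import numpy as np

def attack_suite(img: np.ndarray):
    """Return {name: attacked image} for the perturbations listed in Table 3.
    img is a uint8 BGR image as loaded by cv2.imread."""
    out = {"none": img}
    for s in (0.78, 0.25):                                    # resizing attacks
        out[f"resize_{s}"] = cv2.resize(img, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
    for k in (3, 5):                                          # Gaussian blur
        out[f"blur_{k}"] = cv2.GaussianBlur(img, (k, k), 0)
    for sigma in (3, 5):                                      # additive Gaussian noise
        noise = np.random.normal(0, sigma, img.shape)
        out[f"noise_{sigma}"] = np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)
    for q in (100, 50):                                       # JPEG re-compression
        ok, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), q])
        out[f"jpeg_{q}"] = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    return out
```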
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{pre-train} & \multirow{2}{*}{fine tune} & \multicolumn{3}{c}{NIST16} & \multicolumn{2}{c}{CASIA} & \multicolumn{2}{c}{Coverage} & \multicolumn{2}{c}{Columbia} & \multicolumn{2}{c}{Defacto} \\ \cline{5-12} & & & \(F_{1}\uparrow\) & AUC \(\uparrow\) & \(F_{1}\uparrow\) & AUC \(\uparrow\) & \(F_{1}\uparrow\) & AUC \(\uparrow\) & \(F_{1}\uparrow\) & AUC \(\uparrow\) & \(F_{1}\uparrow\) & AUC \(\uparrow\) \\ \hline ELA [20] & \(\times\) & \(\times\) & 23.6 & 42.9 & 21.4 & 61.3 & 22.2 & 58.3 & 47.0 & 58.1 & - & - \\ NOI [23] & \(\times\) & \(\times\) & 28.5 & 48.7 & 26.3 & 61.2 & 26.9 & 58.7 & 57.4 & 54.6 & - & - \\ CFA [12] & \(\times\) & \(\times\) & 17.4 & 50.1 & 20.7 & 52.2 & 19.0 & 48.5 & 46.7 & 72.0 & - & - \\ \hline J-LSTM [2] & ✓ & \(\times\) & - & 76.4 & - & - & - & 61.4 & - & - & - & - \\ ManTra [34] & ✓ & ✓ & - & 79.5 & - & 81.7 & - & 81.9 & - & 82.4 & - & - \\ RGB-N [38] & ✓ & ✓ & 72.2 & 93.7 & 40.8 & 79.5 & 43.7 & 81.7 & 69.7 & 85.8 & - & - \\ SPAN (1) [16] & ✓ & \(\times\) & 29.0 & 83.6 & 33.6 & 81.4 & 53.5 & 91.2 & 81.5 & 93.6 & - & \\ SPAN (2) [16] & ✓ & ✓ & 58.2 & 96.1 & 38.2 & 83.8 & 55.8 & 93.7 & - & - & - & - \\ ObjectFormer [29] & ✓ & ✓ & 82.4 & **99.6** & 57.9 & **88.2** & 75.8 & 95.7 & - & - & - & - \\ OSN [31] & ✓ & ✓ & 28.6 & 76.4 & 40.5 & 83.3 & 72.7 & 88.3 & - & - & - & - \\ MVSS-Net [5] & ✓ & ✓ & - & 73.7 & - & 75.3 & - & 82.4 & - & 72.6 & - & 53.8 \\ MVSS-Net++ [9] & ✓ & ✓ & - & 71.5 & - & 77.1 & - & 52.5 & - & 56.3 & - & 88.6 \\ \hline ours (NCL) & \(\times\) & \(\times\) & **83.1** & 91.2 & **59.8** & 86.4 & **80.1** & **92.8** & **85.0** & **94.3** & **60.7** & **88.9** \\ \hline \hline \end{tabular} SPAN (1) is under the pre-training setup while SPAN (2) is under the fine-tuning. MVSS-Net++ is pre-trained on the Defacto-84K and MVSS-Net is pre-trained on the CASIAv2. \(\ddots\)’ denotes that the result is not available in the literature and’\(\uparrow\)’ indicates that the higher value is better. \end{table} Table 1: \(F_{1}\) score (%) and AUC (%) comparisons between our proposed method and baselines on benchmarks. \begin{table} \begin{tabular}{l|c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\begin{tabular}{c} Train \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} Test \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} NIST16 \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} CASIAv2 \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} 73.7 \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} 75.3 \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} 82.4 \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} 72.6 \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} 53.8 \\ \end{tabular} } \\ \cline{5-12} & & & 75.6 & 86.4 & 81.6 & 66.9 & 53.0 \\ \hline \hline MVSS-Net++ [9] & Defacto & 71.5 & 77.1 & 52.5 & 56.3 & 88.6 \\ ours (NCL) & Defacto & 77.3 & 75.6 & 58.7 & 52.3 & 88.9 \\ \hline ours (NCL) & Coverage & 58.1 & 54.6 & 92.8 & 52.2 & 51.9 \\ ours (NCL) & Columbia & 58.9 & 57.3 & 51.5 & 95.3 & 52.6 \\ \hline \hline ours & All* & 95.0 & 88.4 & 91.1 & 92.3 & 90.1 \\ \hline \hline \end{tabular} *: All means putting all the benchmark training datasets together (around 25k images) for training. \end{table} Table 2: AUC (%) results for generalizability validation. ### Qualitative Analysis **Contribution of Each Component**. 
Before going further, we first clarify the exact improvements brought by each component of our NCL-based method. Our method is built upon the basic encoder-decoder of DeepLabV3+, termed the **Base model**. Then, we propose the Pivot structure to conduct non-mutually exclusive contrastive learning, which we term the **Base+Pivot model**. Further, we add the PC loss to form the entire NCL framework; we term the entire NCL the **Base+Pivot+PC model** in this section. **Qualitative Analysis**. We first conduct longitudinal qualitative comparisons of the different components of NCL in Figure 3 and Figure 4. The second to fourth rows in Figure 3 are the results of the Base model, Base+Pivot model, and Base+Pivot+PC model for the input image. The leftmost two columns of Figure 3 vividly demonstrate the efficacy of each component in our method: the Base model totally fails on these cases, but Base+Pivot+PC gradually catches the manipulation clues. The rightmost column and the bell pepper picture present examples of refining the delicate contour of roughly localized artifacts through the PC loss. As shown in Figure 4, similar behavior is observed on the other benchmarks. Then, we conduct horizontal qualitative comparisons of different IML models in the lower half of Figure 3. The MVSS-Net and Mantra-Net rows show the corresponding output of the widely compared MVSS-Net and Mantra-Net. With much less and lower-quality training data, our model outperforms these pre-training-dependent, data-hungry models. **Quantitative Analysis**. The prediction results measured in pixel-wise AUC for different variants of our model are shown in Table 5. Our NCL brings significant improvements to the basic encoder-decoder network, especially in model generalization. When the Base model is trained on NIST16 or Defacto but tested on other datasets, it fails to generalize. Adding the Pivot network to the Base model drastically boosts generalizability: Base+Pivot gains decent AUC results when tested on different, non-homogeneous datasets. The PC loss is also verified to be effective in improving performance. We offer more details in the supplementary materials. After quantifying the contribution of each component, we further probe the effect of the parameters in our models. We have two kinds of parameters: the image patch size and the weight parameters of the total loss. **Image Patch Size**. Different encoder layers generate features with different patch sizes, which are vital for the performance of NCL. To find the best patch size, we also try adding the Pivot network after different blocks of ResNet101. In detail, we divide the original ResNet101 into five stages as in the original paper [15] and append the Pivot network at the end of each stage. As shown in Table 6, the earlier layers considerably outperform the deeper ones, which also confirms the observation from [3]. This finding is consistent across benchmarks. **Weight Parameters**. We explore different allocations of the weights to maximize the \(F_{1}\) score and AUC. We find the best allocation scheme and adopt these parameters as stated in the implementation details. A lower \(\omega\) and a higher \(\mu\) facilitate the IML accuracy in both \(F_{1}\) and AUC. The weight effect is similar across datasets; thereby, our weight choice is consistent.
\begin{table} \begin{tabular}{l|c|c|c} \hline Operations & ManTra-Net & SPAN & Ours \\ \hline \hline None & 79.5 & 83.6 & **91.2** \\ \hline \hline Resize(0.78x) & 77.4 & 83.2 & **85.6** \\ Resize(0.25x) & 75.5 & 80.3 & **83.1** \\ \hline GaussianBlur(size=3) & 77.4 & 83.1 & **84.0** \\ GaussianBlur(size=5) & 74.6 & 79.2 & **80.6** \\ \hline GaussianNoise(\(\sigma\)=3) & 67.4 & 75.1 & **79.5** \\ GaussianNoise(\(\sigma\)=5) & 58.6 & 67.3 & **71.4** \\ \hline JPEGCompress(50) & 77.9 & 83.6 & **84.3** \\ JPEGCompress(100) & 74.4 & 80.7 & **81.9** \\ \hline \end{tabular} SPAN without fine-tuning is adopted here. \(100\) and \(50\) are the JEPG compress quality factors. \end{table} Table 3: Robustness analysis of models on NIST16 datasets. Figure 3: Prediction results of Variants of our methods. From top to bottom: forged images, baseline model, attaching Pivot structure and conducting non-mutually exclusive contrastive learning on the base model, adding PC loss to the former model, Mantra-Net [34], MVSS-Net [9] and ground-truth masks. **Backbone Architectures**. With the fast advances in Transformer-based image backbones, we will indeed embrace an IML backbone built on the self-attention mechanism. Like CNNs, ViT [11] also processes image patches by patches. Therefore, the initial assumption of our NCL holds. Regardless of the patching methods in ViT, the image patches will still be divided into three categories: tampered, authentic, and contour patches. Then, our NCL can be quickly adapted to the ViT-based backbone without any effort and boost the base model's performance. As shown in Table 4, we did some preliminary tests using the backbone of ObjectFormer [29], and the results met our expectations. ## 5 Conclusion This paper proposes a novel Non-mutually exclusive Contrastive Learning (NCL) paradigm to localize image manipulation without additional pre-training datasets. Our NCL-based IML model reaches state-of-the-art performance, top model generalization, and robustness in all five benchmarks, which indicates our NCL is more applicable to real-life scenarios. To a greater extent, NCL provides a brand-new self-supervised paradigm to tackle tasks with trisected problem spaces like semantic segmentation. ## 6 Acknowledgements This work is jointly supported by the "Key Research Program" (Grant No.2022YFC3801304), the Ministry of Science and Technology, PRC, and the "Fundamental Research Funds for the Central Universities" (Grant No.2022SCU12072, No.YJ2021159). The numerical calculation in this paper has been done at Hefei advanced computing center. The author would like to deliver special thanks to Miss. Chunfang Yu, for her attentive work on the dataset preparation. \begin{table} \begin{tabular}{l|c|c} \hline \hline \multicolumn{1}{c|}{Pivot network after} & \(F_{1}\) score & AUC \\ \hline \hline ResNet block 5 & 43.8 & 70.2 \\ ResNet block 2 & 56.3 & 79.0 \\ ResNet block 1 & 59.8 & 86.4 \\ \hline \hline \end{tabular} \end{table} Table 6: Performance of our model with Pivot network imposed after different encoder blocks. CASIA dataset is applied. 
\begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{pre-train} & \multirow{2}{*}{fine tune} & \multicolumn{2}{c}{NIST16} & \multicolumn{2}{c}{CASIA} & \multicolumn{2}{c}{Coverage} & \multicolumn{2}{c}{Columbia} & \multicolumn{2}{c}{Defacto} \\ \cline{3-11} & & & \(F_{1}\uparrow\) & AUC \(\uparrow\) & \(F_{1}\uparrow\) & AUC \(\uparrow\) & \(F_{1}\uparrow\) & AUC \(\uparrow\) & \(F_{1}\uparrow\) & AUC \(\uparrow\) & \(F_{1}\uparrow\) & AUC \(\uparrow\) \\ \hline ObjectFormer [29] & ✓ & ✓ & 82.4 & **99.6** & 57.9 & 88.2 & 75.8 & 95.7 & - & - & - & - \\ ObjectFormer(with NCL appended) & ✓ & ✓ & **84.4** & **99.6** & 59.1 & **88.8** & 77.4 & **96.0** & - & - & - & - \\ \hline ours (NCL) & \(\times\) & \(\times\) & 83.1 & 91.2 & **59.8** & 86.4 & **80.1** & 92.8 & **85.0** & **94.3** & **60.7** & **88.9** \\ \hline \hline \end{tabular} \end{table} Table 4: \(F_{1}\) score (%) and AUC (%) comparisons between our proposed method and baselines on benchmarks. Figure 4: Predictions generated by variations of our model on Columbia, Coverage, NIST16, CASIA datasets. From top to bottom are the: forged images, ground-truth masks, results of Base model, results of Base+Pivot, and results of Base+Pivot+PC. \begin{table} \begin{tabular}{l|c|c c c c} \hline \hline Method & TrainTest & NIST16 & CASIA & Coverage & Columbia & Defacto \\ \hline Base & NIST16 & 75.6 & 26.4 & 21.6 & 16.9 & 4.3 \\ Base+Pivot & NIST16 & 85.6 & 60.8 & 71.7 & 66.1 & 41.4 \\ Base+Pivot+PC & NIST16 & 91.2 & 65.6 & 76.4 & 71.5 & 55.0 \\ \hline \hline Base & Defacto & 30.3 & 25.8 & 17.4 & 14.0 & 54.6 \\ Base+Pivot & Defacto & 70.4 & 72.1 & 51.1 & 50.9 & 78.2 \\ Base+Pivot+PC & Defacto & 77.3 & 75.6 & 58.7 & 52.3 & 88.9 \\ \hline \hline \end{tabular} \end{table} Table 5: AUC (%) on benchmarks for variants of our model.
2301.13695
Note on the chromatic number of Minkowski planes: the regular polygon case
The famous Hadwiger-Nelson problem asks for the minimum number of colors needed to color the points of the Euclidean plane so that no two points unit distance apart are assigned the same color. In this note we consider a variant of the problem in Minkowski metric planes, where the unit circle is a regular polygon with an even number of at most 22 vertices. We present a simple lattice-sublattice coloring scheme that uses 6 colors, proving that the chromatic numbers of the Minkowski planes above are at most 6. This result is new for regular polygons having more than 8 vertices.
Panna Gehér
2023-01-31T15:13:49Z
http://arxiv.org/abs/2301.13695v1
# Note on the chromatic number of Minkowski planes: the regular polygon case

###### Abstract

The famous Hadwiger-Nelson problem asks for the minimum number of colors needed to color the points of the Euclidean plane so that no two points unit distance apart are assigned the same color. In this note we consider a variant of the problem in Minkowski metric planes, where the unit circle is a regular polygon with an even number of at most 22 vertices. We present a simple lattice-sublattice coloring scheme that uses 6 colors, proving that the chromatic numbers of the Minkowski planes above are at most 6. This result is new for regular polygons having more than 8 vertices.

## 1 Introduction

In 1950, Nelson raised the following question: What is the minimum number of colors that are needed to color the Euclidean plane so that no two points of the same color determine unit distance? We refer to such a coloring with \(k\) color classes as a proper \(k\)-coloring. Thus Nelson's question asks for the smallest \(k\) value such that the plane can be properly \(k\)-colored. This value is known as the chromatic number of the Euclidean plane, and is denoted by \(\chi(\mathbb{R}^{2})\). Immediately after the question was raised the following easy-to-get bounds were established: \[4\leq\chi(\mathbb{R}^{2})\leq 7.\] The lower bound is due to Moser [17], who constructed a unit-distance graph (that is, a graph whose edges connect vertices unit distance apart) with chromatic number 4. The upper bound is due to Isbell [10], who considered a tiling of the plane by translates of a regular hexagon with diameter slightly less than one and defined a periodic proper 7-coloring shown in Figure 1. Despite the numerous attempts to improve these bounds, only little progress was made for more than 60 years - for a historical survey on the problem see [20]. However, in 2018 mathematicians were shaken when biologist de Grey [9] constructed a 5-chromatic unit-distance graph, proving that the chromatic number of the plane is at least 5. Shortly afterwards Exoo and Ismailescu [7] independently published another proof. The problem has regained a lot of attention since the breakthrough, and a Polymath project was launched with the main goal of creating a human-verifiable proof of the new result. Although the proofs still rely on computers, quite some progress has been made: while the distance graph published by de Grey had a total of 1581 vertices, the currently known smallest example consists of only 509 vertices [18].

The question extends naturally to normed (Minkowski) planes. Let \(C\) be a centrally symmetric convex body centered at the origin. The \(C\)-norm of a point \(x\in\mathbb{R}^{2}\) is the smallest \(\lambda\geq 0\) for which \(x\in\lambda C\), and the \(C\)-distance of two points is the \(C\)-norm of their difference, so that \(C\) plays the role of the unit circle. The chromatic number \(\chi(\mathbb{R}^{2},C)\) of the Minkowski plane equipped with the \(C\)-metric is the minimum number of colors needed to color the plane so that no two points unit \(C\)-distance apart receive the same color. It is known that \(4\leq\chi(\mathbb{R}^{2},C)\leq 7\) holds for every such \(C\); the upper bound follows from Chilakamarri's general 7-coloring, and the exact value 4 is known when \(C\) is a parallelogram or a centrally symmetric hexagon.
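The gauge defining the \(C\)-norm is easy to evaluate for a polygonal unit circle. The following short Python sketch illustrates the definition; it assumes \(C\) is given by its vertex list with the origin in its interior, and the helper names are ours rather than anything appearing in this note.

```python
import numpy as np

def c_norm(x, vertices):
    """Minkowski functional ||x||_C = min{lam >= 0 : x in lam*C} of a point x
    with respect to a convex polygon C (vertex list, origin in the interior)."""
    x = np.asarray(x, dtype=float)
    V = np.asarray(vertices, dtype=float)
    lam = 0.0
    for i in range(len(V)):
        p, q = V[i], V[(i + 1) % len(V)]
        n = np.array([q[1] - p[1], p[0] - q[0]])   # normal of the edge p -> q
        b = n @ p                                   # the edge lies on {y : n.y = b}
        if b < 0:                                   # flip so the normal points outwards (b > 0)
            n, b = -n, -b
        lam = max(lam, (n @ x) / b)
    return max(lam, 0.0)

def regular_polygon(m, circumradius=1.0, phase=0.0):
    """Vertices of a regular m-gon centred at the origin."""
    ang = phase + 2 * np.pi * np.arange(m) / m
    return np.column_stack([circumradius * np.cos(ang), circumradius * np.sin(ang)])

if __name__ == "__main__":
    C = regular_polygon(8)                 # regular octagon as the unit circle
    print(c_norm(C[0], C))                 # a vertex of C has C-norm 1
    print(c_norm([0.0, 0.0], C))           # the origin has C-norm 0
```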
Lower bounds of 5 on the chromatic number have also been established in the cases of regular polygons with \(8\), \(10\), and \(12\) vertices. Together with the Euclidean case these are the only known examples of a normed plane with chromatic number at least \(5\). Table 1 summarizes the mentioned results.

In this note we extend Chilakamarri's result for regular octagons to regular polygons with at most \(22\) vertices by giving a simple lattice-sublattice coloring scheme that uses only \(6\) colors. It also slightly strengthens the result of Chilakamarri as our colorings are regular: We call a proper \(k\)-coloring of \(\mathbb{R}^{n}\) with color classes \(C_{1}\), \(C_{2}\)\(\ldots C_{k}\)_regular_, if there exist vectors \(v_{1}\ldots,\)\(v_{k}\) such that \(C_{i}=C_{1}+v_{i}\) for \(i=1\ldots k\), that is, the color classes are translates of each other. Now we state our main theorem:

**Theorem 1**.: _Let \(C\) be a regular polygon with an even number of vertices. In case \(C\) has at most \(22\) vertices then there exists a regular proper \(6\)-coloring of the Minkowski plane equipped with the \(C\)-metric such that no points unit \(C\)-distance apart are identically colored. Hence,_ \[\chi(\mathbb{R}^{2},C)\leq 6\] _holds for any regular \(2k\)-gon \(C\) where \(k\leq 11\)._

It follows that in the case of a regular decagon and dodecagon - similarly to the octagon's case - the chromatic number is either \(5\) or \(6\).

\begin{table} \begin{tabular}{l|c} Unit circle \(C\) & \(\chi(\mathbb{R}^{2},C)\) \\ \hline Parallelogram, centrally symmetric hexagon (see [2]) & 4 \\ Regular octagon & 5 or 6 \\ Regular decagon & 5, 6 or 7 \\ Regular dodecagon & 5, 6 or 7 \\ Euclidean circle & 5, 6 or 7 \\ Other centrally symmetric convex \(C\) & 4, 5, 6 or 7 \\ \end{tabular} \end{table} Table 1: Possible values of the chromatic numbers of Minkowski planes

Figure 2: Chilakamarri’s proper \(6\)-coloring of the Minkowski plane equipped with the regular octagon metric

In Section 2 we describe the coloring scheme used in all cases and give some details and figures about the proofs. As an example, in Section 3 all computations are given in the regular dodecagon's case, and the calculations for the 22-gon can be found in the Appendix. Finally, in Section 4 we describe a consequence of Theorem 1 that concerns a closely related asymmetric Ramsey-type question raised by Szlam [21].

## 2 The coloring scheme

Let \(C\) be a regular octagon first and define a symmetric convex hexagon \(H\) inscribed in \(C/2\) as follows: Choose two opposite sides of \(C/2\) and form a hexagon using their four endpoints and two additional boundary points of \(C/2\). The choice of the additional points can be made in various ways; here we simply choose the ones that halve the two parts of the boundary of \(C/2\) connecting the chosen sides. Denote the vertices of \(H\) by \(A_{i}\) (\(i=1\dots 6\)) in a clockwise order as shown in Figure 3. To avoid unit \(C\)-distance within \(H\), remove the boundary points lying between the points \(A_{1}\) and \(A_{4}\), including \(A_{4}\) but not \(A_{1}\). In this way no antipodal point pair is monochromatic. For simplicity call the resulting half-open hexagon still \(H\).
Now consider a tiling of the plane by translates of \(H\) and assign colors \(1\) through \(6\) periodically as shown in Figure 4. We can assume that the centers of the hexagons form a lattice, which we denote by \(\mathcal{L}\).

Figure 4: A tiling of the plane with translates of hexagon \(H\) defines a periodic \(6\)-coloring

Figure 3: The centrally symmetric hexagons inscribed in \(C/2\) in the case of the regular decagon

Note that the main difference between this coloring scheme and Chilakamarri's general 7-coloring is that here we can take advantage of \(C\) not being strictly convex: in the direction perpendicular to the sides shared with \(C/2\), the monochromatic hexagons can be placed so that they are separated by only one differently colored hexagon. Now we justify that our coloring is proper: as mentioned earlier, a unit \(C\)-distance is not realized within the hexagons. All that is left is to check that two hexagons of the same color are too far from each other to determine a point pair unit \(C\)-distance apart. As the color classes are congruent, it is enough to verify the statement for one specific color class, say the class of red points. We can also assume that one of the red hexagons has the origin as its center; thus the set of centers of red hexagons forms a sublattice \(\mathcal{L}^{\prime}\). By the symmetry of \(C\) it is enough to show that the polygons \(\mathcal{L}^{\prime}+C/2\oplus H\) form a packing, where \(\oplus\) denotes the Minkowski sum of the two polygons, that is: \[C/2\oplus H=\{c+h\ |\ c\in C/2,\ h\in H\}.\] Straightforward calculations finish the proof. Without loss of generality we can assume that \(C\) has circumradius one. Let \(v_{1}\) and \(v_{2}\) denote the basis vectors of the lattice \(\mathcal{L}^{\prime}\), where we can assume that \(v_{1}\) is perpendicular to the sides shared with \(C/2\). Then for any vector \(\lambda\in\mathcal{L}^{\prime}\), the polygons \(\lambda+C/2\oplus H\) and \(\lambda\pm v_{1}+C/2\oplus H\) are trivially disjoint. By definition \(H+C/2\subseteq C\), so \(H+C/2\) also has circumradius at most one. Hence it is enough to check that, with the exception of \(\underline{0}\) and \(\pm v_{1}\), any lattice vector has Euclidean length at least two. This obviously holds (see Figure 5), thus the coloring is indeed proper. Now consider regular polygons with a greater number of vertices. We show that almost the same coloring scheme works for all the remaining cases: two sides of \(H\) can always be two opposite sides of \(C/2\), we only have to be careful with the choice of the remaining two vertices. For the case of regular 10- and 12-gons choosing the halving points on the boundary line of \(C/2\) still works, and we can simply define hexagon \(H\) as shown in Figure 6. However, in the remaining cases we had to flatten the hexagons in order to get a proper coloring. In the case of regular 14-, 16- and 18-gons some other vertices of \(C/2\) were chosen. But in the final two cases only non-vertex points seemed to work: for \(n=20\) bisectors of some other sides were chosen, and for \(n=22\) we divided one of the sides in the ratio \(0.68:0.32\). For details see Figure 7. Proving that the colorings defined by the above hexagons are proper needs more and more computation as the number of vertices grows. Since all cases are similar, we only give the details of the calculations for the regular dodecagon, in Section 3, and for the regular 22-gon, in the Appendix.
Figure 6: The centrally symmetric hexagons inscribed in \(C/2\) chosen in the case of the regular 10- and 12-gon

Figure 7: The centrally symmetric hexagon \(H\) inscribed in \(C/2\) chosen for the regular 14-, 16-, 18-, 20- and 22-gon

## 3 The regular dodecagon's case

Now we present the details of the proof in the case of the regular dodecagon. Consider the regular dodecagon centered at the origin with circumradius \(2\), whose vertices are: \[\big{(}\pm 1,\pm\sqrt{3}\big{)},\big{(}\pm\sqrt{3},\pm 1\big{)},\big{(}\pm 2,0 \big{)},\big{(}0,\pm 2\big{)}.\] Let \(H\) be the symmetric hexagon inscribed in \(C/2\) as defined in Section 2: take two opposite sides of \(C/2\), for example the sides parallel to the vector \((2-\sqrt{3},1)\), and choose the two additional points such that they halve the two parts of the boundary line of \(C/2\) between the chosen sides. Denote these six vertices by \(A_{i}\) (\(i=1\dots 6\)) in a clockwise order as shown in Figure 8. As before, let \(H\) be the half-open hexagon defined by the points \(A_{i}\) that does not contain the line segment connecting the points \(A_{1}\) and \(A_{4}\), nor the point \(A_{1}\) itself. Therefore the hexagonal tiling of the plane with hexagon \(H\) is the packing by Voronoi regions of the lattice \(\mathcal{L}\) spanned by the vectors \(\left(\frac{1-2\cdot\sqrt{3}}{4},\frac{4+\sqrt{3}}{4}\right)\) and \(\left(\frac{2+\sqrt{3}}{2},-\frac{1}{2}\right)\). The basis vectors of the sublattice \(\mathcal{L}^{\prime}\) corresponding to the single color class containing the hexagon centered at the origin are:

* \(v_{1}=\big{(}\frac{3-6\cdot\sqrt{3}}{4},\frac{12+3\cdot\sqrt{3}}{4}\big{)}\),
* \(v_{2}=\big{(}2+\sqrt{3},-1\big{)}\).

As mentioned in Section 2, what we need to show is that the polygons \(\mathcal{L}^{\prime}+C/2\oplus H\) form a packing.

Figure 8: The hexagon \(H\) inscribed in \(C/2\)

The vertices of \(C/2\oplus H\) are: \[B_{1}=\Bigg{(}\frac{1}{4},\frac{6+\sqrt{3}}{4}\Bigg{)},\ B_{2}=\Bigg{(}\frac{3}{4},\frac{2+3\cdot\sqrt{3}}{4}\Bigg{)},\ B_{3}=\Bigg{(}\frac{1+2\cdot\sqrt{3}}{4}, \frac{4+\sqrt{3}}{4}\Bigg{)},\ B_{4}=\Bigg{(}\frac{2+\sqrt{3}}{2},\frac{1}{2} \Bigg{)},\] \[B_{5}=\Big{(}2,0\Big{)},\ B_{6}=\Bigg{(}\sqrt{3},-1\Bigg{)},\ B_{7}=\Bigg{(} \frac{1+\sqrt{3}}{2},-\frac{1+\sqrt{3}}{2}\Bigg{)},\ B_{8}=\Bigg{(}\frac{1}{4},-\frac{2+3\cdot\sqrt{3}}{4}\Bigg{)}\ \text{etc.}\] The coordinates of the remaining vertices can be obtained by symmetry - see Figure 9. As the coloring is regular, it is enough to pick the hexagon \(H\) centered at the origin and show that \(H\oplus C/2\) is disjoint from \(\lambda^{\prime}+H\oplus C/2\) for all \(\lambda^{\prime}\neq 0\) in \(\mathcal{L}^{\prime}\). By definition \(H\oplus C/2\) has circumradius \(2\). Inside the circle of radius \(4\) centered at the origin there are \(4\) lattice points of \(\mathcal{L}^{\prime}\) besides the origin, and by symmetry we only have to check \(2\) of them and the corresponding hexagons, namely:

* \(H_{1}:=H+v_{2}\) and
* \(H_{2}:=H+v_{1}+v_{2}\).

Figure 9: Minkowski sum of \(H\) and \(C/2\)

\(H\) and \(H_{1}\) are separated by exactly one differently colored hexagon, which is enough as \(v_{2}\) is perpendicular to the common sides of \(H\) and \(C/2\). All that is left is to give a line that separates \(H\) from \(H_{2}\).
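Before giving the separating line, the lattice claim above can be double-checked numerically. The following sketch (our illustration, using the \(v_{1}\), \(v_{2}\) given above) enumerates small integer combinations of the basis vectors and lists the non-zero lattice vectors of Euclidean length less than \(4\); one expects exactly four of them, namely \(\pm v_{2}\) and \(\pm(v_{1}+v_{2})\).

```python
import numpy as np

# Basis vectors of the sublattice L' for the regular dodecagon (circumradius 2).
s3 = np.sqrt(3.0)
v1 = np.array([(3 - 6 * s3) / 4, (12 + 3 * s3) / 4])
v2 = np.array([2 + s3, -1.0])

# Since H + C/2 has circumradius 2, two translates can only overlap if the connecting
# lattice vector is shorter than 4.  Collect all such non-zero vectors.
close = []
for a in range(-3, 4):
    for b in range(-3, 4):
        lam = a * v1 + b * v2
        if (a, b) != (0, 0) and np.linalg.norm(lam) < 4:
            close.append((a, b, np.linalg.norm(lam)))

for a, b, r in close:
    print(f"{a}*v1 + {b}*v2 has Euclidean length {r:.3f}")
# Expected output: exactly four vectors, +-v2 and +-(v1 + v2), matching the claim above.
```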
For example consider the line \(l\) defined by the equation: \[y=-\frac{2+\sqrt{3}}{3}x+\frac{5+2\cdot\sqrt{3}}{3}.\] It is straightforward to check that \(l\) goes through two parallel sides of \(H\oplus C/2\) and \(H_{2}\oplus C/2\) (which are on the opposite sides of their centers), and that the remaining vertex points of \(H\oplus C/2\) are below the line \(l\), while all of the remaining vertex points of \(H_{2}\oplus C/2\) are above it (see Figure 10). Therefore \(H\oplus C/2\) and \(H_{2}\oplus C/2\) are disjoint, thus the coloring is proper.

Figure 10: Line \(l\) separates \(H\oplus C/2\) and \(H_{2}\oplus C/2\)

We remark that in the presented example one can define hexagon \(H\) in many different ways, as the coloring scheme is quite flexible in this case. However, as the number of vertices increases, the range of possible choices narrows down quickly. For example in the case of the regular 22-gon we have to be really careful with the definition of hexagon \(H\): as Figure 11 shows, our coloring is almost rigid.

## 4 An asymmetric Ramsey-type problem

Another direction for generalizing the Hadwiger-Nelson problem is to replace the pair of points at unit distance by another finite point configuration. Moreover, we can look for different configurations in each color class. Solving a problem raised in [5], Juhasz [12] showed that in any red-blue coloring of the plane there are either two red points distance one apart or there is a blue congruent copy of any configuration with at most 4 points. In the opposite direction Juhasz also showed that her theorem does not remain true if we replace 4 by 12. Csizmadia and Toth [4] later improved the above result, proving that 4 cannot even be increased to 8. In the rest of the paper we are interested in the following variation of the question above: is it true for a given point configuration \(K\subseteq\mathbb{R}^{n}\) that in any red-blue coloring of the \(n\)-dimensional Euclidean space there are either two red points distance one apart or there is a blue translate of configuration \(K\)? Let \(k_{n}\) denote the largest value \(k\) such that the answer is 'yes' for all configurations \(K\) of at most \(k\) points. Note that the 2-dimensional case is not the same as it was in the original problem, since 'congruent copy' has been replaced by _'translate'_. This variant of the problem was first considered in [21] by Szlam, who showed that \(k_{n}\) grows exponentially in the number of dimensions. More precisely, Szlam's theorem states that \(\chi(\mathbb{R}^{n})\), the chromatic number of the \(n\)-dimensional Euclidean space (i. e. the smallest number of colors that are needed to color \(\mathbb{R}^{n}\) so that no two points of the same color determine unit distance), provides a lower bound on the value of \(k_{n}\). For the sake of completeness, we include his short proof as well.

**Lemma 1** (Szlam I. [21]).: _Assume that there exists a \(k\)-point configuration \(K\) and a red-blue coloring of \(\mathbb{R}^{n}\) such that the red color class avoids unit distance and the blue color class avoids all translates of \(K\). Then \(\mathbb{R}^{n}\) can be properly \(k\)-colored, that is, none of the \(k\) color classes contains unit distance.
Hence \(k_{n}+1\geq\chi(\mathbb{R}^{n})\)._

Proof.: Assume that we are given a configuration \(K=\{a_{1},\dots a_{k}\}\) and a red-blue coloring of \(\mathbb{R}^{n}\) with the desired properties. Then, for each \(x\in\mathbb{R}^{n}\) there is at least one index \(i\) such that \(x+a_{i}\) is red. Now let us color the point \(x\) with color number \(i\): as there are no red points unit distance apart, this indeed defines a proper \(k\)-coloring.

The famous theorem of Frankl and Wilson [8] gives an exponential lower bound for the chromatic number of the Euclidean space: it states that, as \(n\) tends to infinity, \[\chi(\mathbb{R}^{n})\geq(1.2+o(1))^{n}.\] Hence, by Lemma 1, for \(n\) tending to infinity every red-blue coloring of \(\mathbb{R}^{n}\) contains either two red points unit distance apart or a blue translate of every \(k\)-point configuration whenever \(k<(1.2+o(1))^{n}\). Szlam also gave a partial converse to Lemma 1 that considers only regular colorings:

**Lemma 2** (Szlam II. [21]).: _Assume that \(\mathbb{R}^{n}\) can be properly \(k\)-colored by a regular coloring, with color classes \(C_{i}=C_{1}+v_{i}\) (for \(i=1\dots k\)). Then there exists a red-blue coloring of \(\mathbb{R}^{n}\) and a \(k\)-point configuration, namely \(K=\{v_{1},v_{2},\dots v_{k}\}\), such that the red color class avoids unit distance and the blue color class avoids all translates of \(K\)._

Since Szlam's original paper was published, many applications and generalizations of his work have been considered (see for example [1, 11, 13, 16]). Here we need the analogous result for Minkowski spaces: Let \(k_{n}(C)\) denote the largest \(k\) value such that in any red-blue coloring of the Minkowski space determined by \(C\) there are either two red points distance one apart or there is a blue translate of any configuration with at most \(k\) points. An easy observation is that both Lemma 1 and Lemma 2 can be extended to Minkowski spaces (as noted e.g. in [1]). As the \(6\)-colorings described in Theorem 1 are regular, we immediately get the following corollary:

**Corollary 1**.: _Let \((\mathbb{R}^{2},C)\) be a Minkowski plane whose unit circle is a regular polygon with an even number of vertices. In case \(C\) has at most \(22\) vertices then there exists a red-blue coloring of the plane and a configuration \(K\) of \(6\) points such that there is no red point pair unit \(C\)-distance apart, and the blue color class avoids all translates of \(K\)._

We finish the paper with a small remark on Szlam's results: We noticed that although the proof of Lemma 2 is short and straightforward, it can be a bit misleading. To see the inconvenience, notice that the proper coloring defined in Lemma 1 is not regular: the color classes are generated by covering the plane with translates of the unit-distance-avoiding red set. We call a coloring with such a structure subregular. More precisely, we call a proper \(k\)-coloring with color classes \(C_{1}\,\dots\,C_{k}\)_subregular_ if there exist vectors \(v_{1}\,\dots\,v_{k}\) such that \(C_{i}\) is a subset of \(C_{1}+v_{i}\). We show that Lemma 2 can be extended to subregular colorings in a very natural way:

**Theorem 2**.: _Let \((\mathbb{R}^{n},C)\) be an \(n\)-dimensional Minkowski space. Assume \(\mathbb{R}^{n}\) can properly be \(k\)-colored by a subregular coloring defined by a \(C\)-unit-distance avoiding set \(C_{1}\) and vectors \(v_{1}\), \(v_{2}\,\dots\,v_{k}\).
Then there exists a red-blue coloring of \(\mathbb{R}^{n}\) and a \(k\)-point configuration, namely \(K=\{-v_{1},-v_{2},\dots-v_{k}\}\) such that the red color class avoids unit \(C\)-distance and the blue color class avoids all translates of \(K\)._ Proof.: Let the points of \(C_{1}\) be colored red, and color all the remaining points blue. As promised, let us consider the configuration \(K=\{-v_{1},-v_{2},\ldots,-v_{k}\}\). We wish to show that for an arbitrary vector \(m\) the color class \(C_{1}\) contains at least one point of \(K+m\). Without loss of generality we can assume \(v_{1}\equiv 0\). Hence if \(m\in C_{1}\) there is nothing to prove. Assume that \(m\notin C_{1}\). In this case there exists an index \(i\) such that \(m\in C_{1}+v_{i}\), which leads to \(-v_{i}+m\in C_{1}\). It follows that for all \(n\) and \(C\) the value \(k_{n}(C)\) is exactly the smallest number \(k\) such that there exists a subregular \(k\)-coloring of \((\mathbb{R}^{n},C)\). As we have seen, known proper colorings of Minkowski planes are usually regular. In the 3-dimensional Euclidean space a proper 15-coloring was defined in [19] and independently in [3], which gives the best upper bound for the chromatic number of \(\mathbb{R}^{3}\). Although these colorings are not strictly regular, they can be turned into regular colorings in a trivial way. However, in higher dimensions proper colorings are typically only subregular. The best known upper bound on the chromatic number of the Euclidean space was established by Larman and Rogers [15], who identified a subregular proper coloring of \(\mathbb{R}^{n}\) with \((3+o(1))^{n}\) color classes, meaning \(k_{n}<(3+o(1))^{n}\). By Lemma 2 this result is tight up to the constant factor. For Minkowski spaces the analogous theorem with \((4+o(1))^{n}\) color classes was proved by Kupavskii [14], meaning \(k_{n}(C)<(4+o(1))^{n}\). ## Acknowledgement I would like to express my gratitude to Geza Toth for his great help and for his many helpful remarks. I am also thankful to Dan Ismailescu for informing us about the current state of the problem.
2306.17454
Ghostly galaxies: accretion-dominated stellar systems in low-mass dark matter halos
Wide-area deep imaging surveys have discovered large numbers of extremely low surface brightness dwarf galaxies, which challenge galaxy formation theory and, potentially, offer new constraints on the nature of dark matter. Here we discuss one as-yet unexplored formation mechanism that may account for a fraction of low surface brightness dwarfs. We call this the `ghost galaxy' scenario. In this scenario, inefficient radiative cooling prevents star formation in the `main branch' of the merger tree of a low mass dark matter halo, such that almost all its stellar mass is acquired through mergers with less massive (but nevertheless star-forming) progenitors. Present-day systems formed in this way would be `ghostly' isolated stellar halos with no central galaxy. We use merger trees based on the Extended Press-Schechter formalism and the COCO cosmological N-body simulation to demonstrate that mass assembly histories of this kind can occur for low-mass halos in Lambda-CDM, but they are rare. They are most probable in isolated halos of present-day mass ~4x10^9 M_sun, occurring for ~5 per cent of all halos of that mass under standard assumptions about the timing and effect of cosmic reionization. The stellar masses of star-forming progenitors in these systems are highly uncertain; abundance-matching arguments imply a bimodal present-day mass function having a brighter population (median M_star ~3x10^6 M_sun) consistent with the tail of the observed luminosity function of ultra-diffuse galaxies. This suggests observable analogues of these systems may await discovery. We find that a stronger ionizing background (globally or locally) produces brighter and more extended ghost galaxies.
Chung-Wen Wang, Andrew P. Cooper, Sownak Bose, Carlos S. Frenk, Wojciech A. Hellwing
2023-06-30T07:58:48Z
http://arxiv.org/abs/2306.17454v3
# Ghostly galaxies: accretion-dominated stellar systems in low-mass dark matter halos ###### Abstract Wide-area deep imaging surveys have discovered large numbers of extremely low surface brightness dwarf galaxies, which challenge galaxy formation theory and, potentially, offer new constraints on the nature of dark matter. Here we discuss one as-yet unexplored formation mechanism that may account for a fraction of low surface brightness dwarfs. We call this the 'ghost galaxy' scenario. In this scenario, inefficient radiative cooling prevents star formation in the'main branch' of the merger tree of a low mass dark matter halo, such that almost all its stellar mass is acquired through mergers with less massive (but nevertheless star-forming) progenitors. Present-day systems formed in this way would be 'ghostly' isolated stellar halos with no central galaxy. We use merger trees based on the Extended Press-Schechter formalism and the COCO cosmological N-body simulation to demonstrate that mass assembly histories of this kind can occur for low-mass halos in \(\Lambda\)CDM, but they are rare. They are most probable in isolated halos of present-day mass \(\sim 4\times 10^{9}\,\mathrm{M}_{\odot}\), occurring for \(\sim 5\) per cent of all halos of that mass under standard assumptions about the timing and effect of cosmic reionization. The stellar masses of star-forming progenitors in these systems are highly uncertain; abundance-matching arguments imply a bimodal present-day mass function having a brighter population (median \(M_{\star}\sim 3\times 10^{6}\mathrm{M}_{\odot}\)) consistent with the tail of the observed luminosity function of ultra-diffuse galaxies. This suggests observable analogues of these systems may await discovery. We find that a stronger ionizing background (globally or locally) produces brighter and more extended ghost galaxies. ## 1 Introduction Deep imaging observations in the Local Group, around other nearby galaxies and in galaxy clusters have demonstrated that a large population of very faint galaxies exists below the surface brightness detection limit of current wide-area surveys (e.g. Trentham et al., 2001). These galaxies can provide a low-redshift probe of early galaxy formation, cosmic reionization and the nature of the dark matter. Comparisons of low surface brightness (LSB) dwarfs with theoretical predictions have concentrated on the satellites of Milky Way-like galaxies. With deeper all-sky surveys, improved redshift-independent distance estimates and higher resolution cosmological volume simulations, it will soon be possible to study this important population in the field, over a much larger volume, and to make more robust statistical comparisons with models. Observations of so-called ultra-diffuse galaxies (UDGs; e.g. van Dokkum et al., 2015; Koda et al., 2015; Torrealba et al., 2016; Torrealba et al., 2019) have revived interest in the properties and origins of the LSB dwarf population. It is not clear that the size and surface brightness criteria used to define UDGs1 pick out a distinct population of objects that forms in a different way to other LSB dwarfs (e.g. Van Nest et al., 2022). Many formation scenarios for LSB dwarfs have been explored with the motivation of explaining the UDG. These range from the relatively unremarkable high angular momentum tail of standard \(\Lambda\)CDM galaxy formation (Amorisco and Loeb, 2016), to astrophysical processes that may only be relevant at very small scales (Jiang et al., 2019). 
Since many diffuse galaxies have been discovered in clusters (e.g. van der Burg et al., 2017), scenarios involving tidal interactions and other effects of dense environments have been considered in most detail (e.g. Jones et al., 2021), although extremely low surface brightness systems also exist in the field (Sales et al., 2020; Barbosa et al., 2020). The relative contributions of these different processes to the observed LSB dwarf population remain to be determined. In this paper we describe a straightforward mechanism that could give rise to a fraction of the LSB field population. In brief, we suggest some diffuse galaxies may result from the tidal disruption of one or more satellites in a dark matter halo that does not form a central galaxy. We provide a fuller explanation of this idea in the next section. We call systems formed in this way 'ghost galaxies'; effectively they are galaxies in which all the stellar mass is associated with an accreted stellar halo component2. Compared to typical galaxies of similar stellar mass, such galaxies would naturally be more extended and reside in more massive halos (subject to caveats that we explore below). As we demonstrate, predictions for the luminosity function and halo mass distribution of such galaxies depend strongly on the degree of heating of the intergalactic medium by the cosmic UV background. Stronger heating, somewhat counter-intuitively, results in more (but fainter) systems of this type, occupying a wider range of halo mass. Footnote 2: The name was inspired by Lynden-Bell & Lynden-Bell (1995). To our knowledge, this scenario has not yet been considered explicitly in the literature, although it is a corollary of other well-known aspects of dwarf galaxy formation. It is effectively inevitable in the \(\Lambda\)CDM model. It does not involve any new theoretical concepts, beyond those already known to be essential to the current understanding of galaxy formation in low-mass dark matter halos. Given the emphasis on UDGs in the recent literature, we feel it is important to emphasise the following two points. First, we do not argue that the ghost galaxy scenario is responsible for all LSB dwarf galaxies. Indeed, we demonstrate that this cannot be the case. Second, we do not argue that it produces all (or even any) of the known objects classified as UDGs. Instead, our aim is only to estimate how common these ghostly galaxies are, and whether they are likely to be observable, under some simple but plausible assumptions about their likely stellar masses. We focus on the field; in principle ghost galaxies could also occur in clusters, although our results suggest (perhaps surprisingly) that they are less likely to be found in high density regions. We proceed as follows. In section 2 we elaborate on the concept of ghostly galaxies. In section 3 we describe our merger-tree based methods to quantify the probability of ghost galaxy formation in halos of different masses, present the resulting predictions, and compare the two methods. We summarise our findings in section 4. Throughout, for consistency with the COCO \(N\)-body simulation, we use the WMAP7 cosmological parameters with \(h_{0}=0.704,\Omega_{0}=0.272,\Omega_{b}=0.04455,n_{s}=0.967\) and \(\sigma_{8}=0.81\).
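As a convenience for readers who wish to experiment with these numbers, the short Python sketch below sets up the same background cosmology with astropy. It is purely illustrative and is not part of the analysis pipeline described in this paper; \(n_{s}\) and \(\sigma_{8}\) enter only through the input power spectrum, so they are carried as plain constants here.

```python
# Minimal sketch: the WMAP7 background cosmology quoted above, set up with astropy.
# Illustration only; the analysis in the paper uses the COCO simulation and the
# Parkinson et al. (2007) EPS code, not this snippet.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70.4 * u.km / u.s / u.Mpc,  # h0 = 0.704
                      Om0=0.272,                     # total matter density
                      Ob0=0.04455)                   # baryon density
SIGMA_8, N_S = 0.81, 0.967  # power-spectrum parameters, not used by the background

if __name__ == "__main__":
    print(cosmo.H(10.0))               # expansion rate at the fiducial z_reion = 10
    print(cosmo.age(10.0).to(u.Myr))   # cosmic age at z = 10
    print(cosmo.Ob0 / cosmo.Om0)       # universal baryon fraction, ~0.16
```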
## 2 The origin of ghostly galaxies ### Limits on low-mass galaxy formation A cornerstone of galaxy formation theory in CDM cosmogonies is that galaxies cannot form through dissipative collapse in dark matter halos with present-day virial mass much less than \(\sim 10^{10}\,\mathrm{M}_{\odot}\). This idea underpins the concept of a 'ghost galaxy', which we introduce in the next subsection. We first give a brief recap of the two fundamental processes that determine the limiting halo mass. For a more complete review, we refer the reader to Benitez-Llambay & Frenk (2020). Star formation requires a reservoir of dense, cold (neutral) baryons to accumulate in dark matter halos. This accumulation is expected to follow from the radiative cooling and subsequent inflow of a quasi-hydrostatic atmosphere in virial equilibrium with the dark matter (White & Rees, 1978; White & Frenk, 1991). The ability of a gravitational potential to confine gas from the intergalactic medium (IGM) with an ambient temperature \(T_{\mathrm{IGM}}\) can be expressed as a threshold in virial temperature, \(T_{\mathrm{vir}}\); halos with \(T_{\mathrm{vir}}<T_{\mathrm{IGM}}\) cannot accumulate a virialized overdensity of baryons (baryons are said to 'evaporate' from such halos). In the early Universe, where the ambient IGM is neutral, \(T_{\mathrm{IGM}}\) is low. However, in halos that can accumulate baryons, radiative cooling can only proceed efficiently if the equilibrium temperature of those baryons (i.e. the virial temperature of the halo, \(T_{\mathrm{vir}}\)) is high enough to maintain them in collisional ionisation equilibrium at their equilibrium density (White & Rees, 1978). If this is not the case, the gas will remain stable against radiative cooling and condensation. A critical virial temperature, \(T_{\mathrm{vir,c}}\sim 10^{4}\,\mathrm{K}\), can be associated with this limit, corresponding to the temperature at which atomic hydrogen is ionized in collisional equilibrium. This 'atomic hydrogen cooling floor' puts a stringent limit on the population of dark matter halos that can support galaxy formation in the early Universe, when the intergalactic medium (IGM) is mostly neutral. Although simplistic3, \(T_{\rm vir,c}\sim 10^{4}\,\)K serves well enough to separate halos where gas can cool from those where it cannot, in the absence of molecular hydrogen or a photoionizing background. Footnote 3: This treatment neglects other cooling and heating processes that are relevant in the early universe, in particular, molecular cooling and interactions with CMB photons. These processes may be important in the formation of the first stars and galaxies; see for example Benson (2010). Following cosmic reionization, photo-heating by the UV background increases the IGM temperature, making it it harder for low-mass halos to accumulate baryons. The UV background also acts to suppress radiative cooling in gas confined by halos. The combination of these two effects can be modelled as a rapid increase in the characteristic \(T_{\rm vir,c}\) for galaxy formation after reionization (Ikeuchi, 1986; Rees, 1986; Couchman & Rees, 1986; Kauffmann et al., 1993; Thoul & Weinberg, 1996; Gnedin, 2000; Benson et al., 2002b, a; Hoeft et al., 2006; Okamoto et al., 2008). In detail, the effects of reionization on the accumulation and condensation of baryons are complex. 
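To put approximate numbers on these temperature thresholds, the sketch below converts between virial temperature and halo circular velocity via \(T_{\rm vir}=\mu m_{\rm p}V^{2}/(2k_{\rm B})\). This is an illustration only; the mean molecular weight \(\mu\simeq 0.59\) (appropriate for ionized primordial gas) is an assumption made here, chosen because it reproduces the velocity-temperature pairs quoted later in the text.

```python
# Illustrative sketch (not code from the paper): virial temperature <-> circular velocity,
# T_vir = mu * m_p * V^2 / (2 k_B). The mean molecular weight mu ~ 0.59 is an assumption.
import numpy as np
import astropy.units as u
import astropy.constants as const

MU = 0.59  # assumed mean molecular weight (ionized primordial gas)

def t_vir(v_circ):
    """Virial temperature of a halo with circular velocity v_circ."""
    return (MU * const.m_p * v_circ**2 / (2.0 * const.k_B)).to(u.K)

def v_circ(temperature):
    """Circular velocity corresponding to a given virial temperature."""
    return np.sqrt(2.0 * const.k_B * temperature / (MU * const.m_p)).to(u.km / u.s)

if __name__ == "__main__":
    print(v_circ(1.0e4 * u.K))        # ~17 km/s: the atomic hydrogen cooling floor
    print(t_vir(30.0 * u.km / u.s))   # ~3.2e4 K: the post-reionization threshold used later
```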
The strength of the UV background and its interaction with the IGM are redshift-dependent, and also density-dependent; both galaxy formation and reionization proceed more rapidly in regions of higher density (e.g. Font et al., 2011). Predictions for this dependence are entangled with those for the rate of formation of the galaxies and quasars that give rise to the UV background. A self-consistent treatment requires radiative transfer and the resolution of sources of UV emission (and the surrounding interstellar medium, ISM) in low-mass halos, both of which are extremely computationally expensive in cosmological volume simulations. Hydrodynamical models of galaxy formation typically approximate reionization as an instantaneous and universal heating of the IGM. In semi-analytic models, the effect of reionization on the confinement and condensation of gas can be parameterized as a threshold in halo virial velocity, \(V_{\rm cut}\) (equivalent to \(T_{\rm vir,c}\)), below which no cooling takes place after \(z_{\rm reion}\)(e.g. Benson et al., 2003; Bower et al., 2006). This is the framework we use in this paper. Both \(V_{\rm cut}\) and \(z_{\rm reion}\) are usually taken to be universal parameters, motivated by the results of hydrodynamical simulations. Font et al. (2011) present a self-consistent semi-analytic treatment of the evolution of the UV background, demonstrating that the effect of local reionization (for Milky Way-like dark matter halos) can be well-approximated by adjusting \(V_{\rm cut}\) and \(z_{\rm reion}\). The effects described above imply the existance of a distinct population of 'fossil' dwarf galaxies, associated with halos that exceed the cooling threshold before reionization but not afterwards (Bullock et al., 2000; Benson et al., 2002a; Bovill & Ricotti, 2011; Font et al., 2011; Bose et al., 2018). These fossil galaxies have been identified with the ultra-faint satellites of the Milky Way (see e.g. McConnachie, 2012; Simon, 2019). Simulations including realistic treatments of reionization predict bimodal satellite luminosity functions for Milky Way analogues, with one peak (very low luminosities but large numbers) corresponding to the fossils, and a second peak (brighter but fewer in number) corresponding to halos that exceed \(T_{\rm vir,c}\) after reionization (Font et al., 2011). A fraction of dwarf galaxy host halos with particularly low late-time growth rates may pass below the cooling threshold for the first time at low redshift, suppressing their recent star formation (e.g. Pereira-Wilson et al., 2023). The properties and abundance of these populations, particularly the ultra-faint Milky Way satellites, currently provide the strongest observation constraint on the effective value of \(V_{\rm cut}\), as well as a somewhat indirect constraint on \(z_{\rm reion}\), complementary to measurements of the temperature and ionization of the IGM from the CMB and quasar absorption spectra (e.g. Bose et al., 2018). Predictions for the fossil/ultra-faint population have received considerable attention in the literature because they are also strongly affected by plausible variations in the dark matter power spectrum on scales that are otherwise unconstrained (e.g. Sawala et al., 2015, and references therein). 
### The ghostly galaxy scenario Benitez-Llambay & Frenk (2020) describe the population of halos massive enough to retain their complement of baryons after reionization, but not massive enough to cool those baryons to the densities required for star formation at any epoch. Such halos are star-free, but potentially gas-rich. Using a similar approach, we consider a different subset of halo merger trees, in which the _main branch_ (defined by the chain of most-massive halo progenitors traced back from the halo at \(z=0\)) remains below \(V_{\rm cut}\) at all epochs, and hence does not host any in situ star formation, but one or more _minor branches_ do exceed either the hydrogen cooling threshold (before reionization) or \(V_{\rm cut}\) (after reionization), and hence _can_ form stars. By construction, the stars formed in the minor branch are later accreted onto the star-free main branch. Fig. 1 is a cartoon of this scenario. The essential point concerns the fate of the stars formed in the minor branches, after they merge with the main branch. If a galaxy forms in the main branch, stars accreted from minor branches would constitute its 'accreted stellar halo' at \(z=0\). If a galaxy does not form _in situ_ in the main branch, those accreted 'halo' stars will comprise the entirety of the stellar mass of the galaxy. Without a central concentration of in situ stars, those halos would not be distinguished as a separate structural component. Such objects would, in principle, have the characteristic features associated with stellar halos: high velocity dispersion, low concentration and low surface brightness (Amorisco, 2017). As we will confirm below, ghostly galaxies are plausible in \(\Lambda\)CDM, but expectations for their cosmic abundance, masses and sizes are not at all obvious. Expectations for these properties will, almost by definition, be closely related to those for the stellar halos of regular dwarf galaxies, which have been studied in recent work by Kado-Fong et al. (2020), Deason et al. (2021) and Ricotti et al. (2022, who also use the term 'ghostly' to refer to such halos). We consider this relationship further in section 3.5. ## 3 Frequency of Ghostly Galaxies To make quantitative predictions for the cosmic abundance of ghost galaxies, we apply two criteria (described below) to large numbers of halo merger trees. We obtain samples of merger trees using two different methods: a Monte-Carlo approach, based on the extended Press-Schechter (EPS) formalism (Lacey and Cole, 1993) as implemented in the code of Parkinson et al. (2007)4, and a high-resolution \(N\)-body simulation, Copernicus Complexio (COCO, Hellwing et al., 2016), with a dark matter particle mass of \(1.135\times 10^{5}\,h^{-1}\,\mathrm{M}_{\odot}\) and Plummer-equivalent gravitational softening scale \(230\,h^{-1}\,\mathrm{pc}\). Footnote 4: [https://astro.dur.ac.uk/](https://astro.dur.ac.uk/)\(\sim\)cole/merger_trees/ \(N\)-body simulations explicitly model the gravitational dynamics of structure formation and therefore provide more accurate (and detailed) predictions for halo mass assembly histories. Among other factors, they account for environmental effects (for example on halo growth rates and structure, e.g. Hellwing et al., 2021) and the survival of self-bound substructures within dark matter halos. However, the computational efficiency of the Press-Schechter approach allows for a much larger sample of trees to be constructed. 
This is particularly relevant in our case, because we are concerned with the smallest star-forming halos, which require a high resolution (and hence necessarily small-volume) \(N\)-body simulation. Since we are studying a rare subset of these halos, restricting our analysis to the simulation alone would be a significant statistical limitation. The EPS code provided by Parkinson et al. (2007) uses a Monte Carlo algorithm to generate merger trees consistent with a given initial matter power spectrum and cosmological parameters. We use the tabulated power spectrum from which the COCO initial conditions were generated, and the same cosmological parameters as COCO. The Parkinson et al. (2007) algorithm includes additional tuning parameters to better match the statistics of EPS trees to those of trees obtained from \(N\)-body simulations. We set the values of these parameters following Table 2 of Benson et al. (2019): \(G=0.635,\gamma_{1}=0.176\) and \(\gamma_{2}=0.041\). Figure 1: A cartoon of the ‘ghost galaxy’ formation scenario. Time runs down the page. Dark matter halos above and below the hydrogen cooling threshold are indicated by solid and dotted circles respectively, with radii indicating their mass. Two branches in the merger tree of a present-day halo merge at \(z_{\mathrm{merge}}\). At this time, the less massive (minor) progenitor contains stars, whereas the more massive (main) progenitor does not. As sketched in the inset graph, this is possible if the minor branch grows faster at higher redshift, briefly exceeding the hydrogen cooling threshold up to a time \(z_{\mathrm{above}}\) (thick lines). The main branch is always below this threshold. The (possible) result at the present day is a dark matter halo that is under-luminous for its mass and contains only a low surface brightness, stellar halo-like component, comprising the stellar debris of the accreted minor branch. The merger trees from the COCO simulation were constructed using the group-finding and 'DHalo' linking procedures described by Jiang et al. (2014). In detail, the algorithms used to identify bound structures in an \(N\)-body simulation, and link them between snapshots, are not straightforward. They require work-arounds for the effects of limited numerical resolution, which may differ between group-finding algorithms (for example, the difficulty of identifying subhalos as they pass through the centres of their hosts can create artificial 'breaks' in trees). They may also involve somewhat arbitrary choices (for example, regarding the treatment of subhalos that escape their hosts). The DHalo procedure is designed to be robust against many of these issues. Nevertheless, there may be edge-cases that have not been accounted for, which may be more apparent towards the resolution limit and in higher resolution simulations such as COCO. This provides more motivation for our comparison with Press-Schechter trees, which, although more limited in some respects, have the advantage of a clear and consistent operational definition. ### Identifying ghosts A merger tree comprises a set of _nodes_ (representing virialized dark matter halos) identified at a series of discrete timesteps (_snapshots_) ranging from \(z=0\) to \(z\sim 20\). Nodes are linked by pointers to their _descendant_ (one-to-one, forwards in time) and _progenitors_ (one-to-many, backwards in time). We identify trees associated with ghost galaxies by traversing these pointers and applying the two criteria described below to all the nodes in the tree. 
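The sketch below illustrates one possible in-memory representation of such a tree and the main-branch walk just described. The class and attribute names (TreeNode, m200, redshift, progenitors, descendant) are our own illustrative choices and do not correspond to the actual data structures used by the DHalo or EPS codes.

```python
# Sketch of a merger-tree node and the 'main branch' walk described in the text.
# Names and unit conventions are illustrative assumptions, not the real tree formats.
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TreeNode:
    m200: float                  # virial mass M_200 [Msun]
    redshift: float              # snapshot redshift
    progenitors: List["TreeNode"] = field(default_factory=list)  # earlier nodes
    descendant: Optional["TreeNode"] = None                      # later node

def main_branch(root: TreeNode):
    """Walk the chain of most-massive progenitors back from the z=0 root node."""
    node = root
    while node is not None:
        yield node
        node = max(node.progenitors, key=lambda p: p.m200, default=None)

def minor_branch_nodes(root: TreeNode):
    """All nodes that are NOT on the main branch, i.e. nodes of minor branches."""
    on_main = {id(n) for n in main_branch(root)}
    stack, seen = [root], set()
    while stack:
        node = stack.pop()
        if id(node) in seen:
            continue
        seen.add(id(node))
        if id(node) not in on_main:
            yield node
        stack.extend(node.progenitors)
```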
#### 3.1.1 Main branch criterion Each tree has a single _root node_, which corresponds to an isolated dark matter halo at \(z=0\). The main branch of a tree is defined by the chain of most massive progenitors traced backwards in time from the root node. Here 'mass' refers to the total virial mass of the system (baryons and dark matter), taken to be equivalent to \(M_{200}\), the mass enclosed by a density contour at 200 times the critical density for closure. The main branch, in principle, corresponds to the central potential that dominates the system at \(z=0\). A necessary condition for the formation of a ghost galaxy is that no stars should form in a cooling flow that would produce a compact stellar system deeply embedded in the present-day potential (i.e. there should be no 'in situ' component). Our _first criterion_ is therefore that no main branch node should exceed the following virial temperature thresholds: \[T_{200,\mathrm{cut}}\sim 10,000\,\mathrm{K}\quad(z>z_{\mathrm{reion}}) \tag{1}\] \[T_{200,\mathrm{cut}}\sim 32,000\,\mathrm{K}\quad(z<z_{\mathrm{reion}}) \tag{2}\] where throughout we take \(z_{\mathrm{reion}}=10\) as the fiducial choice of the redshift of reionization. As described above, and in more detail by Benitez-Llambay and Frenk (2020), these thresholds correspond to the limits on cooling imposed (before reionization) by the temperature at which atomic hydrogen in equilibrium with dark matter halos can be ionized, and (after reionization) by the higher temperature of the IGM due to the cosmic UV background, which reduces the cooling efficiency of virialized gas and prevents baryons accreting onto low-mass halos. The exact values of the thresholds depend on a number of assumptions about the thermal physics of the IGM and the virialized gas, and about the strength and effects of the ionizing background. We implement these thresholds in the same way as the Galform semi-analytic model (Cole et al., 2000; Bower et al., 2006; Font et al., 2011; Lacey et al., 2016), parameterised by \(z_{\mathrm{reion}}\) and a threshold circular velocity, \(V_{\mathrm{cut}}\), which encapsulates the effects of IGM heating (Okamoto et al., 2008). At \(z<z_{\mathrm{reion}}\), no cooling, and hence no star formation, can occur in halos with virial velocity \(V_{200}=(GM_{200}/R_{200})^{1/2}<V_{\mathrm{cut}}\). Following Font et al. (2011), we assume a fiducial value of \(V_{\mathrm{cut}}=30\,\mathrm{km}\,\mathrm{s}^{-1}\), which corresponds to \(T_{200,\mathrm{cut}}\simeq 31,650\,\mathrm{K}\). At \(z>z_{\mathrm{reion}}\), we assume a fiducial cooling floor of exactly \(10,000\,\mathrm{K}\). In later sections, we explore variations of these values. #### 3.1.2 Minor branch criterion Where a main branch node has more than one progenitor, the less massive progenitors correspond to the endpoints of minor branches. Each minor branch merging onto the main branch comprises an independent hierarchy of less massive progenitor branches. In the Press-Schechter formalism, minor branches are interpreted as losing their separate identity as soon as they merge into a more massive branch. Our _second criterion_ for a tree to be associated with a ghost galaxy is therefore that star formation can occur in _at least one_ of its minor branches, i.e. that at least one node in any minor branch exceeds the cooling threshold (Equation 1 or 2, depending on the redshift of the node).
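The two criteria can be written down compactly. The sketch below is an illustration (not the authors' code): it reuses the TreeNode layout and branch-walking helpers sketched above, converts \(V_{\rm cut}\) or the pre-reionization temperature floor into a limiting \(M_{200}\) at each redshift via \(M_{200}=V_{200}^{3}/(10\,G\,H(z))\) (which follows from the 200 times critical overdensity definition), and assumes \(\mu\simeq 0.59\) for the temperature-velocity conversion.

```python
# Illustrative sketch of the two ghost-galaxy criteria (not the authors' pipeline).
# Reuses the TreeNode, main_branch and minor_branch_nodes helpers sketched above;
# V_CUT, Z_REION and T_FLOOR follow the fiducial values quoted in the text,
# while mu = 0.59 and the Msun unit convention are assumptions.
import numpy as np
import astropy.units as u
import astropy.constants as const
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.4 * u.km / u.s / u.Mpc, Om0=0.272, Ob0=0.04455)
Z_REION = 10.0
V_CUT = 30.0 * u.km / u.s      # post-reionization threshold (Eq. 2)
T_FLOOR = 1.0e4 * u.K          # atomic hydrogen cooling floor before reionization (Eq. 1)
MU = 0.59                      # assumed mean molecular weight

def v_from_t(temperature):
    return np.sqrt(2.0 * const.k_B * temperature / (MU * const.m_p)).to(u.km / u.s)

def m_crit(z):
    """Threshold M_200 above which a halo at redshift z can cool and form stars."""
    v = V_CUT if z < Z_REION else v_from_t(T_FLOOR)
    # M_200 = V^3 / (10 G H(z)) for an overdensity of 200 x critical.
    return (v**3 / (10.0 * const.G * cosmo.H(z))).to(u.Msun)

def can_form_stars(node):
    return node.m200 * u.Msun > m_crit(node.redshift)

def is_ghost_tree(root):
    """Criterion 1: no main-branch node ever crosses the threshold.
       Criterion 2: at least one minor-branch node does."""
    if any(can_form_stars(n) for n in main_branch(root)):
        return False
    return any(can_form_stars(n) for n in minor_branch_nodes(root))
```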
We do not require that this occurs before reionization, that it occurs in only one minor branch (there may be multiple ghost-galaxy progenitors in a single tree), or that it occurs only in minor branches merging directly onto the main branch, rather than at deeper levels of the hierarchy. In this simple formulation, all the stellar mass that comprises the ghost galaxy at \(z=0\) forms in the minor branch when they exceed the cooling threshold. Later, when those branches merge onto the main branch of their tree, those stars are distributed on weakly-bound orbits in the main branch potential. Note that it is not a contradiction for a minor branch node to be more massive than the main branch node at any snapshot, except the snapshot immediately before the two branches merge. In such cases, the main branch progenitor (by construction) must subsequently grow more rapidly, such that it is more massive when the two branches merge. Our two criteria introduce the following characteristic times (redshifts) and masses, which we refer to throughout the paper: * \(z_{\rm above}\), the _lowest_ redshift (latest time) at which a minor branch exceeds the cooling threshold; * \(m_{\rm above}\), the mass of the minor branch at \(z_{\rm above}\); * \(z_{\rm merge}\), the redshift at which the minor branch is last identified in the merger tree; * \(m_{\rm merge}\), the mass of the minor branch at \(z_{\rm merge}\); * \(M_{\rm merge}\), the mass of the main branch at \(z_{\rm merge}\). In practice, the mass associated with minor branches may survive as a self-bound subhalo orbiting within the virial radius of the main branch for some time after \(z_{\rm merge}\). This is ignored in the Press-Schechter formalism but modelled explicitly in the \(N\)-body case. The definition of 'DHalos' in the tree-building procedure of Jiang et al. (2014) attempts to match the Press-Schechter definition of a 'halo'. We leave this complication aside for now, and consider only 'halos' in the Press-Schechter sense. Later we explore merger trees based on subhalos, rather than DHalos. Operationally, halo mass is defined within an overdensity contour 200 times the critical density for closure, which we treat as a proxy for virial mass. #### 3.1.3 Main and minor branch growth histories Fig. 2 shows an example of the growth history of the main branch and the minor branch in the \(N\)-body merger tree of a halo that we identify with a potential ghostly galaxy. The solid black line shows the virial mass equivalent to \(T_{\rm crit}\), the threshold virial temperature for cooling, which reaches \(M_{\rm crit}\approx 10^{10}\,h^{-1}{\rm M}_{\odot}\) at \(z=0\). The sharp transition in this curve at \(z=10\) corresponds to our fiducial treatment of reionization, as described above. The main branch of the example Figure 3: The shaded grey region (same in both panels) shows the 10 – 90\({}^{\rm th}\) percentile range of mass growth histories for the main branches of all halos in the COCO simulation with \(M(z=0)\simeq 10^{9.5}\,{\rm M}_{\odot}\). Shaded orange and blue regions, repeated from Fig. 2, show the corresponding range for main (left) and minor (right) branches in the subset of these halos that host ghost galaxies. Figure 2: Mass growth histories of relevant merger tree branches from the COCO simulation with \(M(z=0)\simeq 10^{9.5}\,{\rm M}_{\odot}\). Red and blue solid lines show the evolution of the main and minor branches, respectively, in a randomly chosen tree of this mass that meets our ‘ghost galaxy’ criteria. 
The solid black line shows the halo mass corresponding to the cooling threshold described in the text. The sharp increase in the threshold mass at \(z=10\) corresponds to our fiducial model of reionization (\(V_{\rm cut}=30\,{\rm km\,s^{-1}}\)). We associated this tree with a ghost galaxy at \(z=0\), because the minor branch crosses the cooling threshold (up to \(z_{\rm above}=10\), \(m_{\rm above}\sim 10^{8}\,h^{-1}{\rm M}_{\odot}\)) while the main branch does not. The two branches merge at \(z_{\rm merge}\approx 2.4\) (dashed line), at which point their mass ratio is \(m_{\rm merge}/M_{\rm merge}\sim 10\) per cent. Shaded areas show the 10\({}^{\rm th}\) to 90\({}^{\rm th}\) percentile range of mass histories for main branches (red) and minor branches (blue) in all such trees, for this choice of final mass. tree (red line) corresponds to a system with present-day virial mass \(M_{200}(z=0)\simeq 10^{9.5}\,h^{-1}{\rm M}_{\odot}\). Its mass5 is less than \(M_{\rm crit}\), not only at \(z=0\), but at all redshifts, in line with our first criterion. Conversely, one of the minor branches of this tree (blue line) grows more quickly than the main branch progenitor at high redshift and briefly exceeds the cooling threshold before reionization (\(z_{\rm above}\simeq z_{\rm reion}=10\)). The two branches merge at \(z_{\rm merge}\approx 2.4\) (dashed line). The mass ratio of the two branches at \(z_{\rm merge}\) is approximately \(10:1\). Footnote 5: Mass growth histories in \(N\)-body simulations may not be strictly monotonic, as in this case. This is one of the many issues with the implementation of \(N\)-body tree-building algorithms we refer to in Section 2. For comparison, the shaded regions in Fig. 2 show the envelope of histories for main branches and minor branches in all COCO \(N\)-body trees with \(M_{200}(z=0)\simeq 10^{9.5}\,h^{-1}{\rm M}_{\odot}\) that meet both our ghost galaxy criteria. The random example we have chosen exaggerates the difference in the growth rates of the two branches at \(z<10\), but is otherwise typical. Note that the main branch distribution narrows towards \(z=0\) by construction, whereas the minor branch distribution broadens as the number of surviving minor branches decreases. It is clear that, for this choice of final mass, effectively all the minor branches exceed the threshold mass at \(z>10\), and not at lower redshift. Note also, however, that \(M_{200}(z=0)\) in this example is lower than the maximum of \(\simeq 10^{10}\,M_{\odot}\) set by our main branch criterion. The left panel of Fig. 3 contrasts the formation histories of ghost galaxy main branches (orange) and those of other halos with the same present-day virial mass, in this case \(M_{200}(z=0)\simeq 10^{9.5}\,{\rm M}_{\odot}\). The grey envelope includes main branch histories that cross the threshold, before reionization and at later times up to \(z\sim 2\) (these are a very small population for this choice of present-day mass), as well as histories that never cross the threshold. The orange envelope of the ghost main branches shows clearly that they are drawn from the latter population. The blue region in the right-hand panel shows the corresponding growth rates of the star-forming minor branches of the ghosts. Essentially by construction, these minor branches are more massive than the ghost main branches at high redshift, but have substantially slower growth rates and hence lower masses at low redshift6. 
Footnote 6: We note that these differences in assembly history imply that, at higher present day halo masses, the halos of ghosts may be increasingly extreme outliers in the concentration-mass relation. ### Ghost galaxy fractions Fig. 4 shows how often the above criteria are satisfied as function of the virial mass of the main branch at \(z=0\). We determine the fraction \(f_{\rm ghost}=N_{\rm ghosts}/N_{\rm total}\) in mass bins of width \(10^{9}\,h^{-1}\,{\rm M}_{\odot}\). For the EPS trees, this fraction is computed using \(N_{\rm total}=100,000\) trees of the same final mass. For COCO, final masses are distributed across the mass bins according to the halo mass function in the simulation volume (with 17,642 trees in the range \(1\times 10^{9}<M_{200}<2\times 10^{9}\,h^{-1}\,{\rm M}_{\odot}\) and 482 trees in the range \(9\times 10^{9}<M_{200}<1\times 10^{10}\,h^{-1}\,{\rm M}_{\odot}\)). Cases in Figure 4: The fraction of merger trees for dark matter halos of a given mass that host ghost galaxies at \(z=0\), for \({\rm V}_{\rm cut}=30\,{\rm km\,s^{-1}}\). The result based on EPS trees is shown in green and the result based on trees from the COCO simulation in blue. The probability of finding ghosts in the field peaks at \(\sim 5\) per cent around a halo mass \(\approx 4\times 10^{9}h^{-1}\,{\rm M}_{\odot}\). Figure 5: The effects of changing \({\rm V}_{\rm cut}\) from \(30\,{\rm km\,s^{-1}}\) (orange line, repeating Fig. 4) to \(40\,{\rm km\,s^{-1}}\) (blue line). A higher cooling threshold increases the peak fraction of ghost galaxies (to e.g. \(\sim 20\) per cent of all halos at \(\approx 8\times 10^{9}h^{-1}\,{\rm M}_{\odot}\)) and broadens the mass range of hosts. which the ghost galaxy at \(z=0\) has multiple minor-branch progenitors are counted several times in Fig. 4. We discuss the treatment of multiple ghost progenitors further below in the context of predictions for the stellar mass function. Fig. 4 shows that, with our fiducial treatment of reionization, ghosts are most likely in halos of present-day mass \(4-5\times 10^{9}\,h^{-1}M_{\odot}\), in which range \(f_{\rm ghost}\simeq 5\) per cent. The peak in the distribution reflects the interaction of our two criteria. The fraction of main branches meeting the first criterion falls with increasing final mass, whereas the fraction of minor branches meeting the second criterion rises with increasing final mass. The results from the much smaller sample of trees in COCO agree well with the EPS predictions, perhaps showing a small offset towards lower mass. We conclude that EPS trees provide a sufficiently robust description of the \(N\)-body results for our purposes in this paper7; since the EPS method provides much larger samples, we refer mainly to the EPS results in the following discussion. We will discuss dynamical insights from the \(N\)-body trees in Sections 3.4 and 3.5. Footnote 7: The EPS approach requires tuning to reproduce the mass assembly histories of N-body trees accurately. We examined an alternative version of our EPS code with additional parameters to improve the overall match to the conditional mass functions of low mass progenitors in COCO at high redshift. However, although this change improves the overall correspondence between the EPS and N-body trees, we found it significantly under-predicts the (already small) number of progenitors with the highest mass ratios at high redshift in COCO. This has a particularly strong impact on our predictions for ghost galaxies in COCO. 
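The bookkeeping behind \(f_{\rm ghost}\) reduces to a per-bin tally. The following sketch is our own illustration, with hypothetical input arrays standing in for the tree catalogues described above.

```python
# Sketch of the f_ghost = N_ghosts / N_total tally behind Fig. 4 (illustration only).
# `m200_final` and `ghost_flag` are hypothetical arrays, one entry per merger tree,
# giving the z=0 virial mass and the outcome of a check like is_ghost_tree().
import numpy as np

def ghost_fraction(m200_final, ghost_flag, bin_edges):
    """Fraction of trees flagged as ghosts in each bin of final halo mass."""
    m200_final = np.asarray(m200_final)
    ghost_flag = np.asarray(ghost_flag, dtype=bool)
    frac = np.full(len(bin_edges) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
        in_bin = (m200_final >= lo) & (m200_final < hi)
        if in_bin.sum() > 0:
            frac[i] = ghost_flag[in_bin].mean()
    return frac

if __name__ == "__main__":
    edges = np.arange(1.0e9, 1.1e10, 1.0e9)            # bins of width 1e9, as in the text
    rng = np.random.default_rng(0)
    masses = rng.uniform(1.0e9, 1.0e10, size=1000)      # toy stand-in for tree final masses
    flags = rng.random(1000) < 0.05                      # toy stand-in for ghost outcomes
    print(ghost_fraction(masses, flags, edges))
```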
#### 3.2.1 Changing \(V_{\rm cut}\) We now consider how variations in the parameters in our simplified model of the cooling threshold (\(V_{\rm cut}\)) and its redshift evolution (\(z_{\rm cut}\)) affect predictions for the fraction of ghost halos at a given \(z=0\) virial mass. Fig. 5 shows equivalent results for an alternative model of reionization with \(V_{\rm cut}=40\,{\rm km\,s^{-1}}\), in which baryon accretion and cooling are suppressed in more massive halos after reionization. A higher \(V_{\rm cut}\) may correspond either to a more intense cosmic UV background overall, or to the more local enhancement of the UV background in dense regions (in which case the effective value of \(z_{\rm reion}\) would also be higher, but we ignore that here for simplicity). Font et al. (2011) found \(V_{\rm cut}=30\,{\rm km\,s^{-1}}\) to be an appropriate value for the simplified global reionization model we use here. They determined this by comparing the observed satellite luminosity function of the Milky Way to the predictions of a more detailed semi-analytic model of heating by the local and global cosmic UV background. This calibration may be sensitive to other aspects of the treatment of galaxy formation in the model, such as the strength of feedback and the escape fraction of ionizing photons, and also to uncertainties in the Milky Way's satellite LF and total mass. There is a striking difference between our predictions for \(V_{\rm cut}=30\,{\rm km\,s^{-1}}\) (repeated for reference in Fig. 5 as an orange line) and \(V_{\rm cut}=40\,{\rm km\,s^{-1}}\). The fraction of trees resulting in ghost galaxies for \(V_{\rm cut}=40\,{\rm km\,s^{-1}}\) peaks at \(f_{\rm ghost}\simeq 20\) per cent, around \(\simeq 9\times 10^{8}\,h^{-1}M_{\odot}\), and has a much broader distribution, with a tail to \(\sim 3\times 10^{10}\,h^{-1}{\rm M_{\odot}}\). Stronger IGM heating (i.e. a higher threshold mass at \(z<z_{\rm reion}\)) makes it less likely that main branches of a given final mass will meet our first criterion8, but does not affect the probability of minor branches meeting our second criterion at \(z>z_{\rm reion}\). Of course, stronger heating greatly reduces the fraction of minor branches that exceed the threshold at \(z<z_{\rm reion}\). Consequently, as we discuss in the next section, stronger reionization creates more ghosts and associates them with more massive halos (and hence, potentially, more extreme surface brightnesses), but it also limits their maximum luminosity. Figure 6: Histogram of minor branch halo masses \(m_{\rm above}\), measured at \(z_{\rm above}\), the lowest redshift (latest time) at which the branch exceeded the cooling threshold mass (for \(V_{\rm cut}=30\,{\rm km\,s^{-1}}\)). The black dashed line shows the threshold mass for star formation at \(z_{\rm reion}=10\). Orange and blue lines correspond to minor branches which cross the threshold before and after \(z_{\rm reion}\), respectively. The bimodality arises because there is a population of halos that exceeds the threshold before reionization at \(z_{\rm reion}\), but not afterwards. The sharp lower mass limit corresponds to the least massive halos that exceed the threshold at \(z_{\rm reion}\). The gap is created by the large instantaneous increase in the threshold at \(z_{\rm reion}\) (see Fig. 2) as explained in the text. Fig.
6 shows (for our fiducial \(V_{\rm cut}=30\,{\rm km\,s^{-1}}\) model) the histogram of \(m_{\rm above}\), the halo masses of the star-forming minor branches in the ghost galaxy trees at \(z_{\rm above}\) (when they were last above the cooling threshold mass). In the next section, we will use a simple function of this mass to assign stellar masses to the ghosts. We see two peaks in this halo mass distribution, corresponding to branches in which star formation is truncated by reionization (blue) and branches which exceed the cooling threshold at lower redshift (orange). The gap corresponds to halos that find themselves crossing below the instantaneously increased threshold at \(z_{\rm reion}\), but grow above it again at a lower redshift (\(z\sim 7\) for the most massive). For these branches, as the mass of the branch at \(z_{\rm reion}\) approaches the threshold mass at that time, it becomes increasingly unlikely that the branch will _not_ cross above the threshold again later and reach a much greater maximum mass. The high mass peak is modulated, however, by the requirement of merging with a permanently dark main branch9. Footnote 9: In the case of MW satellites, discussed in the next paragraph, the modulating effect on the high-mass part of the distribution is the requirement of merging with a Milky Way mass halo. A similar bimodality arises in predictions for the Milky Way satellite luminosity function (e.g. Bovill and Ricotti, 2009; Li et al., 2010; Font et al., 2011; Bose et al., 2018). In that context, galaxies associated with halos that do not accrete or cool gas after reionization are usually called 'ultra faints' or'reionization fossils' (Bovill and Ricotti, 2009). Most known ultra-faints are satellites of the Milky Way and M31, although it is likely they also exist in the field (e.g. Sand et al., 2022). Although essentially all ghost galaxies are fossils, not all fossil galaxies are ghosts. Stars in typical fossil galaxies should be deeply embedded in a potential similar to that in which they formed, and hence should have a compact density profile; conversely, stars in ghosts are expected to comprise dynamically hotter, more diffuse systems at \(z=0\), because they form in branches which merge into the 'dark' central potential (e.g. Amorisco, 2017). The higher mass peak is absent for \(V_{\rm cut}=40\,{\rm km\,s^{-1}}\), and the amplitude of the lower mass peak increases, for reasons discussed above. In a model with weaker IGM heating (e.g. \(V_{\rm cut}=20\,{\rm km\,s^{-1}}\)), minor branches can cross the threshold more easily after reionization, but the present-day mass range of trees satisfying the first criterion (main branch never cools) greatly reduces the number of ghost trees overall, as well as the maximum masses of minor branches associated with those trees. The overall qualitative result is that ghost galaxies are not expected in significant numbers for \(V_{\rm cut}<30\,{\rm km\,s^{-1}}\) and any that do form are unlikely to be detectable. The absence (or low abundance) of ghost galaxies would therefore imply weak IGM heating during reionization. Conversely, large numbers of faint ghost galaxies would imply stronger reionization, either globally or locally in particular regions. This simple picture is, of course, subject to a great deal of uncertainty regarding the luminosity and structure of ghost galaxies, their detectability, and the ease with which they can be separated from 'ordinary' dwarf galaxies. 
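The two populations discussed above can be separated with a few lines of bookkeeping. The sketch below is illustrative only, with hypothetical arrays of \(m_{\rm above}\) and \(z_{\rm above}\) (one entry per star-forming minor branch, as defined in Section 3.1.2).

```python
# Sketch of how the two populations in Fig. 6 can be separated (illustration only).
# `m_above` and `z_above` are hypothetical arrays of minor-branch masses and the
# redshifts at which those branches last exceeded the cooling threshold.
import numpy as np

Z_REION = 10.0

def split_histograms(m_above, z_above, n_bins=30):
    m_above, z_above = np.asarray(m_above), np.asarray(z_above)
    bins = np.logspace(np.log10(m_above.min()), np.log10(m_above.max()), n_bins)
    pre = z_above >= Z_REION    # branches whose star formation is truncated by reionization
    post = ~pre                 # branches that cross the threshold again at later times
    return (np.histogram(m_above[pre], bins=bins)[0],
            np.histogram(m_above[post], bins=bins)[0],
            bins)
```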
Benitez-Llambay and Frenk (2020) use a hydrodynamical simulation to calibrate a more complete semi-analytic model for the evolution of the characteristic temperature of the IGM accreted by halos after reionization. This model is equivalent to a redshift-dependent variation in \(V_{\rm cut}\) from \(\approx 20\,{\rm km\,s^{-1}}\) at \(z=10\) to \(\approx 25\,{\rm km\,s^{-1}}\) at \(z=0\). We have examined the abundance of ghosts with this redshift dependent \(V_{\rm cut}\), fixing \(z_{\rm reion}=10\). The lower mean value of \(V_{\rm cut}\) reduces the typical halo mass of the ghosts, and the dispersion around that mass, as discussed above. The modest increase of \(V_{\rm cut}\) with redshift allows relatively more minor branches in ghost trees to host star formation at lower redshift, but makes it harder for main branches to remain below the threshold. The overall effect is to boost the fraction of ghosts among the least massive halos in the mass range. The fractions in this case are comparable to those in our fiducial \(V_{\rm cut}=30\,{\rm km\,s^{-1}}\) model (up to \(\sim 10\) per cent at \(M_{200}\sim 2\times 10^{9}\,h\,{\rm M_{\odot}}\)). The cooling threshold before reionization can also be varied. Our fiducial choice of \(T_{200,{\rm cut}}=10,000\,{\rm K}\) is equivalent to \(V_{\rm cut}\simeq 17\,{\rm km\,s^{-1}}\). We do not explore changes in \(T_{200,{\rm cut}}\) in detail because we consider its uncertainty to be less significant than that associated with the treatment of reionization. For example, Benitez-Llambay and Frenk (2020) use \(V_{\rm cut}\simeq 13\,{\rm km\,s^{-1}}\) (5700 K) for their preferred model. In general, lower \(T_{200,{\rm cut}}\) reduces the number of star-free main branches, while higher values increase the number of dark main branches but reduce the number of star-forming minor branches. #### 3.2.2 Changing \(z_{\rm cut}\) The effective global redshift of reionization (defined as the epoch at which the ionized fraction is 50 per cent) is restricted by observations to an interval \(8\lesssim z_{\rm reion}\lesssim 10\) (e.g. Planck Collaboration et al., 2016). Our \(z_{\rm cut}\) parameter corresponds to the time at which gas accretion and cooling are significantly affected by the UV background. These definitions are similar but not identical. The uncertainty in the choice of \(z_{\rm cut}\) is therefore greater than the uncertainty in observational estimates of \(z_{\rm reion}\).
This depletes the (dominant) population of ghosts associated with minor branches that are truncated by reionization. The fraction of the most massive halos hosting ghosts is still determined by \(V_{\rm cut}\) at relatively low redshift, and hence less affected by a higher \(z_{\rm cut}\). Our fiducial choice of \(z_{\rm cut}=10\) therefore (approximately) maximises the fraction of relatively massive halos that contain ghost galaxies. ### Estimating the stellar mass function To investigate the cosmic abundance of ghostly galaxies and the potential for their detection in surveys, we use a simple prescription to derive their stellar mass function from their halo mass. For this analysis we only consider trees generated with the EPS method, which yields predictions comparable to those of our \(N\)-body simulation but for a much larger sample of trees. In linearly spaced halo mass bins of width \(1\times 10^{9}\), we generate 1 million EPS trees. We assign each tree a fractional weight in order to recover the same volume density of halos as COCO in the same mass bin. The halo mass function of ghost galaxies is then simply the convolution of the halo mass function with the results shown in Fig. 5. The star formation efficiencies of low mass halos at \(z\gtrsim 3\) are highly uncertain. Fits to observed luminosity functions based on variants of the abundance matching ansatz, as in Behroozi et al. (2019), suggest that the average stellar mass-halo mass (SMHM) relation is redshift-dependent. High-resolution hydrodynamical simulations of low-mass halos (Sawala et al., 2015) also show considerable scatter in their star formation efficiencies. The scatter reflects the stochastic assembly and thresholds on cooling we have described above, and is likely increased by the complex interaction of star formation and feedback. Given this uncertainty, we prefer to take a simple, easily understood and reproducible approach that can be updated in more detailed work, or when better constraints on high redshift star formation are available. We assume that the stellar mass associated with a halo can be derived from the virial mass \(m_{\rm above}\) (defined at \(z_{\rm above}\), the latest time at which the merger tree branch associated with the ghost exceeds the cooling threshold). We use two methods to convert \(m_{\rm above}\) to a stellar mass, \(M_{\star}\). The first is to assume that a universal fraction of available baryons is converted to stars, with no dependence on \(m_{\rm above}\) or \(z_{\rm above}\). The second is to obtain a star formation efficiency from the redshift-dependent SMHM relations given in Appendix J of Behroozi et al. (2019). These functions yield a formation efficiency that increases with redshift and with mass at a fixed redshift (in the low mass regime). We caution that the Behroozi et al. (2019) SMHM relations are not well constrained below \(M_{200}\sim 10^{8}\,{\rm M}_{\odot}\) or at redshifts \(3\lesssim z\lesssim 10\). They are essentially unconstrained at higher redshift. These are the regimes of interest for ghost galaxies. Recent work by Wang et al. (2021) showed that the linear extrapolation of the low mass end of the SMHM relation from UniverseMachine agrees with the Milky Way satellites, but the inferred star formation histories do not. Contrasting this poorly constrained but somewhat more 'realistic' approach with the simplistic assumption of a fixed efficiency illustrates the basic effects of redshift and mass dependence. Figure 7: As Fig.
5, but showing the effects of varying \(z_{\rm cut}\). Fig. 8 shows the stellar mass function of ghost galaxies for our fiducial \(V_{\rm cut}=30\,{\rm km\,s^{-1}}\) model, assuming 1 percent of the available baryonic mass at \(z_{\rm above}\) is converted to long-lived stars, i.e. \(M_{\star}=0.01\times(\Omega_{b}/\Omega_{m})m_{\rm above}\). As we discuss below, although this is likely to be a significant overestimate, it shows the overall behaviour clearly. The bimodality of the stellar mass function follows directly from the bimodality of \(m_{\rm above}\) shown in Fig. 2 (note that the mass bins in the two figures are different), with the higher-mass peak corresponding to merger tree branches that fall below the cooling threshold for the last time _after_ reionization. Lines of different color show the contributions from different bins of present day virial mass. Lower halo mass bins contribute galaxies of lower stellar mass, reflecting the requirement that the minor branch of the tree in which the ghost forms must exceed the cooling threshold. The range of stellar masses increases with present-day mass, reflecting a wider range of threshold crossing times. The simplistic assumption of a high and constant star formation efficiency therefore results in a population of ghost galaxies with a peak cosmic abundance of one object with mass \(2.5\lesssim M_{\star}\lesssim 4.5\times 10^{6}\,{\rm M}_{\odot}\) (comparable to the mass of a classical Milky Way satellite) per \(5^{3}\,{\rm Mpc^{3}}\) volume (\(h=0.71\)), and a significantly higher density of objects with masses comparable to the ultra-faint Milky Way satellites. The abundance of the most massive ghosts is therefore (in this optimistic estimate) similar to that of Milky Way-mass dark matter halos (\(\sim 10^{12}\,{\rm M}_{\odot}\), \(\sim 10^{-2}\,{\rm Mpc^{-3}}\,{\rm dex^{-1}}\)). Although the conditions that give rise to ghost galaxies are rare, this is compensated by the fact that halos in the relevant mass range are relatively numerous. Of course, 'ordinary' dwarfs with stellar masses in the range of Fig. 8 may be more than an order of magnitude more numerous than this (likely overestimated) prediction for ghost galaxies. In the stellar mass range \(\sim 10^{9}<M_{\star}<10^{11}\,{\rm M}_{\odot}\), the luminosity function of field galaxies is approximately constant at \(\sim 10^{-2}\,{\rm Mpc^{-3}}\,{\rm dex^{-1}}\); it then increases by approximately an order of magnitude as \(M_{\star}\) decreases from \(\sim 10^{9}{\rm M}_{\odot}\) to \(\sim 10^{7}\,{\rm M}_{\odot}\)(e.g. Wright et al., 2017; Bullock & Boylan-Kolchin, 2017). It may be possible, however, to distinguish ghosts as outliers in the plane of magnitude and size or surface brightness, as we discuss in the following section. Fig. 9 shows a somewhat more realistic estimate based on the mass- and redshift-dependent SMHM relations of Behroozi et al. (2019), again for our \(V_{\rm cut}=30\,{\rm km\,s^{-1}}\) model. Compared to Fig. 8, the stellar mass range is shifted to lower masses by 1.5 dex, and the higher mass peak is significantly narrower. The abundance of galaxies in this peak is similar to the previous estimate. These differences simply reflect the lower star formation efficiency inferred by Behroozi et al. (2019) compared to the simple assumption of 1 per cent of available baryons. The amplitude of the Behroozi et al. (2019) SMHM relation at \(M_{\rm vir}\sim 10^{8}\,{\rm M}_{\odot}\) decreases by two orders of magnitude from \(z=10\) to \(z=1\). 
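Both stellar mass prescriptions are simple enough to state as code. In the sketch below (an illustration, not the pipeline used for the figures) the fixed-fraction case is written out explicitly, while behroozi_efficiency is a placeholder for the redshift-dependent fits of Appendix J of Behroozi et al. (2019), which are not reproduced here.

```python
# Sketch of the two stellar-mass prescriptions of Section 3.3 (illustration only).
# behroozi_efficiency() is a hypothetical placeholder for the published SMHM fits.
OMEGA_B, OMEGA_M = 0.04455, 0.272   # WMAP7 values used throughout the paper

def mstar_fixed_fraction(m_above, f_star=0.01):
    """Fixed efficiency: a constant fraction of the available baryons forms stars."""
    return f_star * (OMEGA_B / OMEGA_M) * m_above

def mstar_abundance_matching(m_above, z_above, behroozi_efficiency):
    """SMHM-based estimate; behroozi_efficiency(m, z) must return M_star / M_halo."""
    return behroozi_efficiency(m_above, z_above) * m_above

if __name__ == "__main__":
    # For m_above ~ 2e9 Msun the fixed 1 per cent prescription gives M_star ~ 3e6 Msun,
    # consistent with the peak stellar masses quoted above for this assumption.
    print(f"{mstar_fixed_fraction(2.0e9):.2e}")
```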
This decline reduces the stellar masses in more massive ghost galaxy halos (which cross the threshold at lower redshift on average) by a larger factor, relative to Fig. 8. The steepening slope of the Behroozi et al. (2019) SMHM relations over the same range of redshift also contributes to the narrower high mass peak. Figure 8: The stellar mass function of ghost galaxies for our \(V_{\rm cut}=30\,{\rm km\,s^{-1}}\) model, assuming 1 percent of available baryons are converted to stars at \(z_{\rm above}\). Colors show the contribution of different bins of present-day virial mass, in the range \(10^{9}-10^{10}\,{\rm M}_{\odot}\). The bimodality follows directly from the distribution of \(m_{\rm above}\) shown in Fig. 6. Figure 9: The \(z=0\) stellar mass function of ghostly galaxies (dashed black line) for our \(V_{\rm cut}=30\,{\rm km\,s^{-1}}\) model assuming the SMHM relation of Behroozi et al. (2019). Solid lines separate the contributions from galaxies comprising only one ghost (blue) and formed by the merging of multiple ghosts (orange). The typical ghost stellar mass is reduced compared to the assumption of a constant galaxy formation efficiency in Fig. 8. According to this prediction, the most massive and hence readily detectable ghosts have stellar masses \(\approx 10^{5}\,\mathrm{M_{\odot}}\). This is slightly less massive than the faintest classical Milky Way satellites, such as Draco and Ursa Minor, \(M_{V}\approx-8\) (McConnachie, 2012), and comparable to the ultra-faints or brighter globular clusters. In the \(V_{\mathrm{cut}}=30\,\mathrm{km\,s^{-1}}\) model, the ghost population is dominated by trees producing only a single ghost. Fig. 10 shows the mass function for our \(V_{\mathrm{cut}}=40\,\mathrm{km\,s^{-1}}\) model, in which the number of trees producing multiple ghosts is slightly greater than that of trees producing only one (except at the very lowest masses). This suggests that stronger effective reionization not only results in more massive ghost galaxies, but also, potentially, a larger fraction with multiple structural components and stellar populations. ### Merger mass ratios and times Fig. 11 shows the distribution of halo mass ratios (\(M_{\mathrm{merge}}/m_{\mathrm{merge}}\), see Sec. 3.1.2) and lookback times (corresponding to \(z_{\mathrm{above}}\)) for ghost galaxy progenitors merging into their dark main branches in our EPS trees (red lines). This gives an impression of the likely degree of similarity between the dynamics of ghost galaxies and 'ordinary' dwarf galaxies. We find that half of all mergers between ghost galaxies and their dark main branches in COCO occur before \(z=9\), and almost all are near equal-mass mergers, with only 10 per cent involving a halo mass ratio greater than \(2:1\). This suggests that they may not be significantly more extended than their counterparts with star-forming main branches of similar final mass (e.g. Amorisco, 2017). Only that fraction of progenitors with high mass ratios is likely to be much more diffuse (stellar halo-like). Higher mass ratios naturally correlate with later mergers. A model with \(V_{\mathrm{cut}}=40\,\mathrm{km\,s^{-1}}\) (dashed lines) results in a slightly larger fraction of high mass ratio mergers at relatively higher redshift. We provide another simple estimate of the dynamical similarity of ghosts and normal dwarf galaxies in the next section.
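For completeness, a sketch of how the merger epoch and mass ratio of each star-forming minor branch can be read off a tree is given below. It is an illustration only, reusing the TreeNode, main_branch and can_form_stars helpers sketched in Section 3.1, so the names and unit conventions are assumptions rather than the authors' actual implementation.

```python
# Sketch: extract z_merge, m_merge and M_merge (Section 3.1.2) for each star-forming
# minor branch that merges directly onto the main branch of a tree (illustration only).
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.4 * u.km / u.s / u.Mpc, Om0=0.272, Ob0=0.04455)

def merger_statistics(root):
    """Yield (lookback time to z_merge, M_merge / m_merge) per star-forming minor branch."""
    main = list(main_branch(root))
    main_ids = {id(n) for n in main}
    for node in main:
        if not node.progenitors:
            continue
        main_prog = max(node.progenitors, key=lambda p: p.m200)  # stays on the main branch
        for prog in node.progenitors:
            if id(prog) in main_ids:
                continue  # skip the main branch itself
            # 'prog' is the last node of a minor branch; check whether any node in its
            # sub-tree ever exceeded the cooling threshold (criterion 2).
            stack, forms_stars = [prog], False
            while stack and not forms_stars:
                n = stack.pop()
                forms_stars = can_form_stars(n)
                stack.extend(n.progenitors)
            if forms_stars:
                yield cosmo.lookback_time(prog.redshift), main_prog.m200 / prog.m200
```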
Our \(N\)-body merger trees show a similar distribution of merger times defined in an analogous way (mergers between independent halos as defined by the DHalo algorithm), perhaps with a slight tendency towards fewer late mergers. The \(N\)-body trees allow for the ghost progenitor to be tracked as a subhalo after \(z_{\mathrm{merge}}\). We find the distribution of merger times for these subhalos is not significantly different from that of the DHalo, likely because the mass ratios are low and the halos involved are relatively close to the resolution limit to start with\({}^{10}\). We do not show the merger mass ratio distribution from the \(N\)-body simulation because we find that the partition of mass between the two halos in the timestep(s) immediately before the merger suffers from what appears to be a systematic effect of the halo-finding algorithm, such that the mass of the minor branch at that time is often underestimated (this can be seen in Fig. 2). Overall, the \(N\)-body trees support our conclusions based on the EPS trees. Footnote 10: In COCO, a halo of \(10^{8}\,\mathrm{M_{\odot}}\) is resolved with \(\sim 600\) particles and will therefore fall below the \(\sim 10\) particle halo-finding limit when it has been stripped to \(\sim 2\) per cent of its initial mass. ### Density profiles We now estimate the stellar mass surface density profiles of brighter ghost galaxies using a technique similar to that of Deason et al. (2021, D21). D21 explored the stellar mass accretion histories of present-day dwarfs assuming four different models of the SMHM relation and mass thresholds for galaxy formation. Their CDM model A1 corresponds to a similar set of assumptions to the \(V_{\mathrm{cut}}=30\,\mathrm{km\,s^{-1}}\) model we use to predict stellar masses (in our case based on the Behroozi et al. 2019 SMHM relation). D21 used a simplified 'particle tagging' procedure to predict the density profiles of the accreted stellar halos of dwarf galaxies in halos of different present-day mass. Figure 10: The \(z=0\) stellar mass function of ghostly galaxies predicted using the Behroozi et al. (2019) SMHM relation, as in Fig. 9, but here for a model assuming stronger IGM heating due to reionization, \(V_{\mathrm{cut}}=40\,\mathrm{km\,s^{-1}}\). Again, solid lines separate the contributions from single (blue) and multiple (orange) ghost trees, and the total is shown by the black dashed line. The typical stellar masses of ghosts are higher than in Fig. 9, and the number of trees with multiple ghosts exceeds the number of trees with a single ghost at most masses. These stellar halos are the analogues of our ghost galaxies, for systems in which star formation occurs in the main branch (i.e. the vast majority of dwarf galaxies). The results of D21 therefore already provide some insight into the likely structural properties of ghost galaxies. As described in detail by Amorisco (2017), mergers with low mass ratios (total mass, since stars are dynamically insignificant in these systems) produce remnants that have similar structure to their progenitors. Extended halo components are thus built mainly through a succession of lower mass ratio mergers. A steeply falling SMHM relation then necessarily implies that dwarf stellar halos are extremely faint. 
D21 conclude that they are likely undetectable (\(\lesssim 30\,\mathrm{mag\,arcsec^{-2}}\)) even in stacks of \(\sim 100\) systems in the full-depth Rubin LSST survey (Ivezic et al., 2019) unless stars are able to form efficiently in significantly lower mass halos than expected from the standard cooling threshold arguments (section 3.1). We have carried out a similar experiment to predict the surface brightness profiles of ghost galaxies, using the stellar mass estimates discussed in the previous section together with a simplified particle tagging technique applied to merger trees we identify with ghosts in the COCO simulation. We first identify \(z_{\mathrm{above}}\) as the characteristic time for star formation, then simply select a fixed fraction, \(f_{\mathrm{mb}}\), of the most bound dark matter particles in the ghost galaxy branch halo at this time, in rank order of binding energy. For example, taking \(f_{\mathrm{mb}}=1\) per cent, we select the top 1 per cent most bound particles in the halo. We distribute the stellar mass given by the Behroozi et al. (2019) relation uniformly among these particles. We then recover a surface mass density profile for the ghost galaxy at \(z=0\) from the distribution of its tagged particles. A 'continuous' particle tagging approach would allow for diffusion in the orbits of stars formed at different times (e.g. Le Bret et al., 2015; Cooper et al., 2017). However, we cannot use this approach because our stellar mass estimates are based on the halo mass at a single point in time, \(z_{\mathrm{above}}\). Instead we tag all of the stellar mass to the halo at \(z_{\mathrm{above}}\), effectively assuming all star formation to occur at this time. The constraints on an appropriate \(f_{\mathrm{mb}}\) are very weak (see Cooper et al., 2017, for a detailed discussion). We therefore make two sets of predictions for \(f_{\mathrm{mb}}=1\) per cent and 10 per cent, to span a plausible range of possibilities. Lower \(f_{\mathrm{mb}}=1\) values produce more concentrated initial density profiles. Fig. 12 shows the resulting stellar mass surface density profiles of the most massive ghost galaxies in our \(V_{\mathrm{cut}}=30\) (solid) and \(40\,\mathrm{km\,s^{-1}}\) (dashed) models. We compare these to profile shapes and scale lengths for a variety of observed dwarf galaxies. We restrict our comparison to the most massive ghost galaxies in our model, because they are most likely to be observable beyond the Local Group, and also because the half-mass radii of less massive ghosts approach the spatial resolution limit of COCO (\(230\,h^{-1}\,\mathrm{pc}\)). We note that all our ghost galaxies are isolated at \(z=0\), in the sense that they are identified as independent systems by the COCO halo-finding algorithm, although we do not track their individual interaction histories to check if they were satellites at earlier times, nor do we examine their larger-scale environment. However, our sample of observed systems for comparison includes both isolated dwarfs and satellites, as follows. Tucana B is an isolated dwarf in the Local Group with an 'ultra faint' luminosity of \(\sim 5\times 10^{4}\,\mathrm{L_{\odot}}\) and a half-light radius \(80\pm 40\,\mathrm{pc}\), lacking recent star formation (Sand et al., 2022). Tuc B provides a useful point of reference for an isolated, early-forming'reionization fossil' in a mass range comparable to our predictions for the most massive ghost galaxies, potentially free from effects due to interactions with a more massive galaxy. 
In this respect, it is notably compact, with a significantly more concentrated profile than ghosts of a similar or greater mass, regardless of our choice of \(f_{\mathrm{mb}}\). Dragonfly-4 (DF-4) is a well-studied UDG that has been claimed to be deficient in dark matter (van Dokkum et al., 2019). Montes et al. (2020) report the surface brightness profile of this object to \(\approx 20\,\mathrm{mag\,arcsec^{-2}}\). They measure a stellar mass of \(3.6\times 10^{7}\,\mathrm{M_{\odot}}\), greater than our ghosts; since we are interested here only in the shape of the surface brightness profile, we simply scale down the amplitude of the profile reported by Montes et al. 2020 by a factor of 10 in Fig. 12. Montes et al. 2020 also find evidence that the galaxy is tidally distorted by interaction with a more massive neighbour. Crater 2 (Torrealba et al., 2016) has the fourth-largest half-light radius among the satellites of the Milky Way, \(1.1\,\mathrm{kpc}\), but a luminosity of only \(M_{V}\approx-8\). Consequently it has an extremely low surface brightness of \(\sim 30\,\mathrm{mag\,arcsec^{-2}}\). It also has an unusually low velocity dispersion, \(\sigma_{\mathrm{los}}\sim 2.7\,\mathrm{km\,s^{-1}}\)(Caldwell et al., 2016). The observed stellar mass suggests a significantly more massive halo, \(V_{c}\sim 20\)-\(30\,\mathrm{km\,s^{-1}}\). Although substantial tidal stripping would explain its low \(\sigma_{\mathrm{los}}\), this explanation is in tension with its large size (Borukhovetskaya et al., 2022). Antlia 2 (Torrealba et al., 2019) is another Milky Way companion with an exceptionally large half light radius of \(2.9\,\mathrm{kpc}\) and a velocity dispersion of \(\sigma_{\mathrm{los}}\sim 5.7\,\mathrm{km\,s^{-1}}\). Like Crater 2, it is as yet unclear whether tidal disruption is a sufficient or unique explanation for these properties (e.g. Ji et al., 2021). We also show profiles for three of the classical Milky Way satellites with comparable luminosity to the brightest ghost galaxies in our model: Draco, Sextans and UMa I, using data from McConnachie (2012). All three have relatively large half-light radii compared to the average for satellites of their stellar mass. Of these, Sext tans (\(M_{\star}\approx 5\times 10^{5}\,\mathrm{M}_{\odot}\)) has the most similar density distribution to our predictions for ghost galaxies. It shows signs of density and velocity substructure that may be evidence of accretion (Cicuendez & Battaglia, 2018). Other dSph satellites, even the more extended examples shown here, appear significantly more compact. Finally, Fig. 13 compares the predictions for ghost galaxies shown in Fig. 12 to the average profiles of 'ordinary' isolated dwarf galaxies without recent star formation, as predicted by our model. To create this comparison sample we select main branches that have crossed the cooling threshold in the past but that have fallen below it again at \(z\geq 0.5\). This requirement is intended to select a sample of 'quenched' dwarfs and hence to exclude (very approximately) those that would be considered young or actively star-forming at the present day. Present-day star-forming dwarfs are not candidate ghost galaxies, by definition. These dwarf galaxies driven to quiescence by the UV background are discussed in detail by Pereira-Wilson et al. (2023). 
We compute density profiles in the same way as for the ghosts, in this case estimating a stellar mass at the time at which the _main_ branch of the tree is last above the threshold and tagging particles in that branch accordingly. In Fig. 13 we further split the 'ordinary' quenched dwarfs into those that fall below the cooling threshold at \(0.5<z<2\) (younger) and \(z>2\) (older). The older subset is of comparable 'age' to the majority of ghosts. We find that massive ghosts have stellar masses and surface density profiles broadly similar to those of the older quenched dwarfs. They tend towards the largest sizes (lowest central densities) for that population. Younger quenched dwarfs have central densities an order of magnitude greater than ghosts of similar size. ## 4 Conclusions We have explored a 'ghost galaxy' scenario for the formation of field dwarf galaxies. Our results are based on a simple 'threshold' model of galaxy formation applied to dark matter halo merger trees constructed using the EPS method (Parkinson et al., 2007) and the COCO N-body simulation (Hellwing et al., 2016). Star formation is inhibited in halos with virial temperatures below the cooling limit of atomic hydrogen and, at redshifts lower than \(z=z_{\mathrm{reion}}\), in halos with virial velocity \(V_{\mathrm{vir}}<V_{\mathrm{cut}}\). We find the halo mass range, overall number and luminosity function of ghost galaxies are sensitive to the suppressive effect of the UV background. We have examined models with \(V_{\mathrm{cut}}=30\,\mathrm{km}\,\mathrm{s}^{-1}\) ('weaker' reionization) and \(40\,\mathrm{km}\,\mathrm{s}^{-1}\) ('stronger' reionization). Our specific results are as follows: * Ghost galaxies form in a halo mass range approximately \(2\times 10^{9}h^{-1}<M_{\mathrm{vir}}<1\times 10^{10}\,h^{-1}\,\mathrm{M}_{\odot}\) with \(V_{\mathrm{cut}}=30\,\mathrm{km}\,\mathrm{s}^{-1}\) or \(2\times 10^{9}<M_{\mathrm{vir}}<2\times 10^{10}\,\mathrm{M}_{\odot}\) with \(V_{\mathrm{cut}}=40\,\mathrm{km}\,\mathrm{s}^{-1}\) (Fig. 5). * For \(V_{\mathrm{cut}}=30\,\mathrm{km}\,\mathrm{s}^{-1}\), ghost galaxies are most likely to occur in halos with \(M_{\mathrm{vir}}\simeq 4\times 10^{9}\,h^{-1}\,\mathrm{M}_{\odot}\) (\(\approx 5\) per cent of all halos of that mass). For \(V_{\mathrm{cut}}=40\,\mathrm{km}\,\mathrm{s}^{-1}\), they are most likely at \(M_{\mathrm{vir}}\simeq 8\times 10^{9}h^{-1}\,\mathrm{M}_{\odot}\) (\(\approx 20\) per cent of all halos of that mass; Fig. 4). * These characteristic masses and occupation fractions vary in a non-trivial way when the redshift of reionization and the cooling threshold before reionization are adjusted within plausible bounds. With the typical mass accretion rates of halos fixed, given \(V_{\mathrm{cut}}\), the requirement that the main branch must remain below the cooling threshold but at least one minor branch must exceed it leads to a maximum in the fraction (and overall number) of ghosts for a particular \(z_{\mathrm{reion}}\). * By assigning stellar mass to halos following the prescriptions of Behroozi et al. (2019), we predict ghost galaxies have a bimodal luminosity distribution: an 'ultra faint' population that accounts for the majority of systems, and smaller but significant population of brighter objects (Figs. 9 and 10). Analogous to the typical satellite luminosity function of Milky Way-like galaxies, these populations correspond to systems forming stars before and after reionization, respectively (Fig. 6.) 
* The brighter ghost galaxy population has a characteristic stellar mass \(\gtrsim 10^{5}\,h^{-1}\,\mathrm{M}_{\odot}\) (\(V_{\mathrm{cut}}=30\mathrm{km}\,\mathrm{s}^{-1}\)). This increases by a factor of \(\sim 3\) for \(V_{\mathrm{cut}}=40\mathrm{km}\,\mathrm{s}^{-1}\), in which case the most massive ghost galaxies have masses comparable to fainter 'classical' Milky Way dwarf satellites such as Sextans and Draco. * For \(V_{\mathrm{cut}}=30\mathrm{km}\,\mathrm{s}^{-1}\), the ghost population consists mostly of systems with a single dominant progenitor; for \(V_{\mathrm{cut}}=40\mathrm{km}\,\mathrm{s}^{-1}\), systems with two or more progenitors are equally common (Figs. 9 and 10). * The majority of ghost galaxy progenitors merge with their dark main branches at high redshift, predominantly in mergers that are close to equal mass. A higher value of \(V_{\mathrm{cut}}=40\mathrm{km}\,\mathrm{s}^{-1}\) increases the fraction of mergers with higher mass ratios. * We make a simple estimate of the density profiles of the most massive ghost galaxies using a particle tagging prescription in combination with stellar mass estimates from Behroozi et al. (2019). We find that the resulting \(z=0\) ghost systems have half-light radii comparable to the ultra-diffuse galaxy Dragonfly 4 (\(R_{50}\gtrsim 2\) kpc) and the unusually faint Milky Way satellite Crater 2, if we assume a larger extent for the stars at the time of their formation (\(f_{\rm mb}=10\%\)). If we assume the initial extent of the population is relatively compact (\(f_{\rm mb}=1\%\)), the ghosts have half-light radii similar to the classical MW satellite Sextans (\(R_{50}\sim 1\) kpc). 'Ordinary' dwarf galaxies that form stars at high redshift and later fall below the cooling threshold (e.g. Pereira-Wilson et al., 2023) have similar sizes to ghosts, reinforcing our conclusion that ghost galaxies are not a unique or sufficient explanation for the ultra-diffuse dwarf population. We conclude that the ghost galaxy mechanism is a plausible, even likely formation scenario for a fraction of faint field dwarf galaxies in \(\Lambda\)CDM. Such objects, if they exist, would be the most dark-matter dominated virialized systems that could be probed with stellar kinematics. Our findings are related to those of Ricotti et al. (2022), who examine models for stellar halos built up around dwarf galaxies by the accretion of ultra-faint 'reionization fossil' progenitors. These stellar halos have the same origin as our proposed ghostly galaxies: indeed, they were dubbed 'ghostly halos' by Bovill and Ricotti (2011). The important distinction between our work and that reported in Ricotti et al. (2022) is that we specifically consider the formation of stellar halos in systems without in situ star formation, which have not previously been examined separately. We also include our consideration of the small (but potentially significant) fraction of accreted progenitors that may form stars after reionization. Ricotti et al. (2022) discuss the tentative evidence that stellar halos exist around some of the more massive dwarf galaxies in the Local Group. They collate observations of surface brightness profiles in six isolated Local Group dwarfs that show outer breaks, suggestive of a stellar halo component. As in our work and that of Deason et al. (2021), they find that predictions for the stellar halos of dwarf galaxies are sensitive to the assumed star formation efficiency before reionization. 
Through comparison of their Local Group data to models, they estimate a star formation efficiency for the progenitors before reionization consistent with extrapolation of the abundance matching relation of Behroozi et al. (2013). They also find that the density profiles and scale radii of these halos are similar to those of the stars form in situ in the dwarf galaxies. The star-free main branches of ghost galaxies are closely related to the reionization-limited HI clouds (REHLICs) discussed by Benitez-Llambay et al. (2017) and Benitez-Llambay and Frenk (2020). By definition, REHLICs remain below the star formation threshold but are massive enough to retain baryons against photoevaporation by the UV background. In the simulations studied by Benitez-Llambay et al. (2017), REHLICs of mass \(M_{200}\sim 5\times 10^{9}\,\mathrm{M}_{\odot}\) were found to have baryon fractions \(\sim 20\) per cent of the universal value. Although most of this gas is ionized, the more massive REHLICs support neutral cores. Candidate ghost galaxies could therefore correspond to RELHC-like systems, which may be detectable in future HI surveys. However, Benitez-Llambay et al. (2017) found that \(\sim 50\) per cent of all dark matter halos with mass \(M_{200}\approx 2\times 10^{9}\,\mathrm{M}_{\odot}\) are REHLICs (assuming \(z_{\rm reion}\approx 11\)), whereas we predict \(\lesssim 5\) per cent of halos of this mass host ghost galaxies. Thus, although large fraction of ghosts may be REHLICs, very few REHLICs are likely to be ghosts. Naively, ghost galaxies might be expected to have very low surface brightness for their mass, and hence be a potential contributor to the 'ultra diffuse' population. However, our results imply their cosmic abundance is low at masses comparable to known UDGs, such that they are unlikely to be the only or even dominant component of that population. Our simple estimates of their luminosity, size and dynamical state suggest that ghosts may be hard to distinguish from typical dwarf galaxies, at least under standard assumptions about cosmic reionization. More detailed quantitative statements about their observability would require explicit simulations of their star formation histories and dynamics. We nevertheless find one interesting and general result: the abundance and structure of ghostly galaxies is potentially very sensitive to reionization. At the level of the simple prescription we use here, the strongest constraint on the effective heating of the IGM by the cosmic UV background (parameterised by \(V_{\rm cut}\) and \(z_{\rm cut}\)) comes from comparison of models to the low-mass end of the Milky Way satellite stellar mass function (e.g. Benson et al., 2002; Font et al., 2011; Bose et al., 2018). The most appropriate value of \(V_{\rm cut}\) therefore remains somewhat uncertain, not least because it is unclear which simulated satellite populations should be used for comparison to the Milky Way. More significantly, reionization is expected to occur earlier and to produce a locally stronger suppression of cooling due to the ionizing background (earlier \(z_{\rm cut}\) and/or higher effective \(V_{\rm cut}\)) in regions of higher density (e.g. Efstathiou, 1992; Weinmann et al., 2007; Font et al., 2011). 
In extreme regions, such as the environs of massive galaxy clusters, this local reionization could increase the abundance of ghost galaxies, and also increase the disparity between their size and luminosity (the latter reducing with stronger/earlier reionization, the former increasing as in situ star formation is suppressed in more massive host halos). Arguing against this, the main branches of cluster progenitors are less likely to satisfy our strict requirement of always remaining below the cooling threshold, because they necessarily collapse earlier than their counterparts in the field (see e.g. Weinmann et al., 2007). However, the maximum stellar mass that can form in those branches will always be limited by (local) reionization; given that limitation, relatively massive cluster satellite halos at \(z=0\) may have very high accreted stellar mass fractions as the result of mergers with multiple progenitors of similar stellar mass. The abundance of ghostly galaxies in high density regions may therefore be substantially different from that in the field, particularly if we were to relax our criteria to include dwarfs that are only _dominated_ by accreted stars, rather than considering only those formed entirely by accretion. These effects could be explored with more detailed models of local reionization in dense regions. Although there is evidence that diffuse dwarf galaxies are common in clusters (e.g. Koda et al., 2015; Munoz et al., 2015; van der Burg et al., 2017), it is presently hard to disentangle the enhancement of different modes of dwarf galaxy formation, such as the scenario above, from the effects of higher galaxy density overall in these regions, potential bias towards deeper observations in clusters, and environmental effects that might act on 'normal' dwarf galaxies. Upcoming wide-area deep-imaging surveys, including LSST (Ivezic et al., 2019), could address these questions by discovering much larger numbers of very low-surface brightness dwarfs and mapping their abundance over larger areas around clusters and in the field. The authors thank Shaun Cole and John Helly for assistance with the Parkinson et al. (2007) EPS merger tree code. WCW and APC are supported by a Yushan Fellowship, awarded to APC by the Taiwan Ministry of Education. APC acknowledges support from Taiwan's National Science and Technology Council under grant 109-2112-M-007-011-MY3. This work used high-performance computing facilities operated by the Center for Informatics and Computation in Astronomy (CICA) at National Tsing Hua University. This equipment was funded by the Ministry of Education of Taiwan, the National Science and Technology Council of Taiwan, and National Tsing Hua University. SB is supported by the UK Research and Innovation (UKRI) Future Leaders Fellowship [grant number MR/V023381/1]. CSF acknowledges support by the European Research Council (ERC) through Advanced Investigator grant DMIDAS (GA 786910). WAH is supported by research grants funded by the National Science Center, Poland, under agreements 2018/31/G/ST9/03388, and 2020/39/B/ST9/03494. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. 
numpy (Harris et al., 2020), matplotlib (Hunter, 2007), astropy (Astropy Collaboration et al., 2013, 2018). Figure 1 was drawn with excalidraw.com.
2309.12381
Memory Efficient Mixed-Precision Optimizers
Traditional optimization methods rely on the use of single-precision floating point arithmetic, which can be costly in terms of memory size and computing power. However, mixed precision optimization techniques leverage the use of both single and half-precision floating point arithmetic to reduce memory requirements while maintaining model accuracy. We provide here an algorithm to further reduce memory usage during the training of a model by getting rid of the floating point copy of the parameters, virtually keeping only half-precision numbers. We also explore the benefits of getting rid of the gradient's value by executing the optimizer step during the back-propagation. In practice, we achieve up to 25% lower peak memory use and 15% faster training while maintaining the same level of accuracy.
Basile Lewandowski, Atli Kosson
2023-09-21T13:55:29Z
http://arxiv.org/abs/2309.12381v1
# Memory Efficient Mixed Precision Optimizers ###### Abstract Traditional optimization methods rely on the use of single-precision floating point arithmetic, which can be costly in terms of memory size and computing power. However, mixed precision optimization techniques leverage the use of both single and half-precision floating point arithmetic to reduce memory requirements while maintaining model accuracy. We provide here an algorithm to further reduce memory usage during the training of a model by getting rid of the floating point copy of the parameters, virtually keeping only half-precision numbers. We also explore the benefits of getting rid of the gradient's value by executing the optimizer step during the back-propagation. In practice, we achieve up to 25% lower peak memory use and 15% faster training while maintaining the same level of accuracy. ## 1 Introduction The global trend in machine learning networks is that larger models yield more accurate results. Consequently, new models have been designed with an ever-increasing number of parameters. However, training such models can be demanding in terms of computing power, which is why recent work has been aimed towards alternatives to single-precision arithmetic (fp32). Most recent models are so large they cannot fit on a single GPU using a traditional training framework: LLaMA (Meta AI's latest large language model) would require for instance 560 GB of memory, which is far more than state-of-the-art GPUs can offer. As neural networks grow larger and larger, the need to reduce their memory footprint has become increasingly imperative. While prior research has focused on increasing speed, there remains room for improvement in the way GPU memory is used. Indeed, the common approach to maintaining some accuracy on the parameters while training on a half precision (fp16) model is to keep a master copy of them in full floating-point precision. The drawback of doing so is that every parameter now has to be saved in memory in both fp16 and fp32, further increasing the demand for GPU memory. Typical mixed precision training uses half precision values during the forward pass and single precision for the parameters' update. For each parameter, the components stored in GPU memory are then: the single precision value of the model weight and its half precision copy (6 bytes), the optimizer state (dependent on the optimizer), the gradient (usually 4 bytes), and some additional values like the forward activations whose size may vary depending on the model. To lower the memory pressure, solutions have already been developed towards a smaller optimizer footprint. Indeed, memory requirements for modern model training are often dictated by the optimizer state, with up to twice as much memory required for each parameter. Alternative optimizers, such as Adafactor or 8bit-Adam, already offer a remedy to this problem by changing the way state memory is stored. Where a standard Adam optimizer would require 8 bytes of state memory per parameter, these optimizers respectively use only 4 bytes and 2 bytes. The model's parameters and their gradients then become the main targets for further reducing memory use. Our work aims at removing the additional memory cost incurred by the fp32 copy of the parameter by keeping in memory only the difference between the original parameter and its fp16 value. We also get rid of the gradient value by directly applying the update during the backward pass, thus relieving the need to keep those values at all times. 
For a given parameter, this leads to at least 6 bytes less having to remain stored in memory. Our method does not necessitate any alterations to the hyperparameters or the training framework. It is designed to fit models that require an extensive amount of memory for their training, such as large language models and models for image classification or generation, natural language processing, and object detection. ### Floating Point Format Basics We recall here the basic principles of floating point arithmetic to frame the work that follows. The representation of common floating point numbers is governed by the IEEE-754 Standard 461 (2008). According to it, a binary floating point number \(x\) is represented in memory by: * its **sign** \(s_{x}\) (0 if positive, 1 if negative) * its **exponent** \(e_{x}\), an integer representing the order of magnitude of \(x\) * its **mantissa (significand)** \(m_{x}\), representing its fractional digits w.r.t. \(e_{x}\). The first non-zero bit is considered implicit and is not actually stored in memory. These components are combined such that \(x=(-1)^{s_{x}}\times m_{x}\times 2^{e_{x}}\). This representation is somewhat analogous to standard scientific notation (for instance \(\pi\approx+3.14\times 10^{0}\)). The encoding used for \(e_{x}\) and \(m_{x}\) depends on which format is used, as described in Table 1. The exponent value is represented with a bias so that numbers can be ordered lexicographically. Some exponent values are reserved for special numbers: \(e_{max}\) (maximum value) for infinities and NaN, zero for subnormal numbers (that do not have an implicit bit), including zero itself. Some compilers (e.g. with the fast-math option) or non-standard formats (like bfloat16) do not use the subnormal representation. ## 2 Related Work The standard framework for mixed precision training of neural networks is the one described by Micikevicius et al. (2018). In particular, this work describes the use of a full precision copy for each parameter to prevent the training from failing because weight updates are too small compared to weight values. It also proposes the use of loss scaling and full precision arithmetic in some cases where it is necessary to prevent gradient instabilities. Alternative solutions have considered the use of smaller floating point formats (16 bits and less), with different ways of maintaining some level of accuracy. One common method to ensure the values used remain accurate enough to allow for efficient training is quantization: splitting the data into several chunks and representing each number with only a few bits to describe its value on a scale from the minimum of the chunk to its maximum. This allows for a significant decrease in the memory footprint of the optimizer states (Dettmers et al. (2022)) or of the gradients in a context of distributed training (Alistarh et al. (2017)). Quantization has also been extensively considered for inference-side computations (Gholami et al. (2021)), but the scope of our work is the improvement of the training phase. 
\begin{table} \begin{tabular}{c c c c c} **Name** & **Length** & **Sign** & **Exponent length** & **Significand length** \\ \hline Float & 32 bits & 1 bit & 8 bits & 1 + 23 bits \\ BFloat16 & 16 bits & 1 bit & 8 bits & 1 + 7 bits \\ Half & 16 bits & 1 bit & 5 bits & 1 + 10 bits \\ \end{tabular} \end{table} Table 1: Classical floating point formats. The use of the Brain floating point format (bf16) is often preferred for 16bits-only training, as it is more robust against gradient absorption in the updates and does not always necessitate changes in the hyper-parameters (Zamirai et al. (2020); Kalamkar et al. (2019)). 8bits floating point formats can lead to promising results, and it is yet to be determined which format is best in the context of machine learning training (Micikevicius et al. (2022); Wang et al. (2018)). More extreme solutions have even considered the use of 1-bit binary networks (Qin et al. (2020)). Another method proposed to train models at lower precision is the use of fixed point arithmetic. This solution represents floating values with a fixed number of fractional and integer bits. In this representation, a fixed number of bits is assigned to each part of a decimal number. The position of the binary point determines the scaling and precision of the fixed-point value. This system may enable some interesting performance improvements but it is usually hard to maintain sufficient accuracy. ## 3 Implementation We have developed two methods to reduce the need for memory in the context of mixed precision training. The first approach aims at reducing the memory footprint of the parameters by removing the full precision copy stored to maintain accuracy during training. The second one is to get rid of the gradient values as soon as they are computed in order to avoid having all of the gradients stored in GPU memory. ### 16bits only Mixed-Precision As stated earlier, a classic implementation of mixed precision optimization typically stores in memory both an fp16 and an fp32 value, whereas our approach consists of storing only the difference between the two formats, thus resulting in at least a third less memory dedicated to the storage of a model's parameter. The stored part of the parameter is then used during its update in the backward pass. The classic 16bits floating point format (fp16) contains 10 bits of significand whereas the alternative Brain floating point format (bf16) contains only 7 bits. There are therefore respectively 13 and 16 bits of precision to keep if we want to maintain full fp32 accuracy. We also explore the performance when keeping only part of those bits. To do so, we have developed an overload of the arithmetic operators used in the parameters' update (elementwise add, multiply, divide and their classic combinations) that performs the operation in full precision using the extra bits saved separately and outputs both the updated 16 bits float and its extra bits. For that we use a custom CUDA kernel where each thread handles the values stored in the same memory slot. Since there is no efficient way to access arbitrarily sized bit strings in memory, the accesses are made through chunks of 32 bits (int32). Each GPU thread then handles the operation on several values, depending on how many different extra-stored values can fit in 32 bits. 
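The bit-splitting step itself can be illustrated with plain PyTorch tensor views; the packing of narrower extra-bit widths into int32 chunks, described next, is what the custom kernel adds on top. The sketch below is only a minimal illustration of the idea for bf16 with all 16 extra bits kept; the function names and the little-endian byte layout it relies on are assumptions for illustration, not the kernel described above.

```python
import torch

def split_fp32(p_fp32: torch.Tensor):
    """Split an fp32 tensor into a bf16 working copy plus the 16 truncated
    mantissa bits (a round-to-zero split; assumes a little-endian platform)."""
    halves = p_fp32.contiguous().view(torch.int16).reshape(*p_fp32.shape, 2)
    extra = halves[..., 0].clone()                        # low 16 bits, kept by the optimizer
    p_bf16 = halves[..., 1].clone().view(torch.bfloat16)  # high 16 bits are exactly the bf16 pattern
    return p_bf16, extra

def merge_fp32(p_bf16: torch.Tensor, extra: torch.Tensor) -> torch.Tensor:
    """Reassemble the exact fp32 value from the bf16 copy and its extra bits."""
    halves = torch.stack((extra, p_bf16.contiguous().view(torch.int16)), dim=-1)
    return halves.view(torch.float32).squeeze(-1)

# One SGD-like update without a persistent fp32 master copy:
p_bf16, extra = split_fp32(torch.randn(1024))
grad = torch.randn(1024).to(torch.bfloat16)
updated = merge_fp32(p_bf16, extra) - 1e-3 * grad.float()  # update at fp32 precision
p_bf16, extra = split_fp32(updated)                        # re-split; the fp32 tensor is discarded
```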
To ensure all of the memory is used when the bit-sizes are not multiples of 32, we operate the threads in groups corresponding to that bit-size; for instance, values with 12 extra bits would require 12 threads working on 32 values stored in 12 int32 (see fig. 1). The overlapping extra-bits (stored on two different int32) are modified using shared memory to avoid data races. We provide an implementation for classic optimizers (Adam and SGD) to use transparently on an fp16 or bf16 model. Extra bits are stored by the optimizer and do not require any modification in the training framework to perform the weight updates. The extra precision provided ensures that small gradients are accurately represented, reducing the risk of gradient underflow and enabling successful model training. This solution may incur some loss in accuracy on the 16bits floating point value. Indeed, saving only the first part of the 32bit significand is equivalent to applying a round-to-zero operation on the full precision value, which is known to be less accurate than the standard round-to-nearest used in this case. Even though the stored value is as precise as in a classic fp32 training, the 16bits value used in the computations is then less accurate than in a traditional mixed-precision framework. To overcome this issue, we have developed the option to use stochastic rounding when splitting the full precision value. To do so, we store one additional extra-bit to keep in memory whether or not the value was changed when rounded up, so as to 'un-round' the value before updating it. Finally, we provide a fused version of the optimizers, since too many kernel launches can hinder performance on smaller parameters. The principle of fused optimizers is to reduce the number of CUDA kernels launched during the optimization by handling the parameters as one single stream of values. That way, one can launch kernels on a larger number of values, so that the time needed for a kernel launch is amply covered by the time taken by the computations. However, since fused optimizers access values independently of which parameter they belong to, they are for now only supported for 8 and 16 extra-bits. ### Fusing backward pass and optimizer step The classical implementation of the parameter update in a neural network is to compute the gradient for each parameter and then to use this gradient as input for an optimizer algorithm. The drawback of this system is that every gradient of every parameter has to be stored in memory at all times (or at least as long as the optimizer has not stepped). This generates a considerable need for GPU memory as it nearly doubles the size of the parameters. Our solution to get rid of this pressure on the memory is to perform the optimization step as soon as the gradient is computed. In practice, we use PyTorch's (Paszke et al. (2019)) automatic differentiation package to change the way gradients are computed during the backward pass, which enables us to update the parameters directly without keeping any gradient values. The optimization step is then performed by our backward pass function, which means every operation on the gradient (e.g. clipping or scaling) has to be done through the optimizer. To accommodate every optimizer, our design requires that the optimizer's step function is called for each parameter, which is unusual for a training framework. Our experiments show that this is not a problem when using classic optimizers. 
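As an illustration of this fusion, recent PyTorch releases (2.1 and later) expose `register_post_accumulate_grad_hook`, which fires once a parameter's gradient has been accumulated; performing the update there and dropping the gradient reproduces the behaviour described above. The following is only a minimal sketch of the idea with plain SGD, given for illustration, and the hook-based route shown here is just one simple way to achieve the fusion.

```python
import torch

def attach_fused_sgd(model: torch.nn.Module, lr: float = 1e-3) -> None:
    """Apply an SGD update as soon as each parameter's gradient is ready,
    then free the gradient so it is never kept for the whole model at once.
    Requires PyTorch >= 2.1 for register_post_accumulate_grad_hook."""
    def step(param: torch.Tensor) -> None:
        with torch.no_grad():
            param.add_(param.grad, alpha=-lr)  # per-parameter optimizer step
        param.grad = None                      # release the gradient immediately
    for p in model.parameters():
        p.register_post_accumulate_grad_hook(step)

# Usage: attach_fused_sgd(model); afterwards loss.backward() both computes the
# gradients and updates the weights, so no separate optimizer.step() call is needed.
```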
This solution prohibits any operation that would require every gradient of the model at the same time, but such operations are, to our knowledge, very uncommon. The case of gradient accumulation is discussed in subsection 4.1. ## 4 Experimental results To validate the ability of our setup to maintain training performance comparable to an fp32 setup while using less memory, we carried out several model trainings from scratch. We consider the following models: the Deep Learning Recommendation Model (DLRM, Naumov et al. (2019)), the Resnet-18 image classification model, the T5 text-to-text transformer, and DCGAN image generation. **Memory Savings.** Our solution enables smaller memory use in every phase of the training, and especially during the backward pass. The results displayed in Figure 2 show we achieve up to 54% lower peak memory use on a sample network compared to a standard mixed-precision training. This is close to the theoretical maximum reduction of 60% in this case. Fusing the back-propagation and the optimization provides on its own an 11% decrease in peak memory while fine-tuning a T5 model on the GLUE benchmark, without significantly slowing down the training (detailed results are presented in Table 2). Overall, we achieve a reduction of 20 to 25% in real training conditions, as is the case with our ResNet18 experiment. Figure 1: Storage example for 12 extra-bits: the extra storage is created and accessed using twelve 32-bit integers; the bits stored to keep some accuracy (12 bits per parameter) are distributed among the 32 slots. With this solution, we ensure that the extra storage fits closely the size of the parameter and that it is accessed efficiently through chunks of 32bits. We also observe that keeping only part of the mantissa (for instance 8bits on an fp16 training) can be enough to maintain the accuracy while further reducing the memory pressure (see fig. 4 for instance). **Accuracy.** Our experiments show that our solution achieves the same accuracy levels as mixed precision. In most of the models we tested, the accuracy obtained with 16bits formats is close to the results of full precision training. However, when 16bits training tends to diverge, our system prevents any massive loss of accuracy. On the recommendation model we trained, we match the accuracy of the full precision training by adding only 8bits of precision. As can be seen in fig. 4, this allows us to run fp16 training where it would diverge in a standard context. In the case of ResNet18, our experiments showed no relevant difference in accuracy between the different formats, which led us to experiment with the 8bits floating point format (fp8). The results summarized in Table 3 indicate that our mixed-precision framework could also provide fp16 accuracy on an 8bits model. However, this format lacks compatible hardware for now and is not yet supported by our solution. On the DC-GAN model, our optimizer produces significantly better results than a standard bf16 training, but it does not exactly match the results of full precision training. It is hard to evaluate the performance difference in this case because the results are better or worse than the fp32 training depending on which metric we use. **Efficiency.** The performance of our optimization techniques varies with the number of extra-bits we use. The absolute throughput of our operators shows that they are as efficient as torch's standard ones, and somewhat weaker for extra-bit sizes that rely heavily on shared memory (see Figure 3). 
On actual model training, our solution achieves up to 16% faster training on the Resnet-18 model, as compared to classic mixed-precision training. \begin{table} \begin{tabular}{c c c c} & \begin{tabular}{c} Fused backward pass \\ enabled \\ \end{tabular} & \begin{tabular}{c} Peak memory \\ usage \\ \end{tabular} & \begin{tabular}{c} Time to complete \\ \end{tabular} \\ \hline \hline MNLI & ✗ & 29 930 MB & 10:41:28 \\ & ✓ & 26 818 MB & 11:07:34 \\ \hline QNLI & ✗ & 30 239 MB & 03:11:09 \\ & ✓ & 26 826 MB & 03:16:48 \\ \hline MRPC & ✗ & 29 964 MB & 06:23 \\ & ✓ & 26 649 MB & 06:40 \\ \end{tabular} \end{table} Table 2: Finetuning Performance on the GLUE Benchmark. We trained a large flan-T5 model on several tasks of the GLUE benchmark, comparing its performance when the backward pass is fused or not. Results show that peak memory usage is down by 11%, while computation time increased only slightly. \begin{table} \begin{tabular}{c c c c} Training framework & Execution Time & Accuracy (top 1) & Global GPU Memory usage \\ \hline fp32 & 42 min & 94.64 \% & 4 720 MB \\ amp fp16 + fp32 & 24 min & 94.04 \% & 4 520 MB \\ fp16+13 rstoc & 38 min & 94.10 \% & 3 868 MB \\ fp16+8 & 20 min & 94.06 \% & 3 776 MB \\ fp16 & 23 min & 93.79 \% & 3 840 MB \\ fp8+8 & – & 93.96 \% & – \\ fp8+4 & – & 65.53 \% & – \\ fp8 & – & 8.78 \% & – \\ \end{tabular} \end{table} Table 3: Training performance on Resnet18 with CIFAR10. Our solution outperforms classic mixed-precision training both in terms of speed and memory use. This context shows too little difference in accuracy between the different formats to draw any conclusion. Training was performed on an Nvidia V100-SXM GPU. This hardware does not natively support fp8, therefore we emulated 8bits precision on fp16 values (e5m2 fp8 is equivalent to a truncated fp16), which is why execution time and memory usage are not relevant in that case. Moreover, fp8-only training could yield better results if it were part of a more adapted framework; it is displayed here only for comparison purposes. Concerning the fused optimizer, we notice a small improvement (2% to 10% faster training) when training with a fused optimizer as compared to our standard optimizer. The difference appears to be smaller than between classic optimizers; we suppose that this is because the extra-bit values are here accessed separately, whereas they form chunks of 32bits in our base optimizer. The results displayed in Figure 5 show that our solution does not perform better than fused-fp32 when using 8 extra-bits; however, in this case bf16 training is slower than full precision training (likely because the input data is in fp32 format). **Stochastic Rounding.** Stochastic rounding provides better accuracy compared to classic 16bits formats or the basic extra-bit format (see Figure 6). However, stochastic rounding is less efficient due to the additional operations required to round and 'unround' the values, thus leading to performance closer to fp32 training. Moreover, the gain in accuracy compared to the standard formats is most prominent when few operations have been done, which is where the error due to the data is maximal. Figure 2: Memory Usage During A Synthetic Training. The left graph shows memory usage when using torch's standard mixed-precision framework, while the right one uses ours. Training consists of three steps on a dummy 2-billion-parameter model using the SGD with momentum optimizer. Our solution uses fp16 for the model's parameters, the activation values and the gradients, and fp32 for the optimizer state. 
The extra-bits are considered as part of the optimizer state memory footprint. The profiler also considers the automatic mixed precision fp16 copy of the model's parameters as activation values, which is why they appear larger in the first figure. In total, peak memory usage is down by 54% and could be further reduced by using an optimizer with reduced state memory. Figure 4: Accuracy improvement during training w.r.t. the classical fp16 framework on a DLRM model. In this case, 8 extra-bits of storage are sufficient to achieve the same accuracy as a standard single-precision setup. While a vanilla fp16 model leads to losing part of the model's accuracy, our optimizer enables us to avoid this loss. Figure 5: Impact of the fused optimizer on training performance of a DCGAN model. We trained a DCGAN model on the lsun-bedroom dataset; the first graph shows the time needed to complete the training in the different configurations, and the second graph shows the final accuracy, measured with the inception score and Fréchet inception distance. Figure 3: Throughput for an arithmetic operation on tensors. Data is shown as a percentage of the theoretical GPU bandwidth. Experiments were run on an NVIDIA A100-40GB-SXM GPU. ### Limitations **Gradient Accumulation.** As stated earlier, fusing the backward pass and the optimizer step prevents the use of the gradients after the backward pass. The use of gradient accumulation (stepping the optimizer after several back-propagations) therefore becomes impossible. However, the main point of gradient accumulation is to reduce the need for GPU memory by splitting the batches into smaller mini-batches, and our solution allows for larger batches by getting rid of the gradient, thus reducing the need for such a mechanism. **Gradient Clipping.** Gradient clipping is a method designed to prevent exploding gradients by setting a threshold on their values. It is usually applied between the backward pass and the optimizer step, and it therefore becomes impossible when both are fused. One simple solution to mitigate this issue is to perform the clipping through a parameter hook, for instance: `for p in model.parameters(): p.register_hook(lambda grad: torch.clamp(grad, -clip_value, clip_value))`. The same applies to loss scaling. **Closure.** Some training frameworks make use of a "closure" function, which is called during the optimizer step. This is for instance the case with the L-BFGS optimizer, which uses several calls to the loss function during the step process. In this case, fusing the backward pass and the stepping may have unexpected consequences on the optimizer behavior, since gradients are not available where they should be. Although such frameworks are, to our knowledge, rather uncommon, we warn that stepping the optimizer after each gradient computation might trigger issues with some training configurations that we are not aware of. ## 5 Conclusion Mixed precision training is an efficient technique to accelerate neural network training without massive loss in accuracy. We have shown that its memory consumption can be further reduced by tweaking the optimization process. In many cases, the full precision copy can be dispensed with by keeping only part of it. Moreover, fusing the backward pass and the optimizer step reduces peak memory usage by removing the pressure caused by gradients. Further work could focus on extending these mechanisms to smaller floating point arithmetic, such as 8bits models. 
We could also investigate their integration into parallel and distributed training frameworks, as well as less common optimization setups. Figure 6: Forward Error On 16bits Tensor Addition. We measure the round-off error of our different half float formats on operations with random values following the standard normal distribution. The first graph shows the error after only one operation (in-place add) given some tensor size \(n\). The second graph shows the error after several operations, given their condition number. We denote by \(\varepsilon\) the fp16 round-off precision and by \(u\) the precision when using 8 extra-bits. We observe on the first operation that basic fp16+8 behaves somewhat worse than classical fp16, whereas both extra-bit formats produce significantly better results on cumulative operations.
2309.14928
Noise-Tolerant Few-Shot Unsupervised Adapter for Vision-Language Models
Recent advances in large-scale vision-language models have achieved impressive performance in various zero-shot image classification tasks. While prior studies have demonstrated significant improvements by introducing few-shot labelled target samples, they still require labelling of target samples, which greatly degrades their scalability and generalizability while handling various visual recognition tasks. We design NtUA, a Noise-tolerant Unsupervised Adapter that allows the learning of effective target models with few unlabelled target samples. NtUA works as a key-value cache that formulates visual features and predicted pseudo-labels of the few unlabelled target samples as key-value pairs. It consists of two complementary designs. The first is adaptive cache formation that combats pseudo-label noises by weighting the key-value pairs according to their prediction confidence. The second is knowledge-guided cache refinement, which refines pair values (i.e., pseudo-labels) and cache weights by leveraging knowledge distillation from large-scale vision language models. Extensive experiments show that NtUA achieves superior performance consistently across multiple widely adopted benchmarks.
Eman Ali, Muhammad Haris Khan
2023-09-26T13:35:31Z
http://arxiv.org/abs/2309.14928v3
# Noise-Tolerant Unsupervised Adapter for Vision-Language Models ###### Abstract Recent advances in large-scale vision-language models have achieved very impressive performance in various zero-shot image classification tasks. While prior studies have demonstrated significant improvements by introducing few-shot labelled target samples, they still require labelling of target samples, which greatly degrades their scalability while handling various visual recognition tasks. We design NtUA, a Noise-tolerant Unsupervised Adapter that allows learning superior target models with few-shot unlabelled target samples. NtUA works as a key-value cache that formulates visual features and predicted pseudo-labels of the few-shot unlabelled target samples as key-value pairs. It consists of two complementary designs. The first is adaptive cache formation that combats pseudo-label noises by weighting the key-value pairs according to their prediction confidence. The second is pseudo-label rectification, which corrects both pair values (i.e., pseudo-labels) and cache weights by leveraging knowledge distillation from large-scale vision language models. Extensive experiments show that NtUA achieves superior performance consistently across multiple widely adopted benchmarks. The code will be released. ## 1 Introduction The recent development in large-scale pretrained vision-language models [36, 18, 47] has advanced image-text relationship modelling greatly. One representative is CLIP [36] which learns image-text relations by jointly training a visual encoder and a linguistic encoder over web-scale image-text data. Thanks to the linguistic diversity of the web data, CLIP can be exploited in various image classification tasks regardless of the number and nature of image classes. The typical routine is to employ CLIP's linguistic encoder to generate text embeddings of pre-defined class names and then match the text embeddings with the features of test images which are extracted with CLIP's visual encoder in a zero-shot manner. Although pre-trained CLIP has demonstrated great effectiveness in image classification tasks, its performance depends heavily on the distribution discrepancy between its pretraining image-text pairs and specific classification images in various target domains. Several studies introduce few-shot labelled target samples of each class to adapt the pre-trained CLIP to various target classification tasks to mitigate the inter-domain discrepancy [10, 52, 56, 55]. Though these studies have achieved clear improvements in various Figure 1: Unlike key-value cache from labelled samples in supervised method [52, 32], we build weighted key-value cache from unlabelled samples, where the cache weights are determined by the confidence of the pseudo-labels predicted by large-scale vision-language models. The weighting mechanism makes the unsupervised adaptation more tolerant to noisy pseudo-labels. few-shot classification benchmarks, they require labelling numerous samples for each class of target domains which greatly degrades their scalability, especially while handling large-scale datasets such as ImageNet [5] that have large number of image classes. Unsupervised learning adapts pre-trained CLIP to target classification tasks with few-shot unlabelled target samples, which offers an alternative to remove the labelling effort and improves the learning scalability greatly. 
Unsupervised learning with few-shot unlabelled target samples has been explored in various generative and discriminative tasks such as image classification [51, 17], image generation [19, 41] and domain adaptation [27, 49]. However, it has been largely neglected for the adaptation of large-scale vision-language models in various downstream tasks. To the best of our knowledge, this work is the first that explores unsupervised learning for adapting vision-language models towards various downstream classification datasets under the presence of few-shot unlabelled target samples. We design NtUA, a Noise-tolerant Unsupervised Adapter that enables robust adaptation of pre-trained vision-language models with few-shot unlabelled target samples. NtUA achieves noise-tolerant adaptation by generating more accurate pseudo-labels of the few-shot unlabelled target samples. Inspired by the adapter idea in supervised methods [11, 20, 32, 52], NtUA introduces a weighted key-value cache which formulates the CLIP-extracted visual features as keys, the predicted pseudo-labels of target samples as values, and the corresponding pseudo-label confidence as weights of the key-value pairs, as illustrated in Fig. 1. The incorporation of cache weights greatly enhances NtUA's tolerance to pseudo-label noise, as the prediction confidence is closely correlated with pseudo-label accuracy [57, 44]. In addition, we design a noise rectification technique that improves the quality of the predicted pseudo-labels effectively. The rectification leverages CLIP-distilled knowledge, which updates both pair values and cache weights iteratively. Extensive experiments show that NtUA is simple but effective, with an average accuracy gain of 5.06% across 11 widely studied datasets. In summary, the contributions of this work are threefold. _First_, we design an unsupervised learning framework that can effectively adapt pretrained vision-language models towards various downstream classification tasks under the presence of few-shot unlabelled target samples. _Second_, we design a noise-tolerant unsupervised adapter that is robust to noisy pseudo-labels predicted by vision-language models. The adapter introduces a weighted cache and noise rectification, which exploit pseudo-label confidence and CLIP-distilled knowledge to enhance pseudo-label quality effectively. _Third_, extensive experiments demonstrate the great effectiveness of the designed NtUA across multiple classification benchmarks. ## 2 Related Work **Data-efficient transfer learning** aims to generate transferable features capable of achieving high performance on various visual recognition tasks with few-shot labelled data. Under the setting of data-efficient transfer learning, recent studies demonstrate that techniques such as prompt optimization [55, 4, 56] and adapter fine-tuning [10, 39, 52] can significantly enhance the performance of vision-language models. However, these approaches often need a significant amount of labelled data, which is often difficult or expensive to obtain. Differently, our method enhances the generalization of the pre-trained vision-language models by learning from few-shot unlabelled target data without requiring additional labelled data. **Cache model** refers to a database that stores information about the features and labels of training data in a key-value format. 
In the inference phase, the cache model can quickly retrieve relevant information by treating the feature generated from a test example as a query and searching the database for matching information [43]. It is widely used to enhance the performance of various models, including language models [11, 29], vision models [32], and vision-language models [52]. For example, Tip-Adapter [52] introduces a blended cache model to enhance the efficiency of retrieval in pre-trained vision-language models with two cascading matrix multiplication operations. Different from existing work, the proposed NtUA is the first that introduces a weighting mechanism in the construction of cache models for the task of vision-language model adaptation. **Knowledge distillation** is a machine learning technique that transfers knowledge from a larger and more powerful teacher model to a student model with fewer parameters. By learning from the teacher model's outputs, the student model can reduce its error and enhance its performance [24, 25]. Most existing studies achieve knowledge distillation via two typical approaches: distillation from intermediate features [14, 15, 16, 21, 33, 35, 37, 40, 42, 48, 50] and distillation from logits [2, 9, 30, 46, 53, 22]. Our NtUA follows the second approach, _i.e._, distillation from logits. Unlike prior studies that only distil knowledge from logits, NtUA updates both the values (logits) and weights by leveraging distilled knowledge from CLIP models. ## 3 Method This section presents our proposed unsupervised transfer learning method that enhances the pre-trained CLIP by using few-shot unlabelled target samples. In Section 3.1, we provide a brief overview of Tip-Adapter, a closely related supervised transfer method. Subsequently, in Section 3.2, we describe the proposed Noise-tolerant Unsupervised Adapter (NtUA) that is designed to address noisy pseudo labels during unsupervised transfer learning. Finally, we discuss NtUA's connection to Tip-Adapter in Section 3.3. ### A Revisit of Tip-Adapter Tip-Adapter is an efficient learning method that adapts the pre-trained CLIP model for supervised few-shot image classification. The task of supervised image classification involves a labelled dataset \(D=\{(x_{i},y_{i})\}_{i=1}^{m}\), where \(x_{i}\) represents an image, \(y_{i}\) is the corresponding ground-truth label and \(m\) denotes the number of samples. Typically, the dataset \(D\) can be divided into three subsets: a train set \(D_{\rm train}\), a validation set \(D_{\rm valid}\), and a test set \(D_{\rm test}\). For the task of supervised few-shot image classification, \(D_{\rm train}\) consists of few-shot training samples of \(N\)-way-\(K\)-shot, where \(N\) represents the number of classes, \(K\) represents the number of training examples per class, and the number of training samples can thus be derived by \(|D_{\rm train}|=NK\). For each image \(x_{train}\) in \(D_{\rm train}\), Tip-Adapter utilizes a visual encoder \(E_{v}\) of the pretrained CLIP model to extract \(d\)-dimensional L\(2\) normalized image features \(f_{\rm train}=E_{v}(x_{\rm train})\) and converts its ground-truth label into a \(N\)-dimensional one-hot vector. For all \(NK\) training samples, Tip-Adapter extracts \(f_{\rm train}\in\mathbb{R}^{1\times d}\) and \(y_{train}\in\mathbb{R}^{1\times N}\) from each training image to obtain the image features \(\mathbf{F}_{\rm train}\in\mathbb{R}^{NK\times d}\) and one-hot vectors \(\mathbf{L}_{\rm train}\in\mathbb{R}^{NK\times N}\) for all the whole training set. 
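A compact sketch of how the cache keys \(\mathbf{F}_{\rm train}\) and values \(\mathbf{L}_{\rm train}\) described above could be assembled is shown below. It assumes PyTorch; `model` stands for a CLIP model and `train_images`/`train_labels` are placeholders for a preprocessed N-way-K-shot batch, not names from the reference implementation.

```
# Building the key-value pairs of the Tip-Adapter cache from few-shot data.
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_cache(model, train_images, train_labels, num_classes):
    feats = model.encode_image(train_images)               # (NK, d) visual features
    feats = feats / feats.norm(dim=-1, keepdim=True)       # L2-normalised keys F_train
    labels = F.one_hot(train_labels, num_classes).float()  # (NK, N) one-hot values L_train
    return feats, labels
```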
In order to build an efficient feature adapter, Tip-Adapter constructs a key-value cache by storing \(\mathbf{F}_{\rm train}\) as the keys and \(\mathbf{L}_{\rm train}\) as the values. During inference, the image features \(f_{\rm test}\) generated from each test image \(x_{\rm test}\in D_{\rm test}\) are utilized as a query to retrieve relevant information from the key-value cache. In Tip-Adapter, the prediction logits of a test image \(f_{\rm test}\) are obtained as follows: \[P_{\rm TA}(f_{\rm test})=\alpha\varphi(f_{\rm test}\cdot\mathbf{F}_{\rm train}^{T})\mathbf{L}_{\rm train}+f_{\rm test}\cdot\mathbf{W}^{T}, \tag{1}\] where \(\varphi(x)=\exp(-\beta(1-x))\) is a mapping function defined in [52], \(\beta\) is a modulating hyperparameter, \(\mathbf{W}\in\mathbb{R}^{N\times d}\) denotes the parameters of the textual encoder \(E_{t}\) of the pre-trained CLIP, and \(\alpha\) refers to a balancing ratio. The values of the hyperparameters \(\alpha\) and \(\beta\) are updated using the validation set \(D_{\rm valid}\). The keys in the cache model can be tuned using a loss function defined as follows: \[\mathcal{L}_{\rm TA}(f_{\rm train},y_{\rm train})=\mathcal{L}_{\rm CE}(P_{\rm TA}(f_{\rm train}),\mathbf{L}_{\rm train}). \tag{2}\] Figure 2: The framework of Noise-Tolerant Unsupervised Adapter (NtUA): (a) In Stage I, NtUA constructs a weighted key-value cache to store the knowledge of unlabelled target samples and then applies pseudo-label rectification to correct both cache values and cache weights. In the cache, the image features extracted from CLIP’s visual encoder \(E_{v}\) serve as the _keys_, the pseudo-labels of CLIP predictions (generated using \(E_{v}\) and CLIP’s textual encoder \(E_{t}\)) serve as the _values_, and the corresponding prediction confidences serve as the _weights_ of the key-value pairs. To perform noise rectification, NtUA generates CLIP-distilled predictions (with CLIP’s visual encoder \(E_{v}^{kd}\) and textual encoder \(E_{t}\)) and leverages such CLIP-distilled knowledge to update both _values_ and _weights_ in the cache. (b) In Stage II, NtUA updates _the keys_ in the constructed _weighted key-value cache_ by incorporating knowledge from both the cache and CLIP. ### Noise-tolerant Unsupervised Adapter Unlike Tip-Adapter, which adopts a few-shot supervised transfer learning approach, we tackle a more challenging problem of unsupervised transfer learning for the CLIP model, where only a limited amount of unlabelled target data is available. The primary obstacle is to overcome the noisy pseudo-labels generated from the pre-trained CLIP model over the unlabelled target data. To address this issue, we propose a Noise-tolerant Unsupervised Adapter (NtUA) that enables robust learning from such noisy pseudo-labels. As illustrated in Fig. 2, NtUA involves two stages, where the first stage is weighted key-value cache construction with pseudo-label rectification and the second stage is weighted cache fine-tuning. We will present these designs in the following sections. #### 3.2.1 Weighted Key-value Cache Construction For the specific task of unsupervised few-shot image classification, we utilize an unlabelled training set \(\hat{D}_{\mathrm{train}}=\{x_{i}\}_{i=1}^{m_{\mathrm{train}}}\), where \(x_{i}\) denotes a training image and \(m_{\mathrm{train}}\) represents the number of training samples. 
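For reference, Eq. (1) and Eq. (2) above can be restated in a few lines of code. This is only an illustrative sketch following the notation of the Tip-Adapter revisit (not the reference implementation); `alpha` and `beta` are the balancing and modulating hyperparameters.

```
# Cache-augmented prediction of Eq. (1) and the key-tuning loss of Eq. (2).
import torch
import torch.nn.functional as F

def tip_adapter_logits(f, F_train, L_train, W, alpha, beta):
    affinity = f @ F_train.T                                       # query-key similarities
    cache_logits = torch.exp(-beta * (1.0 - affinity)) @ L_train   # phi(.) applied to affinities
    clip_logits = f @ W.T                                          # zero-shot CLIP term
    return alpha * cache_logits + clip_logits                      # P_TA(f)

def tip_adapter_loss(f_train, F_train, L_train, W, alpha, beta):
    logits = tip_adapter_logits(f_train, F_train, L_train, W, alpha, beta)
    return F.cross_entropy(logits, L_train.argmax(dim=-1))         # Eq. (2)
```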
For the ease of benchmarking with Tip-Adapter, we set \(m_{\mathrm{train}}\) to \(NK\), where \(N\) denotes the number of classes and \(K\) is identical to that of Tip-Adapter. In order to generate pseudo-labels for the unlabelled target data, we employ the visual encoder \(E_{v}\) of a pretrained CLIP model to extract image features \(f_{\mathrm{train}}\) for all target data. Then, we utilize the CLIP's textual encoder \(E_{t}\), which takes a prompt as input, to generate the classifier's weights \(\mathbf{W}\) for all the target class names. These weights are applied to the image features, resulting in prediction logits given by \(P(f_{train})=f_{train}\cdot\mathbf{W}^{T}\). From these prediction logits, we can obtain the pseudo-labels and convert them into one-hot vectors \(\mathbf{\hat{L}}_{\mathrm{train}}\) for all unlabelled target images. The resulting logit values are then converted into probability distributions over the target classes by applying the softmax function. The highest probability for each image is taken as the confidence score \(\mathbf{\hat{C}}_{\mathrm{train}}\) of the pseudo-labels. This process allows NtUA to generate useful pseudo-labels for \(\hat{D}_{\mathrm{train}}\) without requiring manual annotations. NtUA incorporates both pseudo-labels \(\mathbf{\hat{L}}_{\mathrm{train}}\) and their corresponding weights \(\mathbf{\hat{C}}_{\mathrm{train}}\) in the weighted cache model. It combines the predictions from the weighted cache model and the predictions from the CLIP model as follows: \[\begin{split} P_{\text{NtUA}}(f_{\mathrm{train}})& =\alpha\mathbf{\hat{C}}_{\mathrm{train}}\varphi(f_{\mathrm{train }}\cdot\mathbf{F}_{\mathrm{train}}^{T})\mathbf{\hat{L}}_{\mathrm{train}}\\ &+f_{\mathrm{train}}\cdot\mathbf{W}^{T}.\end{split} \tag{3}\] NtUA is more robust to pseudo-label noises by incorporating cache weights \(\mathbf{\hat{C}}_{\mathrm{train}}\), since the accuracy of pseudo-labels is closely related to the prediction confidence. We argue that the quality of both pseudo-labels and the corresponding weights in NtUA are very important to the prediction accuracy, as formulated in Eq. (3). We design a noise rectification technique to improve the quality of both pseudo-labels \(\mathbf{\hat{L}}_{\mathrm{train}}\) and the corresponding cache weights \(\mathbf{\hat{C}}_{\mathrm{train}}\) in NtUA. A direct solution to control the quality of pseudo-labels is using a fixed threshold to filter out noisy pseudo-labels with low confidence. However, this solution requires an additional hyperparameter (_i.e._, the threshold) that needs cumbersome parameter learning for different target datasets. We design a noise rectification method that leverages CLIP-distilled knowledge to update both pair values and cache weights iteratively, more details to be elaborated in the ensuing subsection. #### 3.2.2 Pseudo-label Rectification In our pseudo-label rectification method, a large-scale CLIP model is introduced to perform knowledge distillation for updating both pseudo-labels and their weights, where \(\mathrm{E}_{\mathrm{v}}^{\mathrm{kd}}\) denote the visual encoder of the distilled CLIP model. 
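Before the rectification step is detailed below, the pseudo-labelling and the confidence-weighted prediction of Eq. (3) can be sketched as follows. The per-pair weighting is one natural reading of the \(\mathbf{\hat{C}}_{\mathrm{train}}\) term; tensor names mirror the notation above and the code is illustrative only.

```
# Pseudo-labels, confidence weights, and the weighted cache prediction of Eq. (3).
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(f_train, W, num_classes):
    probs = (f_train @ W.T).softmax(dim=-1)              # CLIP zero-shot probabilities
    conf, idx = probs.max(dim=-1)                        # confidence C_hat and predicted class
    return F.one_hot(idx, num_classes).float(), conf     # one-hot L_hat, weights C_hat

def ntua_logits(f, F_train, L_hat, C_hat, W, alpha, beta):
    affinity = torch.exp(-beta * (1.0 - f @ F_train.T))     # phi(f . F_train^T)
    cache_logits = (affinity * C_hat.unsqueeze(0)) @ L_hat  # confidence-weighted cache term
    return alpha * cache_logits + f @ W.T                   # Eq. (3)
```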
Specifically, NtUA adopt the visual encoder \(\mathrm{E}_{\mathrm{v}}^{\mathrm{kd}}\) to get the accumulated visual features of the \(NK\) unlabelled target images in \(\hat{D}_{\mathrm{train}}\), referred to as \(\mathrm{I}_{NK}\), as follows: \[\mathbf{F}_{\mathrm{train}}^{\mathrm{kd}}=\mathrm{E}_{\mathrm{v}}^{\mathrm{kd} }(\mathrm{I}_{NK}), \tag{4}\] To generate pseudo-labels corresponding to the distilled visual features, NtUA multiplies the distilled features vector \(\mathbf{F}_{\mathrm{train}}^{\mathrm{kd}}\) with the CLIP's classifier weights \(\mathbf{W}\) and converts the pseudo-labels to one-hot vectors of \(N\)-dimensions denoted as \(\mathbf{\hat{L}}_{\mathrm{train}}^{\mathrm{kd}}\). Moreover, the confidence scores for the pseudo-labels, referred to as \(\mathbf{\hat{C}}_{\mathrm{train}}^{\mathrm{kd}}\), are obtained by taking the maximum probabilities over CLIP's predictions. The weighted cache model is constructed using the accumulated feature vector \(\mathbf{F}_{\mathrm{train}}\) from the same CLIP model architecture employed for tuning the keys and adapting the model, along with the pseudo-label \(\mathbf{\hat{L}}_{\mathrm{train}}^{\mathrm{kd}}\) and the confidence score \(\mathbf{\hat{C}}_{\mathrm{train}}^{\mathrm{kd}}\) generated from the distilled CLIP model. Note that NtUA does not utilize the distilled features vector \(\mathbf{F}_{\mathrm{train}}^{\mathrm{kd}}\) in constructing the weighted cache model. Thanks to the proposed noisy rectification method, NtUA allows leveraging the CLIP-distilled pseudo-labels \(\mathbf{\hat{L}}_{\mathrm{train}}^{\mathrm{kd}}\) and the corresponding confidence scores \(\mathbf{\hat{C}}_{\mathrm{train}}^{\mathrm{kd}}\) to enhance the prediction logits as follows: \[\begin{split} P_{\text{NtUA}}^{kd}(f_{\mathrm{train}})=& \alpha\mathbf{\hat{C}}_{\mathrm{train}}^{\mathrm{kd}}\varphi(f_{ \mathrm{train}}\cdot\mathbf{F}_{\mathrm{train}}^{T})\mathbf{\hat{L}}_{ \mathrm{train}}^{\mathrm{kd}}\\ &+f_{\mathrm{train}}\cdot\mathbf{W}^{T}.\end{split} \tag{5}\] #### 3.2.3 Weighted Cache Fine-tuning During training, NtUA updates the keys of the cache model by taking into account the confidence of pseudo-labels generated from the CLIP model architecture. Specifically, prediction logits from the weighted cache are combined with the CLIP's prediction logits to fine-tune the cache model based on the confidence score of the pseudo-labels, as defined in Eq. (5). The loss function in NtUA training can thus be formulated as follows: \[\mathcal{L}_{\text{NtUA}}=\mathcal{L}_{\text{CE}}(P_{\text{NtUA}}^{kd}(f_{\text {train}}),\mathbf{\hat{L}}_{\text{train}}^{\text{kd}}). \tag{6}\] During inference, NtUA utilizes Eq. (1) to make predictions. It is crucial to adjust the keys of the weighted cache model by utilizing the confidence score of the pseudo-labels. This step is essential to mitigate the adverse effects of inaccurate or noisy pseudo-labels. By assigning higher weights to trustworthy pseudo-labels, we can guarantee that the training process relies more heavily on precise pseudo-labels, resulting in superior performance. Additionally, weighting the logits with confidence scores can prevent the model from overfitting to incorrect pseudo-labels by encouraging it to concentrate on dependable pseudo-labels and ignore erroneous or unreliable ones. ### Relationship with Tip-Adapter In Tip-Adapter [52], a cache model is adopted to facilitate few-shot adaptation in a vision-language model. NtUA differs from Tip-Adapter in three significant ways. 
First, NtUA and Tip-Adapter adapt models in very different ways. While Tip-Adapter aims to adapt vision-language models with few-shot labelled samples, \(\mathbf{L}_{\text{train}}\), NtUA leverages unsupervised learning to achieve adaptation by utilizing a few unlabelled samples. NtUA is capable of achieving few-shot model adaptation without depending on the actual labels of the target data. To accomplish this adaptation, it exploits pseudo-labels, denoted as \(\mathbf{\hat{L}}_{\text{train}}\), which are predicted by CLIP models, as shown in Eq. (3). Second, NtUA and Tip-Adapter adopt different methods to construct the cache model. Tip-Adapter builds the cache model using the visual features extracted from a pre-trained CLIP model, \(\mathbf{F}_{\text{train}}\), as the keys and the true labels, \(\mathbf{L}_{\text{train}}\), as the values. The key-value pairs are then utilized to guide the fine-tuning process of the cache model's keys. Without sample labels, the cache model construction in NtUA is much more challenging, involving not only \(\mathbf{F}_{\text{train}}\), but also pseudo-labels, \(\mathbf{\hat{L}}_{\text{train}}^{\text{kd}}\), and their corresponding confidence scores, \(\mathbf{\hat{C}}_{\text{train}}^{\text{kd}}\), generated by the CLIP model, as shown in Eq. (5). NtUA thus exhibits greater resilience to inaccuracies in pseudo-labels by integrating cache weights, since the reliability of pseudo-labels is strongly correlated with the prediction confidence. Finally, NtUA introduces pseudo-label rectification that utilizes knowledge distillation to improve the quality of the pseudo-labels while constructing the weighted cache model. Specifically, NtUA employs a large-scale CLIP architecture, denoted as \(\mathrm{E}_{\text{v}}^{\text{kd}}\), to enhance the accuracy of the pseudo-labels and their corresponding confidence scores. These pseudo-labels and confidence scores provide more reliable and accurate guidance while fine-tuning the keys of the built weighted cache model. ## 4 Experiments ### Experimental Setups We evaluate NtUA on 11 widely-used image classification datasets: ImageNet [5], Caltech101 [8], DTD [3], EuroSAT [13], FGVCAircraft [28], Food101 [1], Flowers102 [31], OxfordPets [34], SUN397 [45], StanfordCars [23], and UCF101 [38]. 
We adopt CLIP [36] as our pre-trained \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline Methods & ImgNet & Caltech & DTD & ESAT & FGVCA & Food & Flower & OxPets & SUN & StCars & UCF & **Average** \\ \hline CLIP-RN50 [36] & 60.34 & 86.00 & 42.14 & 37.46 & 17.13 & 77.34 & 66.02 & 85.83 & 58.57 & 55.68 & 61.41 & 58.90 \\ \hline \hline \multicolumn{10}{c}{4-shot unlabelled target samples} \\ \hline CoOp [56] & 56.68 & 89.21 & 41.37 & 32.63 & 14.88 & 75.52 & 62.04 & 82.39 & 59.85 & 53.12 & **62.94** & 57.33 \\ Tip-Adapter [52] & 60.82 & 87.91 & 42.32 & 44.27 & 16.74 & 77.52 & 67.40 & 85.42 & 59.95 & 56.37 & 62.65 & 60.12 \\ **NtUA (Ours)** & **61.60** & **89.78** & **47.64** & **58.10** & **19.41** & **77.68** & **70.52** & **87.22** & **60.30** & **59.43** & 62.23 & **63.08** \\ \hline \hline \multicolumn{10}{c}{8-shot unlabelled target samples} \\ \hline CoOp [56] & 57.21 & 89.45 & 43.20 & 47.09 & 15.66 & 75.85 & 64.03 & 81.25 & 60.93 & 53.45 & 62.94 & 59.19 \\ Tip-Adapter [52] & 61.24 & 88.76 & 42.49 & 49.15 & 15.42 & 78.24 & 69.27 & 87.24 & 61.10 & 57.72 & 63.10 & 61.25 \\ **NtUA (Ours)** & **62.37** & **90.18** & **47.70** & **57.16** & **17.94** & **78.72** & **74.83** & **87.46** & **61.81** & **62.50** & **64.82** & **64.14** \\ \hline \hline \multicolumn{10}{c}{16-shot unlabelled target samples} \\ \hline CoOp [56] & 57.31 & 90.06 & 44.62 & 57.09 & 15.66 & 75.59 & 67.03 & 82.28 & 62.72 & 53.30 & 64.05 & 60.88 \\ Tip-Adapter [52] & 61.54 & 89.74 & 44.15 & 39.31 & 17.13 & 78.31 & 69.55 & 86.62 & 62.21 & 57.74 & 64.29 & 60.96 \\ **NtUA (Ours)** & **63.55** & **92.13** & **48.94** & **58.35** & **21.03** & **79.11** & **76.82** & **89.29** & **63.65** & **65.80** & **67.54** & **66.02** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of NtUA with two state-of-the-art unsupervised few-shot adaptation methods over 11 widely adopted image classification benchmarks. Using CLIP-RN50 as a backbone, we perform evaluations over 4-shot, 8-shot, and 16-shot setups, respectively. vision-language model, with ResNet-50 as CLIP's backbone visual encoder. For pseudo-label rectification, we utilize supplementary CLIP models with ViT-L/14 [6] as a visual encoder. CLIP provides a range of prompt templates for inference, comprising 80 hand-crafted prompts for ImageNet. To ensure consistency, we implement CLIP's prompt ensembling for ImageNet and employ a single hand-crafted prompt for the remaining datasets. We adhere to CLIP's data pre-processing protocol to generate the pseudo-labels, which involves random cropping, resizing, and random horizontal flipping. However, for the construction and fine-tuning of the weighted cache model, we apply a range of dataset-specific data augmentations, such as horizontal flipping, affine transformations, and colorJitter. We follow the Tip-Adapter fine-tuning setting for fine-tuning the keys of the weighted cache model by freezing the pre-trained CLIP encoders in fine-tuning, as well as the values and weights of the weighted cache model. We use a batch size of 256, a learning rate of 0.001, and AdamW optimizer [26] with a cosine scheduler to fine-tune the cache model's keys. We conduct a comparative analysis of three state-of-the-art adaption methods: 1) Zero-shot CLIP [36]; 2) CoOp [54]; and 3) Tip-Adapter [52]. Zero-shot CLIP doesn't require any training and relies solely on pre-existing knowledge. In contrast, CoOp and Tip-Adapter are two methods for few-shot supervised adaptation transfer learning. 
CoOp employs a learnable prompt for training, while Tip-Adapter constructs a cache model using the few-shot target data. As for the unsupervised adaptation of CoOp and Tip-Adapter, we utilize a pre-trained CLIP model with ResNet-50 as a visual encoder to generate pseudo-labels. For CoOp, we opt for the top-k confident pseudo-labels from the whole dataset, whereas with Tip-Adapter, the pseudo-labels are created only for the few-shot unlabeled target data, and a cache model is constructed with these pseudo-labels without resorting to weight or knowledge distillation. ### Comparison of State-the-of-art We present the main results of our proposed method obtained from 11 image classification datasets in Table 1. We benchmark the proposed NtUA with three state-of-the-art methods, namely, CLIP-ResNet50, CoOp, and Tip-Adapter. All three methods, along with our proposed method, receive unlabelled few-shot samples and utilize CLIP as a backbone with ResNet50 as a visual encoder. As shown in Table 1, the proposed NtUA, demonstrates superior performance as compared with the three state-of-the-art methods. NtUA consistently outperforms zero-shot CLIP with only a few training epochs by \(+4.18\%\), \(+5.23\%\), and \(+7.12\%\) in 4, 8, and 16 few-shot respectively. In the context of unlabeled few-shot adaptation, NtUA remains competitive as compared with CoOp, by achieving gains of \(+5.75\%\), \(+4.95\%\), and \(+5.14\%\) respectively, despite CoOp's use of the top-k confidence samples. Although in a 4-shot setting, CoOp's performance diminishes by \(-1.57\%\) compared to CLIP-RN50, our method outperforms CoOp by \(+5.75\%\). This indicates that NtUA is efficacious in improving performance in challenging unlabeled few-shot scenarios. Furthermore, NtUA outperforms Tip-Adapter by achieving steady performance gains across 4, 8, and 16 few-shot scenarios. In contrast, the performance of Tip-Adapter diminishes in 16-shot scenarios, owing to a decline in the accuracy of generated pseudo-labels. 
By utilizing know \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Methods & ImgNet & Caltech & DTD & ESAT & FGVCA & Food & Flower & OxPets & SUN & StCars & UCF & **Average** \\ \hline CLIP-ViT-B/16 [36] & 68.77 & 92.98 & 44.56 & 47.48 & 24.84 & 86.12 & 71.34 & 89.10 & 62.59 & 65.25 & 66.83 & 65.44 \\ \hline \hline \multicolumn{10}{c}{4-shot unlabelled target samples} \\ \hline CoOp [56] & 67.05 & 94.28 & 45.33 & 60.51 & 22.35 & 84.44 & 70.56 & 88.72 & **66.05** & 63.96 & **68.99** & 66.57 \\ Tip-Adapter [52] & 69.32 & 94.00 & 43.79 & 54.46 & 23.07 & **86.34** & 71.74 & 89.70 & 64.22 & 66.37 & 67.33 & 66.39 \\ **NtUA (Ours)** & **69.76** & **94.40** & **47.93** & **61.32** & **25.50** & 86.33 & **74.54** & **91.44** & 64.64 & **67.64** & 68.04 & **68.32** \\ \hline \hline \multicolumn{10}{c}{8-shot unlabelled target samples} \\ \hline CoOp [56] & 66.67 & **94.89** & 44.27 & 59.37 & 23.16 & 84.54 & 71.78 & 90.22 & **66.74** & 65.20 & 69.89 & 66.98 \\ Tip-Adapter [52] & 69.98 & 92.90 & 44.56 & 56.46 & 24.42 & 86.65 & 71.38 & 90.27 & 65.51 & 65.43 & 69.42 & 67.00 \\ **NtUA (Ours)** & **70.49** & 94.08 & **49.17** & **62.28** & **25.62** & **86.99** & **75.84** & **91.47** & 66.29 & **70.96** & **71.48** & **69.52** \\ \hline \hline \multicolumn{10}{c}{16-shot unlabelled target samples} \\ \hline CoOp [56] & 67.11 & **94.48** & 43.97 & **65.81** & 24.24 & 84.56 & 73.69 & 90.32 & 67.19 & 65.30 & 70.39 & 67.91 \\ Tip-Adapter [52] & 70.43 & 93.75 & 44.62 & 56.26 & 23.79 & 86.84 & 73.24 & 89.34 & 66.32 & 68.51 & 69.71 & 67.53 \\ **NtUA (Ours)** & **71.30** & 94.40 & **51.65** & 62.98 & **26.97** & **87.04** & **79.58** & **91.88** & **68.55** & **72.11** & **73.49** & **70.90** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of NtUA with two state-of-the-art unsupervised few-shot adaptation methods over 11 widely adopted image classification benchmarks. Using CLIP-ViT-B/16 as a backbone, we perform evaluations over 4-shot, 8-shot, and 16-shot setups, respectively. edge distillation and weighting techniques, NtUA successfully achieves a performance gain of \(+5.06\%\) over Tip-Adapter in a 16-shot scenario. NtUA keeps its performance advantage when alternative visual encoders are used in the CLIP backbone, such as ViT-B/16. As Table 2 shows, there is a gradual increase in NtUA performance, when the CLIP backbone is changed from ResNet-50 to ViT-B/16. Compared with zero-shot CLIP with ViT-B/16 backbone, NtUA outperforms by \(+2.88\%\), \(+4.07\%\), and \(+5.46\%\) in 4, 8, and 16 shots setting, respectively. NtUA maintains its superiority over CoOp and Tip-Adapter in different few-shot situations, even if ViT-B/16 is employed as the CLIP backbone for CoOp and Tip-Adapter. This highlights the robustness of NtUA when applied to diverse CLIP visual encoder architectures. ### Ablation Studies We examine NtUA with ablation studies across 11 image classification datasets. All experiments are conducted with 16-shot setting with ResNet-50 as the CLIP backbone. We only fine-tune the keys of the weighted cache model. Different Designs.To investigate the separate impacts of weights and knowledge distillation in NtUA, we train several new networks by including solely weights or knowledge distillation into the cache model of the baseline key-value cache in [52] named NtUA (KC). We then compare the results with NtUA. 
The results in Table 3 show that the inclusion of weights solely to the cache model, NtUA (WKC), without knowledge distillation, leads to a marginal performance improvement with a gain of \(+0.21\%\) compared to NtUA (KC). On the other hand, implementing knowledge distillation, NtUA (KC+NP), in the key-value cache baseline without the weighted cache model results in a substantial improvement of \(+4.31\%\) over NtUA (KC) and \(+4.10\%\) over NtUA (WKC). Nonetheless, the amalgamation of weights and knowledge distillation in NtUA (WKC+NP) leads to a notable performance gain of \(+0.75\%\) compared to exclusively using knowledge distillation alone. These experiments indicate that the integration of weights and knowledge distillation in NtUA is a pivotal factor in attaining superior performance in unlabelled few-shot adaptation. The efficacy of our approach is well demonstrated by the performance gains obtained by using weights or knowledge distillation separately as well. Different Weighting Strategy.Confidence and certainty, in the context of model prediction, are essential indicators that provide insights into the model's prediction accuracy and reliability. We investigate diverse weighting measures in NtUA's cache model and evaluate the effectiveness of confidence and certainty as weighted measures. The results, as presented in Table 4, show that both confidence and certainty can serve as effective weights in our weighted cache models when compared to the zero-shot CLIP-ResNet-50 model. Although the performance gain is marginal, confidence performs slightly better than certainty, with a performance gain of \(+0.62\%\). However, when compared to the exclusive use of knowledge distillation in Table 3, combining knowledge distillation with confidence measure leads to a more significant performance improvement of \(0.75\%\), as opposed to only \(0.13\%\) for certainty. These results indicate that the choice of weighted measure for cache models can have a significant impact on the performance and reliability of NtUA. Both confidence and certainty have the potential \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Methods & ImgNet & Caltech & DTD & ESAT & FGVCA & Food & Flower & OxPets & SUN & StCars & UCF & **Average** \\ \hline CLIP-RN50 [36] & 60.34 & 86.00 & 42.14 & 37.46 & 17.13 & 77.34 & 66.02 & 85.83 & 58.57 & 55.68 & 61.41 & 58.90 \\ \hline NtUA w/ certainty & **63.61** & 90.99 & 47.40 & 56.73 & 20.16 & 79.06 & 76.00 & **89.32** & **63.70** & 65.10 & 67.33 & 65.40 \\ NUUA w/ confidence & 63.55 & **92.13** & **48.94** & **58.35** & **21.03** & **79.11** & **76.82** & 89.29 & 63.65 & **65.80** & **67.54** & **66.02** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of NtUA with 16 Shots using CLIP-RN50 Backbone, w/ confidence or w/ certainty (entropy). 
\begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Methods & ImgNet & Caltech & DTD & ESAT & FGVCA & Food & Flower & OxPets & SUN & StCars & UCF & **Average** \\ \hline CLIP-RN50 [36] & 60.34 & 86.00 & 42.14 & 37.46 & 17.13 & 77.34 & 66.02 & 85.83 & 58.57 & 55.68 & 61.41 & 58.90 \\ \hline NtUA (KC) & 61.54 & 89.74 & 44.15 & 39.31 & 17.13 & 78.31 & 69.55 & 86.62 & 62.21 & 57.74 & 64.29 & 60.96 \\ NtUA (WKC) & 61.46 & 89.98 & 42.73 & 42.41 & 17.58 & 78.68 & 69.87 & 86.15 & 61.56 & 58.02 & 64.45 & 61.17 \\ NUUA (KC + NP) & 63.46 & 91.16 & 47.22 & 56.09 & 20.37 & 79.01 & 75.88 & 88.93 & 63.58 & 65.18 & 67.12 & 65.27 \\ NUUA (WKC + NP) & **63.55** & **92.13** & **48.94** & **58.35** & **21.03** & **79.11** & **76.82** & **89.29** & **63.65** & **65.80** & **67.54** & **66.02** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation studies over 11 widely adopted image classification benchmarks. KC, WKC, and NP denote the baseline key-value cache in [52], our proposed weighted key-value cache, and our proposed pseudo-label rectification, respectively. The experiments were conducted with 16-shot unlabeled target samples. to be effective weight measures, but employing confidence in conjunction with knowledge distillation appears to provide an advantage over using certainty. Comparison with Pseudo-label Thresholding.For unsupervised adaption of the baseline key-value cache [52], one solution to ensure high-quality pseudo-labels is to use a fixed threshold to filter out noisy pseudo-labels with low confidence. We investigate the efficiency of using different thresholds to select the most reliable pseudo-labels for building the cache model and compare it with our NtUA. We conduct experiments on NtUA(KC), employing various thresholds (_i.e._, 0.5, 0.6, 0.7, 0.8, 0.9) for filtering out noisy pseudo-labels from the few-shot target data. For the construction of the cache model in this experiment, the weighting mechanism or the pseudo-label rectification method are not used. The results in Table 5 show that although utilizing diverse thresholds leads to a minor improvement compared to the zero-shot CLIP-ResNet-50 model, it does not outperform either NtUA or unsupervised adaption of the baseline key-value cache model in [52]. Different CLIP-distilled Knowledge.For pseudo-label rectification, we investigate the impact of utilizing diverse CLIP backbones to refine pseudo-labels through knowledge distillation on the performance of NtUA. In our experiments, we use various large-scale CLIP visual encoders, including ViT-L/14, ViT-B/32, ViT-B/16, RN50x4, RN50x16, RN50x64, and RN50, to improve the quality of pseudo-labels. The results in Figure 3 show that using a different visual encoder backbone rather than ResNet-50 leads to significant performance gains for NtUA. We observe that utilizing ViT-L/14 achieves the highest average accuracy of \(66.02\%\) across 11 datasets, followed by the RN50x64 visual encoder backbone with an average accuracy of \(64.54\%\). As the visual encoder becomes less powerful, the accuracy decreases, with RN50 producing a mean accuracy of \(61.17\%\), which is akin to using a weighted NtUA without pseudo-label rectification, NtUA (WKC). Hence, the selection of the CLIP model in pseudo-label rectification can significantly impact the performance of NtUA. However, even with less powerful visual encoders, NtUA consistently outperforms the state-of-the-art. 
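The design choice examined in the thresholding ablation above can be summarized in a short sketch: the baseline discards low-confidence pairs outright with a fixed threshold, whereas NtUA keeps every pair and uses the confidence as a soft weight. The threshold value below is illustrative.

```
# Hard thresholding (baseline) versus soft confidence weighting (NtUA).
import torch

def threshold_filter(feats, pseudo_labels, conf, thr=0.7):
    keep = conf > thr                       # drop unreliable key-value pairs entirely
    return feats[keep], pseudo_labels[keep]

def confidence_weights(conf):
    return conf                             # keep all pairs, down-weight unreliable ones
```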
### Conclusion This paper presents NtUA, a Noise-tolerant Unsupervised Adapter, as a solution to learn superior target models with few-shot unlabelled target samples. Specifically, NtUA utilizes a key-value cache that formulates visual features and predicted pseudo-labels of the few-shot unlabelled target samples as key-value pairs. An adaptive cache formation is designed in NtUA to combat pseudo-label noises by weighting the key-value pairs according to their prediction confidence, while a pseudo-label rectification is proposed in NtUA to correct both pair values and cache weights by leveraging knowledge distillation from large-scale vision language models. The experimental results have demon Figure 3: Comparison of the mean accuracy of various visual encoders for generating pseudo-labels. The experiments are conducted over 11 datasets under a 16-shot setup. ViT-B/32 and ViT-B/16 refer to ViT-Base [7] with the patch size of \(32\times 32\) and \(16\times 16\), respectively. ViT-L/14 refers to ViT-Base [7] with 14 transformer layers. RN50x4, RN50x16, and RN50x64 refer to ResNet-50 [12] with 4, 16, and 64 times more computation, respectively. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Methods & ImgNet & Caltech & DTD & ESAT & FGVCA & Food & Flower & OxPets & SUN & StCars & UCF & **Average** \\ \hline CLIP-RN50 [36] & 60.34 & 86.00 & 42.14 & 37.46 & 17.13 & 77.34 & 66.02 & 85.83 & 58.57 & 55.68 & 61.41 & 58.90 \\ \hline NtUA (KC, Thr=0.5) & 60.85 & 86.57 & 40.13 & 38.43 & 17.10 & 78.33 & 66.38 & 85.17 & 60.08 & 56.37 & 61.54 & 59.18 \\ NtUA (KC, Thr=0.6) & 60.51 & 86.04 & 42.43 & 37.31 & 16.95 & 78.28 & 65.94 & 85.28 & 59.00 & 56.68 & 61.19 & 59.06 \\ NtUA (KC, Thr=0.7) & 60.42 & 86.61 & 43.14 & 37.17 & 17.28 & 77.98 & 65.45 & 84.98 & 58.39 & 55.68 & 62.28 & 59.03 \\ NtUA (KC, Thr=0.8) & 60.26 & 86.25 & 42.61 & 38.20 & 17.19 & 77.47 & 66.06 & 86.56 & 58.40 & 55.84 & 62.28 & 59.19 \\ NtUA (KC, Thr=0.9) & 60.21 & 86.21 & 42.49 & 39.31 & 16.86 & 77.32 & 65.94 & 85.88 & 58.62 & 55.29 & 60.69 & 58.98 \\ \hline NtUA & **63.55** & **92.13** & **48.94** & **58.35** & **21.03** & **79.11** & **76.82** & **89.29** & **63.65** & **65.80** & **67.54** & **66.02** \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of the proposed NtUA with other baselines denoted as NtUA (KC, Thr) that is the key-value cache in [52] applied with various thresholding (_i.e._, Thr=0.5, 0.6, 0.7, 0.8, 0.9) for filtering out noisy pseudo-labels. strated that NtUA achieves consistently superior performance across 11 public classification datasets.
2309.11688
LLM Guided Inductive Inference for Solving Compositional Problems
While large language models (LLMs) have demonstrated impressive performance in question-answering tasks, their performance is limited when the questions require knowledge that is not included in the model's training data and can only be acquired through direct observation or interaction with the real world. Existing methods decompose reasoning tasks through the use of modules invoked sequentially, limiting their ability to answer deep reasoning tasks. We introduce a method, Recursion based extensible LLM (REBEL), which handles open-world, deep reasoning tasks by employing automated reasoning techniques like dynamic planning and forward-chaining strategies. REBEL allows LLMs to reason via recursive problem decomposition and utilization of external tools. The tools that REBEL uses are specified only by natural language description. We further demonstrate REBEL capabilities on a set of problems that require a deeply nested use of external tools in a compositional and conversational setting.
Abhigya Sodani, Lauren Moos, Matthew Mirman
2023-09-20T23:44:16Z
http://arxiv.org/abs/2309.11688v1
# LLM Guided Inductive Inference for Solving Compositional Problems ###### Abstract While large language models (LLMs) have demonstrated impressive performance in question-answering tasks, their performance is limited when the questions require knowledge that is not included in the model's training data and can only be acquired through direct observation or interaction with the real world. Existing methods decompose reasoning tasks through the use of modules invoked sequentially, limiting their ability to answer deep reasoning tasks. We introduce a method, Recursion based extensible LLM (REBEL), which handles open-world, deep reasoning tasks by employing automated reasoning techniques like dynamic planning and forward-chaining strategies. REBEL allows LLMs to reason via recursive problem decomposition and utilization of external tools. The tools that REBEL uses are specified only by natural language description. We further demonstrate REBEL's capabilities on a set of problems that require a deeply nested use of external tools in a compositional and conversational setting. ## 1 Introduction Recently, neural models for natural language generation have demonstrated impressive results (Koroteev, 2021; Devlin et al., 2018; Brown et al., 2020), opening significant new avenues for solving natural language reasoning tasks precisely (Huang and Chang, 2022; Qiao et al., 2022). While LLMs have shown a unique ability to scale in predictable and efficient ways, it is unclear whether they show this scaling behavior on complex reasoning tasks (Huang and Chang, 2022). Moreover, the limitations of large language models in accessing dynamic external knowledge sources significantly restrict their usefulness. Human reasoning involves a combination of observations and interactions with the world, highlighting the action-oriented nature of reasoning. In this paper, we address this by introducing the Recursion Based Extensible LLM (REBEL) framework. REBEL allows LLMs to reason through highly complex problems that require knowledge from disparate external sources. This is accomplished using an inference engine that, using the provided tools, gathers the necessary facts to infer the correct answer. Specifically, we make three contributions: 1. Designing a system capable of answering questions using any arbitrary external tool. 2. An evaluation showing that REBEL improves upon the state-of-the-art performance on multi-hop fact retrieval and compositional question answering problems. 3. Releasing our code and evaluation suite for open-source usage at rebel.anarchy.ai. ## 2 Related Works At a high level, methods for approaching reasoning tasks using LLMs can be broken down into prompt engineering techniques (Liu et al., 2023; Schlag et al., 2023) and fine-tuning techniques (Michel and Fleuret, 2021; Schick et al., 2023), or combinations of the above. Here we focus only on prompt techniques. Forward chaining (Liebowitz, 1988) is a reasoning strategy historically used by expert systems. It operates by repeatedly applying logical inference rules from an initial repository of known axioms to eventually produce the goal. This strategy has recently been employed to solve natural language problems with the assistance of LLMs in Chain of Thought (CoT) (Wei et al., 2022). ReAct (Yao et al., 2023) builds off of CoT by generating task-specific actions in response to reasoning. 
Chameleon (Lu et al., 2023) takes this further, using LLMs to synthesize tool pipelines including off-the-shelf computer vision models, web-search engines, and calls to generative models. In contrast to forward-chaining, the technique of backward-chaining (Russell, 2010) attempts to limit the search-space of possible inferences by determining what must be true for a goal to be shown (Picco et al., 2021). Press et al. (2022) demonstrates a method to evaluate problem-solving abilities on a category of non-trivial reasoning tasks with _compositional_ structure (Lake and Baroni, 2018; Keysers et al., 2019) that is poorly addressed by prior methods. They express compositional error as the number of questions in which two subquestions are answered correctly but the top-level question is not. Prior work has shown how this can be addressed via problem decomposition (Yang et al., 2022; Zhou et al., 2022; Drozdov et al., 2022; Khot et al., 2022). In this work, we show how problem decomposition can be augmented with tool usage. ## 3 Methods In this section, we introduce the REBEL algorithm as shown in Fig. 1, along with all necessary notation and background. At a high level, it works recursively to solve questions, breaking questions into subquestions until no further subquestions can be generated. Let us call the \(n\)th question/subquestion \(Question_{n}\) and its answer \(Answer_{n}\). For example, the user-provided question would be \(Question_{0}\). Let us call the subquestions that are generated to answer \(Question_{n}\) \(Subquestions_{n}\). In each recursive step, we break \(Question_{n}\) into \(Subquestions_{n}\). Let us call the answer to the \(i\)th member of \(Subquestions_{n}\) \(subansw_{n}[i]\). We recursively call each member of \(Subquestions_{n}\), and each \(subansw_{n}[i]\) is returned as a \(fact\), which is the tuple \((Subquestions_{n}[i],subansw_{n}[i])\). This fact is appended to a list of \(facts\) that is global to each \(Question_{n}\). This list of \(facts\) becomes \(Memory_{n}\), which is used to inform \(Answer_{n}\). In order to stop unbounded recursion, we delete members of \(Subquestions_{n}\) whose featurizations have cosine similarities above 0.98 to the featurization of \(Question_{n}\). The REBEL system contains a \(Tool\_List\), which is a numbered list of the tools we have available and their descriptions. If required, we determine a \(Tool_{n}\) for each \(Question_{n}\), which is the number of the tool required to answer \(Question_{n}\) given \(Memory_{n}\). Below we define the basic steps of this algorithm: question splitting, checking memory, picking tools, and using tools. Figure 1 depicts this pipeline. ### Question Splitting The split subroutine divides \(Question_{n}\) into \(Subquestions_{n}\), with the size of \(Subquestions_{n}\) being the number of subquestions that the LLM generates. The LLM is prompted with \(Tool\_List\) and 4 shots of question splitting examples. This step is represented as step 1 of Figure 1. See Appendix A for a single shot of question splitting context. We answer each subquestion, and its result is returned as a \(fact\) (see Algorithm 1). These facts are accumulated and passed to all subsequent subquestions. The list \(Subquestions_{n}\) is ordered such that the \(fact\) gained from answering a lower-indexed subquestion will aid in the answering of a higher-indexed subquestion. 
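A small sketch of the recursion guard just described is given below. The text specifies only the 0.98 cosine-similarity threshold, so the `featurize` embedding function is a hypothetical placeholder for whatever featurization is used.

```
# Drop generated subquestions that are near-duplicates of the parent question.
import numpy as np

def cos_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_subquestions(question, subquestions, featurize, threshold=0.98):
    q_vec = featurize(question)
    kept = []
    for sq in subquestions:
        if cos_similarity(q_vec, featurize(sq)) <= threshold:
            kept.append(sq)   # sufficiently different from the parent question
    return kept
```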
```
function promptf(Question_n, facts, allowsplit=True)
    if allowsplit then
        Subquestions_n = split(Question_n, facts)    {split the question into subquestions to answer}
        for subquestion from 1 to s in Subquestions_n do
            if cos_similarity(Question_n, subquestion) > 0.98 then
                Delete subquestion
                allowsplit = False
            end if
        end for
        for subquestion from 1 to s in Subquestions_n do
            _, newfact = promptf(subquestion, facts, allowsplit)
            facts += newfact
        end for
    end if
    if memorycheck(Question_n, facts) then
        Answer_n = callGPT(Question_n, facts)
        return Answer_n, (Question_n, Answer_n)
    else
        tool = picktool(Question_n, facts)
        toolinput = callGPT(tool, Question_n, facts)    {to determine tool input}
        Answer_n = useTool(toolinput, facts)
    end if
    return Answer_n, (Question_n, Answer_n)
end function
```
**Algorithm 1** REBEL ### Memory Check We check if a question can be answered without any tool use. This can mean either that the question can be answered using \(Memory_{n}\) or that the question can be answered by an LLM without the use of any tools (see step 2 of Figure 1). If this is the case, we directly provide our base LLM with \(Memory_{n}\) and \(Question_{n}\) to find \(Answer_{n}\). See Appendix B for the complete memory check prompt. ### Tool Picker Here we invoke the LLM to decide which member of \(Tool\_List\) (described by the integer \(Tool_{n}\)) would be best suited to determine the answer to a question. This is a 0-shot prompted step, shown as step 3 of Figure 1. ### Tool Input Generation We use GPT-3 to generate standardized input to our tools. We provide each tool to the LLM with two fields: the description of the tool and the dynamic parameters of the tool. We store three more fields about each tool that are hidden from the LLM: whether the tool is called via a GET or POST request, the endpoint URL for the tool, and the static parameters of the tool. The dynamic parameters are parameters that will be adjusted on each call (for example, a query field). The static parameters are parameters that stay the same on each API call (for example, an authentication key). REBEL uses three default tools: search, weather, and Google Maps. We configure the inputs to every tool as a JSON object. A tool input JSON maps a given tool's dynamic parameters to the values those parameters should take in order to obtain information to answer a given question: \(\{"tool_{n}param_{1}":"tool_{n}value_{1}",\)... \("tool_{n}param_{k}":"tool_{n}value_{k}"\}\). A standardized JSON format reduces the load on the LLM to format an entire API call by itself. REBEL allows arbitrary tools to be added to it; however, the k-shot examples that are provided to the LLM for generating input given \(Tool_{n}\) are designed around the three base tools. We have found that this prompting does extrapolate to 0-shot uses of unseen and arbitrary tools. See Appendix C for a complete single shot of tool input generation context. ### Tool Usage The UseTool function takes the dynamic parameters (from the LLM-generated tool input), the static parameters that we have stored for each tool, and the API endpoint, and makes a single request URL. This URL is requested, and the returned output is stored as a string. If the returned output is longer than 15,000 characters, it is truncated to that amount. Then, we use an LLM, provided with \(Memory_{n}\), \(Question_{n}\), and the API request output, to generate an answer to \(Question_{n}\). 
This answer is returned from the UseTool function as \(Answer_{n}\). Our approach has some consequences. On the positive side, users do not have to indicate how to parse the output of the tools they give us, which makes REBEL highly extensible and flexible in interpreting many tool return types and formats. On the negative side, because of the extremely unstructured nature of tool returns, errors can arise when UseTool is unable to answer a question based on a tool return. ## 4 Evaluation In this section, we first introduce the experimental setup, including the benchmarks used for evaluation, and then present the results. ### Experimental Setup We tested REBEL on three datasets: Compositional Celebrities (Press et al., 2022), FEVER (Thorne et al., 2018), and HotPotQA (Yang et al., 2018). On these datasets, correctness was determined by a human experimenter based on the output of each system. ReAct outputs simply the answer to the question, while REBEL often outputs the answer wrapped in the system's reasoning. For these experiments, two separate sets of rules had to be determined for fact verification and fact-retrieving questions. For fact-retrieving questions, an answer was considered correct if the desired answer was contained in the system output. For fact verification, if the model's determination of the truthfulness of a statement matched the desired truthfulness, then the generated answer was considered correct. On Compositional Celebrities, due to computational limitations, we tested on 5 of the 17 available categories, using 100 randomly chosen questions per category. These categories can be found in Table 1. Figure 1: Visual depiction of the pipeline of the REBEL Algorithm from Algorithm 1 to answer some \(Question_{n}\). Blue boxes contain descriptions of each step of the pipeline, and the red boxes contain the output variable for each step that will be used in subsequent steps. We tested on FEVER and HotPotQA with 100 of the same random questions from each dataset on both ReAct and REBEL. The accuracy results for this experiment can be found in Table 2. FEVER has three types of potential output labels (SUPPORTS, REFUTES, NOT ENOUGH INFO). In order to prevent accidentally correct answers from the REBEL system, only questions with the SUPPORTS and REFUTES labels were considered. For this experiment, REBEL was only allowed to use a search tool to query the internet, as that is the only tool that the ReAct system has access to. Our code, which can be found at rebel.anarchy.ai, was implemented in Python using the OpenAI Completion API to access GPT-3 (text-davinci-003). ### Results We found that REBEL outperformed ReAct on answering questions that require i) the gathering of many facts to determine an answer and ii) very specific search queries that return large amounts of unstructured data. With our experimental results, we were able to show that REBEL is a state-of-the-art system in terms of its ability to consistently answer questions from disparate knowledge bases. #### 4.2.1 Multi-Hop Fact Retrieval We used two datasets to test multi-hop fact retrieval: Compositional Celebrities and HotPotQA. Compositional Celebrities is a dataset consisting of 8.6k questions about celebrities in different categories. All questions require retrieving two facts and basic reasoning. 
These two facts have never co-occurred in any text that would conceivably be part of the LLM training and the only way that the conclusion could be reached is for both of them to be evaluated correctly and composed with one another. We found that the REBEL system largely outperformed the ReAct system at all of the 5 categories that were experimented on for Compositional Celebrities. **On average, over the 5 categories tested, REBEL beat ReAct by 27.6 percent.** The reason for this is likely the ability of the REBEL system to work with unstructured tool return data. This allows the REBEL system to make and interpret very specific tool queries, whereas other systems that require standardized output can become constricted by the by a smaller possible set of tool queries. The results of this experiment can be found in Table 1. HotpotQA is a challenging question-answering dataset containing 113,000 pairs of questions and answers derived from Wikipedia articles. The questions in HotpotQA necessitate synthesis of information from diverse sources and cannot be found pre-existing training knowledge bases. **ReAct outperformed REBEL on HotPotQA by 13 percent** (Table 2). HotPotQA has questions that are significantly more than 2-hops, and on these questions REBEL tends to generate a massive recursive tree of subquestions. This introduces the issue of generating subquestions that lose context of the original question. Many times this can lead to the LLM not being able to reason through the large context window generated when processing these layers of recursive subquestions, resulting in the LLM finding no solution. #### 4.2.2 Fact Verification To test fact verification abilities, we employed the FEVER dataset. This benchmark is designed to evaluate the ability of models to extract factual information from textual sources and verify claims. The fact verification task involves determining the accuracy of claims made in a given piece of text. **On FEVER, the REBEL system (78 percent accuracy) performed slightly better (Table 2) than ReAct system (72 percent).** The reason for this out-performance by the REBEL system is because of the significant amount of "facts" that it gathers during its recursive solving of a fact verification problem. On several occasions, the ReAct system cannot find the information it is looking for to answer a questions, and therefore reports that it cannot make a determination if a certain fact is true or not. ### Ablation Study In order to determine the efficiency of REBEL, we conducted several ablation tests. In these tests the aim was to \begin{table} \begin{tabular}{l c c} \hline \hline Category & ReAct & REBEL \\ \hline birthplace\_rounded\_lat & 28 & **59** \\ birthplace\_currency & 85 & **94** \\ birthplace\_currency\_symbol & 35 & **47** \\ birthyear\_nobell\_literature & 33 & **82** \\ birthdate\_suppresident & 53 & **90** \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy (percent of questions answered correctly) of different algorithms on the categories of Compositional Celebrities. \begin{table} \begin{tabular}{l c c} \hline \hline Dataset & ReAct & REBEL \\ \hline FEVER & 72 & **78** \\ HotPotQA & **63** & 50 \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy (percent of questions answered correctly) of different algorithms on HotPotQA and FEVER. isolate the affect of the REBEL system upon compositional problem solving. We used plain GPT3 (text-davinci-003) as our baseline. The results of these tests are in (Table 3 and Table 4). 
These tables show that GPT3 outperforms REBEL (with or without an external search tool) when a question can be easily answered with data that GPT3's training set. This is seen in Table 3 in the rows pertaining to \(Birthyear\_NobeLiterature\) and \(Birthplace\_Currency\). The REBEL algorithm without the external search tool outperformed the baseline when information processing is necessary to determine a final answer. Examples of this include questions that required the returning of a currency symbol or a rounded latitude. GPT3 succeeded in fetching the currency name or latitude correctly, but failed to round the latitude or return the symbol associated with the currency name. Adding external search augmented the REBEL algorithm's ability to reason with current facts, and therefore furthered the REBEL algorithm performance on most categories of Compositional Celebrities. Occasionally, the inclusion of an external search tool decreased performance due to the unstructured nature of return data the external tool provided. An example of this is on the \(Birthyear\_NobeLiterature\) category of Compositional Celebrities. On most categories of Compositional Celebrities and on HotPotQA, REBEL without the use of an external search tool improved performance over baseline GPT3. This indicates that our recursive approach adds reasoning capability to GPT3 independently of external tool use. ## 5 Cost Analysis The recursive search nature of the REBEL algorithm means that it employs many calls to an LLM before determining an answer to a question. The downsides of this approach manifest themselves in latency (Table 5) and monetary cost of LLM queries. Any external tools that are provided to the REBEL system will also be called very frequently, potentially leading to REBEL being a monetarily expensive system on that front as well. If a user desires to use REBEL without any tools, a cost in terms of hallucination has a potential of arising. Due to the lack of any external knowledge base, a hallucination on one subquestion has the potential to pollute the entire tree of reasoning. ## 6 Conclusion We have introduced REBEL, a recursive reasoning algorithm designed to use any arbitrary API as an external tool. REBEL outperforms the state-of-the-art on questions that require the collection of many facts and those that benefit from the ability to make highly specific queries to outside sources of data, which may be unstructured. REBEL also has a demonstrable improvement over the GPT3 LLM when answering questions that require multi-step information processing. However, the REBEL algorithm tends to over-complicate simple problems, leading to a reduction in accuracy when compared to baseline GPT3 on questions that require minimal compositionality. Future work would ideally address fine-tuning LLMs for \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & GPT3 & REBEL w/o tools & REBEL \\ \hline FEVER & 77 & 73 & **78** \\ HotPotQA & 43 & 46 & **50** \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy (percent of questions answered correctly) of different algorithms on HotPotQA and FEVER. 
\begin{table} \begin{tabular}{l c c c} \hline \hline Category & GPT3 & REBEL w/o tools & REBEL \\ \hline birthplace\_rounded\_lat & 16 & 39 & **59** \\ birthplace\_currency & **95** & 94 & 94 \\ birthplace\_currency\_symbol & 28 & 45 & **47** \\ birthyear\_NobeLiterature & **95** & 90 & 82 \\ birthdate\_lspresident & 44 & **91** & 90 \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy (percent of questions answered correctly) of different algorithms on the categories of Compositional Celebrities. \begin{table} \begin{tabular}{l c} \hline \hline Algorithm & Time (s) \\ \hline GPT3 & 0.94 \\ REBEL w/o tools & 5.358 \\ REBEL w/ tools & 9.76 \\ \hline \hline \end{tabular} \end{table} Table 5: Average time taken to answer a question from Compositional Celebrities each step in the REBEL pipeline and experimenting with limiting recursive depth of subquestion generation.
2301.13865
From Semi-supervised to Omni-supervised Room Layout Estimation Using Point Clouds
Room layout estimation is a long-existing robotic vision task that benefits both environment sensing and motion planning. However, layout estimation using point clouds (PCs) still suffers from data scarcity due to annotation difficulty. As such, we address the semi-supervised setting of this task based upon the idea of model exponential moving averaging. But adapting this scheme to the state-of-the-art (SOTA) solution for PC-based layout estimation is not straightforward. To this end, we define a quad set matching strategy and several consistency losses based upon metrics tailored for layout quads. Besides, we propose a new online pseudo-label harvesting algorithm that decomposes the distribution of a hybrid distance measure between quads and PC into two components. This technique does not need manual threshold selection and intuitively encourages quads to align with reliable layout points. Surprisingly, this framework also works for the fully-supervised setting, achieving a new SOTA on the ScanNet benchmark. Last but not least, we also push the semi-supervised setting to the realistic omni-supervised setting, demonstrating significantly promoted performance on a newly annotated ARKitScenes testing set. Our codes, data and models are released in this repository.
Huan-ang Gao, Beiwen Tian, Pengfei Li, Xiaoxue Chen, Hao Zhao, Guyue Zhou, Yurong Chen, Hongbin Zha
2023-01-31T18:58:41Z
http://arxiv.org/abs/2301.13865v1
# From Semi-supervised to Omni-supervised Room Layout Estimation Using Point Clouds ###### Abstract Room layout estimation is a long-existing robotic vision task that benefits both environment sensing and motion planning. However, layout estimation using point clouds (PCs) still suffers from data scarcity due to annotation difficulty. As such, we address the semi-supervised setting of this task based upon the idea of model exponential moving averaging. But adapting this scheme to the state-of-the-art (SOTA) solution for PC-based layout estimation is not straightforward. To this end, we define a quad set matching strategy and several consistency losses based upon metrics tailored for layout quads. Besides, we propose a new online pseudo-label harvesting algorithm that decomposes the distribution of a hybrid distance measure between quads and PC into two components. This technique does not need manual threshold selection and intuitively encourages quads to align with reliable layout points. Surprisingly, this framework also works for the fully-supervised setting, achieving a new SOTA on the ScanNet benchmark. Last but not least, we also push the semi-supervised setting to the realistic omni-supervised setting, demonstrating significantly promoted performance on a newly annotated ARKitScenes testing set. Our codes, data and models are made publicly available. ## I Introduction Over the past decade, room layout estimation has drawn a lot of attention from the robotics community [1, 2, 3, 4, 5, 6] since it marks a crucial step towards understanding indoor scenes and might help robot agents make better decisions in challenging environments [7, 8, 9, 10]. However, the majority of earlier efforts exploit perspective or panoramic RGB images as input [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28], whereas the promising paradigm of layout estimation using point clouds (PCs) [29] still suffers from the lack of annotated data. This is due to the difficulty of annotating the boundaries of 3D indoor scenes manually, particularly for rooms containing non-cuboid shapes and many corners. We envision an omni-supervised setting [30] where intelligent robots all over the world can exploit enormous unannotated raw point clouds to continuously improve the collective intelligence (i.e., layout estimation accuracy in this study). To this end, we start from the semi-supervised setting, in which we assume a large portion of ScanNet [31] annotations is not available, and finally push it to the omni-supervised setting using the recent ARKitScenes [32] dataset. Semi-supervised room layout estimation has already been studied in a recent work, SSLayout360 [33]. However, it still relies upon hand-crafted post-processing and only exploits the model exponential moving averaging (EMA) technique to learn representations from many unannotated panoramic images. Note that this paradigm does not apply to the state-of-the-art (SOTA) PC-based layout estimator [29], which directly predicts quads end-to-end. To this end, we propose the first semi-supervised room layout estimation method using point cloud inputs. Our method builds upon the SOTA counterpart PQ-Transformer [29], which takes the 3D point cloud of a scene as input (see Fig. 1(a)) and predicts a set of quadrilateral equations (referred to as _quads_) representing layout elements (wall, floor and ceiling). As observed in Fig. 1(b), it performs poorly on unseen scenes if only \(20\%\) of the annotations are used for training.
By contrast, our model is able to predict a more accurate layout by making use of the unlabeled data (see Fig. 1(c)). Specifically, the success of our method is credited to two techniques. The **first** is a consistency based training framework inspired by the Mean Teacher [34] method. We design a quad matching strategy and three consistency regularization losses that are tailored for the layout estimation problem. We also identify a simple but effective add-on that capitalizes on the confidence of the teacher model. The **second** is a pseudo label generation module that decomposes the distribution of a new hybrid metric into two components, based upon a gamma mixture. It intuitively aligns quad predictions to reliable layout point clouds. Through ablation experiments, both techniques are proven effective, and combining them brings larger improvements. Experimental results highlight four notable messages: (1) our solution with different percentages (e.g., \(5\%\) to \(40\%\)) of annotations available consistently and greatly outperforms supervised baselines on the ScanNet dataset. (2) with only 40\(\%\) of labeled data we are able to surpass the prior fully-supervised SOTA. (3) in the fully-supervised setting, our method can also improve strong baselines by +4.11\(\%\). (4) we further extend the method into a more realistic omni-supervised [30] setting, where we leverage all ScanNet training data and unlabeled ARKitScenes [32] training data. On a newly crowd-sourced ARKitScenes testing set, a significant performance gain is achieved, with the F1-score going from 10.66\(\%\) to 25.85\(\%\). Fig. 1: _(a)_ The input is a 3D point cloud whose colors are only for visualization. _(b)_ We train the former SOTA method PQ-Transformer with only 20\(\%\) labeled data of the ScanNet training set and use it as the baseline. _(c)_ We adopt our method on the whole ScanNet training set with only \(20\%\) annotations, resulting in a more accurate layout prediction. Our contributions are as follows: * We propose the first semi-supervised framework for room layout estimation using point clouds, with tailored designs including a quad set matching strategy and three confidence-guided consistency losses. * We propose a threshold-free pseudo-label harvesting technique based upon a newly-proposed hybrid distance metric and gamma mixture decomposition. * We achieve significant results in semi-supervised, fully-supervised and omni-supervised settings. We contribute a new crowd-sourced testing set and release our codes. ## II Related Works Semi-supervised and weakly-supervised learning have recently become hot topics in the robotics community, with many methods proposed for various tasks including point cloud semantic parsing [35, 36, 37, 38] and representation learning [39], 3D object detection [40, 41], articulation understanding [42], single-view reconstruction [43] and intrinsic decomposition [44]. This line of research envisions an exciting future scheme in which robots all over the world exploit unlimited unlabeled data to continuously improve the collective intelligence [45, 46, 47, 48]. Our study is the first semi-supervised framework for room layout estimation from point clouds, which contributes to this robotic vision trend. From the perspective of methodology, we briefly review two kinds of semi-supervised learning (SSL) paradigms. **The consistency based SSL methods** rely on the assumption that near samples from the low-dimensional input data manifold result in near outputs in the feature space [49, 50].
Thus, they enforce the model to stay in agreement with itself despite perturbations. Under this scope, multiple perturbation strategies are explored. The \(\Pi\) model [51, 52] penalizes the difference of hidden features of the same input with different data transformations and dropout. Temporal Ensembling training [52] regularizes consistency on current and former predictions. The Mean Teacher method [34] uses exponential moving average of student network parameters. **The pseudo-label based SSL methods**, on the other hand, are more general as they don't require domain-specific data transformations. By equipping them with a few necessary designs, they can be as proficient as consistency based ones. For example, in 3D object detection task, [53] proposes two post-processing modules to improve the recall rate and the precision rate of the pseudo labels. In image classification task, [54] sets a constant confidence threshold \(\tau\) for determining whether to discard a pseudo-label, and [55] upgrades that constant to a set of per-class learnable variables. In 2D object detection task, Noisy Pseudo-box Learning strategy is proposed by [56], which only considers \(N\) proposals of top-quality as pseudo labels and the rest ones as false positives. ## III Method We aim to develop a learning framework that allows robot agents to leverage enormous unlabeled data to infer room layouts \(\mathbf{Y}\) from indoor scene point clouds \(\mathbf{X}\). Following [29], we denote a layout wall (represented by a quad) \(\mathbf{y}=\{\mathbf{c},\mathbf{n},\mathbf{s},p\}\in\mathbf{Y}\) by its center coordinate \(\mathbf{c}\), unit normal vector \(\mathbf{n}\), size \(\mathbf{s}=(w,h)\), and predicted quadness score \(p\). The quadness scores of ground truths are fixed to \(1.0\). To start with, we formally describe three training settings. Suppose we have a 3D point cloud (PC) dataset \(\mathcal{D}_{L}\) with layout annotations in conjunction with a much larger unlabeled PC dataset \(\mathcal{D}_{U}\). In the **fully-supervised setting**, \(\mathcal{D}_{L}\) is the whole training set of ScanNet with quad annotations whereas \(\mathcal{D}_{U}\) is a null set. In the **semi-supervised setting**, \(\mathcal{D}_{L}\) is part of the ScanNet training set along with quad annotations whereas \(\mathcal{D}_{U}\) is the complementary set whose annotations are assumed unknown. In the **omni-supervised setting**[30] which is a real-world generalization of the semi-supervised setting, \(\mathcal{D}_{L}\) is the whole training set of ScanNet with annotations whereas \(\mathcal{D}_{U}\) is the ARKitScenes training set without annotations. We introduce our method in the three settings using unified notations \(\mathcal{D}_{L}\) and \(\mathcal{D}_{U}\). As depicted in Fig. 2, we adapt the Mean Teacher [34] training framework (see Sec. III-A) to end-to-end room layout estimation with a tailored quad matching strategy and three consistency losses. We also integrate a novel pseudo-label refinement module (Sec. III-B) for quads, which is based upon gamma mixture decomposition. In Sec. III-C, we describe the loss terms to optimize. ### _Quad Mean Teacher (QMT)_ Mean Teacher [34] is a successful framework for semi-supervised learning with a student model and a teacher model of the same architecture. The general idea is to feed two models with the same input samples transformed differently and enforce the predictions of the two models to be consistent. 
The student model is updated by gradient descent while the teacher model is updated by exponential moving average (EMA) of the weights of the student model. Inspired by the idea of Mean Teacher, we first sample \(\mathbf{X}^{U}\) from \(\mathcal{D}_{U}\) and \((\mathbf{X}^{L},\mathbf{Y}^{L})\) from \(\mathcal{D}_{L}\) to form a batch \(\mathbf{X}=\{\mathbf{X}^{L},\mathbf{X}^{U}\}\). \(\mathbf{X}\) is transformed with stochastic transformation \(T\) before feeding into the student model to yield \(\tilde{\mathbf{Y}}_{S}=\{\tilde{\mathbf{Y}}_{S}^{L},\tilde{\mathbf{Y}}_{S}^{U}\}\). \(\mathbf{Y}^{L}\) is transformed into \(\tilde{\mathbf{Y}}^{L}\) with the same transformation. Meanwhile, \(\mathbf{X}\) is also fed into the teacher model and then applied the same transformation \(T\) to yield \(\tilde{\mathbf{Y}}_{T}=\{\tilde{\mathbf{Y}}_{L}^{L},\tilde{\mathbf{Y}}_{T}^{U}\}\). Following the same loss design in [29], we impose a supervised loss \(\mathcal{L}_{\text{sup}}\) between \(\tilde{\mathbf{Y}}_{S}^{L}\) and \(\tilde{\mathbf{Y}}^{L}\). The success of Mean Teacher based methods relies on domain-specific data transformation and carefully designed consistency losses between two sets of predictions, without which the method could suffer from degeneration. Based upon this observation, we design the transformation domain and consistency losses for room layout estimation as follows. **Data transformation** We adopt four kinds of transformations: Farthest Point Sampling (FPS) [57], flipping along horizontal axes, rotating along vertical axes and coordinates scaling. FPS [57] downsamples the point cloud by repeatedly choosing the point farthest from the chosen ones, discarding only redundant points. Also, flipping, rotating and scaling in constrained ways mimic the natural viewpoint changes of humans. Among them, layout annotations are invariant to FPS [57] as subsampling does not change the layout geometries and equivariant to the other three transformations with which the geometries should be transformed accordingly. Hence, when applying the same transformation, for invariant transformation (i.e., FPS [57]) we use different seeds and for the other three we apply the same transformation before the student model and after the teacher model. **Quad Set Matching** To encourage consistency between the predicted quad sets of two models, the difference between two quads should be defined first. Given two quad predictions, \(\tilde{\mathbf{y}}_{1}=\{\tilde{\mathbf{c}}_{1},\tilde{\mathbf{n}}_{1},\tilde{ \mathbf{s}}_{1},p_{1}\}\) and \(\tilde{\mathbf{y}}_{2}=\{\tilde{\mathbf{c}}_{2},\tilde{\mathbf{n}}_{2}, \tilde{\mathbf{s}}_{2},p_{2}\}\), the differences of three geometrical characteristics (quad center location \(\tilde{\mathbf{c}}\), quad normal \(\tilde{\mathbf{n}}\), quad size \(\tilde{\mathbf{s}}\)) should all be considered. Thus, as illustrated in Fig. 3(b), we define the distance between two quads as: (\(\|\cdot\|_{k}\) denotes \(k\)-norm) \[d(\tilde{\mathbf{y}}_{1},\tilde{\mathbf{y}}_{2})=\|\tilde{\mathbf{c}}_{1}- \tilde{\mathbf{c}}_{2}\|_{2}+|1-\tilde{\mathbf{n}}_{1}\cdot\tilde{\mathbf{n}} _{2}|+\|\tilde{\mathbf{s}}_{1}-\tilde{\mathbf{s}}_{2}\|_{2}^{2} \tag{1}\] Based on the distance metric between quads, we calculate the difference of two predicted quad sets by first finding the correspondences between the two quad sets and then summing up the distances between corresponding quads. 
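To make the quad-to-quad distance of Eq. (1) and this center-based matching step concrete, a minimal sketch is given below. The per-quad dictionary layout is our own assumption, and the confidence weighting that the consistency loss additionally applies (described next) is omitted here.

```python
import numpy as np

# A predicted quad is assumed to be a dict with center 'c' (3,), unit normal 'n' (3,),
# size 's' = (w, h), and quadness score 'p'.
def quad_distance(y1, y2):
    # Eq. (1): center L2 distance + normal misalignment + squared L2 size difference.
    return (np.linalg.norm(y1["c"] - y2["c"])
            + abs(1.0 - float(np.dot(y1["n"], y2["n"])))
            + float(np.sum((y1["s"] - y2["s"]) ** 2)))

def match_teacher_to_student(teacher_quads, student_quads):
    # For each teacher quad, pick the student quad with the nearest center; the set
    # difference is then the sum of distances between corresponding quads.
    matches = []
    for yt in teacher_quads:
        center_dists = [np.linalg.norm(ys["c"] - yt["c"]) for ys in student_quads]
        matches.append(student_quads[int(np.argmin(center_dists))])
    return matches
```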
To establish the correspondences, we find the nearest student-predicted quad \(\tilde{\mathbf{y}}_{S}=\{\tilde{\mathbf{c}}_{S},\tilde{\mathbf{n}}_{S}, \tilde{\mathbf{s}}_{S},p_{S}\}\) for each teacher-predicted quad \(\tilde{\mathbf{y}}_{T}=\{\tilde{\mathbf{c}}_{T},\tilde{\mathbf{n}}_{T}, \tilde{\mathbf{s}}_{T},p_{T}\}\): \[\mathcal{P}_{\tilde{\mathbf{Y}}_{S}}(\tilde{\mathbf{y}}_{T})= \operatorname*{argmin}_{\tilde{\mathbf{y}}_{S}\in\tilde{\mathbf{Y}}_{S}}| \tilde{\mathbf{c}}_{S}-\tilde{\mathbf{c}}_{T}\|_{2} \tag{2}\] We use \(\mathcal{P}(\cdot)\) to represent this injective mapping from the teacher model prediction to the student model prediction. **Consistency Loss Design** Although the quad geometries (i.e., \(\tilde{\mathbf{c}},\tilde{\mathbf{n}},\tilde{\mathbf{s}}\)) predicted by teacher are not adequately precise, the predicted quadness score \(p\) could measure the correctness of the predictions. Considering that the teacher-predicted quads are generally more reliable than the student-predicted quads, we use teacher-predicted quadness scores \(p_{T}\) as the confidence and define the consistency loss \(\mathcal{L}_{\text{QMT}}\) as: \[\mathcal{L}_{\text{QMT}}=\frac{1}{|\tilde{\mathbf{Y}}_{T}|}\sum_{\tilde{ \mathbf{y}}_{T}\in\mathbf{Y}_{T}}d(\mathcal{P}_{\tilde{\mathbf{Y}}_{S}}( \tilde{\mathbf{y}}_{T}),\tilde{\mathbf{y}}_{T})\cdot p_{T} \tag{3}\] **Remark** A similar idea to evaluate the closeness of two sets is the Chamfer Distance, which establishes two injective mappings from each of the two sets to its counterpart. On the contrary, our method only establishes a one-way mapping from the teacher model predictions to the student model predictions since the latter is less reliable than the former. As depicted in Fig. 3(a), finding the nearest teacher prediction around \(S\) and penalizing the quad distance in between would wrongly push \(S\) to the unreliable prediction \(T_{1}\). By contrast, as \(S\) is the nearest student prediction around unreliable \(T_{1}\) and reliable \(T_{2}\), optimizing the weighted quad distance sum would push \(S\) to the reliable prediction \(T_{2}\). ### _Gamma Mixture Filtering (GMF)_ In this stage, we introduce the Gamma Mixture Filtering module which makes further use of the unlabeled data and Fig. 2: **Method Overview**. In each training iteration, we sample \((\mathbf{X}^{L},\mathbf{Y}^{L})\) from labeled dataset and \(\mathbf{X}^{U}\) from unlabeled dataset to form a batch. The input batch is first stochastically transformed then fed into the student model to produce predictions \(\tilde{\mathbf{Y}}_{S}^{L}\) and \(\tilde{\mathbf{Y}}_{S}^{U}\). Meanwhile, the input batch is also fed into the teacher model then transformed to produce predictions \(\tilde{\mathbf{Y}}_{T}^{L}\) and \(\tilde{\mathbf{Y}}_{T}^{U}\). In the two adopted transformations, FPS sampling uses different seeds whereas rotation, flipping and scaling are identical. We impose three losses in total: (1) a supervised loss between the transformed label and predictions of student model. (2) a consistency loss that minimizes the difference between student predictions and teacher predictions. (3) a pseudo-label loss that encourages quads to align with reliable layout points. The student parameters are updated by gradient descent according to the sum of three losses, whereas the teacher parameters are updated by exponential moving average (EMA) of student parameters. 
re-estimates a more accurate quad prediction \(\tilde{\mathbf{y}}_{TR}\) from the noisy prediction \(\tilde{\mathbf{y}}_{T}\). A naive approach to do this is to select points whose perpendicular distance to the quad is below a manually chosen distance threshold \(\epsilon_{D}\) and use these points to estimate a more accurate quad. However, it is inevitable to manually tune the hyper-parameter \(\epsilon_{D}\), which is time-consuming and ineffective as a fixed threshold is usually not applicable to all scenes. Besides, using perpendicular distance solely as the metric may erroneously select points in the room corners which belong to other quads. To address these issues, we introduce 1) hybrid distance between point and quad as an improved metric and 2) the gamma mixture decomposition filtering strategy to automatically select the threshold for filtering. **Hybrid Point-Quad Metric** We propose a hybrid metric to measure the distance between a point and a quad. Instead of using the perpendicular distance alone, we also leverage normals and quad sizes. Consider a point \(\mathbf{p}\) with coordinate \(\mathbf{c}_{p}\) and normal \(\mathbf{n}_{p}\) estimated with adjacent points in the PC, and a quad \(\tilde{\mathbf{y}}_{T}\) whose plane equation is \(\tilde{\mathbf{n}}_{T}\cdot(\mathbf{c}-\tilde{\mathbf{c}}_{T})=0\), \(\mathbf{c}\in\mathbb{R}^{3}\). Then the perpendicular distance can be written as: \[\mathcal{M}_{\mathbf{p}}(\mathbf{p},\tilde{\mathbf{y}}_{T})=|(\mathbf{c}_{p}- \tilde{\mathbf{c}}_{T})\cdot\tilde{\mathbf{n}}_{T}| \tag{4}\] Note that \(\tilde{\mathbf{n}}_{T}\) is of unit length. In some corner cases where points are close but differ greatly in normals (e.g. in wall corners), using this measure solely would erroneously include points on other quads. Therefore, we also define a cosine similarity metric for the normals: \[\mathcal{M}_{\mathbf{o}}(\mathbf{p},\tilde{\mathbf{y}}_{T})=|1-\mathbf{n}_{p }\cdot\tilde{\mathbf{n}}_{T}| \tag{5}\] Furthermore, as the size of quads is not considered in the proposed two measures, we consider the extent to which the projections of points lay outside the quad. Since the vertical edges of predicted quads are parallel to \(\hat{\mathbf{z}}=(0,0,1)^{T}\), the horizontal edges should be parallel to \(\hat{\mathbf{x}}=\frac{\mathbf{n}_{T}\times\hat{\mathbf{z}}}{|\tilde{ \mathbf{n}}_{T}\times\hat{\mathbf{z}}|_{2}}\) (\(\times\) denotes cross product). The horizontal and vertical distances between the quad center and the projection of \(\mathbf{p}\) on the quad are then given by \(w_{p}=|(\mathbf{c}_{p}-\tilde{\mathbf{c}}_{T})\cdot\hat{\mathbf{x}}|\) and \(h_{p}=|(\mathbf{c}_{p}-\tilde{\mathbf{c}}_{T})\cdot\hat{\mathbf{z}}|\), respectively. Thus, we define the out-of-quad metric as: \[\mathcal{M}_{\mathbf{s}}(\mathbf{p},\tilde{\mathbf{y}}_{T})=[\text{ReLU}((w_{ p},h_{p})^{T}-\tilde{\mathbf{s}}_{T})]_{1} \tag{6}\] Finally, the hybrid point-quad distance is defined as: \[\mathcal{M}=\mathcal{M}_{p}+\mathcal{M}_{o}+\mathcal{M}_{s} \tag{7}\] In Fig. 4(b) we illustrate the three proposed metrics between the highlighted quad and points. **Mixture Decomposition Filtering** In this stage we use the hybrid metric \(x=\mathcal{M}(\cdot,\cdot)\) to select points from the PC for each quad. We first collect the metrics between the quad and all points, and then use the metrics to fit a probabilistic mixture model. 
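As a concrete reading of Eqs. (4)-(7), the following sketch computes the hybrid distance from every point (with an estimated normal) to one predicted wall quad; the vectorized numpy formulation and variable names are our own assumptions, not the authors' code.

```python
import numpy as np

def hybrid_point_quad_metric(points, normals, quad_c, quad_n, quad_s):
    """points: (N, 3); normals: (N, 3) unit; quad_c: (3,); quad_n: (3,) unit, not parallel to z; quad_s: (w, h)."""
    d = points - quad_c                                   # offsets to the quad center, (N, 3)
    m_p = np.abs(d @ quad_n)                              # Eq. (4): perpendicular distance
    m_o = np.abs(1.0 - normals @ quad_n)                  # Eq. (5): normal misalignment
    z_hat = np.array([0.0, 0.0, 1.0])
    x_hat = np.cross(quad_n, z_hat)
    x_hat /= np.linalg.norm(x_hat)                        # horizontal in-plane axis
    w_p = np.abs(d @ x_hat)
    h_p = np.abs(d @ z_hat)
    overflow = np.maximum(np.stack([w_p, h_p], axis=1) - np.asarray(quad_s), 0.0)
    m_s = overflow.sum(axis=1)                            # Eq. (6): ReLU'd out-of-quad extent, 1-norm
    return m_p + m_o + m_s                                # Eq. (7): hybrid metric per point
```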
The probability density function (PDF) of the probability model is defined as \[P(x|\theta_{0},\theta_{1})=w_{0}P(x|\theta_{0})+w_{1}P(x|\theta_{1}) \tag{8}\] where \(P(x|\theta_{0})\) and \(P(x|\theta_{1})\) are PDFs of the individual components and \(w_{0},w_{1}\) denote their weights, with \(w_{0}+w_{1}=1\). The two individual components correspond to points that belong to the quad and those that don't, respectively. We empirically choose the gamma distribution for the two components: \[P(x|\theta_{i})=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{a-1}(1-x)^{b-1}, \theta_{i}=\{a,b\} \tag{9}\] To fit this mixture distribution, we follow [58] to decide the parameters \(\theta_{0},\theta_{1},w_{0}\) and \(w_{1}\). By using the expectation maximization (EM) algorithm, we take the parameters when \(\sum_{p\in P}\log P(x_{p}|\theta_{0},\theta_{1})\) is maximized. The fitting result is illustrated in Fig. 4(c), where the blue curve represents \(P(x|\theta_{0})\) and the red curve represents \(P(x|\theta_{1})\). Finally, with this mixture model, we examine the probabilities that an unlabeled point belongs or does not belong to the quad. When the former is larger than the latter, we keep this point during filtering. In other words, for each quad \(\mathbf{y}_{T}\) we keep points \(\mathbf{p}_{i}\) that satisfy \(w_{0}P(x_{i}|\theta_{0})\geq w_{1}P(x_{i}|\theta_{1})\) where \(x_{i}=\mathcal{M}(\mathbf{p}_{i},\mathbf{y}_{T})\), as shown in Fig. 4(d). It is unnecessary to manually tune a threshold, as the intersection point of the two component PDFs works as a per-quad threshold obtained from the statistics of the unlabeled points around that quad. **Quad estimation** With the set of selected points \(P^{\prime}\), we reconstruct a more accurate quad \(\tilde{\mathbf{y}}_{TR}\) for each predicted quad \(\tilde{\mathbf{y}}_{T}\). We refine the quad center and quad normal to \(\mathbf{c}^{\prime}=\frac{1}{|P^{\prime}|}\sum_{p\in P^{\prime}}\mathbf{c}_{p}\) and \(\mathbf{n}^{\prime}=\sum_{p\in P^{\prime}}\mathbf{n}_{p}\,/\,\|\sum_{p\in P^{\prime}}\mathbf{n}_{p}\|_{2}\). To estimate the quad size, we randomly take \(K_{s}\) samples \(\{\tau_{i}\}_{i=1}^{K_{s}}\) from \([0,1]\). Under the assumption that the point collection \(P^{\prime}\) is uniformly sampled from the refined quad, we refine the quad size to \(\mathbf{s}^{\prime}=\frac{1}{K_{s}}\sum_{i=1}^{K_{s}}\frac{1}{\tau_{i}}\cdot\mathrm{quanti}(\tau_{i})\). Here \(\mathrm{quanti}(\tau_{i})\) is defined as the \(\tau_{i}\)-th quantiles of \(\{\mathbf{s}_{p}|p\in P^{\prime}\}\) computed on the \(\hat{\mathbf{x}}\) axis and \(\hat{\mathbf{z}}\) axis, respectively. Fig. 4: **Illustration on Gamma Mixture Filtering**. We calculate the proposed hybrid metrics between points and quads in (b), where warmer colors indicate shorter distances. Then we decompose the distribution of metrics into two components, corresponding to points that belong to the quad and those that don't, respectively. We filter out redundant points using the mixture distribution model (depicted in (c)), and re-estimate quads with higher accuracy for the student model to learn. Fig. 3: **Illustration on Teacher Student Alignment**. _(a)_ For every teacher-predicted quad, we find the nearest student-predicted quad. Although teacher predictions are noisy, the quadness scores demonstrate how accurate the predictions are. _(b)_ These three figures illustrate the three components of the defined distance between two quads.
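The threshold-free filtering rule described above can be sketched as follows. The fit is a simplified EM loop with standard two-parameter gamma components and moment-matched M-steps; it is a stand-in for the exact parameterization of Eq. (9) and the fitting procedure adopted from [58], and all names are illustrative.

```python
import numpy as np
from scipy.stats import gamma

def fit_two_component_gamma(x, n_iter=30):
    """x: hybrid point-quad distances for one quad, shape (N,). Returns ((k0, s0), (k1, s1)), (w0, w1)."""
    r = (x > np.median(x)).astype(float)            # initial responsibility for the "far" component
    params, w = [], []
    for _ in range(n_iter):
        params, w = [], []
        for resp in (1.0 - r, r):                   # component 0 = belongs to the quad, 1 = does not
            m = np.average(x, weights=resp)
            v = np.average((x - m) ** 2, weights=resp) + 1e-12
            params.append((m * m / v, v / m))       # moment-matched (shape, scale)
            w.append(float(resp.mean()))
        p0 = w[0] * gamma.pdf(x, params[0][0], scale=params[0][1])
        p1 = w[1] * gamma.pdf(x, params[1][0], scale=params[1][1])
        r = p1 / (p0 + p1 + 1e-12)                  # E-step
    return (params[0], params[1]), (w[0], w[1])

def keep_points(x):
    ((k0, s0), (k1, s1)), (w0, w1) = fit_two_component_gamma(x)
    # Per-quad, threshold-free rule: keep a point if the weighted "belongs" density dominates.
    return w0 * gamma.pdf(x, k0, scale=s0) >= w1 * gamma.pdf(x, k1, scale=s1)
```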
In each scene of each training step, due to tractability concerns, we choose one of all teacher predicted quads to refine, as illustrated in Fig. 2. Based on the refined quad \(\tilde{\mathbf{y}}_{TR}=\{\mathbf{c}^{\prime},\mathbf{n}^{\prime},\mathbf{s}^{ \prime},1.0\}\), we propose the pseudo-label loss: \[\mathcal{L}_{\text{GMF}}=d(\mathcal{P}(\tilde{\mathbf{y}}_{T}),\tilde{ \mathbf{y}}_{TR}) \tag{10}\] ### _Loss_ The loss term we aim to optimize during training is: \[\mathcal{L}=\mathcal{L}_{\text{sup}}+\lambda_{\text{QMT}}\mathcal{L}_{\text{ QMT}}+\lambda_{\text{GMF}}\mathcal{L}_{\text{GMF}} \tag{11}\] where \(\lambda_{\text{QMT}}\) and \(\lambda_{\text{GMF}}\) are loss weights. ## IV Experiment ### _Datasets and Implementation Details_ **Datasets** In the semi-supervised setting, our methods are evaluated on the ScanNet dataset. ScanNet [31] is a large-scale RGB-D video dataset with 3D reconstructions of indoor scenes, including 1513 scans reconstructed from around 2.5 million views. On top of the ScanNet, SceneCAD [59] provides scene layout annotations containing 8.4K polygons. In our experiments, we use the 3D reconstructions from ScanNet [31] as the input point clouds and use the scene layouts from SceneCAD [59] as the ground truth labels. Furthermore, we extend our methods to the omni-supervised setting and employ ARKitScenes dataset [32]. ARKitScenes is another large-scale RGB-D dataset containing 4493 training scans and 549 validation scans. In our experiments, the training scans are leveraged as the unlabeled input. The validation scans are used for testing, whose ground-truth layouts are annotated by crowd-sourcing. **Implementation Details** In the transformation stage, the point cloud is first downsampled to 40,000 points with FPS and rotated along the z-axis by \(\theta=\theta_{1}+\theta_{2}\), with \(\theta_{1}\) randomly chosen from \(\{0,\frac{\pi}{2},\pi,\frac{3\pi}{2}\}\) and \(\theta_{2}\) uniformly sampled from \([-5^{\circ},5^{\circ}]\). Next, the point cloud is flipped along the x-axis and the y-axis with the probability of 0.5 and scaled by a ratio uniformly sampled from \([0.85,1.15]\). We implement the teacher and student models in the proposed Quad Mean Teacher framework with PQ-Transformer [29], while the framework also works with other layout estimators. The preprocessing of quad annotations and the evaluation metrics are the same as [29]. The consistency loss weight is set to \(\lambda_{\text{QMT}}=0.05\), using the same warm-up strategy as [60]. The pseudo-label loss weight is set as \(\lambda_{\text{QMF}}=5\times 10^{-4}\). Our experiments run on a single NVIDIA RTX A4000 GPU with batch size of 6. Half of the samples in a batch have quad annotations. ### _Results_ To the best of our knowledge, our methods are the first to perform the PC-based layout estimation task in the semi-supervised and the omni-supervised setting. Hence we compare our method with fully-supervised methods including SceneCAD [59] and PQ-Transformer [29]. We evaluate our method and the baselines in various semi-supervised settings on ScanNet validation set and report the F1-scores of the predicted layouts in Tab. I. The size of labeled set \(\mathcal{D}_{L}\) sampled from the ScanNet training split, or the amount of ground truth annotations in use, is denoted by percentages in the first row. And \(\mathcal{D}_{U}\) is the complementary set whose annotations are assumed unknown. It can be seen that either QMT or GMF can result in performance boost. 
And by combining these two techniques together, we see further improvement in performance. In all semi-supervised settings, the performances of our methods are better than baselines by large margins. With only 40\(\%\) quad annotations available, our method achieves similar performance to that of the state-of-the-art method trained in fully supervised settings. Surprisingly, our method also performs better in fully supervised settings than former arts. We attribute the outperformance to the consistency regularization mechanism promoting the model's robustness to perturbations and the pseudo-label refinement module providing guidance on the geometrical information of layouts. We further demonstrate the robustness of our method in the omni-supervised setting [30]. To be more specific, we train our method and the baselines with the whole labeled training split of ScanNet \(\mathcal{D}_{L}\) and then evaluate the performance on the validation split of the ARKitScenes dataset with crowd-sourced layout annotations. Besides, in our method, the unlabeled training split of ARKitScenes serves as the unlabeled dataset \(\mathcal{D}_{U}\). As shown in Tab. II, our method achieves a significant margin over former arts, showing the ability to generalize to more realistic omni-supervised settings. In addition, we provide visualization of the quad predictions of our method on ScanNet and ARKitScenes in Fig. 5 and Fig. 6. These qualitative results show that exterior quads as well as the interior quads are predicted by our method accurately, compensating for the ineffectiveness of PQ-Transformer [29] w.r.t. interior wall quads. ### _Ablation Study_ **Data Transformation Strategies** We run the 10\(\%\)-supervised experiment on ScanNet with different data transformations. As shown in Tab. III, data transformation is crucial to our proposed method, as any of the transformations improves the performance, and in extreme cases without transformations the F1-score decreases by 6.31%. Among the four transformations, rotation has the largest influence on the performances. One possible reason is that rotation brings the most changes to the coordinates of points whilst keeping the holistic layouts of the scenes unchanged. **Quad Mean Teacher** We compare Quad Mean Teacher and the basic Mean Teacher (MT) method in the 10%-supervised settings and report their performances on ScanNet in Tab. IV. MT assumes that all teacher predictions are equally reliable. Results show that the QMT achieves a large margin over MT on the precision of prediction. We believe this is because the confidence of predictions by the teacher model is exploited and erratic or incorrect predictions are neglected accordingly. **Gamma Mixture Filtering** In the 10%-supervised settings, we compare our method using only pseudo-label loss with the naive \(\epsilon_{D}\) approach introduced in Sec. III-B. We set the fixed threshold \(\epsilon_{D}=0.2\)m. More specifically, in the alternative method, a point stays after filtering if its perpendicular distance to the plane is less than \(\epsilon_{D}\). Compared to ours, the alternative method achieves significantly lower performance, since no supervision is applied on the quad normals and sizes. ## V Conclusion Our research makes the first step towards omni-supervised layout estimation merely using point clouds, which has promising implications in robotics. Our training framework combines Quad Mean Teacher and Gamma Mixture Filtering to better exploit the unlabeled data. 
Experimental results demonstrate our method's effectiveness in semi-supervised, fully-supervised and omni-supervised settings. Despite the effectiveness of our method, limitations still exist. The predictions of our method are unsatisfactory in incomplete scenes, in which insufficient points fail to form a layout wall. In the future, we will consider possible rectifications including ensembling online inference results, thanks to the quasi-real-time speed brought by the PQ-Transformer [29] implementation. Fig. 5: **Qualitative results on ScanNet. The ratio represents the proportion of annotated data in use.** Fig. 6: **Qualitative results on ARKitScenes. Ground truth layouts are annotated by crowd-sourcing.**
2307.08675
Exploring Implied Certainty Equivalent Rates in Financial Markets: Empirical Analysis and Application to the Electric Vehicle Industry
In this paper, we mainly study the impact of the implied certainty equivalent rate on investment in financial markets. First, we derived the mathematical expression of the implied certainty equivalent rate by using put-call parity, and then we selected some company stocks and options; we considered the best-performing and worst-performing company stocks and options from the beginning of 2023 to the present for empirical research. By visualizing the relationship between the time to maturity, moneyness, and implied certainty equivalent rate of these options, we have obtained a universal conclusion -- a positive implied certainty equivalent rate is more suitable for investment than a negative implied certainty equivalent rate, but for a positive implied certainty equivalent rate, a larger value also means a higher investment risk. Next, we applied these results to the electric vehicle industry, and by comparing several well-known US electric vehicle production companies, we further strengthened our conclusions. Finally, we give a warning concerning risk, that is, investment in the financial market should not focus solely on the implied certainty equivalent rate, because investment is not an easy task, and many factors need to be considered, including some factors that are difficult to predict with models.
Yifan He, Svetlozar Rachev
2023-06-30T18:25:46Z
http://arxiv.org/abs/2307.08675v2
Exploring Implied Certainty Equivalent Rates in Financial Markets: Empirical Analysis and Application to the Electric Vehicle Industry ###### Abstract In this paper, we mainly study the impact of the implied certainty equivalent rate on investment in financial markets. First, we derived the mathematical expression of the implied certainty equivalent rate by using put-call parity, and then we selected some company stocks and options; we considered the best-performing and worst-performing company stocks and options from the beginning of 2023 to the present for empirical research. By visualizing the relationship between the time to maturity, moneyness, and implied certainty equivalent rate of these options, we have obtained a universal conclusion--a positive implied certainty equivalent rate is more suitable for investment than a negative implied certainty equivalent rate, but for a positive implied certainty equivalent rate, a larger value also means a higher investment risk. Next, we applied these results to the electric vehicle industry, and by comparing several well-known US electric vehicle production companies, we further strengthened our conclusions. Finally, we give a warning concerning risk, that is, investment in the financial market should not focus solely on the implied certainty equivalent rate, because investment is not an easy task, and many factors need to be considered, including some factors that are difficult to predict with models. **Keywords**: put-call parity; implied put-call parity certainty equivalent rate; electric vehicle industry ## 1 Introduction The certainty equivalent rate is a measure derived from the certainty equivalent1, which plays a pivotal role in financial investment. Investors usually need to refer to the changing trend of this value to decide whether a certain company is worth investing in or if certain companies are worth investing in, that is, it is used to determine the priority of investment. The main purpose of this paper is to solve these two problems. First, we give the mathematical expression of the certainty equivalent rate using the put-call parity formula. From the mathematical expression, we can find the factors that cause changes in the certainty equivalent rate. Second, we select the stocks and options of three companies with the best performance from the beginning of 2023 to the current time and the stocks and options of three companies with the worst performance for empirical research. Through data visualization, we obtain a general conclusion. Third, we apply the general conclusions drawn to the US electric vehicle industry. Specifically, we select three well-known US electric vehicle companies and explore whether they are worth investing in from the perspective of the certainty equivalent rate and the priority of investment. Finally, we summarize this paper and emphasize that investment is an extremely complicated matter that requires the consideration of many factors, not just the certainty equivalent rate. When many factors are considered, investors are more likely to make the optimal decision. ## 2 Theoretical Support The key theorem that we will use is put-call parity. A detailed explanation of put-call parity can be found in [1]. [1] considers interest and dividends to be paid in accordance with continuous compounding, but in a real financial market, interest and dividends are more likely to be paid at specific points in time rather than every second. 
Thus, we prefer to use the discrete-compounding version of put-call parity when we consider problems in a real financial market. Hence, the following is the detailed mathematical expression of put-call parity that we will use in this paper: \[C+\frac{K}{(1+r)^{T}}=P+\frac{S}{(1+q)^{T}}, \tag{1}\] where \[\begin{cases}T&\stackrel{{\rm def}}{{=}}\text{The time to maturity},\\ C&\stackrel{{\rm def}}{{=}}\text{A given company's call option price with respect to the maturity date},\\ P&\stackrel{{\rm def}}{{=}}\text{A given company's put option price with respect to the maturity date},\\ K&\stackrel{{\rm def}}{{=}}\text{A given company's option strike price with respect to the maturity date},\\ S&\stackrel{{\rm def}}{{=}}\text{A given company's stock price with respect to the start date},\\ q&\stackrel{{\rm def}}{{=}}\text{A given company's dividend yield},\\ r&\stackrel{{\rm def}}{{=}}\text{A given company's put-call parity certainty equivalent rate}.\\ \end{cases}\] Based on (1), we can obtain the mathematical expression for \(r\): \[r=\left[\frac{K(1+q)^{T}}{S+(P-C)(1+q)^{T}}\right]^{\frac{1}{T}}-1. \tag{2}\] In the following sections, we will mainly use (2) to explore the relationship between the time to maturity \(T\), the moneyness \(S/K\), and the implied company-specific put-call parity certainty equivalent rate \(r\). ## 3 Empirical Research ### Preparation Before conducting our empirical research, we had to figure out how to obtain the values of the arguments in (2): * **Argument**\(S\): Since our purpose is to explore the stocks' behavior in 2023, we choose the start date as January 3, 2023, which is the first business day in 2023. Therefore, \(S\) in (2) will be the company's stock price on January 3, 2023. We can find these values in every stock's "historical data" section on Yahoo Finance. * **Argument**\(q\): We will consider the value of the "forward annual dividend yield"; the relevant data can be found in the "statistics" section on Yahoo Finance. * **Arguments \(T\) and \(K\)**: On CBOE, we can find a given company's stock option's strike \(K\) and its maturity date. Then, we subtract the start date (January 3, 2023) from the maturity date and convert the result into years2. Finally, we obtain the value of the time to maturity \(T\). Footnote 2: This is because the time to maturity \(T\) in the put-call parity expression has units of years, and we assume that one calendar year has 252 business days in this paper. * **Arguments \(C\) and \(P\)**: Although we cannot obtain the values of these two arguments directly from CBOE, we can obtain the "bid" and "ask" of every option. Here, we calculate the mid-price of the bid and ask, and we consider it to be the corresponding call option price and put option price. ### Data Visualization and Explanation Based on financial news from [6], [8], [7], and [4], we can select three of the best-performing stocks, which come from **Apple**, **Nvidia**, and **Meta**, respectively. On the other hand, the three worst-performing stocks that we select are from **First Republic Bank**, **Signature Bank**, and **Charles Schwab**, respectively. Next, we use MATLAB to create figures that describe the relationship between the time to maturity \(T\)3, the moneyness \(S/K\)4, and the implied put-call parity certainty equivalent rate \(r\) for the companies we selected, and we explain some key values from these figures5.
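A minimal sketch of how (2) turns the prepared arguments into an implied rate is given below (in Python rather than the MATLAB used for the figures); the input numbers are placeholders for illustration, not values taken from the data.

```python
def implied_cer(S, K, C, P, q, T):
    """Implied put-call parity certainty equivalent rate r from Eq. (2)."""
    return (K * (1 + q) ** T / (S + (P - C) * (1 + q) ** T)) ** (1 / T) - 1

# Placeholder inputs: spot on the start date, strike, mid call/put prices,
# forward dividend yield, and time to maturity in years (business days / 252).
r = implied_cer(S=130.0, K=140.0, C=12.0, P=18.0, q=0.005, T=120 / 252)
print(round(r, 4))
```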
Footnote 3: The options’ maturity dates for Apple, Nvidia, Meta, and Charles Schwab range from 06/09/2023 to 12/19/2025; the options’ maturity dates for First Republic Bank range from 06/09/2023 to 07/19/2024; and the options’ maturity dates for Signature Bank range from 06/16/2023 to 12/15/2023. Figure 1: Relationship between the time to maturity, moneyness, and the implied put-call parity certainty equivalent rate for Apple stock Figure 2: Relationship between the time to maturity, moneyness, and the implied put-call parity certainty equivalent rate for Nvidia stock Figure 4: Relationship between the time to maturity, moneyness, and the implied put-call parity certainty equivalent rate for First Republic Bank stock Figure 3: Relationship between the time to maturity, moneyness, and the implied put-call parity certainty equivalent rate for Meta stock Figure 5: Relationship between the time to maturity, moneyness, and the implied put-call parity certainty equivalent rate for Signature Bank stock Figure 6: Relationship between the time to maturity, moneyness, and the implied put-call parity certainty equivalent rate for Charles Schwab stock From Table 16, it is clear that the best-performing companies have strictly positive implied put-call parity certainty equivalent rates, regardless of the maximum value, minimum value, or mean value. On the other hand, the worst-performing companies have strictly negative implied put-call parity certainty equivalent rates. Footnote 6: Via put-call parity, we can obtain different rates with respect to different time to maturity values, and thus we can determine the maximum value, minimum value, and mean value of these rates. The values in Table 2 are similar. A positive rate means that investors have a high probability of obtaining a return if they invest in the company's stock, while a negative rate means that investors may lose their money if they try to invest in the company's stock. By combining the data and the financial news, we find that the implied put-call parity certainty equivalent rate is quite useful; it can help investors determine which company's stock option is worth investing in. Now, let us consider the shape of the graph. For the best-performing companies, we can observe that when the time to maturity \(T\) is small, the corresponding rate is high, which means that in the near future, the return of the stock option is quite high, so that investors may make money by investing in the option. Of course, they may have to take on some amount of risk. In this case, we would suggest that the investor consider Apple first, then Meta, and finally Nvidia. A high rate represents a high risk, and normal investors definitely do not want to take on a high risk when they decide to invest in something. As time goes by, i.e., as the value of the time to maturity \(T\) becomes larger, the implied put-call parity certainty equivalent rate will become smaller because it is reasonable to consider the long-term rate as the riskless rate of the financial market. Additionally, it is clear that the riskless rate of the financial market should be lower than the near-future implied put-call parity certainty equivalent rates of the best-performing companies. Let us consider the worst-performing companies. We can see that the values of the certainty equivalent rates of these companies are negative, which means that investors have a high risk of losing money if they decide to invest in these companies' stock options. 
Finally, from Figures 1-6, we can see that if we fix the value of the time to maturity \(T\) and change the value of the moneyness \(S/K\), the value of the implied certainty equivalent rate \(r\) hardly changes, which tells us that the implied certainty equivalent rate is almost independent of moneyness. \begin{table} \begin{tabular}{c c c c} \hline **Company** & **Maximum value** & **Minimum value** & **Mean value** \\ \hline Apple & 0.8644 & 0.1181 & 0.4437 \\ Nvidia & 4.0908 & 0.3308 & 1.8897 \\ Meta & 2.6418 & 0.2238 & 1.2135 \\ \hline First Republic Bank & -0.8600 & -0.9998 & -0.9731 \\ Signature Bank & -0.0166 & -0.4008 & -0.1601 \\ Charles Schwab & -0.0481 & -0.4523 & -0.2625 \\ \hline \end{tabular} \end{table} Table 1: Implied put-call parity certainty equivalent rate (best- and worst-performing companies) Application: Electric Vehicle Industry Today, more and more people are paying attention to environmental issues. To reduce the pollution released by vehicles, people are considering driving electric vehicles7 instead of traditional oil-powered vehicles. In the US, there are several well-known companies that produce electric vehicles, such as Tesla, General Motors, and Ford Motor Company8. We can apply the results we obtained in the previous section to these electric vehicle companies. Figures 7-9 visualize the data of these companies, and the key values derived from these figures are given in Table 2. Footnote 7: The current electric vehicle market situation is described in [5], and the reasons that electric vehicles can reduce pollution can be found in [9]. Footnote 8: The options’ maturity dates for Tesla and Ford Motor Company range from 06/09/2023 to 12/19/2025; the options’ maturity dates for General Motors range from 06/09/2023 to 06/20/2025. The options’ strike prices correspond to the maturity dates. Figure 8: Relationship between the time to maturity, moneyness, and the implied put-call parity certainty equivalent rate for General Motors stock Figure 9: Relationship between the time to maturity, moneyness, and the implied put-call parity certainty equivalent rate for Ford Motor Company stock Based on the shapes of the graphs shown in Figures 7, 8, and 9, we can see that these three companies are all doing well. In the near future, we recommend that investors invest in these companies' stock options. According to the data from Table 2, we can see that if we plot the three surfaces on the same coordinate axis, the order of these three surfaces from top to bottom is Tesla, then Ford Motor Company, and finally General Motors. Based on the positional relationships, we can see that investors should prefer to invest in General Motors, then Ford Motor Company, and finally Tesla. ## 5 Summary In this paper, we have used put-call parity to derive the implied put-call parity certainty equivalent rate. We have also considered the meaning of positive rates and negative rates. Then, we utilize the idea9 of the implied volatility surface to construct the implied put-call parity certainty equivalent rate surface. From the relative positions of these surfaces, we can determine which stock option we should consider investing in first. Footnote 9: The implied volatility surface is a three-dimensional surface that explores the relationship between the time to maturity \(T\), moneyness \(S/K\), and volatility \(\sigma\). However, the certainty equivalent rate is only one factor that should be considered in investing. It is obvious that investors cannot consider this factor alone. 
When deciding to invest in a product, investors should also consider other factors, such as political factors. To be more specific, at the end of May 2023, Tesla CEO Elon Musk's visit to China10 caused Tesla's stock to soar and made him the world's richest man. Thus, investors who invested in Tesla before this event made a large amount of money. It is clear that we cannot predict such an outcome via a mathematical model. Hence, when deciding to invest in a product, investors should also consider other factors, so that they have a better chance of making optimal decisions. \begin{table} \begin{tabular}{c c c c} \hline \hline **Company** & **Maximum value** & **Minimum value** & **Mean value** \\ \hline Tesla & 2.1970 & 0.1994 & 1.0320 \\ General Motors & 0.0927 & 0.0395 & 0.0708 \\ Ford Motor Company & 0.4549 & 0.0615 & 0.2626 \\ \hline \hline \end{tabular} \end{table} Table 2: Implied put-call parity certainty equivalent rate (electric vehicle companies)
2306.00100
MetaXLR -- Mixed Language Meta Representation Transformation for Low-resource Cross-lingual Learning based on Multi-Armed Bandit
Transfer learning for extremely low resource languages is a challenging task as there is no large scale monolingual corpora for pre training or sufficient annotated data for fine tuning. We follow the work of MetaXL which suggests using meta learning for transfer learning from a single source language to an extremely low resource one. We propose an enhanced approach which uses multiple source languages chosen in a data driven manner. In addition, we introduce a sample selection strategy for utilizing the languages in training by using a multi armed bandit algorithm. Using both of these improvements we managed to achieve state of the art results on the NER task for the extremely low resource languages while using the same amount of data, making the representations better generalized. Also, due to the method ability to use multiple languages it allows the framework to use much larger amounts of data, while still having superior results over the former MetaXL method even with the same amounts of data.
Liat Bezalel, Eyal Orgad
2023-05-31T18:22:33Z
http://arxiv.org/abs/2306.00100v1
# MetaXLR - Mixed Language Meta Representation Transformation for Low-resource Cross-lingual Learning based on Multi-Armed Bandit ###### Abstract Transfer learning for extremely low-resource languages is a challenging task as there is no large-scale monolingual corpora for pre-training or sufficient annotated data for fine-tuning. We follow the work of Xia et al. (2021) which suggests using meta learning for transfer learning from a single source language to an extremely low resource one. We propose an enhanced approach which uses multiple source languages chosen in a data-driven manner. In addition, we introduce a sample selection strategy for utilizing the languages in training by using a multi armed bandit algorithm. Using both of these improvements we managed to achieve state-of-the-art results on the NER task for the extremely low resource languages while using the same amount of data, making the representations better generalized. Also, due to the method's ability to use multiple languages it allows the framework to use much larger amounts of data, while still having superior results over the former MetaXL method even with the same amounts of data. ## 1 Introduction Multilingual pre-training approaches such as XLM-R (Conneau et al., 2020) have presented great results on various NLP tasks for many languages. But in order to achieve that, they require large-scale monolingual data. Unfortunately, this is not the case for extremely low-resource languages such as Quechua or Ilocano, where data barely exist. Therefore, in the case of a low-resource language, a common approach is to use transfer learning. We follow the work of Xia et al. (2021), which uses a meta learning approach. In their work they used a high-resource language to learn the task while concurrently learning how to convert representations to the low-resource language. However, the mentioned approach uses only one source language, missing the generalization that multiple languages could provide. But moving to a multi-language approach is not straightforward, as different languages can have different effects on the learning process. Selecting languages can be done manually, but it is a tedious process that requires linguistic knowledge that sometimes is not widely available. To overcome that, we suggest an approach which uses multiple languages that are selected in a data-driven manner and are balanced during training using a MAB algorithm. In this paper we show how using multiple languages is more powerful than using a single source language even with the same amount of data. In addition, we propose utilizing a multi-armed bandit as a sampling strategy to balance the contribution of each language to the training process. Combining both, we were able to achieve improved results on the downstream NER task when evaluating languages never seen by the pretrained model before. In addition, the language selection is easy and can be done seamlessly. ## 2 Method **Using multiple source languages** To select the source languages, we took advantage of both LangRank (Lin et al., 2019) and the language clusters presented in Chiang et al. (2022). First, given a target language \(t\), we chose a closely related source language \(s_{1}\), as used in Xia et al. (2021), using LangRank. Next, we used the language clusters and mapped \(s_{1}\) to its cluster \(c\). Then, we chose \(n-1\) arbitrary languages from \(c\).
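The selection procedure above can be summarized in a few lines; the `langrank_best_source` helper and the cluster table below are hypothetical stand-ins for LangRank and the clusters of Chiang et al. (2022), not their actual interfaces or contents.

```python
import random

def select_source_languages(target, n, langrank_best_source, clusters):
    """Pick n source languages: the LangRank-suggested one plus n-1 others from its cluster."""
    s1 = langrank_best_source(target)                  # closely related source language
    cluster = next(c for c in clusters if s1 in c)     # the cluster containing s1
    others = [lang for lang in cluster if lang != s1]
    return [s1] + random.sample(others, n - 1)         # n-1 arbitrary languages from the cluster

# Hypothetical example: both the cluster table and the LangRank lookup are illustrative only.
clusters = [{"es", "pt", "it", "ro"}, {"tr", "az", "kk", "uz"}]
print(select_source_languages("qu", 3, lambda t: "es", clusters))
```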
**Multi armed bandit as sampling strategy** Since the languages from the previous step are selected from a large cluster, they may have different effects on the training process, as they still vary considerably. Thus, we balance the training process by defining a sampling distribution over the source languages while training. In our strategy, we increase the weight of languages that are harder to learn from. The intuition is that in order to generalize the representations properly across the different languages, we should train more with languages that the model struggles with. To make this weighting strategy adaptive to different languages without manual intervention, we reduced the problem to a MAB problem where we consider each source language as an arm. At every training step, we select one language from this distribution and receive a reward, which in this case is the loss: the higher it gets, the more the model struggles with this language. The multi-armed bandit algorithm we used is EXP3, described in Auer et al. (2002); a minimal sketch of this sampling loop is included at the end of the paper. We used it as part of the meta learning algorithm as can be seen in Algorithm 1. ## 3 Experiments Similarly to Xia et al. (2021), we used XLM-R. The data used for our experiments is WikiAnn, which covers 282 different languages for the NER task. Results are presented in Table 1. Our method outperforms the baselines (1-4) by at least 2.4 F1 points on average, using the same amount of data. Comparing methods (1) and (2) emphasizes the importance of the data size. We can also observe the importance of selecting related languages by comparing methods (1) and (3). Our method leverages these two observations: we are able to use related languages that each have limited data, and we obtain a large overall data size by combining several related languages together. MetaXL (Xia et al., 2021) works well and is improved even by using a uniform language selection distribution, yielding a gain of at least 1.3 F1 points on average. Our language selection algorithm further improves the result by 1.1 F1 points on average. Comparing the two methods (5) and (6), the language selection algorithm performs at least as well as the uniform selection and sometimes outperforms it. ## 4 Conclusion In this paper, we study cross-lingual transfer learning for extremely low-resource languages. We broadened MetaXL (Xia et al., 2021), enabling it to use a set of source languages, while choosing from them in the training loop using a Multi Armed Bandit algorithm. We managed to improve on the results of previous works while simultaneously increasing the pool of usable data, achieving state-of-the-art results for extremely low-resource languages. \begin{table} \begin{tabular}{l l l l l l l l} \hline Source / Target & qu & ilo & mhr & mi & tk & gn & Average \\ \hline 1. English 5k (Xia et al., 2021) & 68.67 & 77.57 & 68.16 & 88.56 & 66.99 & 69.37 & 73.22 \\ 2. English 20k (Xia et al., 2021) & 73.04 & 85.99 & 70.97 & 89.21 & 66.02 & 73.39 & 76.44 \\ 3. Related language 5k (Xia et al., 2021) & 77.06 & 75.93 & 69.33 & 86.46 & **73.15** & 71.96 & 75.65 \\ 4. Related language (6k-8k) & 76.47 & 82.3 & 73.78 & **93.53** & 71.07 & 74.07 & 78.54 \\ 5. Uniform selection & 76.27 & 86.41 & 71.43 & 92.67 & **72.9** & **79.65** & 79.88 \\ 6. MetaXLR (ours) & **78.76** & **86.96** & **74.65** & 92.67 & **73.08** & **79.44** & **80.93** \\ \hline \end{tabular} \end{table} Table 1: F1 for NER across six settings: (1) Source data size of 5k, English data source only. (2) Source data size of 20k, English data source only.
(3) One source language, data size of 5k. (4) MetaXL related source language using the exact same data size as we used in our method (varies between 6k-8k as in Table 2). (5) Choosing languages, uniform distribution. (6) Our method: Source languages defined in Table 2 with MetaXLR algorithm (Algorithm 1). ### URM Statement The authors acknowledge that at least one key author of this work meets the URM criteria of ICLR 2023 Tiny Papers Track.
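Returning to the sampling strategy of Section 2, the following is a minimal sketch of an EXP3-style language sampler. The loss-to-reward scaling and the exploration parameter are our own illustrative choices (EXP3 expects rewards in [0, 1]); this is a sketch of the idea, not the exact Algorithm 1 of the paper.

```python
import math
import random

class Exp3LanguageSampler:
    """EXP3 (Auer et al., 2002) over source languages; a higher loss gives a larger sampling weight."""

    def __init__(self, languages, gamma=0.1):
        self.languages = list(languages)
        self.gamma = gamma
        self.weights = [1.0] * len(self.languages)

    def _probs(self):
        total = sum(self.weights)
        k = len(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / k for w in self.weights]

    def sample(self):
        probs = self._probs()
        i = random.choices(range(len(self.languages)), weights=probs)[0]
        return i, self.languages[i]

    def update(self, i, loss, loss_scale=10.0):
        reward = min(max(loss / loss_scale, 0.0), 1.0)   # map the loss to a reward in [0, 1] (assumed scaling)
        probs = self._probs()
        estimated = reward / probs[i]                    # importance-weighted reward estimate
        self.weights[i] *= math.exp(self.gamma * estimated / len(self.languages))

# Usage: sample a source language each step, train on a batch from it, then update with the loss.
sampler = Exp3LanguageSampler(["es", "pt", "it"])
arm, lang = sampler.sample()
sampler.update(arm, loss=2.3)
```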
2309.16859
Preface: A Data-driven Volumetric Prior for Few-shot Ultra High-resolution Face Synthesis
NeRFs have enabled highly realistic synthesis of human faces including complex appearance and reflectance effects of hair and skin. These methods typically require a large number of multi-view input images, making the process hardware intensive and cumbersome, limiting applicability to unconstrained settings. We propose a novel volumetric human face prior that enables the synthesis of ultra high-resolution novel views of subjects that are not part of the prior's training distribution. This prior model consists of an identity-conditioned NeRF, trained on a dataset of low-resolution multi-view images of diverse humans with known camera calibration. A simple sparse landmark-based 3D alignment of the training dataset allows our model to learn a smooth latent space of geometry and appearance despite a limited number of training identities. A high-quality volumetric representation of a novel subject can be obtained by model fitting to 2 or 3 camera views of arbitrary resolution. Importantly, our method requires as few as two views of casually captured images as input at inference time.
Marcel C. Bühler, Kripasindhu Sarkar, Tanmay Shah, Gengyan Li, Daoye Wang, Leonhard Helminger, Sergio Orts-Escolano, Dmitry Lagun, Otmar Hilliges, Thabo Beeler, Abhimitra Meka
2023-09-28T21:21:44Z
http://arxiv.org/abs/2309.16859v1
# Preface: A Data-driven Volumetric Prior for Few-shot Ultra High-resolution Face Synthesis ###### Abstract NeRFs have enabled highly realistic synthesis of human faces including complex appearance and reflectance effects of hair and skin. These methods typically require a large number of multi-view input images, making the process hardware intensive and cumbersome, limiting applicability to unconstrained settings. We propose a novel volumetric human face prior that enables the synthesis of ultra high-resolution novel views of subjects that are not part of the prior's training distribution. This prior model consists of an identity-conditioned NeRF, trained on a dataset of low-resolution multi-view images of diverse humans with known camera calibration. A simple sparse landmark-based 3D alignment of the training dataset allows our model to learn a smooth latent space of geometry and appearance despite a limited number of training identities. A high-quality volumetric representation of a novel subject can be obtained by model fitting to 2 or 3 camera views of arbitrary resolution. Importantly, our method requires as few as two views of casually captured images as input at inference time. ## 1 Introduction Reconstruction and novel view synthesis of faces are challenging problems in 3D computer vision. Achieving high-quality photorealistic synthesis is difficult due to the underlying complex geometry and light transport effects exhibited by organic surfaces. Traditional techniques use explicit geometry and appearance representations for modeling individual face parts such as hair [14], skin [17], eyes [4], teeth [64] and lips [16]. Such methods often require specialised expertise and hardware and limit the applications to professional use cases. Recent advances in volumetric modelling [3, 28, 33, 52] have enabled learned, photorealistic view synthesis of both general scenes and specific object categories such as faces from 2D images alone. Such approaches are particularly well-suited to model challenging effects such as hair strands and skin reflectance. The higher dimensionality of the volumetric reconstruction problem makes it inherently more ambiguous than surface-based methods. Thus, initial developments in neural volumetric rendering methods [3, 33] relied on an order-of-magnitude higher number of input images (\(>100\)) to make the solution tractable. Such a large image acquisition cost limits application to wider casual consumer use cases. Hence, few-shot volumetric reconstruction, of both general scenes and specific object categories such as human faces, remains a prized open problem. This problem of the inherent ambiguity of volumetric neural reconstruction from few images has generally been approached in three ways: i) Regularisation: using natural statistics such as low entropy [3, 50] along camera rays, 3D spatial smoothness [37] and deep surfaces [73] to better constrain the density field and avoid degenerate solutions such as floating artifacts; ii) initialisation: meta-learnt initialisation [57] of the underlying representation (network weights) to aid faster and more accurate convergence during optimisation; iii) data-driven subspace priors: using large and diverse datasets to learn generative [7, 9, 10, 12, 13, 18, 75] or reconstructive [6, 48, 50, 61] priors of the scene volume.
For human faces, large in-the-wild datasets [22, 23, 27] have proved to be particularly attractive in learning a smooth, diverse, and differentiable subspace that allow for few-shot reconstruction of novel subjects by performing inversion and finetuning of the model on a small set of images of the target identity [51]. But such general datasets and generative models also suffer from disadvantages: i) The sharp distribution of frontal head poses in these datasets prevents generalisation to more extreme camera views, and ii) the computational challenge of training a 3D volume on such large datasets results in very limited output resolutions. In this paper, we propose a novel volumetric prior for faces that is learned from a multi-view dataset of diverse human faces. Our model consists of a neural radiance field (NeRF) conditioned on learnt per-identity embeddings trained to generate 3D consistent views from the dataset. We perform a pre-processing step that aligns the geometry of the captured subjects [50]. This geometric alignment of the training identities allows our prior model to learn a continuous latent space using only image reconstruction losses. At test time, we perform model inversion to compute the embedding for a novel target identity from the given small set of views of arbitrary high resolution. In an out-of-model finetuning step, the resulting embedding and model are further trained with the given images. This results in NeRF model of the target subject that can synthesise high-quality images. Without our prior, the model cannot estimate a 3D consistent volume and overfits to the sparse training views (Fig. 3). While we present a novel data-driven subspace prior, we Figure 3: Naively training on two images leads to overfitting and the model fails to synthesise novel views. With the proposed prior, the model can render view-consistent novel views. Figure 2: Our key contribution is a prior face model (left), learned from a multiview dataset of faces captured in a controlled setting. The prior model is resolution independent and can be fine-tuned to synthesise novel views at high resolution given as few as two images from a target identity captured in the studio (middle left) or in-the-wild (middle right). also extensively evaluate the role of regularisation and initialisation in achieving plausible 3D face volumes from few images by comparing with relevant state-of-the-art techniques and performing design ablations of our method. In summary, we contribute: * A prior model for faces that can be finetuned to generate a high-quality volumetric 3D representation of a target identity from two or more views. * Ultra high-resolution 3D consistent view-synthesis (demonstrated up to 4k resolution). * Generalisation to in-the-wild indoor and outdoor captures, including challenging lighting conditions. ## 2 Related works Volumetric reconstruction techniques [3, 26, 34, 44, 45] achieve a high-level of photorealism. However, they provide a wider space of solutions than surface based representations [31, 43], and hence often perform very poorly in the absence of sufficient constraints [32, 37, 46, 58, 68, 70]. To mitigate this, related works employ additional regularisation [20, 32, 37, 50, 58, 68], perform sophisticated initialisation [25, 48, 57, 60], and leverage data-driven priors [9, 11, 18, 20, 32, 41, 46, 48, 50, 56, 58, 63, 70, 72]. RegularisationA common solution to novel view synthesis from sparse views is employing regularisation and consistency losses for novel views. 
RegNeRF [37] proposes a smoothness regulariser on the expected depth and a patch-based appearance regularisation from a pretrained normalising flow. A concurrent work, FreeNeRF [68], observes that NeRFs tend to overfit early in training because of the high frequencies in the positional encoding. They propose a training schedule where the training starts with the positional encodings masked to the low frequencies only and continuously fade in higher frequencies during the course of training. These methods have shown promising results for in-the-wild scenes but struggle to output high-quality results for human faces 8. It is also possible to leverage priors from large pretrained models. DietNeRF[20] follows a strategy of constraining high-level semantic features of novel view images to map to the same scene object in the "CLIP" [47] space. These methods require generating image patches per mini-batch rather than individual pixels. This is compute and memory intensive and reduces the effective batch size and resolution at which the models can be trained, limiting the overall quality. InitialisationRecent papers explore the effect of initialisation [25, 48, 57, 60]. Metalearning [15, 35, 55, 71] initial model parameters from a large collection of images [57] has shown promising results for faster convergence. However, the inner update loop in metalearning becomes very expensive for large neural networks. This limits its applicability in high-resolution settings. Data-driven PriorsRecent works propose generative neural fields models in 3D [7, 9, 12, 18, 38, 49, 50, 54, 56, 65, 75]. These models typically map a random latent vector to a radiance field. At inference time, the model can generate novel views by inverting a target image to the latent space [1]. GRAF and PiGAN [7, 54] are the first technique to learn a 3D volumetric generative model trained with an adversarial loss on in-the-wild datasets. Since neural radiance fields are computationally expensive, training them in an adversarial setting requires an efficient representation. EG3D [9] proposes a tri-plane representation, which enables training lightweight neural radiance field as a 3D GANs, resulting in state-of-the-art synthesis results. Due to memory limitations, such generative models can be trained only at limited resolutions. They commonly rely on an additional 2D super-resolution module to generate more details [7, 9, 18, 56], which results in the loss of 3D consistency. Recent works render 3D consistent views by avoiding a 2D super-resolution module [6, 61]. MoRF [61] learns a conditional NeRF [34] for human heads from multiview images captured using a polarisation based studio setup that helps to learn separate diffuse and specular image components. Their dataset consists of 15 real identities and is supplemented with synthetic renderings to generate more views. Their method is limited to generating results in the studio setting and does not generalise to in-the-wild scenes. Cao et al. 2022 [6] train a universal avatar prior that can be finetuned to a target subject with a short mobile phone capture of RGB and depth. Their underlying representation follows Lombardi et al. [29]. A popular option for novel view synthesis from sparse inputs is formulating the task as an auto-encoder and perform image-based rendering. 
This family of methods [11, 32, 63, 70] follow a feedforward approach of generalisation to novel scenes by training a convolutional encoder that maps input images to pixel aligned features that condition a volumetric representation of the scene. Multiple works extend this approach with additional priors including keypoints [32], depth maps [19, 46, 66], or correspondences [58]. KeypointNeRF [32] employs an adapted positional encoding strategy based on 3D keypoints. DINER [46] includes depth maps estimated from pretrained models to bootstrap the learning of density field and sample the volume more efficiently around the expected depth value. Employing our face prior outperforms these methods (see Tbl. 1, Fig. 8 and 9). ## 3 Method We propose a prior model for faces that can be finetuned to very sparse views. The finetuned model can generate ultra-high resolution novel view synthesis with intricate details like individual hair strands, eyelashes, and skin pores (Fig. 1). In this section, we first introduce neural radiance fields [34] in Sec. 3.1 and our prior model in Sec. 3.2. We then outline our reconstruction pipeline in Sec. 3.3. ### Background A NeRF [33] represents a scene as a volumetric function \(f:(\mathbf{x},\mathbf{d})\rightarrow(\mathbf{c},\sigma)\) which maps 3D locations \(\mathbf{x}\) to a radiance \(\mathbf{c}\) and a density \(\sigma\), which is modelled using a multi-layer perceptron (MLP). The radiance is additionally conditioned on the view direction \(\mathbf{d}\) to support view dependent effects such as specularity. In order to more effectively represent and learn high frequency effects, each location is positionally encoded before being passed to the MLP. Given a NeRF, a pixel can be rendered by integrating along its corresponding camera ray in order to obtain the radiance or colour value \(\mathbf{\hat{c}}=\mathbf{F}(\mathbf{r})\). Assuming a predetermined near and far camera plane \(t_{n}\) and \(t_{f}\), the integrated radiance of the camera ray can be computed using the following equation: \[\mathbf{F}(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t))\mathbf{c }(\mathbf{r}(t),\mathbf{d})dt, \tag{1}\] \[\text{where }T(t)=\text{exp}\left(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))ds \right). \tag{2}\] In practice, this is estimated using raymarching. The original NeRF implementation approximated the ray into a discrete number of sample points, and estimated the alpha value of each sample by multiplying its density with the distance to the next sample. They further improve quality using a coarse-to-fine rendering method, by first distributing samples uniformly between the near and far planes, and then importance sampling the quadrature weights. Mip-NeRF [2] solves the classic anti-aliasing problem resulting from discrete sampling in a continuous space. This is achieved by sampling conical volumes along the ray. MipNeRF360 [3] also introduced an efficient pre-rendering step; a uniformly sampled coarse rendering pass by a proposal network, which predicts the sampling weights instead of the density and colour values using a lightweight MLP. This is followed by an importance-sampled NeRF rendering step. We incorporate both of these ideas in our model. ### Face Prior Model Our prior model is a conditional neural radiance field \(F_{\theta}\) that is trained as an auto-decoder [5, 50]. 
Given a ray \(\mathbf{r}\) and a latent code \(\mathbf{w}\), \(F_{\theta}\) predicts a colour \(\mathbf{\hat{c}}=\mathbf{F}_{\theta}(\mathbf{r},\mathbf{w})\) with volumetric rendering [34]. The architecture of the prior model is based on MipNeRF360 [3] and consists of two MLPs. Unlike MipNeRF360, the MLPs are conditioned on a latent code \(\mathbf{w}_{\text{identity}}\), representing the identity. The first MLP--the _proposal_ network--predicts density only. The second MLP--the _NeRF_ MLP--predicts both density and colour. Both MLPs take an encoded point \(\tilde{\gamma}_{\mathbf{x}}(\mathbf{x})\) and a latent code \(\mathbf{w}\) as input, where \(\tilde{\gamma}_{\mathbf{x}}(\cdot)\) denotes a function for integrated positional encodings [2]. The NeRF MLP further takes the positionally encoded view direction \(\gamma_{\mathbf{v}}(\mathbf{d})\) as input (without integration for the positional encoding). Fig. 5 gives an overview of the backbone NeRF MLP of our prior model. The latent code is concatenated at each layer. Unlike state-of-the-art generative models [8, 18, 50], our model also conditions on the view direction \(\mathbf{d}\). For training, we sample random rays \(\mathbf{r}\) and render the output colour \(\mathbf{\hat{c}}\) as described in Sec. 3.1. Given \(N\) training subjects, we optimise over both the network parameters \(\theta\) Figure 4: Overview. We train an implicit prior model on low-resolution multi-view images (left). At test time, we fit the prior model to as few as two images of a target identity. A naïve optimisation without inversion or regularisation leads to strong view-dependent colour distortions and fuzzy surface structures, see Sec. 6.4 and Fig. 11. To solve this, we first find a good initialisation through inversion (middle) and then finetune all model parameters under additional constraints for geometry \(\mathcal{L}_{\text{normal}}\) and appearance \(\mathcal{L}_{v}\) (right). and the latent codes \(\mathbf{w}_{1..N}\). Our objective function is \[\operatorname*{arg\,min}_{\theta,\mathbf{w}_{1..N}}\mathcal{L}_{\text{prior}}= \mathcal{L}_{\text{recon}}+\lambda_{\text{prop}}\mathcal{L}_{\text{prop}}, \tag{3}\] with \(\lambda_{\text{prop}}=1\). We describe the loss terms \(\mathcal{L}_{\text{recon}}\) and \(\mathcal{L}_{\text{prop}}\) for a single ray. The final loss is computed as the expectation over all rays in the training batch. The objective function has a data term comparing the predicted colour with the ground truth \(\mathcal{L}_{\text{recon}}=\|\mathbf{F}_{\theta}(\mathbf{r},\mathbf{w})- \mathbf{c}\|_{1}\), as well as a weight distribution matching loss term between the NeRF MLP and the proposal MLP \(\mathcal{L}_{\text{prop}}\). The latter is the same as in Mip-NeRF360 [3]. We refrain from regularising the latent space, and we disable the distortion loss. As our scene is not unbounded, we also disable the 360-parameterisation or space-warping of Mip-NeRF360. We train the prior model for 1 Mio. steps on multi-view images of resolution \(512\times 768\). Please refer to Sec. 4 for details about the training set. ### Volumetric Reconstruction Pipeline Figure 4 illustrates the reconstruction pipeline, which comprises three steps: 1) Preprocessing and head alignment, 2) inversion, and 3) model fitting. This section describes each step in detail. #### 3.3.1 Preprocessing We estimate camera parameters and align the heads to a predefined canonical pose during the data preprocessing stage. 
For the studio setting, we calibrate the cameras and estimate 3D keypoints by triangulating detected 2D keypoints; for in-the-wild captures, we use Mediapipe [30] to estimate the camera positions and 3D keypoints. We align and compute a similarity transform to a predefined set of five 3D keypoints (outer eye corners, nose, mouth centre, and the chin) in a canonical pose. Please see the supp. mat. for details. #### 3.3.2 Inversion The reconstruction results depend on a good initialisation of the face geometry (see Tbl. 2). We solve an optimisation problem to find a latent code that produces a good starting point [1]. Given \(K\) views of a target identity, we optimise with respect to a new latent code while keeping the network weights frozen. Let \(P\) be a random patch sampled from one of the \(K\) images of the target identity and \(P_{\mathbf{w}}\) be a patch rendered by our prior model when conditioning on the latent code \(\mathbf{w}\). The latent code of the target identity \(\mathbf{w}_{\text{target}}\) is recovered by minimising the following objective function: \[\mathbf{w}_{\text{target}}=\operatorname*{arg\,min}_{\mathbf{w}}\mathcal{L}_{ \text{recon}}+\lambda_{\text{LPIPS}}\mathcal{L}_{\text{LPIPS}}, \tag{4}\] where \(\mathcal{L}_{\text{recon}}=\frac{1}{|P|}\|\hat{P}_{\mathbf{w}}-P\|\) is the same loss as in Eq. 3, but computed over an image patch, and \(\mathcal{L}_{\text{LPIPS}}(\hat{P}_{\mathbf{w}},P)\) is a perceptual loss[74] with \(\lambda_{\text{LPIPS}}=0.2\). We optimise at the same resolution as the prior model after removing the background [42]. #### 3.3.3 Model Fitting The goal of model fitting is to adapt the weights of the prior model for generating novel views of a target identity at high resolutions. We do this by finetuning the weights of the prior model to a target identity from sparse views. Please note that the prior model is trained on _low resolution_ and is optimised to reconstruct a _large set of identities_ from _many views_ for each identity, see Sec. 5. After model fitting, the model should generate _high-resolution novel views_ with intricate details like individual hair strands for a _single_ target identity given as few as _two_ views. Training a NeRF model on sparse views leads to major artifacts because of a distorted geometry [36] and overfitting to high frequencies [68]. We find that correctly initialising the weights of the model avoids floater artifacts and leads to high-quality novel view synthesis. We initialise the model weights with the pretrained prior model and use the latent code \(\mathbf{w}_{\text{target}}\) obtained through inversion (Sec. 3.3.2). Fig. 11 shows that naively optimising without any further constraints leads to overfitting to the view direction (first column). Regularising the weights of the view branch causes fuzzy surface structures (second column), which can be mitigated using a normal consistency loss [59] (third column). We initialise the model with the weights of the prior and optimise it given the objective function \[\operatorname*{arg\,min}_{\theta_{\text{target}},\mathbf{w}_{ \text{target}}}\mathcal{L}_{\text{fit}} =\mathcal{L}_{\text{recon}}+\lambda_{\text{prop}}\mathcal{L}_{\text {prop}}\] \[+\lambda_{\text{normal}}\mathcal{L}_{\text{normal}}+\lambda_{v} \mathcal{L}_{v}, \tag{5}\] Figure 5: Prior Model Architecture. Our prior model extends the Mip-NeRF360 [3] architecture with a conditioning input at each layer of the trunk MLP. 
Unlike SOTA generative NeRF models [9, 18, 50], our model conditions both on a latent code _and_ a view direction, which enables view-dependent effects. During model fitting to very few images, we prevent overfitting by regularising the view direction weights. See Fig. 11 for an example. where the loss terms \(\mathcal{L}_{\text{recon}}\), \(\mathcal{L}_{\text{prop}}\) and the hyperparameter \(\lambda_{\text{prop}}\) are the same as in Eq. 3. The regulariser for the normals \(\mathcal{L}_{\text{normal}}\) is the same as in RefNeRF [59]. We regularise the weights of the view branch with \(\mathcal{L}_{v}=\|\theta_{v}\|^{2}\), where the parameters \(\theta_{v}\) correspond to weights of the connections between the encoded view direction and the output, see the highlighted box in Fig. 5. We set \(\lambda_{\text{normal}}=0.001\) and \(\lambda_{v}=0.0001\) and optimise until convergence. Since our model generates faces that are aligned to a canonical pose and location (Sec. 5), the rendering volume can be bounded by a rectangular box. We set the density outside this box to zero for the final rendering. ## 4 Dataset We capture a novel high-quality multi-view dataset of diverse human faces from 1450 identities with a neutral facial expression under uniform illumination, see Fig. 6. 13 camera views are distributed uniformly across the frontal hemisphere. Camera calibration is performed prior to every take to obtain accurate camera poses. We hold out 15 identities for evaluation and train on the rest. The camera images are of \(4096\times 6144\) resolution. We made a concerted effort for a diverse representation of different demographic categories in our dataset, but acknowledge the logistical challenges in achieving an entirely equitable distribution. We provide more details of the demographic breakdown of the dataset in the supplementary document. To assess the out-of-distribution performance of our method we show results on the publicly available Facescape multi-view dataset [67]. We also acquire a handful of in-the-wild captures of subjects using a mobile camera to qualitatively demonstrate the generalisation capability of our method further. ## 5 Experiments PreprocessingWe perform an offline head alignment to a canonical location and pose. This step is crucial to learn a meaningful prior over human faces. For each subject, we estimate five 3D keypoints for the eyes, nose, mouth, and chin and align the head to a canonical location and orientation. The canonical location is defined as the median location of the five keypoints across the first 260 identities of our training set. For an illustration and more details, please see the supplementary document. Prior Model TrainingWe train the prior model with our pre-processed dataset containing 1450 identities and 13 camera views. To make our training computationally tractable, we train versions of our prior model at a lower resolution. We train two versions of our model, at 256\(\times\)384 and 512\(\times\)768 image resolution. The lower resolution model is trained only for the purpose of quantitative evaluation against other SOTA methods, to ensure fair comparison against other methods that cannot be trained at a higher resolution due to compute and memory limitations. We provide details about our training hardware and hyperparameters in the supplementary document. 
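To make the head-alignment preprocessing concrete, the following is a minimal NumPy sketch of a similarity-transform (Procrustes/Umeyama-style) fit between a subject's five 3D keypoints and their canonical locations, as described above; the function names and the toy keypoints are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np


def similarity_transform(source, target):
    """Estimate scale s, rotation R, translation t with s*R@source_i + t ~= target_i.

    Umeyama-style closed-form solution for two corresponding (N, 3) point sets,
    e.g. the five face keypoints and their canonical locations.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    src_c, tgt_c = source - mu_s, target - mu_t
    var_s = (src_c ** 2).sum() / len(source)

    # Cross-covariance between target and source, and its SVD.
    cov = tgt_c.T @ src_c / len(source)
    U, D, Vt = np.linalg.svd(cov)

    # Reflection handling keeps the solution a proper rotation (det(R) = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_t - s * R @ mu_s
    return s, R, t


def align_points(points, s, R, t):
    """Apply the similarity transform to an (N, 3) array of points."""
    return s * points @ R.T + t


if __name__ == "__main__":
    # Hypothetical canonical keypoints (outer eye corners, nose, mouth centre, chin).
    canonical = np.array([[-3.0, 1.0, 0.0], [3.0, 1.0, 0.0],
                          [0.0, 0.0, 2.0], [0.0, -2.0, 1.0], [0.0, -5.0, 0.5]])
    # A subject's keypoints: scaled, rotated about the y-axis, and shifted.
    angle = np.deg2rad(10.0)
    R_true = np.array([[np.cos(angle), 0.0, np.sin(angle)],
                       [0.0, 1.0, 0.0],
                       [-np.sin(angle), 0.0, np.cos(angle)]])
    subject = 1.3 * canonical @ R_true.T + np.array([0.5, -0.2, 4.0])

    s, R, t = similarity_transform(subject, canonical)
    print(np.allclose(align_points(subject, s, R, t), canonical, atol=1e-6))  # True
```

The estimated transform would then be applied to each subject's cameras (or, equivalently, to the scene) so that all training identities share the same canonical pose and location.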
ComparisonsWe perform evaluations on three different datasets: Our high-quality studio dataset, a publicly available studio dataset (Facescape [67]), and in-the-wild captures from a mobile and a cellphone camera. For the studio datasets, we assume calibrated cameras. For the in-the-wild captures, we estimate camera parameters with Medipape [30]. The metrics for the quantitative comparisons are computed after cropping the images to squares and setting the background to black with foreground masks from a state-of-the-art network [42]. For more details, please refer to the supplementary material. ## 6 Results We perform extensive evaluation and experiments to demonstrate i) our core claims - high resolution, few shot, in-the-wild synthesis, ii) improved performance over the state-of-the-art methods, iii) ablation of various design choices. We also encourage the reader to see the video results and more insightful evaluations in the supp. mat. \begin{table} \begin{tabular}{r c c c} \hline \hline **Method** & **PSNR \(\uparrow\)** & **SSIM \(\uparrow\)** & **LPIPS \(\downarrow\)** \\ \hline FreeNeRF [68] & 15.02 & 0.6795 & 0.3093 \\ EG3D-based prior [9] & 19.70 & 0.7588 & 0.2897 \\ Learnit [57] & 20.04 & 0.7716 & 0.3299 \\ RegNeRF [36] & 20.40 & 0.7432 & 0.2858 \\ KeypointNeRF [32] & 22.79 & 0.7878 & 0.2713 \\ \hline **Ours** & **25.69** & **0.8039** & **0.1905** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with related works at 1K resolution on _two_ views of our studio dataset. The metrics are computed as the average over six views of three holdout subjects. Our method outperforms the related works by a clear margin. For a visual comparison, please refer to Fig. 8. The supp. mat. contains metrics and visuals for more input views. Figure 6: Exemplar images of our captured dataset. Our dataset contains 1450 different subjects (_bottom row_) captured under 13 different cameras on the frontal hemisphere (_top rows_). ### Ultra-high Resolution Synthesis We demonstrate ultra high resolution synthesis after fine-tuning our 512\(\times\)768 prior model to sparse high-resolution images in the studio setting (Fig. 1) and in-the-wild (Fig. 7). 4K Novel Views from Three ViewsFigure 1 shows \(4096\times 4096\) (4K) renderings after finetuning to three views of a held-out subject from our studio dataset. Note the range of the rendered novel views and the quality of synthesis results for such an out-of-distribution test subject at 4K resolution. From just three images, our method learns a highly detailed and photorealistic volumetric model of a face. We synthesise smooth and 3D consistent camera trajectories while preserving challenging details such as individual hair strands, skin pores and eyelashes. Our model learns both consistent geometry and fine details of individual hair strands and microgeometry of the skin, making the synthesised images barely distinguishable from captured views. Please see the supplementary material for video results and results on other subjects. 
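As a rough illustration of the evaluation protocol described in the Comparisons paragraph above (cropping to a square, masking the background to black, then computing PSNR, SSIM and LPIPS), here is a small Python sketch. It assumes float images in [0, 1], scikit-image >= 0.19 and the lpips package; the centre-crop logic and the choice of LPIPS backbone are our own simplifications, not the authors' evaluation code.

```python
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity

_lpips_net = lpips.LPIPS(net="alex")  # perceptual metric; 'alex' backbone is an assumption


def center_square_crop(img):
    """Crop an (H, W, C) image to a centred square."""
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    return img[top:top + side, left:left + side]


def masked_metrics(pred, gt, mask):
    """PSNR/SSIM/LPIPS between prediction and ground truth with a black background.

    pred, gt: (H, W, 3) float arrays in [0, 1]; mask: (H, W) foreground mask in [0, 1].
    """
    pred = center_square_crop(pred * mask[..., None])
    gt = center_square_crop(gt * mask[..., None])

    mse = np.mean((pred - gt) ** 2)
    psnr = 10.0 * np.log10(1.0 / max(mse, 1e-12))
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)

    # lpips expects (N, 3, H, W) tensors scaled to [-1, 1].
    to_tensor = lambda x: torch.from_numpy(x).float().permute(2, 0, 1)[None] * 2.0 - 1.0
    lp = _lpips_net(to_tensor(pred), to_tensor(gt)).item()
    return psnr, ssim, lp


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.random((256, 256, 3))
    pred = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0.0, 1.0)
    mask = np.ones((256, 256))
    print(masked_metrics(pred, gt, mask))
```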
2K Novel Views from Two in-the-wild Views Our method also affords reconstruction from in-the-wild captures from a single camera. We use a digital camera to capture two images. Results are shown in Fig. 7. The upper row was captured outdoors in front of a wall; the bottom row was captured in a room. Please see the supplementary material for more examples and videos. \begin{table} \begin{tabular}{r c c c} \hline **Initialization** & **PSNR \(\uparrow\)** & **SSIM \(\uparrow\)** & **LPIPS \(\downarrow\)** \\ \hline Furthest & 23.91 & 0.7876 & 0.2041 \\ Nearest & 24.41 & 0.7900 & 0.2002 \\ Mean & 24.61 & 0.7934 & 0.1959 \\ Noise & 24.66 & 0.7957 & 0.1998 \\ Zeros & 24.65 & 0.7941 & 0.1944 \\ \hline Inversion (**Ours**) & **25.69** & **0.8040** & **0.1905** \\ \hline \end{tabular} \end{table} Table 2: Ablation on various types of initialising \(\mathbf{w}_{\text{target}}\) when fine-tuning the model. We compare taking the mean across all latent codes during training; initialising it with zeros or Gaussian noise; and copying the latent code of the nearest or furthest neighbour in the training set. Inversion (**Ours**) performs best. Please refer to the supplementary material for visual examples. Figure 8: Visual comparison when given two target views. Our method consistently produces more pleasing results. Please see Tbl. 1 for metrics and the supplementary material for implementation details and results on more than two target views. Figure 7: In-the-wild Results. We reconstruct a target identity from two images acquired with a consumer camera (left). Note how the novel views can extrapolate from the input camera angles. The inlays show the normals (top) and depth (bottom). The hair density is low, hence the grey normal colour in that region. We encourage the reader to see the supp. mat. for the high-resolution results and videos. ### Comparison with Related Work Our goal is high-resolution novel view synthesis from sparse inputs. We perform comparisons by training related works [9, 32, 37, 57, 68] on our studio dataset and rendering results for unseen views at resolution \(1024\times 1024\) (1K). Since the task of novel view synthesis becomes substantially easier when given more views of the target subject, we perform comparisons for different numbers of views ranging from two to seven. Fig. 8 and Tbl. 1 show that our method can handle difficult cases at high resolution and clearly outperforms all related works when reconstructing from two views. Please see the supp. mat. for results on more views. We observe that some of the related methods perform significantly better at lower resolutions and when given more than just two views of the target subject. Hence, we complement our comparisons with a comparison on the FaceScape dataset [67]. We follow the setting of the best performing related work, DINER [46], and use four reference images at resolution \(256\times 256\). Fig. 9 displays visuals and the supplementary document provides metrics. Note that KeypointNeRF [32] and DINER [46] were trained on FaceScape while ours is not. This means that our scores represent results in the "out-of-distribution" setting. Figure 10: Single Image Reconstruction Results. From left to right: input image captured using a studio setup, synthesised views around the subject face using a single frontal view for model fitting. Figure 9: Comparison with the state-of-the-art on holdout identities from FaceScape [67]. Each method is given four input views and we show novel views and the L1 residue. Please see the supp. mat. for implementation details, more examples, and detailed metrics. ### Single Image Fitting Our method is also capable of fitting to a single image and still produces detailed results. We show such results on held-out test subjects from our dataset in Fig. 10. Note the consistent depth and normal maps and photorealistic renderings. This indicates that our model learns a strong prior over head geometry, which helps it resolve depth ambiguity and reconstruct a cohesive density field for the head, including challenging regions like hair. ### Ablations Initialisation The initialisation of the latent code plays a key role in achieving good results. We ablate various initialisation choices: i) a zero vector, ii) Gaussian noise, iii) the mean over the training latent codes, iv) the nearest and furthest neighbour in the training set defined by a precomputed embedding [53], and v) inversion (Ours). We finetune the prior model to two views of three holdout identities and report the results in Tbl. 2. Inversion performs best in all metrics. Regularisation We also ablate the choice of regularisation for the model finetuning. Fig. 11 shows that without any regularisation, the view branch of the model overfits to the view direction from the sparse input signal. We observe that the parameter weights of the view branch become very large and dominate the colour observed from a particular view. To mitigate this, we regularise the L2 norm of these weights using \(L_{v}\) (green highlight in Fig. 5). However, the model still overfits by generating a fuzzy surface that produces highly specular effects from the optimised views but has incorrect geometry. To regularise the geometry, we extend the trunk of our model with a branch predicting normals and supervise it with the analytical normals [59]. With both regularisation terms, the model can be robustly fit to a target identity from very sparse views; an illustrative sketch of how these terms combine in the fitting objective is given below. Challenging Lighting Conditions Our method can generate high-quality novel views even under challenging lighting conditions with shadows and specular reflections, see Fig. 12. Further Ablations We perform further ablations for fitting to a higher number of target views, for different configurations of our prior models, and for frozen latent codes during model finetuning. Please see the supplementary material for results. ## 7 Conclusion We present a method that can create ultra high-resolution NeRFs of unseen subjects from as few as two images, yielding quality that surpasses other state-of-the-art methods. While our method generalises well along several dimensions such as identity, resolution, viewpoint, and lighting, it is also impacted by the limitations of our dataset. While minor deviations from a neutral expression such as smiles can be synthesised, it struggles with extreme expressions. Clothing and accessories are also harder to synthesise. We show examples of such failure cases in the supplementary. Our model fitting process can take a considerable amount of time, particularly at higher resolutions. While some of these problems can be solved with more diverse data, others are excellent avenues for future work. **Acknowledgments. We thank Emre Aksan for insightful discussions and Malte Prinzler for sharing DINER results.** Figure 11: Ablation on the choice of regularisers. Without any regularisation, the view branch of the model overfits to the view direction from the sparse input signal. Additional regularisers allow the model to fit to a target identity from very sparse views.
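The following is a minimal PyTorch-style sketch of how the two regularisers ablated above (the view-branch weight penalty \(\mathcal{L}_v\) and a normal-consistency term) can be combined with the reconstruction and proposal terms of Eq. 5 during model fitting. Module and variable names are illustrative assumptions; in particular, `view_branch_params` and the way predicted and analytic normals are obtained stand in for the corresponding parts of the actual model.

```python
import torch
import torch.nn.functional as F


def fitting_loss(pred_rgb, gt_rgb, prop_loss, pred_normals, grad_normals, weights,
                 view_branch_params, lambda_prop=1.0, lambda_normal=1e-3, lambda_v=1e-4):
    """Sketch of the model-fitting objective (Eq. 5).

    pred_rgb, gt_rgb:   (B, 3) rendered and ground-truth colours per ray
    prop_loss:          scalar proposal-weight matching term (as in Mip-NeRF 360)
    pred_normals:       (B, S, 3) normals predicted by the extra normal branch
    grad_normals:       (B, S, 3) analytic normals from the density gradient
    weights:            (B, S) volume-rendering weights per sample
    view_branch_params: iterable of parameters connecting view direction to colour
    """
    # Data term: L1 reconstruction on the rendered colours.
    l_recon = F.l1_loss(pred_rgb, gt_rgb)

    # Normal consistency (Ref-NeRF-style): weighted agreement between predicted
    # and density-gradient normals, averaged over rays.
    l_normal = (weights * (pred_normals - grad_normals).pow(2).sum(-1)).sum(-1).mean()

    # View-branch regulariser: squared L2 norm of the view-direction weights.
    l_v = sum(p.pow(2).sum() for p in view_branch_params)

    return l_recon + lambda_prop * prop_loss + lambda_normal * l_normal + lambda_v * l_v


if __name__ == "__main__":
    B, S = 8, 128
    loss = fitting_loss(
        pred_rgb=torch.rand(B, 3), gt_rgb=torch.rand(B, 3),
        prop_loss=torch.tensor(0.1),
        pred_normals=torch.randn(B, S, 3), grad_normals=torch.randn(B, S, 3),
        weights=torch.rand(B, S),
        view_branch_params=[torch.randn(128, 283, requires_grad=True)],
    )
    print(loss.item())
```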
Figure 12: We show results for challenging lighting conditions with shadows and specular reflections, e.g., on the forehead. The right column lists PSNR, SSIM, and LPIPS. ## References * [1]R. Abdal, Y. Qin, and P. Wonka (2019) Image2stylegan: how to embed images into the stylegan latent space?. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4432-4441. Cited by: SS1. * [2]J. T. Barron, B. Mildenhall, M. Tancik, P. Hedman, R. Martin-Brualla, and P. P. Srinivasan (2021) MIp-nerf: a multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5855-5864. Cited by: SS1. * [3]J. T. Barron, B. Mildenhall, D. Verbin, P. P. Srinivasan, and P. Hedman (2022) MIp-nerf 360: unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5470-5479. Cited by: SS1. * [4]P. Berard, D. Bradley, M. Nitti, T. Beeler, and M. Gross (2014-10) High-quality capture of eyes. ACM Trans. Graph.33 (6). External Links: ISSN 0021-9222, Link, Document Cited by: SS1. * [5]P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam (2018) Optimizing the latent space of generative networks. In Proceedings of the 35th International Conference on Machine Learning, pp. 2640-3498. Cited by: SS1. * [6]C. Cao, T. Simon, J. Kyu Kim, G. Schwartz, M. Zollhoefer, S. Saito, S. Lombardi, S. Wei, D. Belko, S. Yu, et al. (2022) Authentic volumetric avatars from a phone scan. ACM Transactions on Graphics (TOG)41 (4), pp. 1-19. Cited by: SS1. * [7]E. Chan, M. Monteiro, P. Kellnhofer, J. Wu, and G. Wetzstein (2020) Pi-gan: periodic implicit generative adversarial networks for 3d-aware image synthesis. In arXiv, Cited by: SS1. * [8]E. R. Chan, C. Z. Lin, M. A. Chan, K. Nagano, B. Pan, S. De Mello, O. Gallo, L. J. Guibas, J. Tremblay, S. Khamis, et al. (2022) Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16123-16133. Cited by: SS1. * [9]E. R. Chan, C. Z. Lin, M. A. Chan, K. Nagano, B. Pan, S. De Mello, O. Gallo, L. G. Guibas, J. Tremblay, S. Khamis, T. Karras, and G. Wetzstein (2021) Efficient geometry-aware 3d generative adversarial networks. In arXiv, Cited by: SS1. * [10]A. Chen, R. Liu, L. Xie, Z. Chen, H. Su, and J. Yu (2021) SofGAN: a portrait image generator with dynamic styling. ACM transactions on graphics. Cited by: SS1. * [11]A. Chen, Z. Xu, F. Zhao, X. Zhang, F. Xiang, J. Yu, and H. Su (2021) Mvsnerf: fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14124-14133. Cited by: SS1. * [12]Y. Deng, J. Yang, J. Xiang, and X. Tong (2022) Gram: generative radiance manifolds for 3d-aware image generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, Cited by: SS1. * [13]Y. Deng, J. Yang, J. Xiang, and X. Tong (2022) Gram: generative radiance manifolds for 3d-aware image generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, Cited by: SS1. * [14]J. I. Echevarria, D. Bradley, D. Gutierrez, and T. Beeler (2014-01) Capturing and stylizing hair for 3d fabrication. ACM Trans. Graph.33 (4). External Links: ISSN 0021-9222, Link, Document Cited by: SS1. * [15]C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning, pp. 
1126-1135. Cited by: SS1. * [16]P. Garrido, J. Zollhofer, C. Wu, D. Bradley, P. Perez, T. Beeler, and C. Theobalt (2016) Corrective 3D Reconstruction of Lips from Monocular Video. ACM Transactions on Graphics (TOG)35 (6). External Links: ISSN 0021-9222, Link, Document Cited by: SS1. * [17]P. Gotardo, J. Riviere, D. Bradley, A. Ghosh, and T. Beeler (2018-10) Practical dynamic facial appearance modeling and acquisition. ACM Trans. Graph.37 (6). External Links: ISSN 0021-9222, Link, Document Cited by: SS1. * [18]J. Gu, L. Liu, P. Wang, and C. Theobalt (2021) Stylenfer: a style-based 3d-aware generator for high-resolution image synthesis. arXiv preprint arXiv:2110.08985. Cited by: SS1. * [19]G. Zhaoi Chen, C. C. Loy, and Z. Liu (2023) Sparsenerf: distilling depth ranking for few-shot novel view synthesis. Technical Report. Cited by: SS1. * [20]A. Jain, M. Tancik, and P. Abbeel (2021) Putting nerf on a diet: semantically consistent few-shot view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 5885-5894. Cited by: SS1. * [21]R. Jensen, A. Dahl, G. Vogiatzis, E. Tola, and H. Aanaes (2014) Large scale multi-view stereopsis evaluation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 406-413. Cited by: SS1. * [22]T. Karras, T. Aila, S. Laine, and J. Lehtinen (2017) Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196. Cited by: SS1. * [23]T. Karras, S. Laine, and T. Aila (2018) A style-based generator architecture for generative adversarial networks. CoRRabs/1812.04948. Cited by: SS1. * [24]D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR, Cited by: SS1. * [25]A. Kundu, K. Genova, X. Yin, A. Fathi, C. Pantofaru, L. J. Guibas, A. Tagliasacchi, F. Dellaert, and T. Funkhouser (2022) Panoptic neural fields: a semantic object-aware neural scene representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12871-12881. Cited by: SS1. * [26]G. Li, A. Meka, F. Mueller, M. C. Buehler, O. Hilliges, and T. Beeler (2022) Eyenfer: a hybrid representation for photorealistic synthesis, animation and relighting of human eyes. ACM Transactions on Graphics (TOG)41 (4), pp. 1-16. Cited by: SS1. * [27]Z. Liu, P. Luo, X. Wang, and X. Tang (2015-12) Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), Cited by: SS1. * [28]S. Lombardi, T. Simon, J. Saragih, G. Schwartz, A. Lehrmann, and Y. Sheikh (2019-05) Neural volumes: learning dynamic renderable volumes from images. ACM Trans. Graph.38 (4), pp. 65:1-65:14. Cited by: SS1. * [29] Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, and Jason Saragih. Mixture of volumetric primitives for efficient neural rendering. _ACM Trans. Graph._, 40(4), jul 2021. * [30] Camilo Lugaresi, Jiuqiang Tang, H Radon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Guang Yong, Juhyun Lee, et al. Mediapipe: A framework for building perception pipelines. _arXiv preprint arXiv:1906.08172_, 2019. * [31] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 4460-4470, 2019. * [32] Marko Mihajlovic, Aayush Bansal, Michael Zollhoefer, Siyu Tang, and Shunsuke Saito. KeypointNeRF: Generalizing image-based volumetric avatars using relative spatial encoding of keypoints. In _European conference on computer vision_, 2022. * [33] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In _ECCV_, 2020. * [34] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng.
Nerf: Representing scenes as neural radiance fields for view synthesis. In _European Conference on Computer Vision_, pages 405-421. Springer, 2020. * [35] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. _arXiv preprint arXiv:1803.02999_, 2018. * [36] Michael Niemeyer, Jonathan T Barron, Ben Mildenhall, Mehdi SM Sajjadi, Andreas Geiger, and Noha Radwan. Regrenf: Regularizing neural radiance fields for view synthesis from sparse inputs. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5480-5490, 2022. * [37] Michael Niemeyer, Jonathan T. Barron, Ben Mildenhall, Mehdi S. M. Sajjadi, Andreas Geiger, and Noha Radwan. Regrenf: Regularizing neural radiance fields for view synthesis from sparse inputs. In _Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_, 2022. * [38] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11453-11464, 2021. * [39] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 3504-3515, 2020. * [40] Michael Oechsle, Songyou Peng, and Andreas Geiger. Unisurf: Unifying neural implicit surfaces and radiance fields for multi-view reconstruction. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 5589-5599, 2021. * [41] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. Stylesdf: High-resolution 3d-consistent image and geometry generation. _arXiv e-prints_, pages arXiv-2112, 2021. * [42] Rohit Pandey, Sergio Orts Escolano, Chloe Legendre, Christian Haene, Sofien Bouaziz, Christoph Rhemann, Paul Debevec, and Sean Fanello. Total relighting: learning to relight portraits for background replacement. _ACM Transactions on Graphics (TOG)_, 40(4):1-21, 2021. * [43] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 165-174, 2019. * [44] Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofen Bouaziz, Dan B Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. _ICCV_, 2021. * [45] Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofen Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, and Steven M. Seitz. Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields. _ACM Trans. Graph._, 40(6), dec 2021. * [46] Malte Prinzler, Otmar Hilliges, and Justus Thies. Diner: Depth-aware image-based neural radiance fields, 2022. * [47] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. _CoRR_, abs/2103.00020, 2021. * [48] Eduard Ramon, Gil Triginer, Janna Escur, Albert Pumarola, Jaime Garcia, Xavier Giro-i Nieto, and Francesc Moreno-Noguer. H3d-net: Few-shot high-fidelity 3d head reconstruction. 
In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 5620-5629, 2021. * [49] Pramod Rao, Mallikarjun B R, Gereon Fox, Tim Weyrich, Bernd Bickel, Hans-Peter Seidel, Hanspeter Pfister, Wojciech Matusik, Ayush Tewari, Christian Theobalt, and Mohamed Elgharib. Vorf: Volumetric relightable faces. _British Machine Vision Conference (BMVC)_, 2022. * [50] Daniel Rebain, Mark Matthews, Kwang Moo Yi, Dmitry Lagun, and Andrea Tagliascachi. Lolnerf: Learn from one look. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1558-1567, 2022. * [51] Daniel Roich, Ron Mokady, Amit H Bermano, and Daniel Cohen-Or. Pivotal tuning for latent-based editing of real images. _ACM Trans. Graph._, 2021. * [52] Sara Fridovich-Keil and Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In _CVPR_, 2022. * [53] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 815-823, 2015. * [54] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. _Advances in Neural Information Processing Systems_, 33:20154-20166, 2020. * [55] Vincent Sitzmann, Eric Chan, Richard Tucker, Noah Snavely, and Gordon Wetzstein. Metasdf: Meta-learning signed distance functions. _Advances in Neural Information Processing Systems_, 33:20154-20166, 2020. * [56] Vincent Sitzmann, Eric Chan, Richard Tucker, Noah Snavely, and Gordon Wetzstein. Metasdf: Meta-learning signed distance functions. _Advances in Neural Information Processing Systems_, 33:20154-2016, 2020. Processing Systems_, 33:10136-10147, 2020. * [56] Feitong Tan, Sean Fanello, Abhimitz Meka, Sergio Orts-Escolano, Danhang Tang, Rohit Pandey, Jonathan Taylor, Ping Tan, and Yinda Zhang. Volux-gan: A generative model for 3d face synthesis with hdri relighting. In _ACM SIGGRAPH 2022 Conference Proceedings_, SIGGRAPH '22, New York, NY, USA, 2022. Association for Computing Machinery. * [57] Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P Srinivasan, Jonathan T Barron, and Ren Ng. Learned initializations for optimizing coordinate-based neural representations. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2846-2855, 2021. * [58] Prune Truong, Marie-Julie Rakotosaona, Fabian Manhardt, and Federico Tombari. Sparf: Neural radiance fields from sparse and noisy poses. IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 2023. * [59] Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T Barron, and Pratul P Srinivasan. Ref-nerf: Structured view-dependent appearance for neural radiance fields. In _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 5481-5490. IEEE, 2022. * [60] Suhani Vora, Noha Radwan, Klaus Greff, Henning Meyer, Kyle Genova, Mehdi SM Sajjadi, Etienne Pot, Andrea Tagliasacchi, and Daniel Duckworth. Nesf: Neural semantic fields for generalizable semantic segmentation of 3d scenes. _arXiv preprint arXiv:2111.13260_, 2021. * [61] Daoye Wang, Prashanth Chandran, Gaspard Zoss, Derek Bradley, and Paulo Gotardo. Morf: Morphable radiance fields for multiview neural head modeling. In _ACM SIGGRAPH 2022 Conference Proceedings_, SIGGRAPH '22, New York, NY, USA, 2022. 
Association for Computing Machinery. * [62] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. _arXiv preprint arXiv:2106.10689_, 2021. * [63] Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul Srinivasan, Howard Zhou, Jonathan T. Barron, Ricardo Martin-Brualla, Noah Snavely, and Thomas Funkhouser. Ibrnet: Learning multi-view image-based rendering. In _CVPR_, 2021. * [64] C. Wu, D. Bradley, P. Garrido, M. Zollhofer, C. Theobalt, M. Gross, and T. Beeler. Model-Based Teeth Reconstruction. _ACM Transactions on Graphics (TOG)_, 35(6), 2016. * [65] Jianfeng Xiang, Jiaolong Yang, Yu Deng, and Xin Tong. Gram-hd: 3d-consistent image generation at high resolution with generative radiance manifolds. _arXiv preprint arXiv:2206.07255_, 2022. * [66] Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Humphrey Shi, and Zhangyang Wang. Sinnerf: Training neural radiance fields on complex scenes from a single image. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXII_, pages 736-753. Springer, 2022. * [67] Haotian Yang, Hao Zhu, Yanru Wang, Mingkai Huang, Qiu Shen, Ruigang Yang, and Xun Cao. Facescape: A large-scale high quality 3d face dataset and detailed riggable 3d face prediction. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2020. * [68] Jiawei Yang, Marco Pavone, and Yue Wang. Freenerf: Improving few-shot neural rendering with free frequency regularization. In _Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_, 2023. * [69] Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman. Volume rendering of neural implicit surfaces. _Advances in Neural Information Processing Systems_, 34:4805-4815, 2021. * [70] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelNeRF: Neural radiance fields from one or few images. In _CVPR_, 2021. * [71] Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, and Victor Lempitsky. Few-shot adversarial learning of realistic neural talking head models. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, October 2019. * [72] Jingbo Zhang, Xiaoyu Li, Ziyu Wan, Can Wang, and Jing Liao. Fdnerf: Few-shot dynamic neural radiance fields for face reconstruction and expression editing. _arXiv preprint arXiv:2208.05751_, 2022. * [73] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. _CoRR_, abs/2010.07492, 2020. * [74] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _CVPR_, 2018. * [75] Peng Zhou, Lingxi Xie, Bingbing Ni, and Qi Tian. CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis. 2021. ## Appendix A Supplementary Material This supplementary document provides more details about our experimental setting in Sec. B and supplementary results and ablations in Sec. C. For videos and ultra-high resolution results up to 4K, please see the project page. ## Appendix B Detailed Experimental Setting ### Architecture and Hyperparameters In the following, we describe the architecture of the prior and the finetuned model in detail and list the hyperparameters we used for training and finetuning our models. #### b.1.1 Prior Model Following Mip-NeRF [3], the prior model consists of two MLPs. 
The first MLP is the _proposal_ network that only predicts density. The second MLP a neural radiance field (_NeRF_) that predicts both density and colour. The proposal MLP has 4 linear layers with \((256+512)\times 256\) parameters: \(256\) neurons for the features from the previous branch and 512 neurons for the concatenated latent code. The NeRF MLP has 8 linear layers with \((1024+512)\times 1024\) parameters: 1024 neurons for the features from the previous branch and 512 neurons for the concatenated latent code. The total parameter count of our prior model including all latent codes is 14.6 Mio. During training and inference, we use three hierarchical sampling steps [34]. The first step uses 256 proposal samples, the second step 256 refined proposal samples, and the third step 128 NeRF samples. We use the same number of positional encoding frequencies for both the proposal and the NeRF MLPs. The integrated positional encoding for the trunk networks \(\hat{\gamma}_{\mathbf{x}}(\cdot)\) has 12 levels; the positional encoding \(\gamma_{\mathbf{v}}(\cdot)\) for the view direction has 4 levels, and it appends the view direction without positional encoding. The view branch of the NeRF MLP has a bottleneck with width 256. The positionally-encoded view direction is concatenated to the bottleneck features and processed by a linear layer of width \((256+27)\times 128\) before being projected to RGB (256 bottleneck features and 27 features from the positional encoding of the view direction). We optimise the prior model as an auto-decoder [5], where each identity has a latent code with 512 dimensions. Each training step samples 128 random rays from 8 views of 64 identities, which yields a batch size of \(65,536\). We train our prior model for 1 Mio. steps, which takes 144 hours (approximately 6 days) on 36 TPUs. We optimise our model using Adam [24] with \(\beta_{1}=0.9,\beta_{2}=0.999\). The learning rate starts at \(0.002\) and exponentially decays to \(0.00002\). We clip gradients with norms larger than 0.001. #### b.1.2 Inversion We perform inversion on the prior model to find a good initialisation for the finetuning. In each step, we sample 8 random patches of size \(32\times 32\) from all available views. We initialise the new latent code with zeros. The optimisation uses Adam with \(\beta_{1}=0.9,\beta_{2}=0.999\) and a fixed learning rate of \(0.001\). We optimise for \(1,500\) steps on 4 TPUs, which takes 10 minutes. #### b.1.3 Finetuned Model The architecture of the finetuned model is the same as the prior model, except for an additional linear layer that maps the features from the trunk to 3-d normal vectors. We create batches of \(8,912\) rays by sampling random pixels from all available views. We start with a learning rate of 0.001 and exponentially decay to \(0.00002\). The number of optimisation step depends on the resolution. For low-resolution (\(256\times 256\)), we optimise for \(25,000\) steps. We increase the number of optimisation steps for higher resolutions: \(50,000\) steps for \(512\times 512\); \(100,000\) steps for \(1024\times 1024\); \(200,000\) steps for \(2048\times 2048\); and \(300,000\) steps for \(4096\times 4096\). We always optimise on four TPUs. The model finetuning takes 4 hours for \(25,000\) steps and linearly increases for more training steps. #### b.1.4 Camera Alignment A crucial preprocessing step is to align all cameras to a canonical pose. 
As described in the main paper, we estimate five 3D keypoints on the outer eye corners, nose, mouth, and chin and calculate a similarity transform to the same five keypoints in a canonical space using Procrustes analysis. The canonical keypoints are computed as the median keypoint location across the first 260 subjects in our training set. Fig. 13 shows an example. Figure 13: Visualisation of the five keypoints used for aligning captured subjects to a canonical pose. ### Studio Dataset Our studio dataset consists of 1450 volunteers who were prompted to optionally self-report various characteristics like age, gender, skin colour, and hair colour. We report the statistics here and in Fig. 14. 60% of the participants were male, 38% female, 0.2% non-binary and the rest preferred not to state. The age of the participants was heavily centered in the 24-50 age group. We also note a bias in the self-reported appearance characteristics. The participants were also given the option to wear or remove their glasses, hence only a very small percentage (\(\sim\)1%) wore glasses during capture. The capture was performed over a period of many months. Initial captures contain a black background, which was later changed to a green screen to allow for better foreground segmentation if required. We do not mask out the background during prior model training. During finetuning, we estimate a foreground mask with a robust pretrained estimator [42]. Hence, our method works without any constraints on the background, as long as the camera poses are accurate. ## Appendix C Supplementary Results and Analysis This section supplements the results in the main paper with more visuals and detailed metrics. We provide supplementary results for comparisons with related works in Sec. C.1, more visuals for one- and few-shot synthesis in the studio setting and in-the-wild in Sec. C.2, and a detailed analysis of our ablations in Sec. C.3. ### Supplementary Comparisons This section supplements the comparison from the main paper with detailed metrics and visuals for individual holdout subjects. #### c.1.1 Comparisons on Our Studio Dataset This section provides supplementary results on our multiview studio dataset described in the main paper and in Sec. B.2. Note that our goal is novel view synthesis, so we refrain from comparing with methods that explicitly target geometry reconstruction [39, 40, 48, 62, 69]. We train the competing methods [9, 32, 37, 57, 68] on our dataset and compare with our results in Tbl. 8. In the following, we describe the experimental details for each competing method. For KeypointNeRF, we use their publicly available code and their default training and network settings. We manually chose 13 keypoints that closely resemble the ones shown in their paper (Fig. 23) and compute the near and far planes from our own dataset. We made a considerable effort to train their method at 1K resolution, but we found that their results at resolution 256 are of much higher quality than their results at 1K. Therefore, we present their results at both 1K resolution (Tbl. 8) and at 256 resolution (Tbl. 7). For the lower resolution comparison, we compare with our lower-resolution prior model trained at resolution \(256\times 384\). For the comparison with RegNeRF [36], we train their model with the default settings provided by the authors for the DTU dataset [21], except for adjusting the near / far planes and scene scaling. We also disable the loss from the appearance regulariser because the model is not available.
For FreeNeRF, we implement their frequency regularisation with a 90% schedule in our pipeline (a schematic sketch of this schedule is given at the end of this subsection). We do not employ their occlusion regularisation because it causes transparent surfaces and floaters on our dataset. For the learnt initialisation [57], we adapt their publicly available notebook to work with our dataset. For training the meta model, we set the batch size to 4096, the number of inner steps to 64, the number of samples along the ray to 128, and train for 15,000 steps. We run the inference-time optimisation for the same number of steps as ours: 100,000 steps. For the EG3D-based prior, we train a prior model with a tri-plane representation as proposed in Chan et al. [9]. The model is trained as an auto-decoder model similar to ours. We simultaneously optimise a per-identity latent code and the network weights to obtain an EG3D prior model that is finetuned to sparse views of a target subject for the same number of steps as ours. We do not apply our additional regularisers when finetuning EG3D. We train the EG3D prior on low-resolution images at resolution \(256\times 256\) that are super-resolved to resolution \(1024\times 1024\). The triplane resolution is \(256\times 256\) and the per-identity latent codes have dimensionality \(512\). Since the EG3D model requires rendering the full image, we reduce the number of initial samples per ray to 64 and the number of importance samples to 8. For all methods, we perform the same inference-time bounding-box-based culling as we did for our method. Table 8 lists metrics for experiments on 2, 3, 5, and 7 views, and Figs. 24, 25, and 26 show visual examples. Our method consistently outperforms related works. We do not compare with DINER [46], Sparse NeRF [19], and SPARF [58] on our dataset because their training code is not publicly available at the time of submission. \begin{table} \begin{tabular}{c c|c c c} \hline \(\mathcal{L}_{v}\) & \(\mathcal{L}_{\text{normal}}\) & **PSNR**\(\uparrow\) & **SSIM**\(\uparrow\) & **LPIPS**\(\downarrow\) \\ \hline \(\times\) & \(\times\) & 23.91 & 0.7787 & 0.2233 \\ \(\times\) & ✓ & 24.79 & 0.7839 & 0.2066 \\ ✓ & \(\times\) & 25.53 & 0.7996 & 0.1963 \\ \hline ✓ & ✓ & **25.69** & **0.8040** & **0.1905** \\ \hline \end{tabular} \end{table} Table 3: Ablation on regularisation when finetuning the model. The scores have been computed on models trained on two views with resolution \(1024\times 1024\) and averaged across six views of three holdout subjects. Please refer to Fig. 19 for visuals. Figure 14: Distribution of characteristics in our dataset: we report the percentage distribution of our dataset by age, gender, skin colour and hair colour.
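For reference, the frequency-annealing schedule adopted from FreeNeRF above can be sketched in a few lines of Python. This is an illustrative sketch rather than our actual implementation: the soft band-wise masking and the convention of keeping the lowest frequency band visible from the start are assumptions; only the 90% schedule is taken from the description above.

```python
import numpy as np

def frequency_mask(step, total_steps, n_freqs, schedule=0.9):
    """Illustrative FreeNeRF-style frequency annealing.

    Positional-encoding bands are unmasked linearly during the first
    `schedule` fraction of finetuning; afterwards all bands are active.
    The lowest band is kept visible from the start (assumption).
    """
    progress = min(step / (schedule * total_steps), 1.0)
    visible = progress * n_freqs  # fractional number of visible bands
    return np.clip(visible - np.arange(n_freqs) + 1.0, 0.0, 1.0)
```

The returned mask would multiply the positionally-encoded features band-wise before they enter the trunk MLP.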
For the EG3D-based prior [8], we train their model on Celeb-A [27] dataset at a 256 tri-plane and image resolution without the super-resolution module to ensure 3D consistent results. We note that their discretised volume representation leads to blurry results. ### Few-shot Synthesis Ultra High-resOur main setting is fitting to two or more views at a ultra-high resolution up to 4K. This goes far beyond the resolution of the prior model (\(512\times 768\)). Using at least two views provides the coverage from side angles such that the model can reconstruct intricate details like individual skin pores or a beard, \begin{table} \begin{tabular}{l c c c|c c c|c c c} \hline **Objective** & \multicolumn{3}{c}{**PSNR \(\uparrow\)**} & \multicolumn{3}{c}{**SSIM \(\uparrow\)**} & \multicolumn{3}{c}{**LPIPS \(\downarrow\)**} \\ Subject & A & B & C & A & B & C & A & B & C \\ \hline \(\arg\min_{\hat{a}_{\text{target}}}\) & 26.07 & 27.21 & 22.90 & 0.7949 & **0.8000** & 0.7998 & **0.1823** & 0.1651 & 0.2126 \\ \(\arg\min_{\hat{a}_{\text{target}},\mathbf{w}_{\text{target}}}\) (Ours) & **26.55** & **27.30** & **23.22** & **0.8113** & 0.7996 & **0.8009** & 0.1962 & **0.1650** & **0.2102** \\ \hline \end{tabular} \end{table} Table 4: The model finetuning performs best when optimising both the model parameters \(\Theta_{target}\) and the latent code \(\mathbf{w_{target}}\). All metrics were computed after finetuning to two views at 1K resolution. Visually, the optimisation results look very similar, see Fig. 20. Figure 15: Visual comparison with KeypointNeRF [32] on low-resolution. Please see Tab. 7 for metrics.
2309.06492
Discovering Long-lived Particles at DUNE
Long-lived particles (LLPs) arise in many theories beyond the Standard Model. These may be copiously produced from meson decays (or through their mixing with the LLP) at neutrino facilities and leave a visible decay signal in nearby neutrino detectors. We compute the expected sensitivity of the DUNE liquid argon (LAr) and gaseous argon (GAr) near detectors (ND) to light LLP decays. In doing so, we determine the expected backgrounds for both detectors, which have been largely overlooked in the literature, taking into account their angular and energy resolution. We show that searches for LLP decays into muon pairs, or into three pions, would be extremely clean. Conversely, decays into two photons would be affected by large backgrounds from neutrino interactions for both near detectors; finally, the reduced signal efficiency for $e^+ e^-$ pairs leads to a reduced sensitivity for ND-LAr. Our results are first presented in a model-independent way, as a function of the mass of the new state and its lifetime. We also provide detailed calculations for several phenomenological models with axion-like particles (coupled to gluons, to electroweak bosons, or to quark currents). Some of our results may also be of interest for other neutrino facilities using a similar detector technology (e.g. MicroBooNE, SBND, ICARUS, or the T2K Near Detector).
Pilar Coloma, Justo Martín-Albo, Salvador Urrea
2023-09-12T18:05:34Z
http://arxiv.org/abs/2309.06492v3
# Discovering Long-lived Particles at DUNE ###### Abstract Long-lived particles (LLPs) arise in many theories beyond the Standard Model. These may be copiously produced from meson decays (or through their mixing with the LLP) at neutrino facilities and leave a visible decay signal in nearby neutrino detectors. We compute the expected sensitivity of the DUNE liquid argon (LAr) and gaseous argon (GAr) near detectors (ND) to light LLP decays. In doing so, we determine the expected backgrounds for both detectors, which have been largely overlooked in the literature, taking into account their angular and energy resolution. We show that searches for LLP decays into muon pairs, or into three pions, would be extremely clean. Conversely, decays into two photons would be affected by large backgrounds from neutrino interactions for both near detectors; finally, the reduced signal efficiency for \(e^{+}e^{-}\) pairs leads to a reduced sensitivity for ND-LAr. Our results are first presented in a model-independent way, as a function of the mass of the new state and its lifetime. We also provide detailed calculations for several phenomenological models with axion-like particles (coupled to gluons, to electroweak bosons, or to quark currents). Some of our results may also be of interest for other neutrino facilities using a similar detector technology (e.g. MicroBooNE, SBND, ICARUS, or the T2K Near Detector). IFT-UAM/CSIC-23-111 IFIC/23-40, FTUV-23-0823.4331 ###### Contents * I Introduction * II Production and decay of long-lived particles * II.1 Benchmark Model I: Gluon Dominance scenario * II.2 Benchmark Model II: ALPs coupled through electroweak operators * II.3 Benchmark Model III: Charming ALPs * III Previous constraints * III.1 Visible decay searches. * III.2 Invisible decay searches. * III.3 Bounds from \(\mathbf{D}-\mathbf{\bar{D}}\) mixing. * III.4 Astrophysical bounds. * IV Targeting signals of LLP decays at the DUNE Near Detectors * IV.1 Simulation * IV.2 Event selection: \(\mu^{+}\mu^{-}\) decay channel * IV.3 Event selection: \(e^{+}e^{-}\) decay channel * IV.4 Event selection: \(\gamma\gamma\) decay channel * IV.5 Event selection: \(\pi^{+}\pi^{-}\pi^{0}\) decay channel * V Results * V.1 Model-independent sensitivity limits * V.2 Sensitivity limits for specific scenarios * V.2.1 Gluon dominance * V.2.2 ALPs coupled through EW operators * V.2.3 Charming ALPs * VI Summary and conclusions * A Bounds on \(\mathbf{D}\to\mathbf{\pi}\mathbf{a}\) from a reinterpretation of \(\mathbf{D}\to\mathbf{\tau}\mathbf{\nu}\) data Introduction While the evidence pointing to the existence of physics _beyond the Standard Model_ (BSM) is overwhelming, the new physics has so far eluded its discovery in direct-detection experiments and colliders. Nevertheless, until very recently our experimental strategy was mostly focused on unveiling the existence of new states with masses at or above the electroweak scale. Interestingly, current constraints are successfully evaded by a plethora of BSM physics models containing light, feebly interacting states that offer viable solutions to most open problems in the Standard Model (SM) and avoid large corrections to the Higgs mass. 
Popular examples of this kind of models include those with heavy neutral leptons (HNL), which could explain the observed pattern of neutrino masses and mixing as well as the observed baryon asymmetry of the universe [1; 2; 3]; or models with a rich dark sector, which offer novel candidates for dark matter and may be connected to the SM through renormalizable portals at low energies (see, e.g., Refs. [4; 5; 6; 7]). Thanks to their weak interactions with the SM, models of this sort typically include new particles that are long-lived and decay to SM states with a significant branching ratio. The existence of light, feebly interacting, unstable states can be probed in multiple ways, from their impact on cosmological observables and astrophysical objects, to direct or indirect signals in laboratory experiments and colliders. In the case of unstable particles with masses in the \(\mathcal{O}(0.1)\)-\(\mathcal{O}(10)\) GeV range, searches at fixed-target experiments typically offer the best constraints (see, for example, Refs. [8; 9] for recent reviews). The key in this case is that, once produced, a _long-lived particle_ (LLP) may propagate over tens or even hundreds of meters before decaying into visible final states in nearby detectors. In recent years, the experimental search for LLPs has received considerable attention from the neutrino community. Accelerator-based neutrino experiments are entering a precision era with the primary goal of discovering CP violation in the lepton sector, but they also offer all the necessary ingredients to conduct sensitive fixed-target searches: namely, high-intensity proton beams producing large fluxes of mesons, as well as versatile near detectors. The current generation of accelerator-based neutrino oscillation experiments set already some of the leading constraints for certain LLP models (including dark scalars [10; 11; 12], axion-like particles [13; 14; 15], or HNLs [16; 17; 18; 19], among others). These will surely be improved over the next decade by the two upcoming new-generation long-baseline neutrino oscillation experiments, Hyper-Kamiokande [20] and the Deep Underground Neutrino Experiment (DUNE) [21], currently under construction. In this work, we highlight the unique opportunity offered in this regard by DUNE, which will be exposed to the LBNF neutrino beam [22]. In addition to its very high intensity, which leads to a considerably higher flux of pions and kaons with respect to other facilities, the high proton energy available at LBNF (120 GeV) will allow the production of a significant flux of heavier resonances such as \(D\) mesons. This will enable the DUNE experiment to provide leading constraints on LLPs with masses above the kaon mass, a window that is otherwise challenging to explore for laboratory experiments. In order to make our results as model-independent as possible, we will present them as a function of the mass of the new state and its lifetime, in line with our past works [23; 24; 25; 13] (see also Refs. [26; 27; 28] for related works that also follow a model-independent approach). However, in order to put the expected sensitivities of DUNE in context and to ease their comparison to present limits, it is useful to consider a specific model. Thus, we will also consider models with pseudoscalar particles, which arise naturally as pseudo-Nambu-Goldstone bosons of a spontaneous global symmetry breaking, and are therefore ubiquitous in extensions of the SM. 
These particles are often referred to as _axion-like particles_ (ALPs), since the best-motivated example is the QCD axion [29; 30]. Generic models with light unstable pseudoscalars may lead to a wide set of new physics signals in neutrino detectors depending on their couplings to the SM, including ALP decays into pair of electrons, muons, photons, or multiple mesons. The DUNE experiment will offer the possibility to study many of these, thanks to its highly-capable suite of near detectors (ND) [31], which will include both a liquid argon (LAr) and a gaseous argon (GAr) time projection chamber (TPC). In particular, while many of the constraints on ALPs rely on their coupling to photons and electrons, ALP couplings to muons are subject to fewer limits. For a wide class of models, these can be particularly relevant in the mass window above \(2\,m_{\mu}\) where the decay channel \(a\to\mu\mu\) dominates. Modern facilities operating with TPC detectors offer excellent opportunities to provide leading constraints for these scenarios, as pointed out in Refs. [13; 32]. Moreover, as the imaging capabilities of TPCs allow the possibility of studying complex final states with multiple particles (a key advantage with respect to other detector technologies used in neutrino experiments), in this work we will also consider ALP decays leading to multiple pions (for similar studies, see Refs. [33; 34]). The multiple possibilities offered by DUNE to search for LLP decays have been pointed out in the literature before (e.g., in Refs. [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]). In most of these studies the effect of the background has been neglected, arguing that it could potentially be reduced to a negligible level by means of appropriate selection cuts. However, it should be kept in mind that the near detectors available at neutrino facilities are exposed to a high-intensity neutrino flux. The whole DUNE ND suite, for instance, will register more than 100 million neutrino interactions per year [31]. Even though the kinematic features of neutrino-nucleus scattering events are different than those expected from an LLP decay, reducing the expected background to a negligible level is a daunting task, as it was shown, for instance, in Refs. [41; 43] for the HNL scenario. The final states expected in the case of HNL decays through mixing are, however, different from those expected for other LLPs. In this work, we compute the expected backgrounds for generic final states involving two photons, two leptons, and multiple pions, with no missing energy. Thus, our background study is, a priori, applicable to BSM extensions including light vectors, scalars, or pseudoscalars. In principle, considering event rates from LLP-argon interactions at the ND, in addition to LLP decays, may provide additional sensitivity to certain BSM scenarios. This is specially so in regions of the parameter space corresponding to very long lifetimes, since the probability for an LLP to decay inside the detector becomes heavily suppressed. As pointed out in Refs. [35; 37; 45], the inclusion of scattering events for ALPs could be useful to close the so-called cosmological triangle; however, for ALPs above the MeV scale, the corresponding bounds are considerably worse than those obtained from decay searches (e.g., see Fig. 3 in Ref. [35]). In our case, the study of scattering signals would demand a more involved simulation and analysis of the relevant backgrounds, falling outside the scope of the present work. 
This article is structured as follows. Section II describes the computation of the decay signal of a generic LLP. It also summarizes the main details of the ALP benchmark models considered in this work, where we assume that the ALP is coupled to the SM with different sets of effective operators. Current constraints on these benchmark models are then summarized in Section III (as well as in Appendix A). Next, we discuss our evaluation of the backgrounds in Section IV, before presenting our results in Section V. We summarize and conclude in Section VI. ## II Production and decay of long-lived particles The computation of the number of decays of long-lived particles (LLP) in detectors located at a certain distance can be carried out as follows. We start from a Monte Carlo simulation of the production of parent mesons (technical details regarding these simulations are provided in Sec. IV). The number of mesons can be written in terms of the number of _protons on target_ (PoT) and the meson yield per PoT (\(Y_{M}\)) as \[\frac{dn_{M}}{dE_{M}d\Omega_{M}}=N_{\rm PoT}Y_{M}\frac{d^{2}\rho_{M}}{dE_{M}d \Omega_{M}}, \tag{1}\] where \(d^{2}\rho_{M}/(dE_{M}d\Omega_{M})\) is the probability that a given meson is produced with energy in the interval \([E_{M},E_{M}+dE_{M}]\) and with a trajectory defined by a solid angle in \([\Omega_{M},\Omega_{M}+d\Omega_{M}]\). Using Eq. (1), the expected number of particles \(a\) produced in the decay \(M\to a+\ldots\), with a branching ratio \(\text{BR}(M\to a)\), can be computed as \[\frac{dn_{a}}{dE_{a}d\Omega_{a}}=\text{BR}(M\to a)\;N_{\text{PoT}}Y_{M}\int dE _{M}\int d\Omega_{M}\frac{d^{2}\rho_{M}}{dE_{M}d\Omega_{M}}\frac{d^{2}\rho^{M \to a}(E_{M},\Omega_{M})}{dE_{a}d\Omega_{a}}, \tag{2}\] where \(d^{2}\rho^{M\to a}/(dE_{a}d\Omega_{a})\) stands for the differential probability that a meson will produce an LLP \(a\) with energy and trajectory defined by \(E_{a}\) and \(\Omega_{a}\). As we will see, in certain scenarios the LLP may also be produced directly through their mixing with SM mesons. In this case, the LLP differential flux can be approximated by the meson flux in Eq. (1), rescaled with the corresponding mixing angle accordingly (which depends on the masses of the two particles involved). At this point, it is important to note that the meson fluxes depend on the production point (inside the target and in the decay volume region). Similarly, the detector acceptance depends on the production point of the LLP, since this determines whether its trajectory crosses the detector. In what follows, in order to simplify our notation, we remove the dependence with the meson and the LLP production points; we stress, however, that in our numerical calculations this dependence has been fully accounted for. Once it has been produced, the LLP may live long enough to propagate to the detector and decay inside, with a probability that depends on its decay length boosted to the lab frame: \(L_{a}=c\,\tau_{a}\,\gamma_{a}\,\beta_{a}\), where \(\tau_{a}\) is the lifetime of the particle at rest, while \(\beta_{a}\) and \(\gamma_{a}\) are the boost factors. 
Such probability reads: \[P_{\text{decay}}(\Omega_{a},E_{a},c\tau_{a}/m_{a})=e^{-\ell_{\text{det}}/L_{a} }\cdot\left(1-e^{-\Delta\ell_{\text{det}}/L_{a}}\right)\,, \tag{3}\] where \(\ell_{\text{det}}\) is the propagation distance before the particle enters the detector, and \(\Delta\ell_{\text{det}}\) is the length of the intersection between the trajectory of the particle and the detector (which in most cases approximately coincides with the detector length along the beam axis). Note that both quantities depend on the solid angle \(\Omega_{a}\), as well as on the production point of the LLP. Eventually, the total number of LLP decays inside the detector into a given decay channel \(ch\) is obtained after integration over the LLP variables, and multiplying by the corresponding branching ratio and the detector efficiency for that channel, \(\epsilon_{ch}\): \[N_{dec,ch}=\epsilon_{ch}\;\text{BR}(a\to ch)\int dE_{a}\int_{\Omega_{\text{det }}}d\Omega_{a}P_{\text{decay}}(\Omega_{a},E_{a},c\tau_{a}/m_{a})\frac{dn_{a}}{ dE_{a}d\Omega_{a}}\,, \tag{4}\] where the produced LLP flux and the decay probability are given in Eqs. (2) and (3), respectively, and the integral in solid angle is performed taking into account only those trajectories within the angular acceptance of the detector, \(\Omega_{\text{det}}\). Even though Eq. (4) is exact (and it is, indeed, what we use in our computation of the number of signal events), it might not be very illuminating. We can derive a simpler expression noting that for the experimental setup considered here: (i) the distance to the detector is much longer than the size of the target (where most of the mesons are produced); (ii) the detector size is much smaller than the distance traveled by the LLP before reaching the detector; (iii) the angular acceptance of the detector is small, which mostly selects particles traveling along the beam axis. These allow us to assume that all LLPs are approximately produced at the same point and to neglect the dependence of \(\ell_{\text{det}}\) and \(\Delta\ell_{\text{det}}\) with the trajectory of the LLP. Under these approximations, we obtain: \[N_{dec,ch}\simeq\epsilon_{ch}\;\text{BR}(a\to ch)\text{BR}(M\to a)N_{PoT}Y_{M} \int dE_{a}P_{\text{decay}}(E_{a},c\tau_{a}/m_{a})\frac{d\varepsilon_{\text{ det}}^{M}(m_{a})}{dE_{a}}\,, \tag{5}\] where \(d\varepsilon_{det}^{M}/dE_{a}\) is the angular acceptance of the detector for an LLP with energy \(E_{a}\). Since the detector acceptance relies on the boost of the LLP in the lab frame, it will depend on the energy and momentum distribution of the parent meson \(M\), on the mass of the LLP, and on whether the LLP is produced in a two-body or a three-body decay. As illustration, the differential detector acceptance is shown in Fig. 1 as a function of the energy of the LLP, for \(K\) and \(D\) two-body decays and for two representative values of the mass of the LLP. As can be seen from this figure, the accepted flux ranges from a few GeV to tens of GeV, with some dependence on the parent meson and the LLP mass. Up to this point, the discussion is model-independent and applies to any LLP produced from meson decays at a neutrino beamline. The final sensitivity will be fully determined by the mass of the particle and its lifetime, and it will be optimal for values of \(c\tau_{a}\) which maximize the decay probability in Eq. (3). In order to compute the expected number of events, one just needs to know the parent meson and whether the LLP is produced through a two-body or a three-body decay. 
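To make Eqs. (3) and (5) concrete, the following minimal Python sketch evaluates the decay probability and the expected number of signal events from a binned LLP spectrum. The spectrum and binned acceptance (the \(d\varepsilon_{\text{det}}^{M}/dE_{a}\) of Fig. 1), as well as the branching ratios, efficiency and meson yield, are placeholder inputs; the default distances (a 574 m baseline and a detector depth of a few metres along the beam) are only indicative of the setup discussed later in Sec. IV.

```python
import numpy as np

def decay_probability(E_a, m_a, ctau, l_det=574.0, dl_det=5.0):
    """Eq. (3): probability that an LLP of energy E_a (GeV) and mass m_a (GeV)
    decays inside the detector; ctau and all lengths are in metres."""
    p_a = np.sqrt(np.maximum(E_a**2 - m_a**2, 0.0))
    L_a = ctau * p_a / m_a  # boosted decay length c*tau*gamma*beta
    return np.exp(-l_det / L_a) * (1.0 - np.exp(-dl_det / L_a))

def expected_decays(E_a, acceptance, m_a, ctau, br_prod, br_ch, eff_ch,
                    n_pot, yield_M, l_det=574.0, dl_det=5.0):
    """Eq. (5): expected LLP decays into a given channel, approximating all
    LLPs as produced at the target; `acceptance` is the binned detector
    acceptance evaluated at the energies E_a (placeholder input)."""
    p_dec = decay_probability(E_a, m_a, ctau, l_det, dl_det)
    return (eff_ch * br_ch * br_prod * n_pot * yield_M
            * np.sum(acceptance * p_dec))
```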
Nevertheless, in a given model the production branching ratio and the decay width will typically be partially correlated and depend on a set of common parameters, which will also enter the decay branching ratio into a given channel. The key requirement for these searches to succeed is that the lifetime of the LLP should be long enough so that it reaches the detector before decaying. This pushes into the weakly interacting limit, which demands small couplings and, therefore, typically small production branching ratios. Consequently, in order to reach the maximum sensitivity region, a trade-off must be found in terms of the production branching ratios and lifetime, which is not always easy to do and depends severely on the model. The second aspect to consider is that specific benchmarks will translate into preferred decay channels, which ultimately determine the experimental strategy to follow in order to maximize the signal-to-background ratio. In the rest of this section, we will focus on three _benchmark models_ in the context of ALPs. This will allow us to write specific expressions, in terms of the model parameters, for the ALP production and decay branching ratios, as well as its lifetime. As we will see, the three models considered would lead to very distinct phenomenology in an experimental search. Figure 1: Detector acceptance as a function of the energy of the ALPs. On the left panel, the ALPs have a mass of 300 MeV and originate from the decay \(K^{+}\to\pi^{+}a\). On the right panel, the ALPs have a mass of 1 GeV and are produced via \(D^{+}\to\pi^{+}a\). ### Benchmark Model I: Gluon Dominance scenario We start by considering an anomalous coupling between ALPs and the gluon field, leading to a higher-dimensional effective operator. This has been referred to in the literature as the _gluon dominance_ scenario [46], and is one of the main targets for the vast suite of experiments searching for long-lived particles (see, e.g., Refs. [9; 47; 48]). The sensitivity of DUNE to ALPs in this scenario has been studied previously in Refs. [33; 34] assuming that backgrounds could be reduced to a negligible level for the most relevant decay channels. Here, we revisit their limits taking into consideration the expected background levels, and slight improvements (outlined below) on the calculation of the ALP decay widths. Following the same normalization convention as in Ref. [46], the relevant ALP interaction Lagrangian reads: \[\delta\mathcal{L}_{a,int}=c_{G}\mathcal{O}_{G}=\frac{\alpha_{s}}{8\pi f_{a}}aG _{\mu\nu}^{b}\widetilde{G}^{b\mu\nu}\,, \tag{6}\] where \(G_{\mu\nu}^{b}\) is the gluon field strength, \(\widetilde{G}^{b\mu\nu}\equiv\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}G_{b\rho\sigma}\), with \(\epsilon^{0123}=1\). Also, \(\alpha_{s}\equiv g_{s}^{2}/(4\pi)\), and \(g_{s}\) stands for the strong coupling constant. In this scenario, the ALPs would be mostly produced directly through their mixing with neutral pseudoscalar mesons (\(\pi^{0}\), \(\eta\), \(\eta^{\prime}\)) below 1 GeV, and from gluon fusion for masses above this value. Here, we use the calculation from Ref. [33]. As for their decays, for \(m_{a}<3m_{\pi}\) the only available decay channel is \(a\to\gamma\gamma\), since CP conservation forbids the decay into two pions, whereas for higher masses, hadronic decay modes (with three or more mesons in the final state) will dominate. 
ALP interactions with pseudoscalar mesons may be described using chiral perturbation theory including ALPs (a\(\chi\)PT), but a proper treatment of the interactions between multiple mesons requires the inclusion of Vector Meson Dominance (VMD) terms in the Lagrangian [49]. Throughout this work, we follow Refs. [50; 51], where a data-driven method (similar to the one developed in Ref. [52]) is used to successfully describe ALP interactions within this framework up to relatively high ALP masses (\(m_{a}\lesssim 3\) GeV). The effective \(a\chi\)PT Lagrangian is then matched onto the perturbative QCD (pQCD) Lagrangian. Figure 2 shows the decay widths and branching ratios for the most relevant decay channels in this scenario, computed following Ref. [51] (see also Ref. [50]). The sharp features observed in the decay width stem from the large mixing between the ALP and the neutral pseudoscalar mesons whenever \(m_{a}\sim m_{\pi^{0}},m_{\eta},m_{\eta^{\prime}}\), indicated by the dotted vertical lines. ### Benchmark Model II: ALPs coupled through electroweak operators Next, we consider an effective Lagrangian that includes ALP couplings to electroweak operators through effective operators arising at a high-energy scale \(\Lambda\) (which we set to \(\Lambda=f_{a}=1\) TeV): \[\delta\mathcal{L}_{a,int}=c_{\phi}\mathcal{O}_{\phi}+c_{B}\mathcal{O}_{B}+c_{ W}\mathcal{O}_{W}=c_{\phi}\frac{\partial^{\mu}a}{f_{a}}\phi^{\dagger}i \overleftrightarrow{D}_{\mu}\phi-c_{B}\frac{a}{f_{a}}B_{\mu\nu}\widetilde{B} ^{\mu\nu}-c_{W}\frac{a}{f_{a}}W_{\mu\nu}^{I}\widetilde{W}_{I}^{\mu\nu}\,, \tag{7}\] where \(B\) and \(W^{I}\) stand for the EW vector bosons, \(\phi\) is the Higgs doublet and \(\phi^{\dagger}i\overleftrightarrow{D}_{\mu}\phi\equiv\phi^{\dagger}\left(iD_{ \mu}\phi\right)-\left(iD_{\mu}\phi\right)^{\dagger}\phi\). It is worth mentioning that \(\mathcal{O}_{\phi}\) can be traded by a set of (flavor-conserving) fermionic operators by means of a hypercharge rotation, as shown in Ref. [53]. This finally leads to: \[\delta\mathcal{L}_{a,int}=\frac{\partial_{\mu}a}{2f_{a}}\sum_{f}c_{ff}\bar{f} \gamma^{\mu}\gamma_{5}f-c_{B}\frac{a}{f_{a}}B_{\mu\nu}\widetilde{B}^{\mu\nu} -c_{W}\frac{a}{f_{a}}W_{\mu\nu}^{I}\widetilde{W}_{I}^{\mu\nu}\,, \tag{8}\] where sum over \(f\) extends to quarks and charged leptons, with \(c_{ff}=c_{\phi}\) for down-type quarks and charged leptons, and \(c_{ff}=-c_{\phi}\) for up-type quarks. Although the operators included in Eq. (8) are flavor conserving, the \(\mathcal{O}_{\phi}\) and \(\mathcal{O}_{W}\) operators can induce flavor-changing neutral current (FCNC) processes at one loop, as detailed, for instance, in Refs. [54; 55; 56]. For this scenario, the mechanism of production we consider will be kaon decays to ALPs (\(K\to\pi a\)), in line with our past work in Ref. [13]. The corresponding width can be computed using the a\(\chi\)PT Lagrangian [53; 57]: \[\Gamma(K\to\pi a)=\frac{m_{K}^{3}[[k_{Q}(\mu)]_{sd}]^{2}}{64\pi f_{a}^{2}}f_{0} \left(m_{a}^{2}\right)\lambda^{1/2}(1,m_{a}^{2}/m_{K}^{2},m_{\pi}^{2}/m_{K}^{2 })\left(1-\frac{m_{\pi}^{2}}{m_{K}^{2}}\right)^{2}\,, \tag{9}\] where \(f_{0}\) is the scalar form factor1 and Footnote 1: For the range of masses we are considering, \(f_{0}\left(m_{a}^{2}\right)\) can be closely approximated to 1, see Ref. [58]. \[\lambda(a,b,c)\ =a^{2}+b^{2}+c^{2}-2ab-2ac-2bc\,. \tag{10}\] In Ref. [13] we computed the running of the couplings from \(\Lambda=1\) TeV down to \(\mu=2\) GeV, following Ref. [59]. 
This leads to the following matching condition for the effective coupling entering the decay width: \[\frac{[k_{Q}(2\ \mathrm{GeV})]_{sd}}{V_{td}^{*}V_{ts}}\bigg{|}_{\Lambda=1\,\mathrm{TeV}}\simeq-9.7\times 10^{-3}c_{W}(\Lambda)+8.2\times 10^{-3}c_{\phi}(\Lambda)-3.5\times 10^{-5}c_{B}(\Lambda)\,. \tag{11}\] We note that a similar effective coupling is generated in this class of models for the decay \(B\to Ka\); however, the production of \(B\) mesons is insufficient at DUNE and will not be considered here. ALPs produced from kaon decays will have a kinematical threshold at \(m_{a}<m_{K}-m_{\pi}\sim 355\ \mathrm{MeV}\). In this mass window, the ALP can only decay into photons or light charged lepton pairs (\(e\) and \(\mu\)). Figure 2: Main ALP decay widths (left) and branching ratios (right) for the gluon dominance scenario in Eq. (6), as a function of its mass. These have been computed following Ref. [51] (see also Ref. [50]). The decay width for the di-photon decay channel is given by \[\Gamma(a\to\gamma\gamma)=|c_{\gamma\gamma}|^{2}\frac{m_{a}^{3}}{4\pi f_{a}^{2}}\,, \tag{12}\] where the effective coupling at low scales is given at one loop by [55; 60] \[c_{\gamma\gamma}= c_{W}\left[s_{w}^{2}\,+\frac{2\,\alpha}{\pi}B_{2}(\tau_{W})\right]+c_{B}\,c_{w}^{2}-c_{\phi}\,\frac{\alpha}{4\pi}\left(B_{0}-\frac{m_{a}^{2}}{m_{\pi}^{2}-m_{a}^{2}}\right). \tag{13}\] Here, we have written \(c_{i}\equiv c_{i}(\Lambda)\), \(B_{0}\) and \(B_{2}\) are loop functions (which can be found, for example, in Appendix B of Ref. [13]), \(\tau_{W}=4m_{W}^{2}/m_{a}^{2}\) and \(\alpha\) is the fine-structure constant. Finally, the decay width into dilepton pairs is given by \[\Gamma(a\to\ell^{+}\ell^{-})=|c_{\ell\ell}|^{2}\frac{m_{a}m_{\ell}^{2}}{8\pi f_{a}^{2}}\sqrt{1-\frac{4m_{\ell}^{2}}{m_{a}^{2}}}\,, \tag{14}\] where \(c_{\ell\ell}\) has been computed at one loop, and at low energies (\(\mu\sim 2\) GeV) reads [55; 60] \[c_{\ell\ell}=c_{\phi}+\frac{3\,\alpha}{4\pi}\left(\frac{3\,c_{W}}{s_{w}^{2}}+\frac{5\,c_{B}}{c_{w}^{2}}\right)\log\frac{f_{a}}{m_{W}}+\frac{6\,\alpha}{\pi}\left(c_{B}\,c_{w}^{2}+c_{W}\,s_{w}^{2}\right)\log\frac{m_{W}}{m_{\ell}}\,. \tag{15}\] By direct observation of Eqs. (11)-(15), we note that: * \(\Gamma(a\to\ell^{+}\ell^{-})\) is suppressed by a factor of \(m_{\ell}^{2}/m_{a}^{2}\) with respect to \(\Gamma(a\to\gamma\gamma)\). Therefore, for similar values of \(c_{\gamma\gamma}\) and \(c_{\ell\ell}\), we expect \(\Gamma(a\to\gamma\gamma)\gg\Gamma(a\to\ell^{+}\ell^{-})\). * For ALP masses \(m_{a}>2m_{\mu}\), the decay channel \(a\to\mu^{+}\mu^{-}\) will completely dominate over the decay channel \(a\to e^{+}e^{-}\), since \(\Gamma(a\to\ell^{+}\ell^{-})\propto m_{\ell}^{2}\). * Due to the suppression with \(\alpha\), \(\Gamma(a\to\ell^{+}\ell^{-})\) is mostly induced by \(\mathcal{O}_{\phi}\), while \(\Gamma(a\to\gamma\gamma)\) is mostly induced by \(\mathcal{O}_{B}c_{w}^{2}+\mathcal{O}_{W}s_{w}^{2}\). * As can be seen in Eq. (11), \(\mathcal{O}_{B}\) has a subdominant effect on the ALP production. However, it can have a significant effect on the ALP decay width, affecting its lifetime. For simplicity, hereafter we set the coefficient of this operator to zero; however, we refer the interested reader to Refs. [13; 55] for a related discussion on this issue. The corresponding decay widths are shown in Fig. 3 for an ALP coupled predominantly through \(\mathcal{O}_{W}\) (left panel) and \(\mathcal{O}_{\phi}\) (right panel).
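For orientation, the partial widths in Eqs. (12) and (14) can be evaluated directly once the effective couplings \(c_{\gamma\gamma}\) and \(c_{\ell\ell}\) are known. The short Python sketch below does so and converts a total width into a proper decay length; the one-loop expressions for the couplings, Eqs. (13) and (15), are not reproduced and must be supplied by the user.

```python
import numpy as np

HBARC = 1.97327e-16  # GeV * m

def width_a_to_gamma_gamma(m_a, c_gg, f_a=1000.0):
    """Eq. (12): Gamma(a -> gamma gamma) in GeV (masses and f_a in GeV)."""
    return abs(c_gg)**2 * m_a**3 / (4.0 * np.pi * f_a**2)

def width_a_to_ll(m_a, m_l, c_ll, f_a=1000.0):
    """Eq. (14): Gamma(a -> l+ l-) in GeV; vanishes below threshold."""
    if m_a <= 2.0 * m_l:
        return 0.0
    return (abs(c_ll)**2 * m_a * m_l**2 / (8.0 * np.pi * f_a**2)
            * np.sqrt(1.0 - 4.0 * m_l**2 / m_a**2))

def ctau_in_metres(total_width_gev):
    """Proper decay length c*tau (m) for a total width in GeV."""
    return HBARC / total_width_gev
```

As a quick numerical check, for comparable values of the two couplings the \(m_{\ell}^{2}/m_{a}^{2}\) suppression of the dilepton mode noted in the first bullet above is immediately visible.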
Finally, we conclude this subsection stressing that the production of ALPs from decays of heavier mesons (e.g., \(D^{+}\to\pi^{+}a\)) using couplings to electroweak operators would require to generate the effective flavor-violating coupling \([k_{Q}]_{cu}\) via loop. However, proceeding analogously as for the kaon case, it is easy to see that the corresponding coupling for \(D\) decays will be severely suppressed, since \[\frac{[k_{Q}]_{cu}}{[k_{Q}]_{sd}}\propto\frac{|V_{ub}^{*}V_{cb}|}{|V_{td}^{*} V_{ts}|}\,\frac{y_{b}^{2}}{y_{t}^{2}}\sim 2.5\times 10^{-4}\quad\text{ (for $\mathcal{O}_{W},\mathcal{O}_{B},\mathcal{O}_{\phi}$)}.\] Therefore, no sensitivity to these operators is expected at DUNE for \(m_{a}>m_{K}\). ### Benchmark Model III: Charming ALPs Here, we consider a different set of operators that could induce ALP production from \(D\) decays, which would allow to probe even heavier ALPs. This is the case, for example, of effective operators coupling the ALP to a quark current with off-diagonal couplings in flavor space (see, e.g., the discussion in Ref. [61] and references therein). The inclusion of couplings to quarks implies, however, that the ALP will have a significant decay width into hadronic final states. Consequently, its lifetime and branching ratios will show a significant dependence on the assumed Wilson coefficients and it becomes necessary to choose a particular texture in flavor space. As an illustrative example, we will consider ALPs that couple to right-handed up-quark currents as in Ref. [62], dubbed as _charming ALPs_: \[\delta\mathcal{L}_{a,int}=c_{u_{R}}\mathcal{O}_{u_{R}}=\sum_{i,j}\frac{\partial _{\mu}a}{f_{a}}\left(c_{u_{R}}\right)_{ij}\bar{u}_{Ri}\gamma^{\mu}u_{Rj}\,, \tag{16}\] where \(c_{u_{R}}\) is a Hermitian matrix in flavor space, and \(i,j=1,2,3\) are quark family indices. The flavor-violating coupling \([c_{u_{R}}]_{12}\) in Eq. (16) induces \(D^{+}\to\pi^{+}a\), with a decay width given by [62; 63] \[\Gamma(D^{+}\to\pi^{+}a)=\frac{m_{D}^{3}\left|\left[c_{u_{R}}\right]_{12} \right|^{2}}{64\pi f_{a}^{2}}\left[f_{0}^{D\pi}\left(m_{a}^{2}\right)\right]^ {2}\lambda^{1/2}\left(1,m_{a}^{2}/m_{D}^{2},m_{\pi}^{2}/m_{D}^{2}\right) \left(1-\frac{m_{\pi}^{2}}{m_{D}^{2}}\right)^{2}, \tag{17}\] where \(\lambda\) is defined in Eq. (10) and \(f_{0}^{D\pi}\left(m_{a}^{2}\right)\) is the scalar form factor evaluated at a momentum transfer \(q^{2}=m_{a}^{2}\), which we take from the lattice computation in Ref. [64]. For ALPs produced in \(D\)-meson decays, we can probe a wide mass window up to \(m_{a}=m_{D}-m_{\pi}\lesssim 1.7\ \text{GeV}\). As in the gluon dominance scenario, in this case the dominant decay modes will involve multiple mesons in the final state, which requires the use of the \(a\chi\)PT for masses below approximately 2.5-3 GeV. We note that the calculation in Ref. [51] describes ALP interactions with mesons induced by purely diagonal couplings in flavor space. However, we can use it for the purposes of this work since the only off-diagonal couplings considered involve \(c\) and \(t\) quarks, which are irrelevant for decays in this mass region. In particular, we stress that as long as the ALP couples to up-quarks only, its decay widths are mostly determined by the \([c_{u_{R}}]_{11}\) coupling in Eq. (16), while the remaining couplings only enter at subleading order for some channels (such as \(a\to\gamma\gamma\)). 
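A similarly compact sketch of the production width in Eq. (17), together with the Källén function of Eq. (10), is given below. The scalar form factor is approximated here by a constant (the calculation above instead uses the lattice determination of Ref. [64]), and the PDG values of the \(D^{+}\) and \(\pi^{+}\) masses are inserted by hand.

```python
import numpy as np

M_D, M_PI = 1.86966, 0.13957  # GeV (PDG masses, inserted by hand)

def kallen(a, b, c):
    """Eq. (10): Kallen function lambda(a, b, c)."""
    return a**2 + b**2 + c**2 - 2*a*b - 2*a*c - 2*b*c

def width_D_to_pi_a(m_a, c12, f_a=1000.0, f0=1.0):
    """Eq. (17): Gamma(D+ -> pi+ a) in GeV, taking f0(m_a^2) ~ const
    as a simplifying assumption (f_a in GeV)."""
    if m_a >= M_D - M_PI:
        return 0.0
    lam = kallen(1.0, m_a**2 / M_D**2, M_PI**2 / M_D**2)
    return (M_D**3 * abs(c12)**2 / (64.0 * np.pi * f_a**2)
            * f0**2 * np.sqrt(lam) * (1.0 - M_PI**2 / M_D**2)**2)
```

The corresponding production branching ratio follows after dividing by the total \(D^{+}\) width.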
In summary, for the purposes of this work, the phenomenology of this class of models is fully determined by \([c_{u_{R}}]_{12}\) (which controls the production in \(D\) decays) and by \([c_{u_{R}}]_{11}\) (which controls the decay of the ALP). Figure 3: Main decay widths for the ALP considered in our benchmark model II (BM-II), as a function of its mass \(m_{a}\). Results for an ALP coupled predominantly through \(\mathcal{O}_{W}\) are shown in the left panel, while the right panel shows similar results for \(\mathcal{O}_{\phi}\). Hence, it becomes interesting to consider the expected size of these two couplings for well-motivated UV completions leading to the set of operators in Eq. (16). The authors of Ref. [62] studied several possible UV completions of this scenario. Here, we highlight two of them, where the ALP emerges as a pseudo-Nambu-Goldstone boson (pNGB) of a spontaneously-broken symmetry: * The first one is a Froggatt-Nielsen (FN) model, where the ALP corresponds to what is usually called a _flavon_ or a _familion_ [65; 66; 67; 68; 69]. The model includes a complex scalar field \(S\), whose radial component is identified with the ALP, and a new \(U(1)\) flavor symmetry. The inclusion of higher-dimensional operators at a high-energy scale \(\Lambda>f_{a}\) generates the operators in Eq. (16) once the scalar acquires a vacuum expectation value, \(\langle S\rangle=f_{a}\). As shown in Ref. [62], an appropriate choice of the charges under the new symmetry is able to generate the correct up-quark masses (thus alleviating the _flavor_ problem) and would lead to \[c_{u_{R}}^{\text{FN}}=\left(\begin{array}{ccc}2&3\epsilon&3\epsilon^{2}\\ 3\epsilon&1&\epsilon\\ 3\epsilon^{2}&\epsilon&\epsilon^{2}\end{array}\right)\, \tag{18}\] where off-diagonal entries are controlled by \(\epsilon=f_{a}/\Lambda\sim m_{c}/m_{t}\). * The second possibility is a dark-QCD (dQCD) model with a confining dark sector that contains \(n_{d}\) dark flavors transforming under \(SU(n_{d})\). A dark confining sector would offer plausible candidates for dark matter among the dark hadrons and therefore is also well-motivated from the theoretical point of view. In addition, the ALP may be identified with one of the CP-odd states in the dark meson spectrum (a _dark pion_, see e.g. Ref. [51]). The operators in Eq. (16) may be generated, for example, through a Yukawa interaction between SM quarks and dark quarks with a heavy scalar field, which is integrated out of the theory at low energies. Adopting the same assumptions and parameter values as in Ref. [62], this leads to a texture for \(c_{u_{R}}\) in the \(u-c\) sector that is very similar to the one in Eq. (18), up to an overall rescaling factor. In summary, both of these UV completions would lead to \([c_{u_{R}}]_{12}/[c_{u_{R}}]_{11}\sim\mathcal{O}(0.01)\)-\(\mathcal{O}(0.03)\), with small differences depending on the particular scenario being considered. Fixing a texture is convenient since it allows us to compute the corresponding ALP decay widths for all relevant decay channels, and sets the correlation between the ALP production rates and its lifetime. The corresponding branching ratios for the texture in Eq. (18) are shown in Fig. 4, where we have set the matching scale between the \(a\chi\)PT and the pQCD Lagrangians at 2.9 GeV following Ref. [51]. We see that for \(m_{a}<3m_{\pi}\) the ALP decays exclusively via \(a\to\gamma\gamma\). Conversely, for \(m_{a}>3m_{\pi}\) the decay \(a\to\pi^{+}\pi^{-}\pi^{0}\) rapidly dominates, as shown in Fig.
4, with the exception of the regions neighbouring the mass values of \(m_{\eta}\) and \(m_{\eta}^{\prime}\). The resulting branching ratios for a dQCD-inspired model would be very similar to the ones shown here for the FN-inspired model, although the corresponding ALP lifetime would be longer. This would affect the statistics expected at the detector (inducing a change in sensitivity), but the results would otherwise be qualitatively very similar. Therefore, in what follows, for concreteness we will adopt the texture in Eq. (18) to show our results for this benchmark scenario. We finalize this section by stressing that, while we have based our discussion on theoretically well-motivated examples, our results for this benchmark scenario will apply to a wider class of models, as long as the ALP does not couple to down quarks directly2 and both \([c_{u_{R}}]_{11},[c_{u_{R}}]_{12}\) are generated. Footnote 2: Including a coupling to down quarks would mainly affect the calculation of the decay widths and branching ratios [51]. ## III Previous constraints This section contains a brief summary of the most relevant constraints for the two benchmark models under consideration. Most of the bounds from current and past experiments have been previously computed in the literature, and are simply recast here for the scenarios of interest. In the case of an ALP coupled through electroweak effective operators, a comprehensive discussion on the applicable bounds can be found, e.g., in Refs. [48; 63; 70] (see also Ref. [13]). For an ALP coupled to right-handed up-quarks, we follow the discussion in Ref. [62], where the authors derived the most relevant bounds on this scenario. However, in this case we have rederived some of the constraints, finding significant differences, as explained in more detail below. ### Visible decay searches. At fixed-target experiments, the ALP can be produced through various processes, depending on its couplings to SM particles: from meson decays, through Primakoff scattering, through its mixing with the neutral pseudoscalar mesons, or via proton or electron Bremsstrahlung. Here we distinguish four different cases: * For an ALP coupled to gluons through \(\mathcal{O}_{G}\) in Eq. (6), constraints of this type are obtained from \(K^{\pm}\to\pi^{\pm}\gamma\gamma\) measurements at NA62 [71], E949 [72], \(K_{L}\to\pi^{0}\gamma\gamma\) at NA48/2 [73] and KTeV [74], and from \(B\to Ka,\ a\to\gamma\gamma\) searches at BaBar [75]. We take these limits from Ref. [48]. Additional limits come from searches for LLP decays into two photons at CHARM [76] (derived in Ref. [34]). * For an ALP coupled through the \(\mathcal{O}_{W}\) operator in Eq. (8), significant constraints are obtained from E137 searches [77] (see e.g. Refs. [78; 79; 80]), which we take from Ref. [78], as well as from \(K\to\pi\gamma\gamma\) (taken from Ref. [48]) and from \(B\to Ka(a\to\gamma\gamma)\) [75]. Additional bounds can be obtained from NA64, for an ALP produced in the forward direction through the Primakoff effect in the vicinity of a nucleus, recasting their search for ALPs decaying into two photons [81]. Figure 4: Main decay widths (left) and branching ratios (right) for the Charming ALP scenario, as a function of its mass. These have been computed following Ref. [51], for the couplings in Eq. (18). Finally, ALPs could be produced at LEP through an off-shell photon (for example, via \(e^{+}e^{-}\to\gamma^{*}\to\gamma a\)) or in photon fusion (\(e^{+}e^{-}\to e^{+}e^{-}a\)) and decay to two photons.
The corresponding bounds are taken from Ref. [82]. * For an ALP coupled through the \(\mathcal{O}_{\phi}\) operator in Eq. (8), relevant bounds are obtained by recasting CHARM bounds on LLPs decaying into lepton pairs [76; 83]. Here we use the revised bounds obtained in Ref. [84] (see also Refs. [80; 85]). Significant constraints are also obtained from LHCb [86; 87], for ALPs produced from \(B\)-meson decays (via \(B\to Ka\)) and decaying within the detector into \(\mu^{+}\mu^{-}\). We take these from Refs. [84; 55]. Finally, in our previous work [13] we obtained new bounds from a recast of a MicroBooNE search for \(e^{+}e^{-}\) pairs from a long-lived particle pointing to the NuMI absorber [11]. Here we also add to these the corresponding recast for a similar search into \(\mu^{+}\mu^{-}\) pairs [12], following the same methodology as in Ref. [13]. We also note that the ArgoNeuT experiment has recently obtained a bound on axion-like particles for heavier masses (up to 700 MeV) decaying into \(\mu^{+}\mu^{-}\) pairs [15]. These cannot, however, be easily recast to the scenario considered here without a proper simulation of the meson fluxes in the NuMI target (in particular, from \(\eta\) meson production), which lies beyond the scope of this work. * For an ALP coupled to right-handed quarks, Eq. (16), relevant constraints can be obtained from CHARM [76], using the null results from a search for \(a\to\gamma\gamma\) as outlined in Ref. [62]. We rederive this constraint here, finding significant differences, which we attribute to the different treatment of the decay width of the ALP as well as to a different simulation of the \(D\) meson production. Specifically, we obtain the \(D\)-meson fluxes from Pythia (v8.3.07) [88] for a beam energy of 400 GeV, and compute the signal acceptance of the detector using the same methodology described in Sec. II, for a detector with a decay volume of \(3\times 3\times 35\) m\({}^{3}\) located at \(L_{det}=480\) m from the target. Our computation of the lifetime of the ALP, as well as the branching ratio for the \(a\to\gamma\gamma\) decay, follows Ref. [51]. Finally, we note that data from atmospheric neutrino oscillation experiments may provide relevant constraints as well. For example, in Refs. [23; 24] a model-independent analysis of SK multi-ring data was able to set a constraint on HNL production \(BR(K\to N)\times BR(N\to e-\text{like})\lesssim 5\times 10^{-9}\) for values of \(c\tau\sim\mathcal{O}(\text{km})\). A similar analysis would also constrain the scenarios considered here, for ALP decays into \(\gamma\gamma\) or \(e^{+}e^{-}\). ### Invisible decay searches. For very light masses, or for sufficiently small couplings, the ALPs may exit the detector without decaying (hence the term "invisible decay"). Results for \(K\to\pi\nu\bar{\nu}\) or \(B\to K\nu\bar{\nu}\) may then be reinterpreted as bounds on the branching ratios for \(K\to\pi a\) and \(B\to Ka\) [80; 54; 55]. For heavier masses, while no dedicated search for \(D\to\pi a\) exists, an indirect constraint may be obtained from the reinterpretation of \(D^{+}\to\tau^{+}\nu,\tau^{+}\to\pi^{+}\bar{\nu}\) measurements, as pointed out in Refs. [61; 62]. The strongest limits on \(K\to\pi a\) are obtained from the NA62 experiment [89; 90]. However, the very competitive bounds from E787 & E949 [91] are comparable (and even dominate) in the region close to the pion mass. For \(B\to Ka\) decays, we use the constraint from Belle on \(B^{+}\to K^{+}\nu\bar{\nu}\) [92].
Experimental limits on invisible ALP decays also arise from precision measurements of the pion momentum in \(K\to\pi X\)[93]. Here we draw from our previous work in Ref. [13] where we derived the corresponding constraints for an ALP coupled through the electroweak operators in Eq. (8). For the gluon dominance scenario, we extract the relevant bounds from Ref. [48]. A priori, similar bounds may also be derived for the Charming ALP scenario. However it can be seen that the leading contribution is obtained when the \(c\)-quark is running in the loop. In particular, following Refs. [59; 13], we find \[\left|\frac{[k_{Q}(\mu)]_{ds}}{V_{cd}^{*}V_{cs}}\right|_{\Lambda=1~{}{\rm TeV}} \simeq 3.5\times 10^{-6}\left[c_{u_{R}}(\Lambda)\right]_{22}\qquad({\rm for }~{}{\cal O}_{u_{R}}). \tag{19}\] Using this effective coupling, we re-derive here the corresponding constraints from \(K\to\pi a\), following the procedure described in Ref. [13]; however, while in Ref. [13] the running was taken from \(\Lambda=1~{}{\rm TeV}\), here we take into account the relationship between \(f_{a}\) and \(\Lambda\), see Eq. (18). In other words, for a given value of \(f_{a}\) the running of the coupling constants is carried out from \(\Lambda=f_{a}/\epsilon\) down to \(\mu=2~{}{\rm GeV}\). For the charming ALP scenario, bounds on \(D\to\pi a\) can be derived using CLEO [94] or BESIII [95] measurements of the \(D\to\tau\nu\) decay. Their data is provided as a function of the missing mass squared in both cases, which can be used to derive a limit on \({\rm BR}(D\to\pi a)\) as a function of \(m_{a}^{2}\), as was done in Ref. [62] for CLEO. Here we decide to use BESIII data instead, as for CLEO we are not able to reproduce their SM fit with the information provided in Ref. [94]. Specifically, we take the BESIII data from Fig. 3 in Ref. [95]. We have checked that we are able to reproduce their result for the determination of \(D\to\tau\nu\) in the SM within a reasonable degree of accuracy. Our best-fit curve leads to \(\chi^{2}_{\rm min}/{\rm dof}\sim 29/28\), indicating a good compatibility with the data with a \(p\)-value of \(0.4\). We then use the fit to derive a constraint on the \({\rm BR}(D\to\pi a)\) as a function of the mass of the ALP, see App. A for details. Finally we note that additional bounds arise from searches for mono-photon signals at colliders [96; 97; 54]. The best limits of this class are obtained using BaBar [98; 99] or LEP [100] data. Nevertheless, the resulting bounds are milder than the rest of the limits considered in this work and will not be shown here. ### Bounds from \(D-\bar{D}\) mixing. Neutral meson particle-antiparticle pairs are allowed by the SM interactions to both oscillate and decay to lighter particles. Their oscillations are parametrized by a \(2\times 2\) unitary mixing matrix \(M\). The mixing for \(D^{0}-\bar{D}^{0}\) is very suppressed in the SM, leaving room to probe new physics that might induce such mixing. These bounds are relevant for the charming ALP scenario, since the non-vanishing \(\left[c_{u_{R}}\right]_{21}\) coupling in Eq. (18) leads to non-standard contributions in the form of effective operators at low energies involving four quarks. These contribute to the off-diagonal entry in the mixing matrix, \(M_{12}\). Here we take the bounds derived in Ref. [62] using the mixing constraints on \(M_{12}\) from Ref. [101]. ### Astrophysical bounds. Within the mass range of interest here, the main constraints are derived from supernovae data. 
These can be further classified into three subcategories: (1) constraints obtained from the requirement that the energy loss induced by ALP emission does not exceed that of neutrino emission [102; 103; 104; 105; 106]; (2) bounds coming from the visible photon signal resulting from the ALP burst [107; 108]; and (3) limits obtained from the observation of low-luminosity core-collapse supernovae, which constrain the total energy deposition in the progenitor star from radiative ALP decays [109]. Additional limits may be derived from X-ray observations of neutron star mergers associated with gravitational waves [110]. Here we take the limits on \(c_{W}\) from Ref. [48], while for the charming ALP scenario we take the limits from Ref. [62]. Finally, while bounds derived on ALPs coupled to muons or electrons do exist in the literature [104; 111; 112; 113; 114], to the best of our knowledge none of these analyses may be easily recast for our \(\mathcal{O}_{\phi}\) operator (which induces couplings to electrons, muons and, via loop, to photons) within our mass range of interest. Consequently, no bounds from supernovae are included in this case. We stress, nonetheless, that astrophysical bounds are typically relevant for couplings much smaller than those within the expected sensitivity reach for DUNE, and are only shown here for completeness. ## IV Targeting signals of LLP decays at the DUNE near detectors DUNE [21] is an upcoming long-baseline neutrino oscillation experiment under construction in the United States. Once built, DUNE will consist of two state-of-the-art neutrino detectors exposed to the world's most intense neutrino beam. The so-called _near detector_ (ND) will record neutrino interactions near the source of the beam, at the Fermi National Accelerator Laboratory (Fermilab), in Illinois. The much larger _far detector_ (FD) will be built at a depth of 1.5 km at the Sanford Underground Research Facility (SURF), in South Dakota, 1300 km away from Fermilab. DUNE has a very rich scientific program that includes the precision study of neutrino mixing [115; 116], astroparticle physics [117] and searches for BSM phenomena [118]. In this section, we discuss the prospects for detecting the decay of LLPs at the DUNE ND [31], which will be built in a shallow underground hall located 574 m downstream from the neutrino beam origin. In the first phase of DUNE, the ND will consist of a liquid-argon Time Projection Chamber (TPC), ND-LAr, followed by a downstream muon spectrometer (the so-called _temporary muon spectrometer_, or TMS). In the so-called Phase II of the experiment, TMS will be replaced with ND-GAr, a magnetized, high-pressure gaseous argon TPC surrounded by a calorimeter. We do not consider in our study the magnetized beam monitor known as SAND, which will be present in both phases of the experiment. The search for LLP decays at the DUNE ND will suffer from a significant background from neutrino interactions, given the intensity of the LBNF neutrino beam. Considering this, ND-GAr appears to be the ideal detector for this kind of search, given its large volume (proportional to the number of signal events) and low mass (proportional to the number of background events). However, ND-GAr will only be available as part of the upgrades contemplated for Phase II [119]. Therefore, in this work we have also estimated the expected sensitivities for the ND-LAr detector, as it will start collecting data much sooner.
Optimal selection cuts for ALP signals at ND-LAr and ND-GAr are generally different, due to their different features. Thus, our background analysis is performed for each detector separately. ### Simulation We have used large simulation data sets and a basic event selection to estimate the signal efficiency and background rejection in the DUNE ND for the main LLP decay channels of our benchmark models. Each simulation event represents a single LLP decay or neutrino-argon interaction in the active volume of the detectors, thus ignoring possible pile-up effects (i.e. the cross-contamination of different interactions occurring in the same TPC event) since the ND design will be optimized to make them negligible [31]. For simplicity, detector effects are not simulated, but we do take them into account in our study with the introduction of the typical detection thresholds and resolutions expected from ND-LAr and ND-GAr. The first step in our simulation involves the generation of the LLP fluxes at the ND starting from the decays of their parent mesons. For kaon decays, we make use of publicly-available histogram files [120; 121] that contain the position and three-momentum distributions of the decay of mesons in the LBNF beamline as obtained with G4LBNF [120], the official Geant4 simulation of the LBNF beamline from primary proton beam to hadron absorber. This simulation code includes kaon production in the target, along the 194 m-long decay pipe, and in the absorber at the end of the decay pipe, leading to a kaon production yield \(Y_{K}=0.75\) mesons/PoT. For the \(D\) mesons, we use the same distributions as in Ref. [42]. They were obtained using Pythia (version 8.2.44) [122] to create a pool of events for proton collisions at various momenta, followed by a Geant4 simulation to predict proton inelastic interactions between the 120 GeV primary proton beam and the target. Doing this, the \(D\)-meson production yield is \(Y_{D}=1.2\cdot 10^{-5}\) mesons/PoT. As for the luminosity considered, we take a total nominal exposure of \(1.1\times 10^{22}\) PoT, corresponding to 10 years of DUNE operation [115]. The parent mesons will decay to LLPs (as detailed in Sec. II), which will then propagate towards the DUNE ND, located 574 m away from the target. In our computation, we approximate the ND-LAr as a rectangular cuboid 7 m wide, 3 m high and 5 m deep, with an assumed fiducial volume that excludes 50 cm from the sides and upstream end and 150 cm from the downstream end [31]. The active volume of the HPgTPC is a cylinder with a radius of 260 cm and a length of 500 cm; for the purpose of computing event rates, we define a fiducial volume by excluding the outer 37 cm of radius and 30 cm on each end of the cylinder [31]. We assume that the dominant background source in this search will be neutrino-argon interactions in the active volume of the TPCs. We estimate that other possible background sources (such as neutrino-rock interactions or cosmic muons) will be negligible in comparison, as the resulting events will not be aligned with the direction of the beam, in general. Using the GENIE neutrino Monte Carlo generator (version 3.2.0) [123] and the public DUNE flux histogram files [120; 121], we have produced \(2\times 10^{7}\)\(\nu_{\mu}\)-Ar interactions. Below, we discuss the event selection we have devised for the four main ALP decay channels relevant for the models discussed in Sec. II. Table 1 summarizes our results. 
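For concreteness, the following minimal sketch illustrates the factorized event-rate estimate underlying this simulation chain, in the spirit of Eq. (5): production yield, a flat geometric-acceptance factor, the probability to decay inside the fiducial volume, and the visible branching ratio times selection efficiency. The exposure, kaon yield, baseline and ND-LAr depth are the values quoted above, while `EPS_GEOM`, the example momentum and the example branching ratios are placeholder assumptions; the actual analysis uses the full simulated meson kinematics instead of a single representative momentum.

```python
import numpy as np

# Schematic estimate of visible LLP decays in the DUNE ND over 10 years.
# Exposure, kaon yield, baseline and ND-LAr depth are quoted in the text;
# EPS_GEOM and the example kinematics/branching ratios are placeholders.
N_POT    = 1.1e22   # protons on target (nominal 10-year exposure)
Y_K      = 0.75     # kaons produced per PoT (G4LBNF beamline simulation)
L_ND     = 574.0    # target-to-ND distance [m]
DZ_FID   = 5.0      # detector depth along the beam [m] (ND-LAr active volume)
EPS_GEOM = 1e-2     # assumed flat geometric/angular acceptance (placeholder)

def decay_probability(p_a, m_a, ctau):
    """Probability that an ALP of momentum p_a [GeV/c], mass m_a [GeV] and
    proper decay length ctau [m] decays between L_ND and L_ND + DZ_FID."""
    lam = (p_a / m_a) * ctau   # boosted decay length gamma*beta*c*tau [m]
    return np.exp(-L_ND / lam) - np.exp(-(L_ND + DZ_FID) / lam)

def expected_decays(br_prod, br_vis, eps_sel, p_a, m_a, ctau):
    """Flux x BR(K -> pi a) x acceptance x decay-in-volume x BR(visible) x
    selection efficiency, i.e. a factorized rate in the spirit of Eq. (5)."""
    return (N_POT * Y_K * br_prod * EPS_GEOM
            * decay_probability(p_a, m_a, ctau) * br_vis * eps_sel)

# Example: a 300 MeV ALP with 5 GeV/c momentum and c*tau = 1 km
print(expected_decays(br_prod=1e-10, br_vis=1.0, eps_sel=0.9,
                      p_a=5.0, m_a=0.3, ctau=1000.0))
```

The exponential factor makes explicit why the sensitivity closes at both ends of the coupling range: very short-lived ALPs decay before reaching the detector, while very long-lived ones rarely decay inside it.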
We have verified that our estimates do not change significantly for different LLP masses (\(m_{a}\)) in the relevant range for each channel. In this regard, it is worth noting that the reconstructed invariant mass of the LLP would provide a handle for the discrimination of signal and background that we have not exploited in our study. ### Event selection: \(\mu^{+}\mu^{-}\) decay channel A priori, the most important background source for the di-muon decay channel is \(\nu_{\mu}\) charged-current events with charged pions, as it is relatively easy to confuse muon and pion tracks due to their similar stopping power (\(d\)E/\(d\)x) in argon. About 38% of the \(\nu_{\mu}\)-Ar interactions have a charged muon and a charged pion above threshold in the final state, with an expected rate of the order of \(6\times 10^{5}\) events per ton-year at the DUNE ND. Actual di-muon events from charged-current charm production only represent less than one percent of the total background events. We start our event selection requiring candidate signal events to have only two \(\mu\)-like tracks (i.e., \(\mu^{\pm}\) or \(\pi^{\pm}\)) above threshold. This allows the rejection of background events with hadronic activity near the interaction vertex. We consider a proton detection threshold of 40 MeV for the ND-LAr and of 5 MeV for the ND-GAr [31], obtaining similar rejection factors for both detectors: about 0.3% of the initial events meet the above criterion. From this point on, each detector requires slightly different considerations, discussed next. In ND-GAr, the TPC combined with the electromagnetic calorimeter (ECAL) and the muon identification system that surround it will provide superb \(\mu/\pi\) separation capabilities, reaching 100% purity in the identification of muons for a wide range of momenta [31]. Moreover, the magnetic field in ND-GAr will allow the measurement of the charge sign of muons. Thanks to these capabilities, we can reduce the initial sample of background events by a factor \(6\times 10^{-6}\) for, essentially, perfect signal efficiency. Lastly, we can further reduce the background sample taking into account the particular kinematics of signal events (see Fig. 5, top row): (i) The reconstructed LLP transverse momentum should be low; that is, the LLP trajectory points back in the direction of the target. (ii) The muons in signal events are highly boosted, and thus the angle between them should be small. We assume that a momentum resolution of the order of 5% or better and angular resolution of the order of a few degrees can be achieved in ND-GAr for momenta up to 10 GeV/c [31]. Overall, these cuts let us achieve a background rejection in excess of \(10^{7}\) (see Tab. 1), resulting in a background-free search in 10 years of data taking. In the case of ND-LAr, the detector will not be able to fully contain high-energy muons or measure lepton charge, but the downstream spectrometers (TMS in the first phase of DUNE, and ND-GAr in the second one) will measure the charge sign and three-momentum of the muons that enter them. Events with muon kinetic energies below 1 GeV will be contained within ND-LAr, while events with higher energy muons traveling within 20 degrees of the beam direction will exit ND-LAr and enter the spectrometer [31]. 
TMS will only be able to measure muons up to \(\sim 6\) GeV/c before they range out, corresponding to 40% of our LLP decays.3 We will assume as well that the combination of the \(dE/dx\) measurement in ND-LAr plus the \(\mu/\pi\) separation capabilities of the TMS -- pions will interact inelastically in the steel layers of TMS with high probability, while muons will behave as minimum ionizing particles -- will be enough to reach essentially perfect purity in the identification of muons, as in the ND-GAr. Finally, the two kinematical cuts described above are applied, achieving a background-free search in 10 years of data taking. As a point of comparison, the analysis described in Ref. [124] for the identification of di-muon neutrino trident events achieved a background suppression of 6 orders of magnitude using exclusively kinematical cuts in ND-LAr. Footnote 3: ND-GAr, which will use the curvature in the magnetic field to reconstruct the momentum, will be able to reconstruct muon tracks well up to 10 GeV/c and beyond, improving the selection efficiency to 54% of the decays. \begin{table} \begin{tabular}{c l r r r r} & Selection cut & \multicolumn{2}{c}{Signal efficiency} & \multicolumn{2}{c}{Background rate} \\ \cline{3-6} & & ND-LAr & ND-GAr & ND-LAr & ND-GAr \\ \hline \multirow{4}{*}{\(a\to\mu^{+}\mu^{-}\)} & Two \(\mu\)-like tracks only & 1.00 & 1.00 & 3545674 & 70656 \\ & PID \(\mu\) and opposite charge sign & 0.40 & 1.00 & 6226 & 124 \\ & Transverse momentum \(<0.125\) GeV/c & 0.40 & 0.99 & 99 & 2 \\ & Angle between muons \(<0.7\) rad & 0.40 & 0.94 & 0 & 0 \\ \hline \multirow{2}{*}{\(a\to e^{+}e^{-}\)} & Two \(e\)-like tracks/showers & 0.10 & 1.00 & 9432 & 145 \\ & Reconstructed ALP direction & **0.10** & 0.99 & 180 & 15 \\ \hline \multirow{3}{*}{\(a\to\gamma\gamma\)} & Two \(\gamma\) showers only & 0.05 & 0.79 & 36276 & 14222 \\ & Reconstructed ALP direction & 0.05 & 0.79 & 6938 & **7923** \\ & Angle between \(\gamma\) showers & **0.05** & — & **1367** & — \\ \hline \multirow{4}{*}{\(a\to\pi^{+}\pi^{-}\pi^{0}\)} & Two \(\mu\)-like tracks, two \(\gamma\) showers & 0.04 & 0.81 & 2030490 & 40462 \\ & PID \(\pi^{\pm}\) and charge sign & 0.04 & 0.81 & 431035 & 8589 \\ \cline{1-1} & Transverse momentum \(<0.2\) GeV/c & 0.04 & 0.79 & 17182 & 342 \\ \cline{1-1} & Angle between pions \(<0.15\) rad & **0.04** & 0.69 & **946** & 19 \\ \end{tabular} \end{table} Table 1: Signal efficiencies and background event rates for the different decay channels, before and after event selection according to the cuts discussed in the main text. Results are shown separately for the two DUNE near detectors considered. Background event rates are provided per year, and for the total fiducial volume considered for each detector. We highlight in bold type the large backgrounds expected for some of the decay channels, as well as the reduced LAr ND signal efficiencies for most decay channels considered. Figure 5: Distributions for signal (blue line) and background (grey solid histogram) of the kinematic quantities used in our event selection. The vertical dashed red line indicates the selection cut value used. All histograms are area normalized. Top row: transverse momentum (left panel) and angle between tracks (right) for \(\mu^{+}\mu^{-}\) candidate ALP-decay events in the DUNE ND-GAr. Central row, left: angle with respect to the beam direction of \(e^{+}e^{-}\) events. Central row, right: angle between the two photon showers for \(\gamma\gamma\) candidate events in ND-LAr. 
Bottom row: transverse momentum (left panel) and angle between the charged-pion tracks of \(\pi^{+}\pi^{-}\pi^{0}\) candidate events. ### Event selection: \(e^{+}e^{-}\) decay channel The dominant background source for this decay channel is \(\nu_{\mu}\) neutral-current single-pion (NC\(\pi^{0}\)) events. They can be mistaken as an \(e^{+}e^{-}\) signal if only one of the photons from the \(\pi^{0}\to\gamma\gamma\) decay -- with a branching ratio close to 99% [125] -- converts in the detector, and no visible hadronic activity occurs at the vertex of the neutrino interaction. Less than 1% of the \(\nu_{\mu}\)-Ar interactions result in a single \(\pi^{0}\), corresponding to an expected rate of about \(1.5\times 10^{4}\) events per ton-year at the DUNE ND. In LAr, the attenuation length for gamma rays in the energy range from 100 MeV to 10 GeV is of the order of 20 cm [126]. This limits considerably the probability that only one of the two photons from a \(\pi^{0}\) decay interacts in ND-LAr. Through Monte Carlo simulation, we have estimated that probability to be 1.3%. Conversely, in high-pressure argon gas, the attenuation length for gammas is well above 10 m. Therefore, in most cases, both gammas from the \(\pi^{0}\) decay will escape the TPC, interacting in the ECAL; only in about 1% of the events the decay will result in a single-gamma conversion in the GAr, with no associated activity in the ECAL [31; 36]. Lastly, pair-conversion events in both ND-LAr and ND-GAr can be suppressed requiring the reconstructed direction of the LLP to be aligned with the neutrino beam (see Fig. 5, middle left panel). Background events, in contrast, will follow a nearly isotropical distribution, as the neutral pions will be coming from neutrino-nucleus interactions. A rather conservative cut of 100 mrad (about \(5^{\circ}\)) for both ND-GAr and ND-LAr suppresses the background events by an order of magnitude. For ND-LAr, as far as we are aware, there is no estimation yet of the reconstruction efficiency of high-energy electron showers. Therefore, taking into account the significant depth of the showers (more than 2 m for an electron of 1 GeV) and the busy DUNE ND environment, we conservatively assume an efficiency of 10%. ### Event selection: \(\gamma\gamma\) decay channel The most important background for this channel is, like in the previous case, the decay of neutral pions resulting from \(\nu_{\mu}\) neutral-current interactions. In the case of ND-GAr, we assume that these events can only be reconstructed with good efficiency and high purity if both gamma rays are detected in the ECAL [31]. Nonetheless, this calorimeter will not be able to contain completely the electromagnetic showers from high-energy photons up to 10 GeV, as it will only be 8-12 radiation-lengths deep [31]. The impact of this shower leakage on the energy and angular resolutions can only be fully understood with a detailed simulation, beyond the scope of this paper. We will only assume here that background events can be suppressed taking into account that signal events should point back in the direction of the neutrino beam. As the pointing accuracy of photon showers reconstructed in the ND-GAr ECAL is low (the resolution for photons of 1 GeV is about \(10^{\circ}\)), a selection cut on the opening angle between the two photons has limited impact and, hence, we do not consider it. 
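As a minimal illustration of the kinematic selections used in these event selections, the sketch below computes the reconstructed LLP transverse momentum, the opening angle between the two daughter tracks, and the angle of the reconstructed direction with respect to the beam axis, and applies the cut values quoted in Table 1 and in the text (0.125 GeV/c and 0.7 rad for the di-muon channel, 100 mrad for the pointing cut). The beam is assumed to define the \(z\) axis, and the example momenta are invented for illustration; this is not part of the analysis code.

```python
import numpy as np

BEAM_AXIS = np.array([0.0, 0.0, 1.0])  # assume the beam defines the z axis

def kinematics(p1, p2):
    """p1, p2: reconstructed 3-momenta [GeV/c] of the two daughters."""
    p_llp = p1 + p2                                      # reconstructed LLP momentum
    pt = np.linalg.norm(np.cross(p_llp, BEAM_AXIS))      # transverse momentum
    cos_open = np.dot(p1, p2) / (np.linalg.norm(p1) * np.linalg.norm(p2))
    theta_open = np.arccos(np.clip(cos_open, -1.0, 1.0))  # opening angle
    cos_beam = np.dot(p_llp, BEAM_AXIS) / np.linalg.norm(p_llp)
    theta_beam = np.arccos(np.clip(cos_beam, -1.0, 1.0))  # angle w.r.t. beam
    return pt, theta_open, theta_beam

def pass_dimuon_cuts(p1, p2, pt_max=0.125, open_max=0.7):
    """Cuts quoted in Table 1 for mu+mu- candidates in ND-GAr."""
    pt, theta_open, _ = kinematics(p1, p2)
    return pt < pt_max and theta_open < open_max

def pass_pointing_cut(p1, p2, beam_max=0.100):
    """100 mrad cut on the reconstructed ALP direction (e+e- channel)."""
    _, _, theta_beam = kinematics(p1, p2)
    return theta_beam < beam_max

# Example: a boosted, forward-going candidate (illustrative values only)
p1 = np.array([0.02, 0.01, 3.0])
p2 = np.array([-0.03, 0.00, 2.5])
print(pass_dimuon_cuts(p1, p2), pass_pointing_cut(p1, p2))
```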
In ND-LAr, full reconstruction of both gammas requires the containment of a significant fraction of the two electromagnetic showers within the fiducial volume of the detector. Following Ref. [127], we estimate the reconstruction efficiency to be 5% in our energy range of interest. As ND-LAr should be able to reconstruct the direction of the showers with an accuracy of a few degrees, the kinematical cut on the LLP direction is more effective than in the previous case, and we can also discriminate between signal and background by applying a cut on the angle between the two photons (see Fig. 5, middle right panel). In this channel, the invariant mass of the di-photon system could be used effectively for background rejection, as the distribution for background events would peak around the \(\pi^{0}\) mass (conversely, this also limits the sensitivity to LLPs with mass in that energy region). As this selection cut requires a detailed understanding of the invariant mass resolution of the detectors, we do not apply it in this analysis. ### Event selection: \(\pi^{+}\pi^{-}\pi^{0}\) decay channel The most important background source for this decay channel is \(\nu_{\mu}\) neutral-current events with multi-pion production, with an expected rate of the order of 39000 events per ton-year. The same arguments given in previous sections for the reconstruction of \(\mu/\pi\) tracks and the two photon showers from the \(\pi^{0}\) decay apply here. Background events can be suppressed with selection cuts on the transverse momentum and angle between the two charged-pion tracks (see Fig. 5, bottom row). ## V Results To compute the expected sensitivity to the Wilson coefficients in Eq. (8), or to the value of \(f_{a}\) in Eqs. (6) and (16), we perform, for each detection channel, an unbinned Gaussian \(\chi^{2}\) analysis that takes into account the expected background event rates as outlined in Sec. IV (see Table 1). As the backgrounds stem mainly from neutrino neutral-current interactions in the detector, we include one nuisance parameter (\(\xi\)) in order to account for systematic uncertainties affecting their overall normalization. This is done with the pull-method approach [128], taking a prior uncertainty on the background, \(\sigma_{bg}=20\%\), to account for the large uncertainties coming from the corresponding cross section (see, e.g., Refs. [127; 129]). Thus, for a given decay channel \(ch\) with an expected non-zero background rate \(N_{bg,ch}\), we define our \(\chi^{2}\) simply as \[\chi^{2}=\min_{\xi}\left\{\left(\frac{T_{ch}(\{\Theta,\xi\})-O_{ch}}{ \sigma}\right)^{2}+\frac{\xi^{2}}{\sigma_{bg}^{2}}\right\}\,, \tag{20}\] where \(T_{ch}\) is the total expected event rate including both signal and background events, while \(O_{ch}\) corresponds to the assumed observed events, and \(\{\Theta\}\) stands for the parameters of the model. We take the observed events as the expectation in the absence of a BSM signal, \(O_{ch}=N_{bg,ch}\), and the associated statistical error as \(\sigma=\sqrt{O_{ch}}\). The predicted event rates read: \[T_{ch}(\{\Theta,\xi\})=N_{dec,ch}(\{\Theta\})+(1+\xi)N_{bg,ch}\,. \tag{21}\] Note that the \(\chi^{2}\) definition above, in terms of total event rates, will typically lead to conservative results: an improvement in sensitivity may be obtained for a binned analysis that takes into account the different distributions of the signal versus the background in the kinematic variables of interest, which we leave for future work. 
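A minimal numerical implementation of the pull-term \(\chi^{2}\) of Eqs. (20)-(21) is sketched below: the nuisance parameter \(\xi\) is minimized over numerically, and a bisection on the signal rate returns the smallest signal excluded at \(\Delta\chi^{2}=2.71\) (90% C.L., 1 d.o.f.). The 10-year background used in the example simply multiplies the per-year \(\gamma\gamma\) rate of Table 1 by ten; this is an illustration of the statistical procedure, not a reproduction of the full analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

SIGMA_BG = 0.20   # 20% prior uncertainty on the background normalization

def chi2_channel(n_sig, n_bg):
    """Pull-term chi^2 of Eqs. (20)-(21), with the observed rate fixed to the
    background-only expectation and one nuisance parameter xi."""
    obs, sigma = n_bg, np.sqrt(n_bg)

    def chi2_of_xi(xi):
        pred = n_sig + (1.0 + xi) * n_bg      # Eq. (21)
        return ((pred - obs) / sigma) ** 2 + (xi / SIGMA_BG) ** 2

    return minimize_scalar(chi2_of_xi).fun

def signal_limit_90cl(n_bg, chi2_crit=2.71):
    """Smallest signal rate excluded at 90% C.L. (1 d.o.f.), via bisection."""
    lo, hi = 0.0, max(10.0, np.sqrt(n_bg))
    while chi2_channel(hi, n_bg) < chi2_crit:   # expand until the cut is crossed
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if chi2_channel(mid, n_bg) > chi2_crit else (mid, hi)
    return hi

# Example: di-photon channel in ND-GAr, ten times the per-year rate of Table 1
print(signal_limit_90cl(n_bg=10 * 7923))
```

For this simple statistic, the excluded signal scales as \(\sqrt{\Delta\chi^{2}\,(\sigma^{2}+\sigma_{bg}^{2}N_{bg}^{2})}\), so once the background is large the limit is driven by the systematic term; this is the origin of the reduced di-photon sensitivity discussed below.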
Our sensitivity regions are obtained taking the corresponding \(\chi^{2}\) cut at a given confidence level (C.L.), for 1 degree of freedom (d.o.f.). Thus, they can be interpreted as the upper limit that DUNE would be able to set on a given parameter, for an ALP with mass \(m_{a}\), in the absence of a BSM physics signal. Finally, for the channels without SM background, we follow the Feldman-Cousins prescription [130] and require \(N_{dec,ch}>2.44\) for limits at 90% C.L. ### Model-independent sensitivity limits Using the \(\chi^{2}\) analysis just outlined, we first perform a model-independent sensitivity analysis. If we assume that the production branching ratio and the lifetime of the ALPs are independent, the number of decays approximately scales as in Eq. (5). This allows us to derive a sensitivity limit on the product of the production and decay branching ratios as a function of \(c\tau_{a}/m_{a}\), shown in Fig. 6. Mild differences are obtained for different masses, however, induced by the dependence of the detector acceptance on \(m_{a}\) (which affects the boost of the particles to the lab frame). Here we follow the same approach as in Ref. [25] and provide our limits as bands, where the width indicates the variation in the obtained limit when the mass of the LLP is varied between 10 MeV (bottom edge of each band) and up to the production threshold in each case (upper edge of the band). In Fig. 6 we show two sets of bands, depending on the parent meson: kaons (blue) and \(D\) mesons (red). Moreover, for each parent meson, we show results for \(a\to\gamma\gamma\) using the ND-GAr, computed taking the corresponding background event rates from Table 1, as well as the limiting sensitivity in the background-free case (which would only be applicable for decays into \(\mu^{+}\mu^{-}\)). Note, however, that the bottom edge of each band corresponds to \(m_{a}=10\) MeV, for which the decays into \(\mu^{+}\mu^{-}\) or multi-pion final states are not kinematically accessible. The best sensitivity for \(a\to\mu^{+}\mu^{-}\) searches is indicated by the dashed lines in each case, corresponding to \(m_{a}\sim 2m_{\mu}\). In order to compare to current limits in the literature, it is convenient to pick a specific mass. This exercise is done in Fig. 7, where we show the DUNE sensitivity limits compared to previous constraints, for three representative masses: \(m_{a}=50\) MeV (left) and 300 MeV (center), where the strongest limits at DUNE would be obtained for LLP produced in \(K\) decays; and \(m_{a}=1.2\) GeV (right), which could be probed at DUNE only if the LLP is produced from \(D\) decays. As shown in the figure, the sensitivity limits at DUNE depend heavily on the final state the LLP decays into, due to the different backgrounds expected. In particular, for ALP decays into \(\gamma\gamma\) the analysis would be affected by large backgrounds, leading to a limit that is considerably worse than in the background-free scenario. This should be taken into consideration when comparing Figure 6: Expected sensitivity to LLPs in the model-independent scenario, assuming their branching ratios and lifetime are completely uncorrelated. In the absence of a new physics signal, DUNE is expected to disfavor the region above each line at 90% confidence level (C.L.). The width of the bands indicates the variation in our results when the mass of the LLP is varied between 10 MeV (upper edge of each band) up to the production threshold in each case; see text for details. 
The dashed lines correspond to \(m_{a}\sim 2m_{\mu}\), and therefore indicate the best sensitivity for searches for \(a\to\mu^{+}\mu^{-}\). Blue (red) bands show the results for LLP produced from kaon (\(D\)-meson) two-body decays. For each parent meson, the different bands are obtained assuming different background rates; see main text for details. our results with similar studies in the literature obtained neglecting the effect of backgrounds. However, we see that DUNE is expected to improve over present constraints for masses below the kaon mass even in the less favorable case where the LLP decays into photons. In the case of decays into lepton pairs, DUNE is expected to reach values of \(\text{BR}(M\to a)\times\text{BR}(a\to\ell^{+}\ell^{-})\sim 10^{-17}\) for optimal values of the LLP lifetime4, many orders of magnitude below current constraints. In the case of a heavier LLP (right panel), the expected limits suffer from a significant reduction in the production of the parent meson, and we find that the limit for an ALP that decays purely into photons is not as strong as the one obtained from CHARM data (salmon shaded regions, with dotted edges). However, we see that DUNE will be able to considerably improve over current limits if the ALP decays predominantly into lepton pairs or into \(3\pi\), which would be background-free at the ND-GAr. Footnote 4: Model-independent DUNE sensitivities have been recently computed in Ref. [26], where the authors considered LLPs decaying into \(e^{+}e^{-}\). Our results, when rescaled according to the same number of PoT and assuming negligible backgrounds, show a reasonable agreement. ### Sensitivity limits for specific scenarios The computation of model-independent sensitivity limits (Figs. 6 and 7) is useful since it allows our results to be easily recast to other scenarios. However, one should bear in mind that in particular models, correlations may arise between the production and decay of the LLP if they depend on the same set of model parameters. This changes the relative importance of different sets of constraints, as these may be optimal for different values of the lifetime of the Figure 7: Expected sensitivity to LLPs as a function of their lifetime. The different panels show the results for \(m_{a}=50\) MeV (left), \(300\) MeV (center) and \(1.2\) GeV (right), as indicated. Colored regions are disfavored by present constraints from searches for invisible and visible LLP decays, see Sec. III. The different lines for DUNE (solid, dashed, dotted) are obtained for different background assumptions, depending on the final state being considered (\(a\to\gamma\gamma,a\to\ell^{+}\ell^{-},a\to\pi^{0}\pi^{+}\pi^{-}\)), see Sec. IV. In the case of the right panel, the limit for \(a\to ee\) would coincide with the one shown for \(a\to 3\pi\), as backgrounds would be similar for both final states, while the limit for \(a\to\mu\mu\) would be slightly better since it would be background free. Present constraints are indicated by the shaded areas and correspond to \(90\%\) C.L., with the exception of CHARM and LHCb (shown at \(95\%\) C.L.). LLP. Moreover, while in Sec. V.1 the constraints shown are obtained for searches for invisible or visible LLP decays, in specific scenarios additional bounds arise (e.g. from SN1987A or meson mixing, see Sec. III). Thus, in the rest of this section we evaluate the sensitivity of DUNE for the benchmark scenarios considered in Sec. II, as illustrative examples. #### iv.1.1 Gluon dominance As outlined in Sec. 
II, the main production mechanism in this scenario is through ALP mixing with neutral pseudoscalar mesons (for ALP masses below \(\sim 1\) GeV) and gluon fusion (for higher masses). Once produced, if the ALPs are sufficiently long-lived they will reach the DUNE detectors before decaying into either a pair of photons or multi-pion final states, see Fig. 2. Here we revisit the results previously computed in Refs. [33; 34] for this scenario, in light of our background estimates in Sec. IV and the refined computation of the relevant decay widths in Ref. [51]. Our results are shown in Fig. 8, where we show separately the expected sensitivity regions for a search for \(a\to\gamma\gamma\) (red lines) and \(a\to 3\pi\) (blue lines). Moreover, due to the different background rejection capabilities, we show separately the results for ND-LAr (dashed lines) and ND-GAr (solid lines). As can be seen from the figure, DUNE is expected to improve over current limits (shown by the shaded gray regions) only for ALP masses large enough to have a significant branching ratio to multi-pion final states, whereas for lighter ALP masses the sensitivity is affected by the large backgrounds expected for the di-photon channel. Also, note that even though the expected background for \(a\to\gamma\gamma\) is higher for ND-GAr than for ND-LAr, its much higher signal efficiency leads to a better performance thanks to the higher signal-to-background ratio. The vertical dotted lines indicate the masses of the neutral pseudoscalar mesons, where the large mixing with the ALP leads to sharp features in our regions (see also Fig. 2). #### iv.1.2 ALPs coupled through EW operators As outlined in Sec. II, the main production mechanism at DUNE for this scenario is through kaon decays, \(K\to\pi a\). Once produced, the ALP may decay into three different final states: \(e^{+}e^{-}\), \(\mu^{+}\mu^{-}\) and \(\gamma\gamma\), while hadronic modes are not allowed in this mass range (ALP decays \(a\to\pi\pi\) and \(a\to\pi^{0}\gamma\) are forbidden by \(CP\) and \(C\), respectively, and the decay into three pions is kinematically not allowed in this mass window). Our resulting sensitivity to ALPs coupled through EW operators is shown in Fig. 9. Let us first discuss the results shown in the left panel, obtained for an ALP coupled predominantly through the \(\mathcal{O}_{\phi}\) operator. As outlined in Sec. II, in this case the ALP tends to decay preferentially to leptons. Once the muon decay channel is open, the ALP lifetime is considerably reduced and the sensitivity to \(a\to e^{+}e^{-}\) is diminished in the region of large couplings. In the region of small couplings, on the other hand, the dependence on the total lifetime of the ALP approximately cancels and the result depends exclusively on the decay width for a given channel, \(a\to ee,\mu\mu\). Overall, we see that DUNE has an excellent opportunity to considerably improve over current constraints for an ALP decaying into \(\mu^{+}\mu^{-}\), by more than one order of magnitude, for both ND-LAr and ND-GAr. In the case of \(a\to e^{+}e^{-}\), an improvement is only expected for ND-GAr, while the results for ND-LAr are severely affected by the reduced signal efficiency, see Tab. 1. Finally, the results for \(a\to\gamma\gamma\) are not shown in this case since the reduced branching ratio, combined with the much larger backgrounds expected, yields sensitivities that are not competitive in light of present bounds. The right panel in Fig. 
9 shows the results for an ALP coupled predominantly through the \(\mathcal{O}_{W}\) operator. While in this case the ALP tends to decay predominantly into photons, this channel is affected by much larger backgrounds. Therefore, although the branching ratio into leptons is suppressed for this operator (see Eq. (15)), the final sensitivities obtained for \(a\to\ell\ell\) are similar and even surpass those obtained for \(a\to\gamma\gamma\) for large masses, when the decay channel \(a\to\mu\mu\) is opened. In particular, in the high-mass region we see that DUNE is expected to improve by almost an order of magnitude over the current limits from the E137 experiment. #### iii.3.3 Charming ALPs In this section, we present the sensitivity projections for the charming ALPs model described in Section II.3, using the DUNE ND with a total exposure of \(N_{\rm PoT}=1.1\times 10^{22}\). The sensitivity projections are shown in Fig. 10 separately for the final states with dominant branching ratios (\(a\to\gamma\gamma,a\to\pi^{+}\pi^{-}\pi^{0}\)). As in the previous scenarios considered, our sensitivity contours exhibit here the typical shape of a visible decay search. For large values of the couplings, the ALPs become very short-lived and decay before reaching the detector, leading to a loss in sensitivity induced by the exponential term in the decay probability. On the other hand, small couplings are suppressed by both the production rate in Eq. (17) and the fact that the ALPs become too long-lived. For decays into photon pairs, we include both production from kaon decays and from \(D\) decays. Conversely, hadronic decay channels are only available for \(m_{a}\gtrsim 3m_{\pi}\) and therefore only \(D\to\pi a\) is considered in this case. In the case of \(a\to\gamma\gamma\) it is worth mentioning that for masses \(m_{a}<m_{K}-m_{\pi}\) the bound for this model is still dominated by the contribution from \(K\to\pi a\), since the suppressed production branching ratio for this scenario (see Eq. (19)) is Figure 8: DUNE sensitivity projections for the gluon dominance scenario, Eq. (6). Shaded regions are disfavored by present experiments. Red (blue) lines show the expected DUNE sensitivity (at 90% CL) for a search into \(a\to\gamma\gamma\) and \(a\to 3\pi\) final states. Results are shown separately for the ND-LAr (dashed lines) and ND-GAr (solid lines). partly compensated by the large kaon flux available. Again in this case we see that the resulting sensitivities for this decay channel are not competitive with current constraints, with similar results for the two near detectors. As highlighted in Sec. II.3 (see Fig. 4), for low masses the branching ratio is dominated by the decay \(a\to\gamma\gamma\) whereas hadronic decay channels (driven mainly by \(a\to\pi^{+}\pi^{-}\pi^{0}\)) rapidly take over for masses \(m_{a}\simeq 3m_{\pi}\). Since the \(\pi^{0}\) decays promptly to two photons, the final signal topology for those channels will be \(a\to\pi^{+}\pi^{-}2\gamma\). Other relevant decay modes (not included here for simplicity) would be those involving \(\eta\) or \(\eta^{\prime}\) mesons (e.g. \(a\to\pi^{+}\pi^{-}\eta\)), which also decay promptly into two photons and would lead to a similar signature. From Fig. 10 we see that a general improvement is expected over present bounds, especially for ALP masses \(m_{a}>800\) MeV. 
Note that for ALP masses in the vicinity of \(m_{\eta}\) and \(m_{\eta^{\prime}}\) the decay width increases very rapidly, which translates into a sudden loss of sensitivity. ## VI Summary and Conclusions The DUNE experiment will be exposed to the LBNF high-intensity neutrino beam. Besides the production of pions, kaons and other light mesons (such as \(\eta\) and \(\eta^{\prime}\)), the high proton energy at LBNF will generate of a significant flux of heavier mesons, such as \(D\) and \(D_{s}\). This provides a unique opportunity to study the production of exotic particles feebly coupled to the SM, with masses between \(\mathcal{O}(10)\) MeV and \(\mathcal{O}(2)\) GeV. If unstable, these could decay into SM particles and be searched for in the DUNE near detectors, which include both a liquid argon TPC (ND-LAr) and a gaseous argon TPC (ND-GAr). The potential of DUNE to search for long-lived particles (LLPs) has been studied previously in Figure 9: DUNE sensitivity projections (one Wilson coefficient switched on at a time) for \(c_{\phi}\) (left panel) and \(c_{W}\) (right panel) as a function of \(m_{a}\), assuming \(f_{a}=1\) TeV. Our sensitivity lines are shown at 90% CL, individually for each final state topology as indicated by the labels (\(a\to\gamma\gamma\), \(a\to e^{+}e^{-}\) and \(a\to\mu^{+}\mu^{-}\)). Solid (dashed) lines correspond to ND-GAr (ND-LAr). Shaded gray regions indicate current bounds, see Sec. III. To facilitate the comparison with previous literature we provide on the right axes the corresponding limits to the so-called _photon-dominance_ and _fermion-dominance_ scenarios [46; 47] (see Ref. [13] for the mapping to our Wilson coefficients). the literature. However, most of those studies have neglected the impact of backgrounds, arguing that they could be reduced to a negligible level through appropriate selection cuts. In this sense it is worth noting that most studies in the literature consider the ND-GAr detector, which offers a superb environment for these searches thanks to the lower backgrounds expected from neutrino beam interactions. However, ND-GAr is expected to start taking data only during Phase II of the experiment. In light of this, it becomes pressing to assess the capabilities of ND-LAr for this kind of searches, which are a keystone for the BSM program at DUNE. In this work, we have first computed the background rates according to the expected angular and energy resolution for both ND-GAr and ND-LAr detectors, for decays into pairs of photons, charged leptons or into three pions, with no missing energy. Our results (summarized in Table 1) show that these may be reduced to a negligible level for channels involving pairs of muons in the final state; however, the background is significant for LLPs decaying into two photons, which dramatically reduces the expected final sensitivity. Our results also indicate a reduced ND-LAr signal efficiency for decays into \(e^{+}e^{-}\) or into three pions. In general, we note that our background study would be applicable to a wide range of BSM models with LLPs, including not only light pseudoscalars (which is the focus of this work), but also light scalars or vector bosons, as considered, for example, in the phenomenological studies in Refs. [26, 32, 35, 36, 38, 40]. We have then derived sensitivity regions for generic LLP scenarios using the decay channels outlined above, within a model-independent approach. Our results (shown in Figs. 
6 and 7, for the ND-GAr) are provided in terms of the production branching ratio and lifetime of the Figure 10: Sensitivity projections for our Benchmark Model III, as described in Sec. II.3. Sensitivities are shown separately for a search into di-photon final states and for searches into final states with three pions, which are the dominant decay modes for this scenario. Solid (dashed) lines correspond to ND-GAr (ND-LAr). Shaded gray regions are disfavored by current constraints (see also Fig. 11). Vertical lines indicate the masses of the neutral pseudoscalar mesons, which explain the sharp features in the sensitivity regions. LLP, and may be easily recast to specific scenarios, including both pseudoscalar and scalar particles. Overall, we find that DUNE has the potential to significantly improve over current constraints for LLP decays into muon pairs, where backgrounds may be reduced to a negligible level while keeping relatively high signal efficiencies. Conversely, we find that searches for decays into two photons are significantly harder, due to the large backgrounds expected from neutrino interactions, contrary to expectation. In order to put our sensitivities in context and to compare the potential of DUNE to present bounds, we then study three benchmark models involving axion-like particles (ALPs) coupling to the SM through effective operators: \((i)\) the so-called _gluon dominance_ scenario; \((ii)\) a scenario where the ALP is coupled to electroweak operators; and \((iii)\) the so-called _charming ALP_ scenario, where the ALP is coupled to right-handed up-quark currents with a non-trivial flavor structure. Our choice of scenarios is intended to demonstrate the potential of DUNE to constrain generic ALP models inducing significant couplings not only to photons (as commonly studied in the literature) but to charged leptons and mesons as well, over a wide range of masses. Overall we find that both ND-LAr and ND-GAr will be able to improve significantly over current limits in certain regions of parameter space, for decay channels involving muon pairs or multiple pions. This is particularly so in the case of ALPs produced from \(D\) decays, where the parameter space is largely unconstrained by laboratory experiments due to the intrinsic difficulties associated with the production of \(D\) mesons. However, we note that the sensitivities in the region where the ALP decays preferentially into photon pairs would be affected by the large backgrounds expected, and improving over current limits will be challenging for DUNE. Finally, we stress that if the ALPs decay preferentially into \(e^{+}e^{-}\) pairs, ND-GAr will be needed in order to improve over current constraints, while the capabilities of ND-LAr are somewhat limited by the assumed signal efficiency. In summary, we have shown that, thanks to the _high-intensity_, _high-energy_ proton beam available at LBNF, and in combination with the excellent capabilities of the _argon gas TPC near detector_, DUNE has the potential to improve significantly over current constraints for a wide landscape of BSM models with light unstable long-lived particles. We stress that similar searches using the liquid Argon TPC near detector may be possible. However, the higher backgrounds and lower efficiencies expected translate into significant reductions in sensitivity in most cases. In this regard, the availability of a gas TPC at DUNE will be a key asset in order to ensure the leadership of DUNE on LLP searches at neutrino facilities. 
We emphasize that this work uses only publicly available information from the DUNE collaboration, and that the conclusions are our own and do not represent necessarily the official views of the DUNE Collaboration. ###### Acknowledgements. We warmly thank Pilar Hernandez for her involvement in the early stages of this work, and Carlos Pena for illuminating discussions. We also thank Adrian Carmona, Luca Merlo, Laura Molina-Bueno, Christiane Scherb and Edoardo Vitagliano for useful discussions and comments. This work has received partial support from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement no. 860881-HIDDeN and the Marie Sklodowska-Curie Staff Exchange grant agreement no. 101086085-ASYMMETRY. PC acknowledges partial financial support by the Spanish Research Agency (Agencia Estatal de Investigacion) through the grant IFT Centro de Excelencia Severo Ochoa no. CEX2020-001007-S and by the grant PID2019-108892RB-I00 funded by MCIN/AEI/10.13039/501100011033. She is also supported by grant RYC2018-024240-I, funded by MCIN/AEI/10.13039/501100011033 and by "ESF Investing in your future". JM-A acknowledges support from the Plan GenT programme (grant CIDEGENT/2019/049) funded by the Generalitat Valenciana, and from the Ramon y Cajal programme (grant RYC2021-033265-I) funded by the Spanish MCIN/AEI/10.13039/501100011033 and by the EU (NextGenerationEU/PRTR). SU acknowledges support from Generalitat Valenciana through the plan GenT programme (CIDEGENT/2018/019) and from the Spanish Ministerio de Ciencia e Innovacion through the project PID2020-113644GB-I00. ## Appendix A Bounds on \(D\to\pi a\) from a reinterpretation of \(D\to\tau\nu\) data This appendix contains additional details regarding our fit to the BESIII data from Ref. [95]. The collaboration divides their data in two samples, which are fitted simultaneously: a \(\mu\)-like sample, which is dominated by the \(D^{+}\to\mu^{+}\nu\) contribution; and a \(\pi\)-like sample, which contains the \(D^{+}\to\tau^{+}\nu,\tau^{+}\to\pi^{+}\bar{\nu}\) decays. In particular, their data is binned in \(\mathrm{MM}^{2}=E_{\mathrm{miss}}^{2}-|\vec{p}_{\mathrm{miss}}|^{2}\), where \(E_{\mathrm{miss}}=E_{\mathrm{beam}}-E_{\mu(\pi)}\). Data is presented in their Fig. 3 using equally sized bins of width \(\Delta\mathrm{MM}^{2}=0.02\,\,\mathrm{GeV}^{2}\), in the range \(\mathrm{MM}^{2}\in[-0.35,0.35]\,\,\mathrm{GeV}^{2}\). The \(D^{+}\to\tau^{+}\nu\) measurement at BESIII suffers from five main background components: \(D^{+}\to K^{0}_{L}\pi^{+},D^{+}\to\pi^{0}\pi^{+},D^{+}\to K^{0}_{S}\pi^{+},D^ {+}\to\eta\pi^{+}\), and a so-called "smooth" background component, that comes mainly from other \(D\) decays (such as semileptonic decays) and continuum events. An additional (subleading) contribution to the number of events comes from \(D^{+}\to\tau^{+}\nu\) where the \(\tau^{+}\) does not decay into \(\pi^{+}\bar{\nu}\) but to other final states. In our fit, we define a \(\chi^{2}\) that fits simultaneously their data in the \(\mu\)-like and \(\pi\)-like samples. 
Specifically, we define a binned \(\chi^{2}\) function that depends on a set of nuisance parameters \(\xi\): \[\chi^{2}(\{\xi\})=\sum_{i,c,s}\frac{\left[\bar{n}_{i,c,s}-(1+\xi_{c})n_{i,c,s} \right]^{2}}{\sigma_{i,c,s}^{2}}\,, \tag{A1}\] where \(n_{i,c,s}\) (\(\bar{n}_{i,c,s}\)) indicates the predicted (observed) number of events for a given contribution \(c\) to a given sample \(s\) (\(s\)=\(\mu\)-like, or \(\pi\)-like) in the \(i\)-th bin. The data points and error bars are taken from Fig. 3 in Ref. [95], and the predicted event rates are taken as the best-fit curves provided in the same figure. The final \(\chi^{2}\) is obtained after minimization over the nuisance parameters in Eq. (A1). In doing so, we restrict the \(\mathrm{MM}^{2}\) range to \([-0.16,0.19]\,\,\mathrm{GeV}^{2}\) for the \(\mu\)-like sample, and to \([-0.16,0.15]\,\,\mathrm{GeV}^{2}\) for the \(\pi\)-like sample. This leaves us with 17 bins (15 bins) for the \(\mu\)-like (\(\pi\)-like) sample. The motivation behind this choice is to use only those points for which the error bars can be accurately extracted from the plot, avoiding the region at high \(\mathrm{MM}^{2}\) where the error bars are not provided on a linear scale. This is expected to mostly impact the fit to the \(K^{0}_{L}\pi^{+}\) background; however, we have checked that our choice of energy range allows the fit to constrain its normalization from the information of the \(\mu\)-like sample. When fitting the size of the \(D^{+}\to\tau^{+}\nu\) signal, the collaboration fixes the contributions from \(D^{+}\) decays into \(\mu^{+}\nu\), \(\pi^{0}\pi^{+}\), \(\eta\pi^{+}\), \(K^{0}_{S}\pi^{+}\), while they leave free the normalization of the \(K^{0}_{L}\pi^{+}\) and that of the smooth background. Thus, in our fit we proceed in the same way and set \[\xi_{D\to\mu\nu}=\xi_{D\to\pi\pi}=\xi_{D\to\eta\pi}=\xi_{D\to K_{S}\pi}=0\quad \text{(fixed)} \tag{A2}\] while we leave completely free the four nuisance parameters associated with \(D\to\tau\nu,\tau\to\pi\nu\), \(D\to\tau\nu,\tau\to\mathrm{non}-\pi\nu\), \(D\to K_{L}\pi\), and with the smooth background component. With this procedure, we first check that the resulting \(\chi^{2}\) gives a good fit to the extracted data within the error bars provided in Ref. [95], considering the number of degrees of freedom in the fit (\(n_{\mathrm{dof}}\)). At the best-fit, we obtain \[\chi^{2}_{\mathrm{min}}=29\quad\text{for}\quad n_{\mathrm{dof}}=28\,.\] This leads to a \(p\)-value of 41%, indicating good compatibility with the data. Indeed, our best-fit curves match very well those in Ref. [95], as they should. Next, we have checked that we approximately reproduce their fit to the \(D^{+}\to\tau^{+}\nu\). Specifically, we obtain a best-fit for the number of events \[N_{ev}(D\to\tau\nu)=128\pm 40 \tag{A3}\] which corresponds to an overall precision of about 30%. This reproduces reasonably well the result of the collaboration (\(137\pm 27\) events), albeit with a larger error bar in our case. Thus, our limits may be taken as conservative, keeping in mind that a dedicated analysis (including all data obtained in the full energy range, and a more sophisticated implementation of systematic uncertainties) would probably lead to a better result. Given the good agreement between our result and that obtained in Ref. [95] for the SM case, we then proceed to add a contribution from \(D\to\pi a\) events as a function of \(m_{a}^{2}\) (which can be directly identified with \(\text{MM}^{2}\)). 
To this end, we approximate the energy resolution of the detector in \(\text{MM}^{2}\) by fitting the distribution of \(D\to\mu\nu\) to a Gaussian. We believe this should be approximately correct, given that the signal in this case should be a delta function centered at zero. We obtain the best fit for a Gaussian with width \(\sigma\simeq 0.026~{}\text{GeV}^{2}\) and a small bias in its central value, \(\mu=0.016~{}\text{GeV}^{2}\) (in what follows, we assume both parameters are independent of \(\text{MM}^{2}\) in the energy range under consideration). The differential distribution of the expected number of events from \(D\to\pi a\) in each sample \(s\) is computed for each value of the ALP mass, as: \[\frac{dN_{s}(D\to\pi a)}{d\text{MM}^{2}}=\epsilon_{s}\epsilon_{\pi}N_{D}\text {BR}_{D\to\pi a}(f_{a},m_{a}^{2})\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(m_{ a}^{2}-\text{MM}^{2}-\mu)^{2}}{2\sigma^{2}}}P_{exit}(f_{a},m_{a})\,, \tag{A4}\] where \(N_{D}\) stands for the total number of \(D\) mesons produced, while \(\epsilon_{s}\) is the efficiency associated with each sample, \(\epsilon_{\pi}\) is the pion detection efficiency (which we set to \(\epsilon_{\pi}=90\%\) according to Ref. [95]), and \(P_{exit}\) is the probability for an ALP to exit the BESIII detector before decaying. Regarding the selection efficiencies, we take \(\epsilon_{\pi-\text{like}}=44\%\), \(\epsilon_{\mu-\text{like}}=1-\epsilon_{\pi-\text{like}}\) following Ref. [95]. The probability \(P_{exit}\) is estimated assuming that the parent \(D\) mesons are produced in pairs, in collisions with a center-of-mass energy \(\sqrt{s}=3.77~{}\text{GeV}\), and taking the detector radius \(\sim 3~{}\text{m}\), following Ref. [131]. Finally, the number of \(D\) mesons is estimated from the observed number of events and the branching ratio reported by the collaboration in Ref. [95], taking \(N_{ev}=N_{D}\text{BR}(D\to\tau\nu)\text{BR}(\tau\to\pi)\epsilon_{\pi}\), \(\text{BR}(D\to\tau\nu)_{exp}=1.2\times 10^{-3}\) and \(N_{ev}=137\). The contribution of \(D\to\pi a\) events to each bin in \(\text{MM}^{2}\) is obtained by integrating Eq. (A4) within the limits of each bin. These are subsequently added to the total number of predicted signal events, as an additional contribution to the \(\chi^{2}\) in Eq. (A1). A limit can then be obtained on the production branching ratio \(\text{BR}(D\to\pi a)\), after marginalization over the same set of nuisance parameters as in the SM case. The resulting limit is shown in Fig. 11 at 90% CL, for a \(\chi^{2}\) with \(n_{\text{dof}}=28\).
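The sketch below illustrates the main numerical ingredients of this appendix: the goodness of fit quoted above and the per-bin \(D\to\pi a\) template obtained by integrating the Gaussian of Eq. (A4) over the \(\mathrm{MM}^{2}\) bins. The overall normalization is collapsed into a single prefactor, the binning is only approximate, and the marginalization over the free nuisance parameters is omitted, so this is an illustration of the construction rather than the full fit.

```python
import numpy as np
from scipy.stats import chi2, norm

# Goodness of fit quoted above: chi2_min = 29 for 28 d.o.f.
print(chi2.sf(29.0, 28))           # ~0.41, i.e. the quoted p-value

SIGMA = 0.026   # GeV^2, MM^2 resolution extracted from the D -> mu nu peak
MU    = 0.016   # GeV^2, small bias of the reconstructed peak position

def signal_template(m_a2, n_norm, edges):
    """D -> pi a events per MM^2 bin: the Gaussian of Eq. (A4), centered at
    m_a^2 - MU, integrated over each bin.  n_norm collapses the prefactor
    eps_s * eps_pi * N_D * BR(D -> pi a) * P_exit into one number."""
    cdf = norm.cdf(edges, loc=m_a2 - MU, scale=SIGMA)
    return n_norm * np.diff(cdf)

# Approximate pi-like binning: 0.02 GeV^2 bins covering [-0.16, 0.14] GeV^2
edges = np.linspace(-0.16, 0.14, 16)
template = signal_template(m_a2=0.01, n_norm=50.0, edges=edges)

def chi2_binned(pred, obs, err):
    """Binned chi^2 of Eq. (A1) at fixed nuisance parameters; the free
    normalizations are held at their best-fit values in this sketch."""
    return np.sum(((obs - pred) / err) ** 2)
```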
2310.00180
MARL: Multi-scale Archetype Representation Learning for Urban Building Energy Modeling
Building archetypes, representative models of building stock, are crucial for precise energy simulations in Urban Building Energy Modeling. The current widely adopted building archetypes are developed on a nationwide scale, potentially neglecting the impact of local buildings' geometric specificities. We present Multi-scale Archetype Representation Learning (MARL), an approach that leverages representation learning to extract geometric features from a specific building stock. Built upon VQ-AE, MARL encodes building footprints and purifies geometric information into latent vectors constrained by multiple architectural downstream tasks. These tailored representations are proven valuable for further clustering and building energy modeling. The advantages of our algorithm are its adaptability with respect to the different building footprint sizes, the ability for automatic generation across multi-scale regions, and the preservation of geometric features across neighborhoods and local ecologies. In our study spanning five regions in LA County, we show MARL surpasses both conventional and VQ-AE extracted archetypes in performance. Results demonstrate that geometric feature embeddings significantly improve the accuracy and reliability of energy consumption estimates. Code, dataset and trained models are publicly available: https://github.com/ZixunHuang1997/MARL-BuildingEnergyEstimation
Xinwei Zhuang, Zixun Huang, Wentao Zeng, Luisa Caldas
2023-09-29T22:56:19Z
http://arxiv.org/abs/2310.00180v1
# MARL: Multi-scale Archetype Representation Learning for Urban Building Energy Modeling ###### Abstract Building archetypes, representative models of building stock, are crucial for precise energy simulations in Urban Building Energy Modeling. The current widely adopted building archetypes are developed on a nationwide scale, potentially neglecting the impact of local buildings' geometric specificities. We present Multi-scale Archetype Representation Learning (MARL), an approach that leverages representation learning to extract geometric features from a specific building stock. Built upon VQ-AE, MARL encodes building footprints and purifies geometric information into latent vectors constrained by multiple architectural downstream tasks. These tailored representations are proven valuable for further clustering and building energy modeling. The advantages of our algorithm are its adaptability with respect to the different building footprint sizes, the ability for automatic generation across multi-scale regions, and the preservation of geometric features across neighborhoods and local ecologies. In our study spanning five regions in LA County, we show MARL surpasses both conventional and VQ-AE extracted archetypes in performance. Results demonstrate that geometric feature embeddings significantly improve the accuracy and reliability of energy consumption estimates. Code, dataset and trained models are available on the project page: [https://github.com/ZixunHuang1997/MARL-BuildingEnergyEstimation](https://github.com/ZixunHuang1997/MARL-BuildingEnergyEstimation). ## 1 Introduction The built environment plays a significant role in global climate change, accounting for approximately 34% of worldwide final energy consumption in 2020 [21]. In light of the urgent worldwide issues concerning sustainability and energy efficiency, there has been a concentrated focus on research in Urban Building Energy Modeling (UBEM) [3]. UBEM has evolved into a crucial computational tool that enables the analysis and forecast of energy consumption at an urban scale. Given the impracticability or infeasibility of modeling each individual building in minute detail, developing building archetypes for the study area has become a critical determinant for precision for urban scale energy estimation. These archetypes represent specific groups of buildings sharing similar characteristics and energy performance within the area of interest. Their strategic use in UBEM allows for efficient and effective energy modeling at the urban scale. Recent studies have established global building archetypes [10, 18, 22]. However, many of these are designed at a country scale, depend on expert input [10], and don't integrate actual building geometry, leading to potential inaccuracies and limiting their scalability, particularly in areas where computational resources and data accessibility present challenges. Additionally, urban-scale energy estimation demands significant computational resources, often inaccessible to disadvantaged communities and neighborhoods. Such disparities can introduce data biases, curtail global applicability, and limit the UBEM's diversity mainly to major cities like Boston [4, 6] and San Francisco [6]. These limitations hinder the effectiveness of building archetypes in energy-efficient design and implementation. In response, we present Multi-scale Archetype Representation Learning (MARL, Figure 1), a method designed to automate local building archetype construction through representation learning. 
Our proposed method addresses the aforementioned challenges by refining the essential elements of building archetypes for UBEM. Our main contributions are summarized as follows: * We introduce building geometry into the building archetype construction process. * We provided a cost-effective solution for Urban Building Energy Modeling (UBEM). * We propose an image reconstruction-based framework that streamlines the archetype creation process. * We introduce the integration of downstream tasks into the building archetype construction process, which increases the accuracy of our models, thereby enabling more precise and reliable energy consumption estimates at an urban scale. * We conduct open-set experiments to demonstrate that our trained model possesses transferable and reusable characteristics in similar urban and geographical environments. ## 2 Related Studies ### Representation learning Representation learning offers effective strategies for dealing with high-dimensional data forms such as images and 3D models. Integral to these methods are a variety of neural compression algorithms [27], such as normalizing flows [24], variational autoencoders [16], diffusion probabilistic models [15], and generative adversarial networks [13]. Recent advancements have also seen variations of these models, such as Vector Quantized Variational Autoencoders (VQ-VAEs) [25] and Auto Decoders [20]. These techniques leverage the latent space as a compact representation where the high-dimensional data is encoded into lower-dimensional vectors, preserving the underlying features. A wealth of recent research showcases various application areas ranging from natural language processing to image classification [12, 14, 17]. ### Building archetype for energy estimation Building archetypes are a representative subset of buildings to model an entire building stock, and are critical to the process of Urban Building Energy Modeling (UBEM). The prevailing methodologies for constructing the representative building archetype leverage statistical analysis or use generalized assumptions informed by expert knowledge [5, 11], and remain a time-consuming task [11]. Existing archetypes define representative buildings based on categorical attributes such as construction material, vintage, area, stories, and energy system [10, 22]. Nevertheless, these models usually do not incorporate the actual geometries of buildings into the construction of building archetypes, thus potentially overlooking the nuanced variations and distributions of prevailing geometries among different building stocks. Recent years have witnessed exploration into data-driven methods for the creation of building archetypes [3, 19, 23], but most of these methodologies continue to exclude building geometry parameters, largely as a result of technical intricacies [5]. Instead, the building geometry is simplified to numerical data, such as areas and shape ratio, with limited studies directly addressing the actual geometry [9]. Despite the significant impact of building geometry on energy consumption ([5, 7], its exclusion raises the potential for inaccuracies and unreliability for energy estimation in UBEM. This underscores the necessity for methods capable of incorporating building geometry to represent building stock comprehensively, thereby augmenting the precision and granularity of locally tailored building archetypes. ## 3 Method Besides data collection, our method consists of 2 stages: archetype representation learning and clustering, and energy simulation. 
Our method follows the bottom-up approach for constructing a building archetype model [23] with a representation learning module to characterize the building stock. Multi-scale Architectural Representation Learning (MARL) is developed to encapsulate the implicit features of the input building stock into a compact, reduced-dimensional space, i.e., the latent space. Following this, we apply k-means clustering in the latent space to identify the representative building footprints, which are then converted into to building archetypes with the same parameters documented in PBM [10]. We then conduct energy simulations on PBM and our building archetypes and compare the aggregated energy profiles. The aim is to investigate whether utilizing real-world building geometries for a specific district enhances the accuracy of urban-scale energy estimation and to evaluate the extent to which the accuracy of such energy modeling can be improved. ### Archetype learning and clustering Our model for representation learning (Figure. 2) consists of an auto-encoder and an optional downstream task pool (DTP). The auto-encoder aims at compressing and purifying the geometric contours of the building, while the downstream tasks add a restriction and punishment on it to conserve geometric information related to vital building Figure 1: **Building Archetype Finding Pipeline.** Besides data collection, our method consists of 2 stages: archetype representation learning and clustering, and energy simulation. properties. After the representation learning is finished, we then feed all footprints in a certain region back into our trained encoder and get the clustering centers of the corresponding latent vectors to acquire archetypes in this region. #### 3.1.1 Footprint reconstruction Since the sizes of different footprints vary widely, to ensure that our model is flexible enough in its choice of data about building contours at different scales, a single footprint is scaled into 3 different sizes. Detailed implementation is described in Section 4.2. Both the encoder and decoder for the image reconstruction task are built with a CNN-based structure with residual connections. The scaled footprints are fused together and encoded into a latent vector with a dimension of \(28\times 28\times 32\) by the encoder. A vector quantizer [25] is then used to decouple the implicit representation and map it to discrete embedding vectors (a code book) prior to forwarding the vector to the decoder. During the quantization step, each continuous vector output from the encoder is matched to the nearest vector in the code book, which is then forwarded to the decoder. This helps to better control the quality and diversity of the generated data based on the clustering centers in further steps. Based on VQ-AE, our image reconstruction process incorporates both reconstruction loss and vector quantization loss in the proposed representation learning. The training objective for the reconstruction part becomes equation 1: \[L=log(p(x|z_{q}(x)))+\left\|sg(z_{e}(x))-e\right\|_{2}^{2} \tag{1}\] The first term corresponds to the reconstruction loss, while the second term represents the codebook loss. Here, we use the notations \(x\), \(z_{q}(x)\), \(z_{e}(x)\), and \(e\) to refer to the inputs, decoder inputs, encoder outputs, and embedding vectors, respectively. And the operator \(sg\) stands for the stop gradient operator. 
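A minimal PyTorch sketch of the quantization step and of the two loss terms in Eq. (1) is given below. The 32-dimensional embeddings follow the text, while the codebook size, the mean-squared-error stand-in for the reconstruction likelihood, and the straight-through gradient trick are assumptions of this sketch; the released repository should be taken as the reference implementation.

```python
import torch
import torch.nn.functional as F

class VectorQuantizer(torch.nn.Module):
    """Minimal vector quantizer in the spirit of VQ-VAE [25]; the codebook
    size is an assumed hyperparameter, the 32-dim embeddings follow the text."""
    def __init__(self, num_codes=512, dim=32):
        super().__init__()
        self.codebook = torch.nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z_e):                       # z_e: (B, 28, 28, 32)
        flat = z_e.reshape(-1, z_e.shape[-1])     # (B*28*28, 32)
        dists = torch.cdist(flat, self.codebook.weight)   # distance to each code
        idx = dists.argmin(dim=1)                 # nearest codebook entry
        z_q = self.codebook(idx).reshape(z_e.shape)
        # Codebook term of Eq. (1), || sg(z_e) - e ||^2, in mean-squared form
        codebook_loss = F.mse_loss(z_q, z_e.detach())
        # Straight-through estimator so gradients still reach the encoder
        z_q = z_e + (z_q - z_e).detach()
        return z_q, codebook_loss

def marl_loss(x, x_rec, codebook_loss):
    """Reconstruction term (an MSE stand-in for log p(x|z_q(x))) plus the
    codebook term, mirroring the two terms of Eq. (1)."""
    return F.mse_loss(x_rec, x) + codebook_loss
```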
#### 3.1.2 Downstream task pool Supervised by a downstream task pool (DTP), our MARL model excels in producing a more valuable latent space compared to normal auto-encoders, making it ideal for architectural purpose-driven applications. By leveraging the Figure 3: Examples of MARL footprint reconstruction. Figure 2: **Model Architecture for multi-scale archetype learning and clustering.** This stage consists of one auto-encoder and multiple downstream tasks. insights from the additional tasks, our model is capable of preserving implicit geometry features closely related to crucial meta information of buildings. The primary task focuses on predicting the building's program or purpose, while the secondary task involves predicting the 'vintage' of a building, defined as the year it was constructed. Some other simple tasks are incorporated as well to gain further constraints on the encoder, such as predicting building heights from image grayscales. All tasks are meticulously tailored and hold a strong connection to traditional building energy modeling. Moreover, they directly or indirectly influence and even shape architectural geometry to a significant extent. The first task aims at predicting the building program, specifically within the context of residential categories in Los Angeles. These categories are manifold, spanning from Mobile Homes, Units, and Rooming Houses to Apartments. The wide scope of categories caters to the diverse types of residential architecture prevalent within the region. For the second task, we operationalize the concept of 'vintage' as a sequence of categorical variables that encapsulate distinct temporal spans: pre-1980, 1980-2004, 2004-2013, and 2013 to the present. These divisions correspond with significant revisions to the California Building Code, serving as crucial epochs for analyzing the energy performance of structures. This segmentation is used for energy performance analysis [26]. By incorporating these tasks, our model is expected to gain a more nuanced understanding of building properties, thereby contributing to more precise and sophisticated predictions. These tasks reinforce the model's robustness and increase its applicability across a range of building types. ### Energy Simulation After generating the representative footprints, we extrude them to their corresponding heights, which are coded in greyscale. We then apply energy parameters, including the window-wall ratio, U-value of the envelope, heating type, etc., consistent with the PBM [10]. Subsequently, we employ Climate Studio, an EnergyPlus plug-in for Rhino3D, to simulate the annual energy use intensity (EUI) of each representative building under the LA airport climate data sourced from [2]. After this, we aggregate the annual EUIs for each cluster based on their respective total areas and sum these to calculate the cumulative EUI for the entire neighborhood. ## 4 Experiments ### Geographic dataset We selected five distinct neighborhoods within Los Angeles County, each encompassing an area in excess of 100 km\({}^{2}\) and comprising a dense residential building population surpassing 10,000 units. These specific areas have been graphically illustrated in Figure 4. The building footprint, alongside other building metadata including vintage, height, and building type, are derived from the Assessor Parcels Data provided by Los Angeles County [8]. 
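The use-type and vintage fields just mentioned are the labels consumed by the downstream task pool of Section 3.1.2. As a rough sketch of how such heads could constrain the latent space, we give a hedged example below; the head sizes, the channel-first latent layout, and the loss weight `w_dtp` are illustrative assumptions rather than the paper's exact configuration.

```python
import torch.nn as nn

def vintage_bin(year_built: int) -> int:
    """Map construction year to the vintage categories described above."""
    if year_built < 1980:
        return 0  # pre-1980
    if year_built < 2004:
        return 1  # 1980-2004
    if year_built < 2013:
        return 2  # 2004-2013
    return 3      # 2013-present

class DownstreamTaskPool(nn.Module):
    """Lightweight heads that read the latent map and predict building metadata."""
    def __init__(self, latent_dim=32, n_programs=10, n_vintages=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.program_head = nn.Linear(latent_dim, n_programs)  # building use type
        self.vintage_head = nn.Linear(latent_dim, n_vintages)  # construction-era bin

    def forward(self, z_q):
        # z_q: (batch, latent_dim, 28, 28) quantized latent map
        h = self.pool(z_q).flatten(1)
        return self.program_head(h), self.vintage_head(h)

def total_loss(recon_and_codebook_loss, program_logits, vintage_logits,
               program_y, vintage_y, w_dtp=0.1):
    """Weighted sum of the reconstruction objective and the downstream task losses."""
    ce = nn.CrossEntropyLoss()
    dtp = ce(program_logits, program_y) + ce(vintage_logits, vintage_y)
    return recon_and_codebook_loss + w_dtp * dtp
```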
In the interest of maintaining focus on residential aspects, we narrowed down our dataset to exclusively incorporate buildings designated for residential usage (single and multi-family buildings). Building heights were normalized into the range of 0-255 and translated into greyscale values. This color-coded scheme allows for intuitive visualization of height distribution within residential buildings. Representative samples of these building footprints are showcased in Figure 5. Regions (b), (c), and (e) consist of 6959, 7736, and 6064 footprints, respectively. Among them, 90% of randomly selected footprints are utilized for the training process of our model, while the remaining portion is allocated for validation. Regions (a) and (d) are held out, encompass 9870 and 11767 footprints, respectively, and are intended for the open-set experiments. ### Implementation details and Metrics A single footprint with dimensions of \(1410\times 1410\) is initially divided into three sizes: \(700\times 700\), \(224\times 224\), and \(112\times 112\). Subsequently, all of these sizes are uniformly resized to \(112\times 112\) before being combined into a concatenated format. The image embedding network consists of a 3-layer CNN and 1 residual block serving as the encoder, along with a vector quantizer, 3 up-sampling layers, and 1 residual block as the decoder. Figure 4: **Dataset Visualization. Selected Neighborhoods in Los Angeles County. (a) Santa Monica, Westwood, Brentwood, Beverly Hills, West Hollywood (b) Manhattan Beach, Hawthorne, El Segundo, Lawndale, Redondo Beach and Gardena (c) Rancho Palos and Rolling Hills (d) Downey, South Gate, Commerce, East Los Angeles, Pico Rivera, Whittier (e) Long Beach and Lakewood** All meta-information encoders in the downstream task pool are constructed with only 1 convolution layer and 1 fully-connected layer. The feature embedding fed into both the vector quantizer and the downstream task pool is of dimension \(28\times 28\times 32\). Regarding the training strategy for our MARL integrated with DTP, we adopted a pre-training phase focusing solely on the reconstruction loss. Following this, we fully fine-tune the entire model's weights for one epoch through a weighted sum of the reconstruction loss and the downstream task losses. **Ground truth.** Given the privacy concerns, obtaining real-world data regarding building energy consumption poses a challenge. As a workaround, we used a simulated dataset, the Integrated Multisector Multiscale Modeling (IMMM) data [26], as our ground truth for energy estimation. This dataset presents a thorough simulation of energy consumption for every building across the entire Los Angeles County. **Baseline.** For comparison, we adopt two prototype models from the Prototype Building Models (PBM) set forth by [10]: a single-family housing model and a multi-family housing model under climate zone 3C for Los Angeles County. The PBM is a well-established building archetype dataset applicable nationwide and adaptable to various climate conditions. We conduct energy simulations for both the single and multi-family models. The energy consumption estimate for the baseline is the sum of the products of energy consumption and the respective areas of the single-family and multi-family housing models. **Metrics.** To evaluate the efficacy of the proposed algorithm, we use the accuracy of the aggregated energy consumption for a selected area as the evaluation metric (Eq. 3). 
The absolute error \(AE\) is defined as the absolute difference between the total estimated energy consumption \(EC_{est}\) and the ground truth \(EC_{gt}\). With the building stock clustered and the representative footprints selected, we create a building archetype model based on these footprints, supplemented with energy-related parameters identical to those of PBM. In this way, the only varying parameter is the building geometry. Following this, we conduct energy simulations for each archetype model and aggregate the energy consumption results by the total area the cluster represents. The resulting aggregated energy consumption is compared with the ground truth and the energy baseline in Table 2. \[AE=|EC_{est}-EC_{gt}| \tag{2}\] \[Accuracy=1-\frac{AE}{EC_{gt}} \tag{3}\] ### Ablation experiments In this experimental section, we would like to answer the following questions: (1) To what extent can our MARL framework improve the accuracy of current UBEM in a fully self-supervised manner with only the reconstruction task? (2) Can our MARL framework, when constrained by downstream tasks with a more domain-specific scope, further improve the accuracy of UBEM? To answer these two questions, we evaluated the performance of our method with and without DTP in each of the three neighborhoods of Los Angeles County shown in Figure 4: (b) Manhattan Beach, etc., (c) Rancho Palos, etc., and (e) Long Beach, etc. We input the archetypes derived from our method into the energy simulations and compare the results to those derived from the PBM [10]. As residential structures represent a significant portion of energy consumption in densely populated urban zones, our analysis concentrates on these residential areas, particularly single-family housing (SFH) and multi-family housing (MFH). We feed the data from residential buildings across the various regions into our trained encoder, thereby securing the latent representation of each structure. For each representative footprint within the clustered building stock, we construct a building archetype for energy estimation. We use the building height and footprint to reconstruct the building envelope. Other specifications, such as the window-wall ratio and material, are in alignment with the PBM building archetype for zone 3C [10]. Following the archetype construction, we perform energy simulation under Los Angeles International Airport climate data as extracted from [1]. #### 4.3.1 Single archetype PBM provides only one archetype each for SFH and MFH [10], so we first compute one center of the latent vectors for SFH and one for MFH in Rancho Palos. Taking SFH as an example, once we obtain the center of the latent vectors corresponding to all the buildings in Rancho Palos, we input the samples nearest to that center as archetypes into the UBEM to obtain the average energy consumption of that region. Figure 5: Examples of the building footprints. This average energy consumption is weighted and summed over the building area to get the energy consumption of the whole region. Table 1 shows the result of this experiment. When there is only the image reconstruction task, the UBEM accuracy from our MARL-provided archetypes is 22.00 percentage points higher than that from PBM's, and when the downstream tasks are added during model training, our method is 24.12 percentage points more accurate than the traditional method. 
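As a reference for Eqs. 2-3 and the area-weighted aggregation, the snippet below reproduces the MARL + DTP row of Table 1; the function names are ours, and the numbers come directly from the table.

```python
def aggregate_energy(eui_by_archetype, area_by_archetype):
    """Area-weighted aggregation: EUI (kWh/m^2) per archetype times the floor area it represents."""
    return sum(eui * area for eui, area in zip(eui_by_archetype, area_by_archetype))

def accuracy(ec_est, ec_gt):
    """Eqs. (2)-(3): absolute error and accuracy of the aggregated estimate."""
    ae = abs(ec_est - ec_gt)
    return 1.0 - ae / ec_gt

# MARL + DTP archetypes (MFH, SFH) with their EUIs and represented areas from Table 1:
est = aggregate_energy([92.5, 87.0], [861123.64, 1194889.74])
print(round(100 * accuracy(est, 191779982), 2))  # 95.74, matching Table 1
```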
In the training of our model, our method is not supervised by the ground truth of energy consumption, and the whole process is only supervised by the self-supervision from the building footprints and the labeled supervision from the building meta information. However, it can be seen that when we consider the geometric features and architectural attributes of local buildings in a certain region, \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline \multicolumn{2}{|l|}{Archetype offered by} & EUI (\(kWh/m^{2}\)) & Building Area (\(m^{2}\)) & Energy (\(kWh\)) & Accuracy (\(\%\)) \\ \hline \hline \multirow{2}{*}{PBM [10]} & MFH & 75.14 & 861123.64 & \multirow{2}{*}{137344567} & \multirow{2}{*}{71.62} \\ \cline{2-2} \cline{5-6} & SFH & 60.79 & 1194889.74 & & \\ \hline \multirow{2}{*}{MARL (Ours)} & MFH & 89.3 & 861123.64 & \multirow{2}{*}{179539369} & \multirow{2}{*}{93.62} \\ \cline{2-2} \cline{5-6} & SFH & 85.9 & 1194889.74 & & \\ \hline MARL + DTP & MFH & 92.5 & 861123.64 & \multirow{2}{*}{183609344} & \multirow{2}{*}{**95.74**} \\ (Ours) & SFH & 87 & 1194889.74 & & \\ \hline \hline \multicolumn{2}{|l|}{Energy Consumption GT [26]} & \multicolumn{2}{c|}{2056013.38} & 191779982 & \multicolumn{1}{c|}{/} \\ \hline \multirow{2}{*}{Energy Estimation Accuracy Boosted by} & \multicolumn{2}{c|}{**Our Reconstruction Task**} & 22.00 \(\uparrow\) \\ \cline{2-2} \cline{5-6} & \multicolumn{2}{c|}{**Our Downstream Task**} & 2.12 \(\uparrow\) \\ \hline \end{tabular} \end{table} Table 1: **Experiment Results. We tried our method with single archetype generation for MFH and SFH in the region Rancho Palos etc.** \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{Region} & Energy Consumption & PBM [10] & MARL with Only & MARL Restricted by \\ & GT[26](\(kWh\)) & Accuracy(\(\%\)) & Reconstruction Task (\(\%\)) & DTP (\(\%\)) \\ \hline \hline Rancho Palo etc. & 191779982 & 71.62 & 90.36 & 18.74 \(\uparrow\) & **91.08** & 18.74 \(\uparrow\) + 0.72 \(\uparrow\) \\ \hline Long Beach etc. & 104117941 & 73.10 & **98.96** & 25.86 \(\uparrow\) & 97.58 & 25.86 \(\uparrow\) - 1.37 \(\downarrow\) \\ \hline Manhattan Beach etc. & 121545524 & 70.51 & 90.43 & 19.92 \(\uparrow\) & **92.88** & 19.92 \(\uparrow\) + 2.45 \(\uparrow\) \\ \hline \hline SUM & 417443447 & 71.66 & 92.52 & 20.86 \(\uparrow\) & **93.23** & 20.86 \(\uparrow\) + 0.70 \(\uparrow\) \\ \hline \end{tabular} \end{table} Table 2: **Experiment Results on Multiple Regions. Our estimation compared with PBM using multiple archetypes.** Figure 6: Within-cluster sum of squares for k-means clustering on latent space for different neighborhoods. Figure 7: UMAP visualization of k-means clustering on latent space of region (c) with corresponding cluster center our method is able to learn more valuable representations and produce more locally characterized archetypes, which leads to a more accurate simulation of building energy consumption. #### 4.3.2 Multiple archetype Based on our trained model, we use k-means clustering to the dimensionally reduced latent space to find representative building footprints prevalent in the selected neighborhoods. To evaluate the clustering results and determine the optimal number of clusters, the within-cluster sum of squares (WCSS) is used as an inertia measure. We select the number of clusters based on the elbow of WCSS, implying the optimal cluster number is within 2 to 5 (Figure 6). 
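The clustering step described above can be sketched as follows: the within-cluster sum of squares (scikit-learn's `inertia_`) over candidate cluster counts gives the elbow curve of Figure 6, and the member closest to each centre serves as the representative footprint. We apply `KMeans` directly to flattened latent vectors; any additional dimensionality reduction used before clustering is not reproduced here, and the helper names are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def elbow_wcss(latents, k_range=range(2, 11), seed=0):
    """Within-cluster sum of squares for candidate numbers of clusters."""
    flat = latents.reshape(len(latents), -1)  # flatten per-building latent maps
    return {k: KMeans(n_clusters=k, random_state=seed, n_init=10).fit(flat).inertia_
            for k in k_range}

def representative_indices(latents, k, seed=0):
    """For each cluster, the building whose latent vector is closest to the cluster centre."""
    flat = latents.reshape(len(latents), -1)
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(flat)
    reps = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[dist.argmin()])
    return reps
```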
With SFH numbers reaching 5537, 5317, and 5510, significantly surpassing the MFH counts of 1422, 2419, and 554 in Regions (b), (c), and (e), respectively, we opt for 4 archetypes among SFH and 2 among MFH for this experiment. We designate the center of each cluster as the representative building for that particular group. An example of the clustered latent space, along with the representative footprints, is visualized by Uniform Manifold Approximation and Projection (UMAP) in Figure 7. We further tried our algorithm on the three regions; Table 2 shows the results. The archetypes derived from our algorithm still perform significantly better than the conventional archetypes in terms of accuracy on UBEM, and the restriction imposed by the downstream tasks further improves the performance of our model in general. ### Open set Energy modeling for all buildings in a region is costly: not all regions have energy consumption data or simulations for each building, and most regions can only use one archetype per building category [10], designed at a country-wide scale. From this perspective, our approach is valuable and efficient because it can provide locale-specific building types and does not require all building energy data as labels to oversee the entire model training process. We can train on any area as long as GIS data are available. We can even directly use our trained model for encoding building footprints in an open set. The experimental results presented in Table 3 show the performance of our model in two unseen LA regions: (a) Santa Monica etc. and (d) Downey etc. Our experimental results demonstrate that despite not being trained on these regions at all, our model still performs remarkably well in encoding new architectural footprints, leading to excellent representations and archetypes that enable it to succeed in the UBEM tasks. ## 5 Conclusion In this research, we introduce Multi-scale Archetype Representation Learning (MARL) for the automated creation of building archetypes adaptable to diverse regions. This methodology incorporates building geometry into the building archetype construction process. Utilizing representation learning and downstream task restriction, we extract implicit features from the building stock and then employ k-means clustering to identify representative building footprints as archetypes. We further refine our approach by constraining the footprint reconstruction process with energy performance-related metadata such as vintage year and building use type. As a result, we propose an automated building archetype construction method. We validate the efficacy of our model by benchmarking its energy estimation performance against conventional building archetypes, both for seen and unseen neighborhoods. Our experimental outcomes demonstrate that our approach outperforms conventional models, accentuating the potential of our method in enhancing the precision and adaptability of Urban Building Energy Modeling, and presenting an advancement in the domain of building archetype development and building energy modeling. By integrating building geometries into the archetype construction process, architects and urban planners can make informed choices about neighborhood configurations, building orientations, and other design elements impacting energy consumption. Our method can also identify areas with suboptimal energy performance, indicating opportunities for retrofitting and design modifications. 
Furthermore, by focusing on locale-specific geometric nuances, we offer a nuanced approach to building archetypes, fostering designs that harmonize with local characteristics. This promotes energy efficiency and locale-specific design, and provides insights into the wider implications of building morphology on energy and environmental factors. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{Region} & GT[26] & \multicolumn{2}{c|}{PBM [10]} & \multicolumn{2}{c|}{MARL+DTP (Ours)} \\ \cline{2-7} & \((kWh)\) & \((kWh)\) & \((\%)\) & \((kWh)\) & \((\%)\) \\ \hline \hline Downey etc. & 187349182 & 144685496 & 77.23 & 201129362 & **92.64** & 15.42 \(\uparrow\) \\ \hline Santa Monica etc. & 211891201 & 151819183 & 71.65 & 192191917 & **90.70** & 19.05 \(\uparrow\) \\ \hline \hline SUM & 399240383 & 296504678 & 74.27 & 393321279 & **98.52** & 24.25 \(\uparrow\) \\ \hline \end{tabular} \end{table} Table 3: **Open Set Experiment Results.** Energy consumption estimation in unseen regions.
2309.04285
Andromeda's Parachute: Time Delays and Hubble Constant
The gravitational lens system PS J0147+4630 (Andromeda's Parachute) consists of four quasar images ABCD and a lensing galaxy. We obtained $r$-band light curves of ABCD in the 2017$-$2022 period from monitoring with two 2-m class telescopes. Applying state-of-the-art curve shifting algorithms to these light curves led to measurements of time delays between images, and the three independent delays relative to image D are accurate enough to be used in cosmological studies (uncertainty of about 4%): $\Delta t_{\rm{AD}}$ = $-$170.5 $\pm$ 7.0, $\Delta t_{\rm{BD}}$ = $-$170.4 $\pm$ 6.0, and $\Delta t_{\rm{CD}}$ = $-$177.0 $\pm$ 6.5 d, where image D is trailing all the other images. Our finely sampled light curves and some additional fluxes in the years 2010$-$2013 also demonstrated the presence of significant microlensing variations. From the measured delays relative to image D and typical values of the external convergence, recent lens mass models yielded a Hubble constant that is in clear disagreement with currently accepted values around 70 km s$^{-1}$ Mpc$^{-1}$. We discuss how to account for a standard value of the Hubble constant without invoking the presence of an extraordinary high external convergence.
Vyacheslav N. Shalyapin, Luis J. Goicoechea, Karianne Dyrland, Håkon Dahle
2023-09-08T12:08:31Z
http://arxiv.org/abs/2309.04285v1
# Andromeda's Parachute: Time Delays and Hubble Constant ###### Abstract The gravitational lens system PS J0147+4630 (Andromeda's Parachute) consists of four quasar images ABCD and a lensing galaxy. We obtained \(r\)-band light curves of ABCD in the 2017\(-\)2022 period from monitoring with two 2-m class telescopes. Applying state-of-the-art curve shifting algorithms to these light curves led to measurements of time delays between images, and the three independent delays relative to image D are accurate enough to be used in cosmological studies (uncertainty of about 4%): \(\Delta t_{\rm AD}=-170.5\)\(\pm\) 7.0, \(\Delta t_{\rm BD}=-170.4\)\(\pm\) 6.0, and \(\Delta t_{\rm CD}=-177.0\)\(\pm\) 6.5 d, where image D is trailing all the other images. Our finely sampled light curves and some additional fluxes in the years 2010\(-\)2013 also demonstrated the presence of significant microlensing variations. From the measured delays relative to image D and typical values of the external convergence, recent lens mass models yielded a Hubble constant that is in clear disagreement with currently accepted values around 70 km s\({}^{-1}\) Mpc\({}^{-1}\). We discuss how to account for a standard value of the Hubble constant without invoking the presence of an extraordinary high external convergence. cosmological parameters -- gravitational lensing: strong -- quasars: individual (PS J0147+4630) Vyacheslav N. Shalyapin, Luis J. Goicoechea, Karianne Dyrland, Hakon Dahle ## 1 Introduction Optical frames from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; Chambers et al., 2019) led to the serendipitous discovery of the strong gravitational lens system with a quadruply-imaged quasar (quad) PS J0147+4630 (Berghea et al., 2017). Due to its position in the sky and the spatial arrangement of the four quasar images, this quad is also called Andromeda's Parachute (e.g., Rubin et al., 2018). The three brightest images (A, B and C) form an arc that is about 3'' from the faintest image D, and the main lens galaxy G is located between the bright arc and D. This configuration is clearly seen in the left panel of Figure 1, which is based on _Hubble Space Telescope_ (\(HST\)) data. Early optical spectra of the system confirmed the gravitational lensing phenomenon and revealed the broad absorption-line nature of the quasar, which has a redshift \(z_{\rm s}\)\(\sim\) 2.36 (Lee, 2017; Rubin et al., 2018). Lee (2018) also performed the first attempt to determine the redshift of G from spectroscopic observations with the 8.1 m Gemini North Telescope (GNT). An accurate reanalysis of these GNT data showed that the first estimate of the lens redshift was biased, by enabling better identification of G as an early-type galaxy at \(z_{\rm l}\) = 0.678 \(\pm\) 0.001 with stellar velocity dispersion \(\sigma_{\rm l}\) = 313 \(\pm\) 14 km s\({}^{-1}\) (Goicoechea & Shalyapin, 2019), in good agreement with the recent measurements of \(z_{\rm l}\) and \(\sigma_{\rm l}\) by Mozumdar et al. (2023). As far as we know, the quasar PS J0147+4630 is the brightest source in the sky at redshifts \(z>1.4\) (apart from transient events such as gamma-ray bursts), and its four optical images can be easily resolved with a ground-based telescope in normal seeing conditions. 
Thus, it is a compelling target for various physical studies based on high-resolution spectroscopy (e.g., Rubin et al., 2018) and detailed photometric monitoring (e.g., Lee, 2018). Early two-season monitoring campaigns with the 2.0 m Liverpool Telescope (LT; Goicoechea and Shalyapin, 2019) and the 2.5 m Nordic Optical Telescope (NOT; Dyrland, 2019) provided accurate optical light curves of all quasar images, as well as preliminary time delays and evidence of microlensing-induced variations. A deeper look at the optical variability of Andromeda's Parachute is of great importance, since robust time delays and well-observed microlensing variations can be used to determine cosmological parameters (e.g., Treu and Marshall, 2016) and the structure of the quasar accretion disc (e.g., Schmidt and Wambsganss, 2010). This paper is organized as follows. In Sect. 2, we present combined LT and NOT light curves of the four images of PS J0147+4630 spanning six observing seasons from 2017 to 2022. In Sect. 3, using these optical light curves, we carefully analyse the time delays between images and the quasar microlensing variability. In Sect. 4, we discuss the Hubble constant (\(H_{0}\)) value from the measured time delays and lens mass models. Our main conclusions are included in Sect. 5. ## 2 New Optical Light Curves We monitored PS J0147+4630 with the LT from 2017 August to 2022 October using the IO:O optical camera with a pixel scale of \(\sim\)0\(\farcs\)30. Each observing night, a single 120 s exposure was taken in the Sloan \(r\)-band filter, and over the full monitoring period, 212 \(r\)-band frames were obtained. The LT data reduction pipeline carried out three basic tasks: bias subtraction, overscan trimming, and flat fielding. Additionally, the IRAF software1(Tody, 1986, 1993) allowed us to remove cosmic rays and bad pixels from all frames. We extracted the brightness of the four quasar images ABCD through PSF fitting, using the IMFITFITS software (McLeod et al., 1998) and following the scheme described by Goicoechea and Shalyapin (2019). Table 1 includes the position and magnitudes of the PSF star, as well as of other relevant field stars. These data are taken from the Data Release 1 of Pan-STARRS2(Flewelling et al., 2020). Our photometric model consisted of four point-like sources (ABCD) and a de Vaucouleurs profile convolved with the empirical PSF (lensing galaxy G). Positions of components with respect to A and structure parameters of G were constrained from \(HST\) data (Shajib et al., 2019, 2021). Footnote 1: [https://iraf-community.github.io/](https://iraf-community.github.io/) Footnote 2: [http://panstarrs.stsci.edu](http://panstarrs.stsci.edu) We also selected six non-variable blue stars in the field of PS J0147+4630 and performed PSF photometry on five of them (see the calibration stars Call1-Cal5 in Table 1; Cal6 is a saturated star in LT frames). For each of the five calibration stars, we calculated its average magnitude within the monitoring period and magnitude deviations in individual frames (by subtracting the average). In each individual frame, the five stellar magnitude deviations were averaged together to calculate a single magnitude offset, which was then subtracted from the magnitudes of quasar images. After this photometric calibration, we removed 22 observing epochs in which quasar magnitudes deviate appreciably from adjacent values. 
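A compact sketch of the calibration just described is given below, assuming the magnitudes are arranged as epoch-by-star and epoch-by-image arrays; this layout and the function names are our assumptions, not part of the original pipeline.

```python
import numpy as np

def frame_offsets(cal_mags):
    """cal_mags: array of shape (n_epochs, n_stars) with magnitudes of the
    non-variable calibration stars. Returns one magnitude offset per frame."""
    star_means = np.nanmean(cal_mags, axis=0)   # average magnitude of each star over the campaign
    deviations = cal_mags - star_means          # per-frame deviation of each star
    return np.nanmean(deviations, axis=1)       # single offset per frame

def calibrate(image_mags, cal_mags):
    """image_mags: array of shape (n_epochs, n_images) with PSF magnitudes of images A-D."""
    return image_mags - frame_offsets(cal_mags)[:, None]
```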
Thus, the final LT \(r\)-band light curves are based on 190 frames (epochs), and \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline Epocha & Ab & err(A)b & Bb & err(B)b & Cb & Cb & Db & err(D)b & Sb & err(S)b & Telc \\ \hline 7970.051 & 15.945 & 0.005 & 16.174 & 0.007 & 16.616 & 0.008 & 18.188 & 0.017 & 15.410 & 0.005 & LT \\ 7976.081 & 15.944 & 0.008 & 16.189 & 0.009 & 16.613 & 0.012 & 18.201 & 0.024 & 15.412 & 0.007 & LT \\ 7982.116 & 15.961 & 0.006 & 16.195 & 0.007 & 16.628 & 0.009 & 18.228 & 0.018 & 15.413 & 0.005 & LT \\ 7985.157 & 15.948 & 0.012 & 16.191 & 0.012 & 16.608 & 0.014 & 18.221 & 0.019 & 15.396 & 0.017 & NOT \\ 7991.048 & 15.956 & 0.006 & 16.204 & 0.007 & 16.630 & 0.009 & 18.234 & 0.018 & 15.410 & 0.005 & LT \\ \hline \end{tabular} Note. – Table 2 is published in its entirety in the machine–readable format. A portion is shown here for guidance regarding its form and content. \end{table} Table 2: New \(r\)–band light curves of PS J0147+4630ABCD and the control star S. Figure 1: Left: Quasar images ABCD and main lens galaxy G of PS J0147+4630 from a public \(HST\)–WFC3 frame of the system in the \(F814W\) band. Right: LT–NOT light curves of PS J0147+4630 from its discovery to 2022 October 30. The \(r\)–band magnitudes of images B, C, and D, and the control star are offset by \(-0.15\), \(-0.3\), \(-1.8\), and \(+1.2\), respectively, to facilitate comparison between them and with image A. the typical uncertainties in the light curves of the quasar images and control star (see Table 1) were estimated from magnitude differences between adjacent epochs separated by no more than 4.5 d (Goicoechea & Shalyapin, 2019). We derived typical errors of 0.0062 (A), 0.0077 (B), 0.0097 (C), 0.0197 (D), and 0.0058 (control star) mag. For the control star, we have also verified that its typical error practically coincides with the standard deviation of all measures (0.0055 mag). To obtain photometric uncertainties at each observing epoch, the typical errors were scaled by the relative signal-to-noise ratio of the PSF star (Howell, 2006). The optical monitoring of PS J0147+4630 with the NOT spanned from 2017 August to 2019 December. We used the ALFOSC camera with a pixel scale of \(\sim\)0\(\farcs\)21 and the \(R\)-Bessel filter. This passband is slightly redder than the Sloan \(r\) band. Each observing night, we mainly took three exposures of 30 s each under good seeing conditions. The full-width at half-maximum (FWHM) of the seeing disc was about 1\(\farcs\)0 (we also estimated FWHM seeing = 1\(\farcs\)35 \(\pm\) 0\(\farcs\)15 from LT frames), and we collected 298 individual frames over the entire monitoring campaign. After a standard data reduction, IMFITITS PSF photometry yielded magnitudes for the quasar images (see above for details on the photometric model). To avoid biases in the combined LT-NOT light curves, the same photometric method was applied to LT and NOT frames. This method differs from that of Dyrland (2019), who used the DAOPHOT package in IRAF (Stetson, 1987; Massey & Davis, 1992) to extract magnitudes from NOT frames. The six calibration stars in Table 1 were used to adequately correct quasar magnitudes (see above), and we were forced to remove 17 individual frames leading to magnitude outliers. We then combined \(R\)-band magnitudes measured on the same night to obtain photometric data of the lensed quasar and control star at 77 epochs. Again, typical errors were derived from magnitudes at adjacent epochs that are separated \(<\) 4.5 d. 
This procedure led to uncertainties of 0.0122 (A), 0.0122 (B), 0.0144 (C), 0.0197 (D), and 0.0170 (control star) mag. Errors at each observing epoch were calculated in the same way as for the LT light curves. As a last step, we combined the \(r\)-band LT and \(R\)-band NOT light curves. If we focus on the quasar images and consider \(rR\) pairs separated by no more than 2.5 d, the values of the average colour \(\langle r-R\rangle\) are 0.0565 (A), 0.0616 (B), 0.0546 (C), and 0.0652 (D). Brightness records of the ABC images are more accurate than those of D, and thus we reasonably take the average colours of ABC to estimate a mean \(r-R\) offset of 0.0576 mag. After correcting the \(R\)-band curves of the quasar for this offset, we obtain the new records in Table 2. Table 2 contains \(r\)-band magnitudes of the quasar images and the control star at 267 observing epochs (MJD\(-\)50 000). In Figure 1, we also display our new 5.2-year light curves. ## 3 Time delays and microlensing signals Previous efforts focused on early monitorings with a single telescope, trying to estimate delays between the image A and the other quasar images, \(\Delta t_{\rm AX}=t_{\rm A}-t_{\rm X}\) (X = B, C, D), and find microlensing signals (Dyrland, 2019; Goicoechea & Shalyapin, 2019)1. Here, we use the new light curves in Section 2 along with state-of-the-art curve-shifting algorithms to try to robustly measure time delays between images. At the end of this section, we also discuss the extrinsic (microlensing) variability of the quasar. Footnote 1: Goicoechea & Shalyapin (2019) used the notation \(\Delta t_{\rm AX}=t_{\rm X}-t_{\rm A}\) rather than that defined in this paper and Dyrland (2019) As is clear from Figure 1, there are short time delays between images ABC, while it is hard to get an idea about the \(\Delta t_{\rm AD}\) value by eye. Fortunately, there are several cross-correlation techniques to measure time delays between light curves containing microlensing variations (e.g., Liao et al., 2015, and references therein), and thus we considered PyCS3 curve-shifting algorithms2(Tewes et al., 2013; Millon et al., 2020, 2020) to obtain reliable time delays of PS J0147+4630. PyCS3 is a well-tested software toolbox to estimate time delays between images of gravitationally lensed quasars, and we focused on the \(\chi^{2}\) technique, assuming that the intrinsic signal and the extrinsic ones can be modelled as a free-knot spline (FKS). This technique shifts the four light curves simultaneously (ABCD comparison) to better constrain the intrinsic variability, and relies on an iterative nonlinear procedure to fit the four time shifts and splines that minimise the \(\chi^{2}\) between the data and model (Tewes et al., 2013). Results depend on the initial guesses for the time shifts, so it is necessary to estimate the intrinsic variance of the method using a few hundred initial shifts randomly distributed within reasonable time intervals. In addition, a FKS is characterised by a knot step, which represents the initial spacing between knots. The model consists of an intrinsic spline with a knot step \(\eta\) and four independent extrinsic splines with \(\eta_{\rm ml}\) that account for the microlensing variations in each quasar image (Millon et al., 2020). Footnote 2: [https://gitlab.com/cosmograil/PyCS3](https://gitlab.com/cosmograil/PyCS3) To address the intrinsic variability, we considered three \(\eta\) values of 30, 50 and 70 d. 
Intrinsic knot steps shorter than 30 d fit the observational noise, whereas \(\eta\) values longer than 70 d do not fit the most rapid variations of the source quasar. Intrinsic variations are usually faster than extrinsic ones, and additionally, the software works fine when the microlensing knot step is significantly longer than \(\eta\). Therefore, the microlensing signals were modelled as free-knot splines with \(\eta_{\rm ml}=350{-}400\) d (i.e., values intermediate between those shown in Table 2 of Millon et al., 2020). We also generated 500 synthetic (mock) light curves of each quasar image, optimised every mock ABCD dataset, and checked the similarity between residuals from the fits to the observed curves and residuals from the fits to mock curves. The comparison of residuals was made by means of two statistics: standard deviation and normalised number of runs \(Z_{\rm r}\) (see details in Tewes et al., 2013). For \(\eta=50\) d and \(\eta_{\rm ml}=400\) d, histograms of residuals derived from mock curves (grey) and from the LT-NOT light curves of PS J0147+4630 are included in the top panels of Figure 2. It is apparent that the standard deviations through the synthetic and the observed curves match very well. Additionally, the bottom panels of Figure 2 show distributions of \(Z_{\rm r}\) from synthetic light curves (grey) for \(\eta=50\) d and \(\eta_{\rm ml}=400\) d. These bottom panels also display the \(Z_{\rm r}\) values from the observations (vertical lines), which are typically located at \(\sim\)0.4\(\sigma\) of the mean values of the synthetic distributions. Four pairs of (\(\eta\), \(\eta_{\rm ml}\)) values (see above) led to the set of time delays in Figure 3. We have verified that other feasible choices for \(\eta_{\rm ml}\) (e.g., \(\eta_{\rm ml}=200\) d) do not substantially modify the results in this figure. The black horizontal bars correspond to 1\(\sigma\) confidence intervals after a marginalisation over results for all pairs of knot steps, and those in the left panels and bottom panels of Figure 3 are included in Table 3. We finally adopted the time delays in Table 3, which are symmetric about central values and useful for subsequent studies. It seems to be difficult to accurately determine delays between the brightest images ABC because they are really short. To robustly measure \(\Delta t_{\rm AC}\) in a near future, we will most likely need to follow a non-standard strategy focused on several time segments associated with strong intrinsic variations and weak extrinsic signals. Fortunately, we find an accurate and reliable value of \(\Delta t_{\rm AD}\) (uncertainty of about 4%), confirming the early result from two monitoring seasons with the NOT and a technique different to that we used in this paper (Dyrland, 2019). It is also worth mentioning that the dispersion method ignoring microlensing variations (the simplest approximation with fewer free parameters; Pelt et al., 1996) produces an optimal AD delay separated by only 10 days from that obtained with PyCS3. We also note that \(\Delta t_{\rm BD}\) and \(\Delta t_{\rm CD}\) have errors of 3.5\(-\)3.7%, and thus we present accurate values of the three independent time delays relative to image D. An image comparison spanning 13 years is also depicted in Figure 4. We have downloaded five \(r\)-band warp frames of PS J0147+4630 that are included in the Data Release 2 of Pan-STARRS. 
These Pan-STARRS frames were obtained on three nights in the 2010\(-\)2013 period, i.e., a few years before the discovery of the lens system. Two frames are available on two of the three nights, so rough photometric uncertainties through average intranight variations are 0.012 (A), 0.008 (B), 0.019 (C), and 0.033 (D) mag. To discuss the differential microlensing variability of the images BCD Figure 2: Top: Distributions of FKS fit residuals for \(\eta=50\) d and \(\eta_{\rm ml}=400\) d. The grey histograms represent the distributions of residuals from the fits to 500 synthetic light curves of each image, while the red, blue, green and magenta histograms correspond to the distributions of residuals from the fits to the LT–NOT light curves. Bottom: Normalised number of runs \(Z_{\rm r}\) for the synthetic data (grey histograms) and the observed brightness records (red, blue, green and magenta vertical lines). with respect to A, Figure 4 shows the original curve of A along with shifted curves of BCD. We used the central values of the delays relative to image A and constant magnitude offsets to shift curves. The offsets \(\Delta m_{\rm AB}\), \(\Delta m_{\rm AC}\), and \(\Delta m_{\rm AD}\) are average differences between magnitudes of A and those of B, C, and D, respectively. The global shapes 
(2019, 2021) was \(\gamma\), but we have renamed it as \(\beta\) to avoid confusion between this index and the shear We first considered Shajib et al.'s solution, a flat \(\Lambda\)CDM (standard) cosmology with matter and dark energy densities of \(\Omega_{\rm M}=0.3\) and \(\Omega_{\Lambda}=0.7\), respectively6, updated redshifts \(z_{\rm l}=0.678\)(Goicoechea and Shalyapin, 2019) and \(z_{\rm s}=2.357\)(based on emission lines that are observed at near--IR wavelengths), and the time delay in the third column of Table 3 to calculate \(H_{0}^{\rm model}\) and put it into perspective. We obtained \(H_{0}^{\rm model}\) = 100 \(\pm\) 10 km s\({}^{-1}\) Mpc\({}^{-1}\), which significantly exceeds a concordance value of \(\sim\)70 km s\({}^{-1}\) Mpc\({}^{-1}\)(e.g., Jackson, 2015). If additional mass along the line of sight is modelled as an external convergence \(\kappa_{\rm ext}\), then \(H_{0}^{\rm true}=H_{0}^{\rm model}(1-\kappa_{\rm ext})\)(e.g., Rusu et al., 2017). The factor \(1-\kappa_{\rm ext}\) should be \(\sim\)0.7 (\(\kappa_{\rm ext}\sim 0.3\)) to decrease \(H_{0}\) until accepted values. Therefore, the external convergence required to solve the \(H_{0}\) crisis is an order of magnitude higher than typical values of \(\kappa_{\rm ext}\)(e.g., Rusu et al., 2017; Birrer et al., 2020). Footnote 6: Results do not change appreciably for values of \(\Omega_{\rm M}\) and \(\Omega_{\Lambda}\) slightly different from those adopted here The Hubble constant can be also inferred from another lens mass solution based on approaches similar to those of Shajib et al. Adopting a standard cosmology and updated redshifts (see above), the soluti Figure 4: LT–NOT data (smaller symbols) plus photometric data from Pan-STARRS \(r\)–band frames in 2010\(-\)2013 (larger symbols). The original brightness record of A is compared with shifted light curves of B, C, and D. To shift the BCD light curves, we apply the corresponding time delays and constant magnitude offsets (see main text for details). and the three time delays relative to image D (last three columns in Table 3) led to \(H_{0}^{\rm model}\) values in the range 116 to 131 km s\({}^{-1}\) Mpc\({}^{-1}\). Thus, Schmidt et al.'s solution with power-law index \(\beta\) = 2.08 \(\pm\) 0.02 produces even higher \(H_{0}^{\rm model}\) values than those from Shajib et al.'s solution. Although the \(H_{0}\) crisis may be related to an inappropriate (SPLE + ES) lens scenario or a very high external convergence, we have sought for a new mass reconstruction using astrometric and time-delay constraints, a SPLE + ES scenario, updated redshifts, a standard cosmology, and \(H_{0}^{\rm model}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\). In presence of a typical (weak) external convergence, the \(H_{0}^{\rm true}\) value would be consistent with accepted ones. Our standard astrometric constraints consisted of the \(HST\) positions of ABCD (with respect to G at the origin of coordinates; Shajib et al., 2019, 2021). SPLE + ES mass models of quads usually indicate the existence of an offset between the centre of the SPLE and the light centroid of the galaxy (e.g., Sluse et al., 2012; Shajib et al., 2019, 2021). Hence, instead of formal astrometric errors for G, we adopted \(\sigma_{x}=\sigma_{y}=0\farcs 04\). This uncertainty level equals the root-mean-square of mass/light positional offsets for most quads in the sample of Shajib et al. In addition to astrometric data, the set of constraints incorporated the LT-NOT time delays relative to image D (see Table 3). 
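For reference, the external-convergence correction quoted above is a one-line rescaling; with the Shajib et al. solution (\(H_{0}^{\rm model}\) of about 100 km s\({}^{-1}\) Mpc\({}^{-1}\)), reaching a value near 70 km s\({}^{-1}\) Mpc\({}^{-1}\) indeed requires \(\kappa_{\rm ext}\sim 0.3\). The function name is ours.

```python
def h0_true(h0_model, kappa_ext):
    """Correct a model-inferred Hubble constant for line-of-sight external convergence."""
    return h0_model * (1.0 - kappa_ext)

print(h0_true(100.0, 0.3))  # 70.0 km/s/Mpc
```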
The number of observational constraints and the number of model parameters were 13 and 10, respectively. For three degrees of freedom, the GRAVLENS/LENSMODEL software7(Keeton, 2001, 2004) led to the 1\(\sigma\) intervals in Table 4 (\(\chi^{2}=3.56\) for the best fit). Footnote 7: [http://www.physics.rutgers.edu/~keeton/gravlens/](http://www.physics.rutgers.edu/~keeton/gravlens/) While our solution for the mass of the early-type galaxy G is characterised by a convergence a little shallower than isothermal (\(\beta<2\); see Table 4), Shajib et al.'s and Schmidt et al.'s solutions for the surface mass density are more centrally concentrated (\(\beta\geq 2\)), suggesting this is a key reason to infer such high values of \(H_{0}^{\rm model}\) from previous models (e.g., Refsdal and Surdej, 1994; Kochanek and Schechter, 2004; Jackson, 2015). The only issue with all SPLE + ES mass models is the existence of a significant mass/light misalignment, i.e., the light and mass distributions of the lens galaxy do not match. This misalignment could be genuine or due to an oversimplification of the lens scenario (e.g., Sluse et al., 2012; Shu et al., 2016; Gomer and Williams, 2021). Most early-type galaxies reside in overdense regions, so external tidal fields in their vicinity are expected to have relatively high amplitudes. External shear strengths for quads exceeding 0.1 are consistent with N-body simulations and semianalytic models of galaxy formation (Holder and Schechter, 2003). Using a model consisting of a singular isothermal elliptical potential and external shear, Luhtaru et al. (2021) have also shown that PS J0147+4630 is a shear-dominated system. ## 5 Conclusions In this paper, we performed a comprehensive analysis of the optical variability of the quadruply-imaged quasar PS J0147+4630. Well-sampled light curves from its discovery in 2017 to 2022 were used to robustly measure the three time delays relative to image D. However, these light curves did not allow us to accurately (in terms of fractional uncertainty) determine the very short time delays between the three bright images ABC forming a compact arc. Additionally, the microlensing-induced variation of the C image (with respect to A) was particularly large in the period 2017\(-\)2022. Combining our new brightness records with quasar fluxes from Pan-STARRS imaging in 2010\(-\)2013, the extended light curves also revealed significant long-term microlensing effects. A microlensing analysis of current data and future light curves from a planned optical multi-band monitoring is expected to lead to important constraints on the \begin{table} \begin{tabular}{c c c c c c} \hline \hline \(\beta\) & \(b\) (\({}^{\prime\prime}\)) & \(e\) & \(\theta_{e}\) (\({}^{\circ}\)) & \(\gamma\) & \(\theta_{\gamma}\) (\({}^{\circ}\)) \\ \hline 1.86 \(\pm\) 0.07 & 1.878 \(\pm\) 0.018 & 0.170 \(\pm\) 0.045 & \(-\)70.8 \(\pm\) 3.5 & 0.177 \(\pm\) 0.019 & 10.8 \(\pm\) 0.7 \\ \hline \end{tabular} Note. – We consider \(HST\) astrometry and LT–NOT time delays as constraints. We also adopt updated redshifts, a standard cosmology, and \(H_{0}^{\rm model}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\). Position angles (\(\theta_{e}\) and \(\theta_{\gamma}\)) are measured east of north, and \(\beta\), \(b\), \(e\), and \(\gamma\) denote power–law index, mass scale and ellipticity of the SPLE, and external shear strength, respectively. We show 68% (1\(\sigma\)) confidence intervals. \end{table} Table 4: SPLE + ES mass model of PS J0147+4630. 
spatial structure of the quasar accretion disc (Eigenbrod et al., 2008; Poindexter et al., 2008; Cornachione et al., 2020; Goicoechea et al., 2020). From \(HST\) imaging of the quad, Shajib et al. (2019, 2021) and Schmidt et al. (2023) have carried out reconstruction of the lensing mass from an Sple + ES scenario. However, using updated redshifts of the source and lens (and assuming a standard cosmology), their mass reconstructions along with measured delays relative to image D led to an unacceptably large value of the Hubble constant. Although the integrated mass from objects along the line of sight to PS J0147+4630 is still unknown, an unexpected (unusually high) external convergence is required to fix this \(H_{0}\) issue. To try to overcome the \(H_{0}\) crisis, we have sought and found a new mass model that is consistent with astrometric and time-delay constraints, a typical external convergence, and currently accepted values for \(H_{0}\) around 70 km s\({}^{-1}\) Mpc\({}^{-1}\)(e.g., see Fig. 2 of Di Valentino et al., 2021). Time delays are very sensitive to the slope of the mass profile of the main lens galaxy G (e.g., Kochanek and Schechter, 2004), and the new model incorporates a surface mass density less centrally concentrated than previous ones. Alternatively, the Sple + ES lens scenario might be an oversimplification of the actual one, since all Sple + ES models indicate that there is a mass/light misalignment. While this misalignment may be true, it could also be due to the presence of non-modelled components such as substructures and/or companions of G (e.g., Sluse et al., 2012; Gomer and Williams, 2021). Further refinement of the lens scenario along with an extension and improvement of the set of observational constraints (future deep photometry and spectroscopy is a pending task of special relevance) will contribute to an accurate determination of \(H_{0}\) and other cosmological parameters (e.g., Bonvin et al., 2017; Birrer et al., 2020). The forthcoming Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory should provide the strong lens community with a strong increase in the number of known lensed quasars with measured time delays. To be able to utilise such a large increase in the statistical sample to provide correspondingly precise and accurate measurements of \(H_{0}\), it is crucial to reliably identify the systems with more complex lens scenarios that could otherwise bias the \(H_{0}\) measurement. PS J0147+4630 provides an interesting case study in this respect. We thank Martin Millon for making publicly available a Jupiter notebook that has greatly facilitated the use of the PyCS3 software. We also thank anonymous comments and suggestions to a preliminary version of this manuscript, which have helped us to build the current version. This paper is based on observations made with the Liverpool Telescope (LT) and the Nordic Optical Telescope (NOT). The LT is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. The NOT is operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. 
The data presented here were in part obtained with ALFOSC, which is provided by the Instituto de Astrofisica de Andalucia (IAA) under a joint agreement with the University of Copenhagen and NOTSA. We thank the staff of both telescopes for a kind interaction. We have also used imaging data taken from the Pan-STARRS archive and the Barbara A. Mikulski archive for the NASA/ESA Hubble Space Telescope, and we are grateful to all institutions developing and funding such public databases. VNS would like to thank the Universidad de Cantabria (UC) and the Spanish AEI for financial support for a long stay at the UC in the period 2022-2023. HD acknowledges support from the Research Council of Norway. This research has been supported by the grant PID2020-118990GB-I00 funded by MCIN/AEI/10.13039/501100011033. Liverpool2m (IO:O), NOT (ALFOSC), PS1, HST (WFC3) IRAF (Tody, 1986, 1993), IMFITFITS (McLeod et al., 1998), Python ([https://www.python.org/](https://www.python.org/)), PyCS3 ([https://gitlab.com/cosmograil/PyCS3](https://gitlab.com/cosmograil/PyCS3)),GRAVLENS/LENSMODEL([http://www.physics.rutgers.edu/keeton/gravlens/](http://www.physics.rutgers.edu/keeton/gravlens/)).
2310.00194
Improving Planning with Large Language Models: A Modular Agentic Architecture
Large language models (LLMs) demonstrate impressive performance on a wide variety of tasks, but they often struggle with tasks that require multi-step reasoning or goal-directed planning. Both cognitive neuroscience and reinforcement learning (RL) have proposed a number of interacting functional components that together implement search and evaluation in multi-step decision making. These components include conflict monitoring, state prediction, state evaluation, task decomposition, and orchestration. To improve planning with LLMs, we propose an agentic architecture, the Modular Agentic Planner (MAP), in which planning is accomplished via the recurrent interaction of the specialized modules mentioned above, each implemented using an LLM. MAP improves planning through the interaction of specialized modules that break down a larger problem into multiple brief automated calls to the LLM. We evaluate MAP on three challenging planning tasks -- graph traversal, Tower of Hanoi, and the PlanBench benchmark -- as well as an NLP task requiring multi-step reasoning (strategyQA). We find that MAP yields significant improvements over both standard LLM methods (zero-shot prompting, in-context learning) and competitive baselines (chain-of-thought, multi-agent debate, and tree-of-thought), can be effectively combined with smaller and more cost-efficient LLMs (Llama3-70B), and displays superior transfer across tasks. These results suggest the benefit of a modular and multi-agent approach to planning with LLMs.
Taylor Webb, Shanka Subhra Mondal, Ida Momennejad
2023-09-30T00:10:14Z
http://arxiv.org/abs/2310.00194v4
# A Prefrontal Cortex-inspired Architecture for Planning in Large Language Models ###### Abstract Large language models (LLMs) demonstrate impressive performance on a wide variety of tasks, but they often struggle with tasks that require multi-step reasoning or goal-directed planning. To address this, we take inspiration from the human brain, in which planning is accomplished via the recurrent interaction of specialized modules in the prefrontal cortex (PFC). These modules perform functions such as conflict monitoring, state prediction, state evaluation, task decomposition, and task coordination. We find that LLMs are sometimes capable of carrying out these functions in isolation, but struggle to autonomously coordinate them in the service of a goal. Therefore, we propose a black box architecture with multiple LLM-based (GPT-4) modules. The architecture improves planning through the interaction of specialized PFC-inspired modules that break down a larger problem into multiple brief automated calls to the LLM. We evaluate the combined architecture on two challenging planning tasks - graph traversal and Tower of Hanoi - finding that it yields significant improvements over standard LLM methods (e.g., zero-shot prompting or in-context learning). These results demonstrate the benefit of utilizing knowledge from cognitive neuroscience to improve planning in LLMs. ## 1 Introduction Large Language Models (LLMs) (Devlin et al., 2009; Brown et al., 2020) have recently emerged as highly capable generalist systems with a surprising range of emergent capacities (Srivastava et al., 2022; Wei et al., 2022; Webb et al., 2023). They have also sparked broad controversy, with some suggesting that they are approaching general intelligence (Bubeck et al., 2023), and others noting a number of significant deficiencies (Mahowald et al., 2023). A particularly notable shortcoming is their poor ability to plan or perform faithful multi-step reasoning (Valmeekam et al., 2023; Dziri et al., 2023). Recent work (Momennejad et al., 2023) has evaluated the extent to which LLMs might possess an emergent capacity for planning and exploiting _cognitive maps_, the relational structures that humans and other animals utilize to perform planning (Tolman, 1948; Tavares et al., 2015; Behrens et al., 2018). This work found that a variety of LLMs, ranging from small, open-source models (e.g., LLaMA-13B and Alpaca-7B) to large, state-of-the-art models (e.g., GPT-4), displayed systematic shortcomings in planning tasks that suggested an inability to reason about cognitive maps. Common failure modes included a tendency to 'hallucinate' (e.g., to imagine non-existent paths), and to fall into loops. This work raises the question of how LLMs might be improved so as to enable a capacity for planning. In the present work, we take a step toward improving planning in LLMs, by taking inspiration from the planning mechanisms employed by the human brain. Planning is generally thought to depend on the prefrontal cortex (PFC) (Owen, 1997; Russin et al., 2020; Brunec and Momennejad, 2022; Momennejad et al., 2018; Momennejad, 2020; Mattar and Lengyel, 2022), a region in the frontal lobe that is broadly involved in executive function, decision-making, and reasoning (Miller and Cohen, 2001). Research in cognitive neuroscience has revealed the presence of several subregions or modules within the PFC that appear to be specialized to perform certain functions. 
These include functions such as conflict monitoring (Botvinick et al., 1999); state prediction and state evaluation (Wallis, 2007; Schuck et al., 2016); and task decomposition and task coordination (Rannani and Owen, 2004; Momennejad and Haynes, 2012, 2013). Human planning then emerges through the coordinated and recurrent interactions among these specialized PFC modules, rather than through the activity of a single, monolithic system. An interesting observation is that LLMs often seem to display some of these capacities when probed in isolation, even though they are unable to reliably integrate and deploy these capacities in the service of a goal. For instance, Momennejad et al. (2023) noted that LLMs often attempt to traverse invalid or hallucinated paths in planning problems (e.g., to move between rooms that are not connected), even though they can correctly identify these paths as invalid when probed separately. This suggests the possibility of a PFC-inspired approach, in which planning is carried out through the coordinated activity of multiple LLM modules, each of which is specialized to perform a distinct process. Figure 1: **LLM-PFC architecture.** The agent receives states from the environment and high-level goals. These are processed by a set of specialized LLM modules. The TaskDecomposer receives high-level goals and generates a series of subgoals. The Actor generates proposed actions given a state and a subgoal. The Monitor gates these proposed actions based on whether they violate certain constraints (e.g., task rules) and provides feedback to the Actor. The Predictor predicts the next state given the current state and a proposed action. The Evaluator is used to estimate the value of a predicted state. The Predictor and Evaluator are used together to perform tree search. The TaskCoordinator determines when each subgoal has been achieved, and when the final goal has been achieved, at which point the plan is emitted to the environment as a series of actions. These modules are inspired by the specific PFC subregions depicted on the left. With this goal in mind, we propose LLM-PFC (Figure 1), an architecture composed of modules that are specialized to perform specific PFC-inspired functions. Each module consists of an LLM instance (GPT-4), constructed through a combination of prompting and few-shot in-context learning. We specifically propose modules that perform error monitoring, action proposal, state prediction, state evaluation, task decomposition, and task coordination. It is suggested that the coordinated activity of multiple PFC subregions performs tree search during planning (Owen, 1997; Daw et al., 2005; Wunderlich et al., 2012; Doll et al., 2015). Thus, our approach combines action proposal, state prediction, and state evaluation to perform tree search. We evaluate LLM-PFC on two challenging planning tasks. First, we performed controlled experiments on a set of graph traversal tasks using the CogEval protocol (Momennejad et al., 2023). These tasks require navigation in novel environments based on natural language descriptions, and have been shown to be extremely challenging for LLMs, including GPT-4. Second, we investigate Tower of Hanoi (ToH), a classic problem solving task that requires multi-step planning (Simon, 1975), and for which performance is known to be heavily dependent on PFC function (Goel and Grafman, 1995; Fincham et al., 2002). 
We find that our approach significantly improves LLM performance on these planning tasks, yielding nearly perfect performance on the graph traversal tasks, and a nearly seven-fold improvement over zero-shot performance on Tower of Hanoi (74% vs. 11% accuracy). Ablation experiments further indicate that each of the individual modules plays an important role in the overall architecture's performance. Taken together, these results indicate the potential of a PFC-inspired approach to improve the reasoning and planning capabilities of LLMs. ## 2 Approach The LLM-PFC architecture is constructed from a set of specialized LLM modules, each of which performs a specific PFC-inspired function. In the following sections, we first describe the functions performed by each module, and then describe how they interact to generate a plan. ### Modules LLM-PFC contains the following specialized modules, each constructed from a separate LLM instance through a combination of prompting and few-shot (\(\leq 3\) examples) in-context learning (described in greater detail in section A.2): \(\bullet\)**TaskDecomposer**. The \(\mathrm{TaskDecomposer}\) receives the current state \(x\) and a goal \(y\) and generates a set of subgoals \(Z\) that will allow the agent to gradually work toward its final goal. This module is inspired by the anterior PFC (aPFC), which is known to play a key role in task decomposition through the generation and maintenance of subgoals (Ramnani and Owen, 2004). In the present work, the \(\mathrm{TaskDecomposer}\) is only utilized to generate a single intermediate goal, though in future work we envision that it will be useful to generate a series of multiple subgoals. \(\bullet\)**Actor**. The \(\mathrm{Actor}\) receives the current state \(x\) and a subgoal \(z\) and proposes \(B\) potential actions \(A=a_{b=1}\ldots a_{b=B}\). The \(\mathrm{Actor}\) can also receive feedback \(\epsilon\) from the \(\mathrm{Monitor}\) about its proposed actions. This module can be viewed as being analogous to the dorsolateral PFC (dlPFC) which plays a role in decision making through top-down control and guidance of lower-order premotor and motor regions (Miller and Cohen, 2001). \(\bullet\)**Monitor**. The \(\mathrm{Monitor}\) assesses the actions proposed by the \(\mathrm{Actor}\) to determine whether they are valid (e.g., whether they violate the rules of a task). It emits an assessment of validity \(\sigma\), and also feedback \(\epsilon\) in the event the action is deemed invalid. This module is inspired by the Anterior Cingulate Cortex (ACC), which is known to play a role in conflict monitoring (Botvinick et al., 1999), i.e., detecting errors or instances of ambiguity. \(\bullet\)**Predictor**. The \(\mathrm{Predictor}\) receives the current state \(x\) and a proposed action \(a\) and predicts the resulting next state \(\tilde{x}\). The \(\mathrm{Predictor}\) is inspired by the Orbitofrontal cortex (OFC), which plays a role in estimating and predicting task states. In particular, it has been proposed that the OFC plays a key role in encoding cognitive maps: representations of task-relevant states and their relationships to one another (Schuck et al., 2016). \(\bullet\)**Evaluator**. The \(\mathrm{Evaluator}\) receives a next-state prediction \(\tilde{x}\) and produces an estimate of its value \(v\) in the context of goal \(y\). 
This is accomplished by prompting the \(\mathrm{Evaluator}\) (and demonstrating via a few in-context examples) to estimate the minimum number of steps required to reach the goal (or subgoal) from the current state. The \(\mathrm{Evaluator}\) is also inspired by the OFC which, in addition to predicting task states, plays a key role in estimating the motivational value of those states (Wallis, 2007). \(\bullet\)**TaskCoordinator**. The \(\mathrm{TaskCoordinator}\) receives the current state \(x\) and a subgoal \(z\) and emits an assessment \(\Omega\) of whether the subgoal has been achieved. When the \(\mathrm{TaskCoordinator}\) determines that all subgoals (including the final goal) have been achieved, the plan is emitted to the environment as a series of actions. This module is also inspired by the aPFC, which is thought to both identify subgoals and coordinate their sequential execution (Ramnani & Owen, 2004). ### Action proposal loop The \(\mathrm{Actor}\) and \(\mathrm{Monitor}\) interact via the \(\mathrm{ProposeAction}\) function (Algorithm 1). The \(\mathrm{Actor}\) proposes actions which are then gated by the \(\mathrm{Monitor}\). If the \(\mathrm{Monitor}\) determines that the actions are invalid (e.g., they violate the rules of a task), feedback is provided to the \(\mathrm{Actor}\), which then proposes an alternative action. In the brain, a similar process is carried out by interactions between the ACC and dorsolateral PFC (dlPFC). The ACC is thought to recruit the dlPFC under conditions of conflict (e.g., errors or ambiguity), which then acts to resolve the conflict through top-down projections to lower-order control structures (e.g., premotor and motor cortices) (Miller & Cohen, 2001; Shenhav et al., 2013). ``` Function\(\mathrm{ProposeAction}\left(x,y,B\right)\): \(\sigma\leftarrow\) false // Initialize validity \(E\leftarrow\left\{\right\}\) while\(\sigma\) is false do \(A\leftarrow\mathrm{Actor}(x,y,E,B)\) // Sample B actions \(\sigma,\epsilon\leftarrow\mathrm{Monitor}(x,A)\) // Determine validity and provide feedback \(E\gets E\cup\left\{\epsilon\right\}\) // Accumulate feedback end while return\(A\) ``` **Algorithm 1**Action proposal loop. \(\mathrm{ProposeAction}\) takes a state \(x\) and a goal \(y\) and generates \(B\) potential actions \(A=a_{b=1}\dots a_{b=B}\). This is implemented via a loop, in which the \(\mathrm{Actor}\) first proposes potential actions, and the \(\mathrm{Monitor}\) then assesses those actions according to certain constraints (e.g., task rules), providing feedback if any of the actions are deemed to be invalid. This continues until the proposed actions are considered valid. See Sections A.2.2 and A.2.3 for more details. ### Search loop \(\mathrm{ProposeAction}\) is further embedded in a Search loop (Algorithm 2). The actions emitted by \(\mathrm{ProposeAction}\) are passed to the \(\mathrm{Predictor}\), which predicts the states that will result from these actions. A limited tree search is then performed, starting from the current state, and then exploring \(B\) branches recursively to a depth of \(L\) layers. Values are assigned to the terminal states of this search by the \(\mathrm{Evaluator}\), and the action leading to the most valuable predicted state is selected. This approach mirrors that of the human brain, in which search is thought to be carried out through the coordinated activity of multiple regions within the PFC, including dlPFC, ACC, and OFC (Owen, 1997; Mattar & Lengyel, 2022). 
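Before turning to plan generation, the following is a minimal sketch (ours, not the released implementation) of how the six modules above and the action-proposal loop of Algorithm 1 could be wired together; the generic `call_llm` function, the prompt wording, and the retry cap are illustrative assumptions rather than the prompts described in Section A.2.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

LLM = Callable[[str], str]  # one prompted GPT-4 instance per module in the paper

@dataclass
class Modules:
    """The six PFC-inspired modules as thin wrappers around a text-in/text-out LLM call."""
    call_llm: LLM

    def task_decomposer(self, state: str, goal: str) -> List[str]:
        # aPFC-like: propose an intermediate subgoal on the way to the final goal.
        return [self.call_llm(f"State: {state}\nGoal: {goal}\nGive one useful subgoal.")]

    def actor(self, state: str, goal: str, feedback: List[str], b: int) -> List[str]:
        # dlPFC-like: propose up to b candidate actions, conditioned on Monitor feedback.
        out = self.call_llm(f"State: {state}\nGoal: {goal}\nFeedback: {feedback}\n"
                            f"Propose {b} candidate actions, one per line.")
        return [line.strip() for line in out.splitlines() if line.strip()][:b]

    def monitor(self, state: str, actions: List[str]) -> Tuple[bool, str]:
        # ACC-like: flag rule violations; return (valid?, feedback message).
        out = self.call_llm(f"State: {state}\nActions: {actions}\n"
                            "Reply VALID, or INVALID plus the rule that is broken.")
        return out.strip().upper().startswith("VALID"), out

    def predictor(self, state: str, action: str) -> str:
        # OFC-like (cognitive map): describe the state the action would produce.
        return self.call_llm(f"State: {state}\nAction: {action}\nDescribe the next state.")

    def evaluator(self, state: str, goal: str) -> int:
        # OFC-like (value): estimate remaining steps to the goal; fewer is better.
        out = self.call_llm(f"State: {state}\nGoal: {goal}\nHow many steps remain? Integer only.")
        digits = "".join(c for c in out if c.isdigit())
        return int(digits) if digits else 10**6

    def task_coordinator(self, state: str, goal: str) -> bool:
        # aPFC-like: decide whether the current (sub)goal has been achieved.
        out = self.call_llm(f"State: {state}\nGoal: {goal}\nAchieved? YES or NO.")
        return out.strip().upper().startswith("YES")

def propose_action(m: Modules, state: str, goal: str, b: int, max_rounds: int = 5) -> List[str]:
    """Algorithm 1 sketch: the Actor proposes, the Monitor gates, feedback accumulates.
    (The retry cap is our addition so this illustrative loop always terminates.)"""
    feedback: List[str] = []
    actions: List[str] = []
    for _ in range(max_rounds):
        actions = m.actor(state, goal, feedback, b)
        valid, message = m.monitor(state, actions)
        if valid:
            break
        feedback.append(message)  # hand the Monitor's objection back to the Actor and retry
    return actions
```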
### Plan generation Algorithm 3 describes the complete LLM-PFC algorithm. To generate a plan, a set of subgoals is first generated by the \(\mathrm{TaskDecomposer}\) based on the final goal and current state. These subgoals are then pursued one at a time, utilizing the \(\mathrm{Search}\) loop to generate actions until the \(\mathrm{TaskCoordinator}\) determines that the subgoal has been achieved. The actions are accumulated in a plan buffer \(P\) until either the \(\mathrm{TaskCoordinator}\) determines that the final goal has been reached, or the maximum allowable number of actions \(T\) are accumulated. This approach is inspired by the role that aPFC plays in task decomposition. This involves the decomposition of tasks into smaller, more manageable tasks, and the coordinated sequential execution of these component tasks (Ramnani & Owen, 2004). ``` FunctionSearch\((l,L,B,x,y)\); \(V_{l}\leftarrow\{\}\)// Initialize value record \(\tilde{X}_{l}\leftarrow\{\}\)// Initialize next-state record \(A_{l}\leftarrow\text{ProposeAction}(x,y,B)\)// Propose B actions for\(b\) in \(1\ldots B\)do \(\tilde{x}_{lb}\leftarrow\text{Predictor}(x,A_{lb})\)// Predict next state \(\tilde{X}_{l}\leftarrow\tilde{X}_{l}\cup\{\tilde{x}_{lb}\}\)// Update next-state record \(\Omega\leftarrow\text{TaskCoordinator}(\tilde{x}_{lb},y)\)// Terminates search if goal achieved if\(l<L\) and \(\Omega\) is false then \(a_{l+1},\tilde{x}_{l+1},v_{l+1}\leftarrow\text{Search}(l+1,L,B,\tilde{x}_{lb},y)\)// Advance search depth \(V_{l}\gets V_{l}\cup\{v_{l+1}\}\)// Update value record else \(v_{lb}\leftarrow\text{Evaluator}(\tilde{x}_{lb},y)\)// Evaluate predicted state \(V_{l}\gets V_{l}\cup\{v_{lb}\}\)// Update value record end if end for \(v_{l}\leftarrow\max(V_{l})\)// Maximum value (randomly sample if equal value) \(a_{l}\gets A_{\text{argmax}(V_{l})}\)// Select action \(\tilde{x}_{l}\leftarrow\tilde{X}_{\text{argmax}(V_{l})}\)// Predicted next-state return\(a_{l},\tilde{x}_{l},v_{l}\) ``` **Algorithm 2**Search loop. Tree search with a depth of \(L\) layers, with \(B\) branches at each layer \(l\). For each branch, a proposed action is sampled, and the \(\text{Predictor}\) predicts the next state \(\tilde{x}\). This process continues recursively until the terminal layer \(L\), at which point the value \(v_{l=L}\) of the terminal states is estimated by the \(\text{Evaluator}\). The values are backpropagated to their parent states in the first layer, and the action that leads to the most valuable state is selected. In our implementation, we accelerate this process by caching the actions and predicted states from deeper search layers and then reusing them in subsequent searches. We also employ the \(\text{TaskCoordinator}\) to prematurely terminate search if the goal state is achieved. 
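Continuing the same illustrative sketch (again ours, not the authors' code), the Search loop of Algorithm 2 and the top-level plan-generation procedure of Algorithm 3 (shown next) could look as follows; since the assumed `evaluator` returns an estimated number of remaining steps, a lower estimate here plays the role of a higher value, and no caching is attempted.

```python
def search(m: Modules, depth: int, max_depth: int, b: int, state: str, goal: str):
    """Algorithm 2 sketch: limited tree search over Actor proposals.
    Returns (action, predicted next state, estimated remaining steps).
    Assumes the Actor returns at least one candidate action."""
    candidates = []
    for action in propose_action(m, state, goal, b):
        nxt = m.predictor(state, action)                     # predict the resulting state
        if depth < max_depth and not m.task_coordinator(nxt, goal):
            _, _, est = search(m, depth + 1, max_depth, b, nxt, goal)  # recurse one layer deeper
        else:
            est = m.evaluator(nxt, goal)                     # value terminal (or goal) states
        candidates.append((est, action, nxt))
    est, action, nxt = min(candidates, key=lambda c: c[0])   # fewest estimated steps = most valuable
    return action, nxt, est

def llm_pfc(m: Modules, state: str, goal: str, max_len: int, max_depth: int = 2, b: int = 2):
    """Algorithm 3 sketch: decompose the goal, then search for one action at a time."""
    plan: list = []
    for target in m.task_decomposer(state, goal) + [goal]:   # subgoals first, then the final goal
        while not m.task_coordinator(state, target) and len(plan) < max_len:
            action, state, _ = search(m, 1, max_depth, b, state, target)
            plan.append(action)
    return plan
```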
``` Function\(\text{LLM-PFC}\)\((x,y,T,L,B)\); \(P\leftarrow\{\}\)// Initialize plan \(Z\leftarrow\text{TaskDecomposer}(x,y)\)// Generate subgoals for\(g\) in \(1\ldots length(Z)+1\)do if\(g\leq\text{length}(Z)\)then \(z\gets Z_{g}\)// Update current subgoal else \(z\gets y\)// Final goal end if \(\Omega\leftarrow\text{TaskCoordinator}(x,z)\)// Initialize subgoal assessment while\(\Omega\) is false and \(\text{length}(P)<T\)do \(a,x,v\leftarrow\text{Search}(l=1,L,B,x,z)\)// Perform search \(P\leftarrow[P,a]\)// Update plan \(\Omega\leftarrow\text{TaskCoordinator}(x,z)\)// Determine if subgoal is achieved end while end for return\(P\) ``` **Algorithm 3**LLM-PFC. \(\text{LLM-PFC}\) takes a state \(x\) and a goal \(y\) and generates a plan \(P\), a series of actions with a maximum length of \(T\). The \(\text{TaskDecomposer}\) first generates a set of subgoals \(Z\). The agent then pursues each individual subgoal \(z\) in sequence, followed by the final goal \(y\). At each time step, Search is called to generate an action and a predicted next-state. Actions are added to the plan until the \(\text{TaskCoordinator}\) determines that the goal has been achieved, or the plan reaches the maximum length \(T\). ## 3 Experiments ### Tasks **Graph Traversal.** We performed controlled experiments on two multi-step planning tasks based on graph traversal using the CogEval protocol (Momennejad et al., 2023). Natural language descriptions of a graph are provided with each node assigned to a room (e.g., 'room 4 is connected to room 7'). We focused on a particular type of graph (Figure 2) with community structure (Schapiro et al., 2013) previously found to be challenging for a wide variety of LLMs. 
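As an illustration of this input format (the function name and the exact wording below are our own assumptions, not the paper's prompts), such a room graph can be serialised into a natural-language description as follows.

```python
from typing import Dict, Iterable, Optional, Tuple

def describe_graph(edges: Iterable[Tuple[int, int]],
                   rewards: Optional[Dict[int, int]] = None) -> str:
    """Render an undirected room graph (and optional rewards) as plain-language
    sentences of the kind used in CogEval-style graph traversal prompts."""
    sentences = [f"Room {a} is connected to room {b}." for a, b in edges]
    for room, value in (rewards or {}).items():
        sentences.append(f"Room {room} contains a reward of {value}.")
    return " ".join(sentences)

# A toy graph with two communities joined by a bridge, plus a smaller and a larger reward.
edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 6), (6, 4)]
print(describe_graph(edges, rewards={2: 10, 6: 50}))
```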
The first task, Valuepath, involves finding the shortest path from a given room that results in the largest reward possible. A smaller reward and a larger reward are located at two different positions in the graph. We fixed the two reward locations, and created 13 problems based on different starting locations. The second task, Steppath, involves finding the shortest path between a pair of nodes. We evaluated problems with an optimal shortest path of 2, 3, or 4 steps. We generated 20 problems for each of these conditions by sampling different starting and target locations. Figure 2: **Graph Traversal.** We investigated two graph traversal tasks utilizing a challenging graph with community structure. **Steppath:** Find shortest path between two nodes, e.g. node 3 and node 7. **Valuepath:** Find shortest path from starting location (e.g., node 10) to location with maximum reward (node 8 in depicted example). **Tower of Hanoi.** We also investigated a classic multi-step planning task called the Tower of Hanoi (ToH) (Figure 3). In the original formulation, there are three pegs and a set of disks of different sizes. The disks are stacked in order of decreasing size on the leftmost peg. The goal is to move all disks to the rightmost peg, such that the disks are stacked in order of decreasing size. There are a couple of rules that determine which moves are considered valid. First, a disk can only be moved if it is at the top of its stack. Second, a disk can only be moved to the top of another stack if it is smaller than the disks in that stack (or if the peg is empty). More complex versions of the task can be created by using a larger number of disks. We designed an alternative formulation of this task in which the inputs are text-based rather than visual. In this alternative formulation, three lists (A, B, and C) are used instead of the three pegs, and a set of numbers (0, 1, 2, and so on) is used instead of disks of different sizes. The goal is to move all numbers so that they are arranged in ascending order in list C. The rules are isomorphic to ToH. First, a number can only be moved if it is at the end of a list. Second, a number can only be moved to the end of a new list if it is larger than all the numbers in that list. Note that although this novel formulation is isomorphic to ToH (and equally complex), it does not share any surface features with the original ToH puzzle (disks, pegs, etc.), and thus GPT-4 cannot rely on exposure to descriptions of ToH in its training data to solve the problem. Figure 3: **Tower of Hanoi. Top:** Depiction of the Tower of Hanoi (ToH) puzzle. Disks are stacked in order of decreasing size on the leftmost peg. The goal is to move these disks so that they are stacked in order of decreasing size on the rightmost peg. Only the disk on the top of the stack may be moved, and a disk can only be placed on top of larger disks (or on an empty peg). The version shown involves three disks, but more disks can be used (making the task significantly more difficult). **Bottom:** Modified text-based version of ToH. Three lists are presented, labelled A, B and C. A set of integers is distributed amongst these lists. The goal is to move the numbers so that they are arranged in ascending order in list C. Only the number at the end of the list may be moved, and a number can only be placed in front of a smaller number. Multiple problem instances were created by varying the initial state. 
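To make the rules of this text-based reformulation concrete, here is a minimal sketch (ours; the example instance and names are illustrative) of the list-based state and move validity.

```python
from typing import Dict, List, Tuple

State = Dict[str, List[int]]   # e.g. {"A": [2, 1, 0], "B": [], "C": []}
Move = Tuple[str, str]         # (source list, target list)

def is_valid_move(state: State, move: Move) -> bool:
    src, dst = move
    if src == dst or not state[src]:
        return False                             # no-op, or nothing to move
    num = state[src][-1]                         # only the number at the end of a list may move
    return all(num > x for x in state[dst])      # it must be larger than every number in the target

def apply_move(state: State, move: Move) -> State:
    src, dst = move
    nxt = {name: list(nums) for name, nums in state.items()}
    nxt[dst].append(nxt[src].pop())
    return nxt

def is_goal(state: State, n: int) -> bool:
    return state["C"] == list(range(n))          # all numbers in ascending order in list C

start: State = {"A": [1, 0], "B": [2], "C": []}  # one illustrative three-number instance
print(is_valid_move(start, ("A", "C")))          # True: list C is empty
print(is_valid_move(start, ("A", "B")))          # False: 0 is not larger than 2
```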
We created multiple problem instances by varying the initial state (the initial positions of the numbers). This resulted in 26 three-disk problems and 80 four-disk problems. ### Baselines We compared our model to two baseline methods. The first method involved asking GPT-4 (zero-shot) to provide the solution step by step. For the second method, in-context learning (ICL), we provided GPT-4 with a few in-context examples of a complete solution. We provided two examples for ToH and Valuepath, and 3 examples (one each for 2, 3, and 4 steps) for Steppath. ### Experiment Details We implemented each of the modules using a separate GPT-4 (32K context, '2023-03-15-preview' model index, Microsoft Azure openAI service) instance through a combination of prompting and few-shot in-context examples. We set Top-p to 0 and temperature to 0, except for the Actor (as detailed in section A.2.2). The Search loop explored \(B=2\) branches recursively for a depth \(L=2\). For ToH, we used two randomly selected in-context examples of three-disk problems, and a description of the problem in the prompts for all the modules. For the graph traversal tasks, we used two in-context examples for all modules, except for the Actor and Evaluator in the Steppath task, where we used three in-context examples, one each for 2-, 3-, and 4-step paths. The prompt also described the specific task that was to be performed by each module (e.g., monitoring, task decomposition). For more details about the prompts and specific procedures used for each module, see Section A.2. For three-disk problems, we allowed a maximum of \(T=10\) actions per problem, and evaluated on 24 out of 26 possible problems (leaving out the two problems that were used as in-context examples for the Actor). We also evaluated on four-disk problems, for which we allowed a maximum of \(T=20\) actions per problem. The same three-disk problems were used as in-context examples, meaning that the four-disk problems tested for out-of-distribution (OOD) generalization. For the graph traversal tasks, we allowed a maximum of \(T=6\) actions per problem. We didn't use a separate \(\mathrm{Predictor}\) for the graph traversal tasks, since the action proposed by the Actor gives the next state. We also did not include the \(\mathrm{TaskDecomposer}\) for these tasks, and did not use the Search loop for the Steppath task, as the model's performance was already at ceiling without the use of these components. ## 4 Results Figure 4 shows the results on the Valuepath and Steppath graph traversal tasks (see Section A.1 for all results in Table form). On the Valuepath task, LLM-PFC solved 100% of problems and proposed no invalid actions (e.g., it did not hallucinate the presence of non-existent edges), significantly outperforming both baselines. On the Steppath task, LLM-PFC displayed perfect performance for 2-step and 3-step paths, and near-perfect performance for 4-step paths, again significantly outperforming both baselines. The model also did not propose any invalid actions on this task. Notably, LLM-PFC's proposed plans were close to the optimal number of steps for both tasks. Figure 5 shows the results on Tower of Hanoi (ToH). LLM-PFC demonstrated a significant improvement both in terms of the number of problems solved (left) and the number of invalid actions proposed (middle). On 3-disk problems, LLM-PFC yielded a nearly seven-fold improvement in the number of problems solved over zero-shot performance, and also significantly outperformed standard in-context learning (ICL). 
For the problems that LLM-PFC solved, the average plan length (5.4) was close to the optimal number of moves (4.4). The model also demonstrated some ability to generalize out-of-distribution (OOD) to more complex 4-disk problems (not observed in any context examples), whereas GPT-4 Zero-shot and GPT-4 ICL solved close to 0% of these problems. Figure 4: **Graph traversal results.** Top: Valuepath results. Bottom: Steppath results. Left: Fraction of solved problems (without proposing any invalid actions; \(\uparrow\) better). Middle: fraction of invalid action proposals (\(\downarrow\) better). Right: Plan length (\(\downarrow\) better; note that these results only reflect problems that were successfully solved, and therefore exclude many problems for the baseline models). GPT-4 Zero-shot and ICL baselines are deterministic, and therefore a single run was performed on all problems. Note that LLM-PFC did not employ tree search on the Steppath task, and did not employ task decomposition on either task, as the performance of the model was already at ceiling without these components. Without tree search, LLM-PFC’s performance is deterministic, and therefore only a single run was performed on the Steppath task. Gray error bars reflect 95% binomial confidence intervals (for models evaluated on a single run). For Valuepath, we performed 5 runs with LLM-PFC, and present average performance \(\pm\) the standard error of the mean (black error bars). Figure 5: **Tower of Hanoi (ToH) results.** Left: Fraction of solved problems (without proposing any invalid actions; \(\uparrow\) better). Middle: Fraction of invalid action proposals (\(\downarrow\) better). Right: Ablation results for 3-disk problems (\(\uparrow\) better). Note that 4-disk problems are out-of-distribution (OOD). GPT-4 Zero-shot and ICL baselines are deterministic and reflect a single run. Gray error bars reflect 95% binomial confidence intervals. Dots reflect values of 0%. LLM-PFC results for 3-disk problems reflect the average over 5 runs \(\pm\) the standard error of the mean (black error bars). LLM-PFC results for 4-disk problems reflect a single run, due to the high computational cost of multiple runs. Notably, LLM-PFC did not propose any invalid actions, even on OOD 4-disk problems, whereas GPT-4 Zero-shot and ICL baselines both proposed a significant number of invalid actions. ### Ablation Study We also carried out an ablation study to determine the relative importance of each of LLM-PFC's major components, focusing on the 3-disk ToH problems. Figure 5 (right) shows the results. We found that the Monitor was the most important component, as ablating this module resulted in significantly fewer solved problems, due primarily to an increased tendency to propose invalid moves (31% invalid moves vs. 0% for other ablation models). Ablating the tree search and TaskDecomposer module also resulted in significantly fewer solved problems. Overall, these results suggest that all major components played an important role in the model's performance. ## 5 Related Work Early work in AI formalized planning as a problem of search through a combinatorial state space, typically utilizing various heuristic methods to make this search tractable (Newell and Simon, 1956; Newell et al., 1959). Problems such as ToH figured prominently in this early research (Simon, 1975), as it affords the opportunity to explore ideas based on hierarchical or recursive planning (in which a larger problem is decomposed into a set of smaller problems). 
Our proposed architecture adopts some of the key ideas from this early work, including tree search and hierarchical planning. A few recent studies have investigated planning in LLMs. These studies suggest that, although LLMs can perform relatively simple planning tasks (Huang et al., 2022), and can learn to make more complex plans given extensive domain-specific fine-tuning (Pallagani et al., 2022; Wu et al., 2023), they struggle on tasks that require zero-shot or few-shot generation of complex multi-step plans (Valmeekam et al., 2023; Momennejad et al., 2023). These results also align with studies that have found poor performance in tasks that involve other forms of extended multi-step reasoning, such as arithmetic (Dziri et al., 2023). Our approach is in large part motivated by the poor planning and reasoning performance exhibited by LLMs in these settings. Some recent approaches have employed various forms of heuristic search to improve performance in LLMs (Lu et al., 2021; Zhang et al., 2023), but these approaches have generally involved search at the level of individual tokens. This is in contrast to our approach, in which search is performed at the more abstract level of task states (described in natural language). This is similar to other recently proposed black-box approaches in which 'thoughts' - meaningful chunks of natural language - are utilized as intermediate computations to solve more complex problems. These approaches include scratchpads (Nye et al., 2021), chain-of-thought (Wei et al., 2022), tree-of-thoughts (Yao et al., 2023), reflexion (Shinn et al., 2023), Society of Mind (Du et al., 2023), and Describe-Explain-Plan-Select (Wang et al., 2023). All of these approaches can be viewed as implementing a form of controlled, or'system 2', processing (as contrasted with automatic, or'system 1', processing) (Schneider and Shiffrin, 1977; Sloman, 1996; Kahneman, 2011). In the brain, these controlled processes are strongly associated with the prefrontal cortex (Miller and Cohen, 2001). Therefore, in the present work, we leveraged knowledge from cognitive neuroscience about the modular properties of the PFC. The resulting architecture shares some components with other black box approaches (e.g., tree search (Yao et al., 2023)), but also introduces a number of new components (error monitoring, task decomposition, task coordination, state/action distinction), and combines these components in a novel manner inspired by the functional organization of the human brain. There have also been a number of proposals for incorporating modularity into deep learning systems, including neural module networks (Andreas et al., 2016), and recurrent independent mechanisms (Goyal et al., 2019). Our approach is distinguished from these approaches by the proposal of modules that perform specific high-level component processes, based on knowledge of specific subregions within the PFC. Finally, our approach is closely related to a recent proposal to augment deep learning systems with PFC-inspired mechanisms (Russin et al., 2020). LLM-PFC can be viewed as a concrete framework for accomplishing this goal. ## 6 Conclusion and Future Directions In this work, we have proposed the LLM-PFC architecture, an approach aimed at improving the planning ability of LLMs by taking inspiration from the modular architecture of the human PFC. In experiments on two challenging planning domains, graph traversal and Tower of Hanoi, we found that LLM-PFC significantly improved planning performance over standard LLM methods. 
While these results represent a significant step forward, there is still room for improvement: first, there are more challenging planning tasks (including shortcut and detour tasks) in Momennejad et al. (2023), which remain topics for future applications of LLM-PFC; and second, the model's performance on Tower of Hanoi remains less than optimal. This may be due in part to the inherent limitations of prompting and in-context learning as methods for the specialization of LLM-PFC's modules. A promising avenue for further improvement may be to jointly fine-tune the modules across a range of diverse tasks (which requires open-source models), rather than relying only on black-box methods (our only option with GPT-4). A white-box approach would also eliminate the need for task-specific prompts, and potentially enable zero-shot planning on novel tasks. We look forward to investigating these possibilities in future work.
2309.04291
Star Colouring of Bounded Degree Graphs and Regular Graphs
A $k$-star colouring of a graph $G$ is a function $f:V(G)\to\{0,1,\dots,k-1\}$ such that $f(u)\neq f(v)$ for every edge $uv$ of $G$, and every bicoloured connected subgraph of $G$ is a star. The star chromatic number of $G$, $\chi_s(G)$, is the least integer $k$ such that $G$ is $k$-star colourable. We prove that $\chi_s(G)\geq \lceil (d+4)/2\rceil$ for every $d$-regular graph $G$ with $d\geq 3$. We reveal the structure and properties of even-degree regular graphs $G$ that attain this lower bound. The structure of such graphs $G$ is linked with a certain type of Eulerian orientations of $G$. Moreover, this structure can be expressed in the LC-VSP framework of Telle and Proskurowski (SIDMA, 1997), and hence can be tested by an FPT algorithm with the parameter either treewidth, cliquewidth, or rankwidth. We prove that for $p\geq 2$, a $2p$-regular graph $G$ is $(p+2)$-star colourable only if $n:=|V(G)|$ is divisible by $(p+1)(p+2)$. For each $p\geq 2$ and $n$ divisible by $(p+1)(p+2)$, we construct a $2p$-regular Hamiltonian graph on $n$ vertices which is $(p+2)$-star colourable. The problem $k$-STAR COLOURABILITY takes a graph $G$ as input and asks whether $G$ is $k$-star colourable. We prove that 3-STAR COLOURABILITY is NP-complete for planar bipartite graphs of maximum degree three and arbitrarily large girth. Besides, it is coNP-hard to test whether a bipartite graph of maximum degree eight has a unique 3-star colouring up to colour swaps. For $k\geq 3$, $k$-STAR COLOURABILITY of bipartite graphs of maximum degree $k$ is NP-complete, and does not even admit a $2^{o(n)}$-time algorithm unless ETH fails.
Shalu M. A., Cyriac Antony
2023-09-08T12:25:12Z
http://arxiv.org/abs/2309.04291v1
# Star Colouring of Bounded Degree Graphs ###### Abstract A \(k\)-star colouring of a graph \(G\) is a function \(f:V(G)\to\{0,1,\ldots,k-1\}\) such that \(f(u)\neq f(v)\) for every edge \(uv\) of \(G\), and every bicoloured connected subgraph of \(G\) is a star. The star chromatic number of \(G\), \(\chi_{s}(G)\), is the least integer \(k\) such that \(G\) is \(k\)-star colourable. We prove that \(\chi_{s}(G)\geq\lceil(d+4)/2\rceil\) for every \(d\)-regular graph \(G\) with \(d\geq 3\). We reveal the structure and properties of even-degree regular graphs \(G\) that attain this lower bound. The structure of such graphs \(G\) is linked with a certain type of Eulerian orientations of \(G\). Moreover, this structure can be expressed in the LC-VSP framework of Telle and Proskurowski (SIDMA, 1997), and hence can be tested by an FPT algorithm with the parameter either treewidth, cliquewidth, or rankwidth. We prove that for \(p\geq 2\), a \(2p\)-regular graph \(G\) is \((p+2)\)-star colourable only if \(n\coloneqq|V(G)|\) is divisible by \((p+1)(p+2)\). For each \(p\geq 2\) and \(n\) divisible by \((p+1)(p+2)\), we construct a \(2p\)-regular Hamiltonian graph on \(n\) vertices which is \((p+2)\)-star colourable. The problem \(k\)-Star Colourability takes a graph \(G\) as input and asks whether \(G\) is \(k\)-star colourable. We prove that 3-Star Colourability is NP-complete for planar bipartite graphs of maximum degree three and arbitrarily large girth. Besides, it is coNP-hard to test whether a bipartite graph of maximum degree eight has a unique 3-star colouring up to colour swaps. For \(k\geq 3\), \(k\)-Star Colourability of bipartite graphs of maximum degree \(k\) is NP-complete, and does not even admit a \(2^{o(n)}\)-time algorithm unless ETH fails. ## 1 Introduction The star colouring is a well-known variant of (vertex) colouring introduced by Grunbaum [25] in the 1970s. The scientific computing community independently discovered star colouring in the 1980s and used it for lossless compression of symmetric sparse matrices, which is in turn used in the estimation of sparse Hessian matrices (see the survey [23]). A \(k\)-colouring \(f\) of a graph \(G\) is a \(k\)_-star colouring of \(G\)_ if every bicoloured component of \(G\) under \(f\) is a star (in other words, \(G\) does not contain a bicoloured 4-vertex path as a subgraph). The star chromatic number of \(G\), \(\chi_{s}(G)\), is the least integer \(k\) such that \(G\) is \(k\)-star colourable. Star colouring of a graph \(G\) is known to be linked with orientations of \(G\). Albertson et al. [1] proved that a colouring \(f\) of \(G\) is a star colouring if and only if there exists an orientation \(\overrightarrow{G}\) of \(G\) such that edges in each bicoloured \(3\)-vertex path in \(\overrightarrow{G}\) are oriented towards the middle vertex. Nesetril and Mendez [31] characterized the star chromatic number of \(G\) in terms of orientations of \(G\). Fertin et al. [19] proved that the star chromatic number of a \(d\)-regular hypercube is at least \(\lceil(d+3)/2\rceil\). Xie et al. [39] proved that the star chromatic number of a \(3\)-regular graph is at least four. We generalize this result: the star chromatic number of a \(d\)-regular graph is at least \(\lceil(d+4)/2\rceil\), provided \(d\geq 2\). We show that this lower bound is attained for each \(d\geq 2\) and characterize even-degree regular graphs that attain the lower bound (i.e. \(2p\)-regular \((p+2)\)-star colourable graphs). 
We introduce a variant of Eulerian orientation named _\(q\)-colourful Eulerian orientation_ (see Section 4). For all \(p\geq 2\), a \(2p\)-regular graph \(G\) is \((p+2)\)-star colourable if and only if \(G\) admits a \((p+2)\)-colourful Eulerian orientation. Using this result, we show that a \(2p\)-regular \((p+2)\)-star colourable graph \(G\) does not contain diamond or \(\overline{C_{6}}\) as a subgraph, and thus the clique number of \(G\) is at most three. We also establish the following properties of \(2p\)-regular \((p+2)\)-star colourable graphs: (i) the number of vertices in \(G\) is divisible by \((p+1)(p+2)\), (ii) the independence number of \(G\) is greater than \(n/4\), and (iii) the chromatic number of \(G\) is at most \(3\log_{2}(p+2)\). For every \(p\geq 2\) and every integer \(n\) divisible by \((p+1)(p+2)\), we construct a \(2p\)-regular Hamiltonian graph on \(n\) vertices which is \((p+2)\)-star colourable. For the special case \(p=2\), the graphs constructed are also planar. If a regular graph \(G\) with degree \(d\geq 3\) is a hypercube (i.e., \(G=Q_{d}\)) or contains diamond or \(\overline{C_{6}}\) as a subgraph, then \(\chi_{s}(G)\geq\lceil(d+5)/2\rceil\); this improves on the lower bound \(\chi_{s}(Q_{d})\geq\lceil(d+3)/2\rceil\) given in [19]. For all \(p\geq 2\), we express the problem of testing whether a \(2p\)-regular graph \(G\) is \((p+2)\)-star colourable in the Locally Checkable Vertex Subset and Partitioning problems (LC-VSP) framework of Telle and Proskurowski [37]; thus, the problem admits an FPT algorithm with the parameter either treewidth, cliquewidth, rankwidth or booleanwidth (see [8, 9, 34, 36, 37]). We also define a subclass \(\mathscr{G}_{2p}\) of the class of \(2p\)-regular \((p+2)\)-star colourable graphs such that testing membership in \(\mathscr{G}_{2p}\) is NP-complete (provided \(p\geq 2\)). The decision problem \(k\)-Star Colourability takes a graph \(G\) as input and asks whether \(G\) is \(k\)-star colourable. \(3\)-Star Colourability is NP-complete for planar bipartite graphs [1], graphs of arbitrarily large girth [5], and graphs of maximum degree four (in fact, NP-complete for line graphs of subcubic graphs [27]). We prove that the problem is NP-complete for a subclass of the intersection of these three classes: \(3\)-Star Colourability is NP-complete for planar bipartite graphs of maximum degree three and arbitrarily large girth. Besides, it is coNP-hard to test whether a bipartite graph of maximum degree eight has a unique \(3\)-star colouring up to colour swaps. For all \(k\geq 3\), \(k\)-Star Colourability is NP-complete for bipartite graphs. We prove that for all \(k\geq 3\), \(k\)-Star Colourability restricted to bipartite graphs of maximum degree \(k\) is NP-complete and the problem does not even admit a \(2^{o(n)}\)-time algorithm unless the Exponential Time Hypothesis (ETH) fails. ### Fixed-Parameter Tractability For every positive integer \(k\), \(k\)-Star Colourability can be expressed in Monadic Second Order (MSO) logic [26]. In fact, \(k\)-Star Colourability can be expressed in \(\mathrm{MSO}_{1}\), i.e., \(\mathrm{MSO}\) logic without edge set quantification (see supplementary material for the formula). Therefore, for each \(k\), the problem \(k\)-Star Colourability admits FPT algorithms with the parameter either treewidth or cliquewidth by Courcelle's theorem [6, 12]. 
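For concreteness, one way such an \(\mathrm{MSO}_{1}\) formula can be written for a fixed \(k\) is sketched below; this is our own illustration, not the formula from the paper's supplementary material.

```latex
% Illustrative MSO_1 sketch for a fixed k (not the paper's supplementary formula):
% "G is k-star colourable" = there exist sets X_0,...,X_{k-1} that partition V(G),
% form a proper colouring, and leave no 4-vertex path inside the union of two sets.
\exists X_0 \cdots \exists X_{k-1}\,\Big[
  \forall v\, \bigvee_{i}\Big( v \in X_i \wedge \bigwedge_{j \neq i} v \notin X_j \Big)
  \;\wedge\; \forall u\,\forall v\,\Big( E(u,v) \rightarrow \bigwedge_{i}\neg\big( u \in X_i \wedge v \in X_i \big) \Big)
  \;\wedge\; \forall u\,\forall v\,\forall w\,\forall x\,\Big( \big( E(u,v) \wedge E(v,w) \wedge E(w,x)
      \wedge u \neq w \wedge v \neq x \wedge u \neq x \big)
      \rightarrow \neg \bigvee_{i<j} \big( (u \in X_i \vee u \in X_j) \wedge (v \in X_i \vee v \in X_j)
      \wedge (w \in X_i \vee w \in X_j) \wedge (x \in X_i \vee x \in X_j) \big) \Big) \Big]
% The disjunctions and conjunctions over i and j range over the fixed colour set
% {0,...,k-1}, so the formula has constant size for each fixed k.
```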
On the other hand, the reduction from \(k\)-Colourability to \(k\)-Star Colourability produced by Coleman and More [11] is a Polynomial Parameter Transformation (PPT) [21] when both problems are parameterized by treewidth (see Section 2 of supplementary material for details). Thus, we have the following observation since \(k\)-Colourability with parameter treewidth does not admit a polynomial kernel. **Observation 1**.: _For all \(k\geq 3\), \(k\)-Star Colourability with parameter treewidth does not admit a polynomial kernel unless NP \(\subseteq\) coNP/poly. _ The paper is organized as follows. Section 2 contains the definitions used throughout the paper. Section 3 discusses a lower bound for the star chromatic number of \(d\)-regular graphs as well as properties of graphs that attain the lower bound. Section 4 introduces colourful Eulerian orientation and discusses its relation to star colouring. All computational hardness results appear in Section 5. It is divided into two subsections: Subsection 5.1 on 3-star colouring, and Subsection 5.2 on \(k\)-star colouring with \(k\geq 4\). We conclude with Section 6 devoted to open problems and related works. ## 2 Definitions All graphs considered in this paper are finite and simple. We follow West [38] for graph theory terminology and notation. Unless otherwise specified, each graph we consider is undirected (the directed graphs that appear in this paper are orientations of undirected graphs). An _orientation_ of \(G\) is a directed graph obtained from \(G\) by assigning a direction to each edge of \(G\). When the graph is clear from the context, we denote the number of edges of the graph by \(m\) and the number of vertices by \(n\). For a subset \(S\) of the vertex set of \(G\), the _subgraph of \(G\) induced by \(S\)_ is denoted by \(G[S]\). The _girth_ of a graph with a cycle is the length of its shortest cycle. A graph \(G\) is _\(2\)-degenerate_ if there exists a left-to-right ordering of its vertices such that every vertex has at most two neighbours to its left; in other words, we can turn \(G\) into the empty graph by repeatedly removing vertices of degree at most two. A \(k\)-colouring of a graph \(G\) is a function \(f\) from the vertex set of \(G\) to a set of \(k\) colours, say \(\{0,1,\ldots,k-1\}\), such that \(f\) maps every pair of adjacent vertices to different colours. Let us denote the \(i\)th colour class \(f^{-1}(i)\) by \(V_{i}\). A _bicoloured component_ of \(G\) (under \(f\)) is a component of \(G[V_{i}\cup V_{j}]\) for some pair of colour classes \(V_{i}\) and \(V_{j}\). A \(k\)-colouring \(f\) of \(G\) is a _\(k\)-star colouring_ if every bicoloured component of \(G\) under \(f\) is a star (in other words, there is no \(4\)-vertex path in \(G\) bicoloured by \(f\)). The acyclic colouring is a generalization of star colouring. A \(k\)-colouring \(f\) of \(G\) is a _\(k\)-acyclic colouring_ if every bicoloured component of \(G\) under \(f\) is a tree. By definition, every \(k\)-star colouring is a \(k\)-acyclic colouring. The star chromatic number \(\chi_{s}(G)\) is defined analogously to the chromatic number \(\chi(G)\). That is, the star chromatic number of \(G\) is the least integer \(k\) such that \(G\) is \(k\)-star colourable. We say that two colourings \(f_{1}\) and \(f_{2}\) of \(G\) are the same _up to colour swaps_ if there exists a permutation \(\sigma\) of colours such that \(f_{2}(v)=\sigma(f_{1}(v))\) for every vertex \(v\) of \(G\). 
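To make the bicoloured-\(P_4\) characterisation concrete, the following small sketch (ours, purely illustrative) checks by brute force whether a given colouring of an adjacency-list graph is a star colouring.

```python
from typing import Dict, Hashable, List

Graph = Dict[Hashable, List[Hashable]]  # adjacency lists of a finite simple undirected graph

def is_proper(g: Graph, colour: Dict[Hashable, int]) -> bool:
    return all(colour[u] != colour[v] for u in g for v in g[u])

def has_bicoloured_p4(g: Graph, colour: Dict[Hashable, int]) -> bool:
    # A proper colouring fails to be a star colouring exactly when some path
    # u-v-w-x on four distinct vertices uses only two colours.
    for v in g:
        for w in g[v]:
            for u in g[v]:
                for x in g[w]:
                    if len({u, v, w, x}) == 4 and len({colour[u], colour[v], colour[w], colour[x]}) == 2:
                        return True
    return False

def is_star_colouring(g: Graph, colour: Dict[Hashable, int]) -> bool:
    return is_proper(g, colour) and not has_bicoloured_p4(g, colour)

# The alternating 2-colouring of the 4-vertex path is proper but bicolours the whole path,
# whereas a 3-colouring of the same path is a star colouring.
p4: Graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_star_colouring(p4, {0: 0, 1: 1, 2: 0, 3: 1}))  # False
print(is_star_colouring(p4, {0: 0, 1: 1, 2: 2, 3: 0}))  # True
```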
If \(G\) has exactly one \(k\)-(star) colouring up to colour swaps, then \(G\) is said to have a _unique_\(k\)-(star) colouring (we write the word 'unique' in italics to indicate "unique up to colour swaps"). We say that two colourings \(f_{1}\) and \(f_{2}\) of the same graph are _equivalent under colour swaps_ if they are the same up to colour swaps. For every positive integer \(k\), the decision problem \(k\)-Colourability takes a graph \(G\) as input and asks whether \(G\) is \(k\)-colourable. The problem \(k\)-Star Colourability is defined likewise. To denote the restriction of a decision problem, we write the conditions in parenthesis. For instance, \(3\)-Star Colourability(planar, bipartite, \(\Delta=3\)) denotes the problem \(3\)-Star Colourability restricted to the class of planar bipartite graphs \(G\) with the maximum degree \(\Delta(G)=3\). Let \(k\geq 3\). The decision problem Unique \(k\)-Colouring takes a graph \(G\) as input and asks whether \(G\) has a _unique_\(k\)-colouring. The problem Unique \(k\)-Star Colouring is defined similarly. The problem Another \(k\)-Colouring takes a graph \(G\) and a \(k\)-colouring \(f_{1}\) of \(G\) as input and asks whether \(G\) admits a \(k\)-colouring \(f_{2}\) of \(G\) which cannot be obtained from \(f_{1}\) by merely swapping colours. The problem Another \(k\)-Star Colouring is defined likewise. ## 3 Lower Bound for Star Chromatic Number of Regular Graphs We know that for every star colouring of \(G\), each bicoloured component is a star. By exploiting the simple structure of bicoloured components and employing elementary counting arguments, we produce a tight lower bound for the star chromatic number of regular graphs and characterize even-degree graphs that attain the lower bound. **Theorem 1**.: _Let \(G\) be a \(d\)-regular graph with \(d\geq 2\). Then, \(\chi_{s}(G)\geq\left\lceil\frac{d+4}{2}\right\rceil\)._ Proof.: We have two cases: (i) \(d\) is even, and (ii) \(d\) is odd. If \(d\) is even, say \(d=2k\), then at least \(\left\lceil(d+3)/2\right\rceil=(d+4)/2=k+2\) colours are needed to acyclic colour \(G\)[18, Proposition 1]; hence, at least \(k+2=\left\lceil(d+4)/2\right\rceil\) colours are needed to star colour \(G\). Next, we consider the case when \(d\) is odd, say \(d=2k+1\). To prove that \(\chi_{s}(G)\geq\left\lceil(d+4)/2\right\rceil=k+3\), it suffices to show that \(G\) is not \((k+2)\)-star colourable. Assume that \(G\) admits a \((k+2)\)-star colouring \(f\colon V(G)\to\{0,1,\ldots,k+1\}\). Recall that \(V_{i}=f^{-1}(i)=\{v\in V(G)\ :\ f(v)=i\}\) for every colour \(i\). **Claim 1:** For every bicoloured component \(H\) of \(G\), \(|E(H)|\leq\frac{d}{d+1}|V(H)|\), and equality holds only when \(H\) is isomorphic to \(K_{1,d}\). Since \(H\cong K_{1,q}\) where \(0\leq q\leq d\), we have \(|E(H)|/|V(H)|=q/(q+1)\leq d/(d+1)\) and equaltiy holds only when \(q=d\). This proves Claim 1. **Claim 2:**\(G\) has a bicoloured component \(H\) not isomorphic to \(K_{1,d}\). On the contrary, assume that every bicoloured component of \(G\) is isomorphic to \(K_{1,d}\). Consider an arbitrary bicoloured component \(H\) of \(G\). Let \(u\) be the centre of the star \(H\cong K_{1,d}\), and let \(v_{1},v_{2},\ldots,v_{d}\) be the remaining vertices in \(H\). Without loss of generality, assume that \(u\) is coloured \(0\), and each \(v_{i}\) is coloured \(1\) for \(1\leq i\leq d\). Let \(N_{G}(v_{1})=\{u,w_{1},w_{2},\ldots,w_{d-1}\}\). 
Clearly, \(f(w_{i})\in\{2,3,\ldots,k+1\}\) for \(1\leq i\leq d-1\) (if \(f(w_{i})=0\), then path \(v_{2},u,v_{1},w_{i}\) is a bicoloured \(P_{4}\)). Hence, at least two of vertices \(w_{1},w_{2},\ldots,w_{d-1}\) should receive the same colour, say colour \(2\). Since \(|\{w_{1},\ldots,w_{d-1}\}|=d-1\), vertex \(v_{1}\) has \(q\) neighbours coloured \(2\) where \(2\leq q\leq d-1\). Thus, the component of \(G[V_{1}\cup V_{2}]\) containing vertex \(v_{1}\) is isomorphic to \(K_{1,q}\not\cong K_{1,d}\). This contradiction proves Claim 2. For every pair of distinct colours \(i\) and \(j\), let \(\mathbb{G}_{ij}\) denote the set of components of \(G[V_{i}\cup V_{j}]\). By Claims 1 and 2, we have \[\sum_{\begin{subarray}{c}i,j\\ i\neq j\end{subarray}}\sum_{H\in\mathbb{G}_{ij}}|E(H)|<\sum_{\begin{subarray}{ c}i,j\\ i\neq j\end{subarray}}\sum_{H\in\mathbb{G}_{ij}}\frac{d}{d+1}|V(H)|=\frac{d}{d+1 }\sum_{\begin{subarray}{c}i,j\\ i\neq j\end{subarray}}\left(\,|V_{i}|+|V_{j}|\,\right).\] Since the set of bicoloured components of \(G\) forms an (edge) decomposition of \(G\), the sum on the left side is \(m\coloneqq|E(G)|\). The sum on the right side is \((k+1)n\) where \(n\coloneqq|V(G)|\) (because \(|V_{i}|\) appears exactly \((k+1)\) times in the sum for each \(i\)). Therefore, the above inequality simplifies to \(m<\frac{d}{d+1}(k+1)n\). Since \(G\) is \(d\)-regular, \(m=\frac{d}{2}n\) and thus the inequality reduces to \(\frac{d}{2}n<\frac{d}{d+1}(k+1)n\). That is, \(\frac{d+1}{2}<k+1\). This is a contradiction because \(d=2k+1\). This completes the proof when \(d=2k+1\). **Remark:** From Theorem 1, it follows that the average degree of a graph \(G\) of maximum degree \(d\) is at most \((\chi_{s}(G)-1)\frac{2d}{d+1}\), and the graph is bipartite if the average degree is equal to this bound (see Corollary 1 in supplementary material). Soon, we show that the lower bound \(\lceil(d+4)/2\rceil\) established by Theorem 1 is tight for each \(d\geq 2\). For every even-degree regular graph with degree \(d=2p\), the bound is \(\lceil(d+4)/2\rceil=p+2\). The next theorem shows various properties of \((p+2)\)-star colourings of \(2p\)-regular graphs (provided \(p\geq 2\)). **Theorem 2**.: _Let \(p\geq 2\). Suppose that a \(2p\)-regular graph \(G(V,E)\) admits a \((p+2)\)-star colouring \(f\). Then, every bicoloured component of \(G\) is isomorphic to \(K_{1,p}\) and all colour classes \(V_{i}=f^{-1}(i)\) have the same cardinality. Besides, \(f\) is not only an equitable colouring but also a fall colouring \((\)see [30] and [15] respectively for definitions\()\)._ Proof.: Note that the set \(S\) of all bicoloured components of \(G\) forms an (edge) decomposition of \(G\). We produce a partition \(\mathscr{P}\) of \(S\) such that \(\sum_{H\in P}|E(H)|\leq\frac{p}{p+1}\sum_{H\in P}|V(H)|\) for each \(P\in\mathscr{P}\) and equality holds only if all members of \(P\) are \(K_{1,p}\). Since \(f\) is a star colouring, every bicoloured component of \(G\) is a star. We first deal with bicoloured components \(H\) with the unique centre - that is, \(H\cong K_{1,\ell}\) where \(\ell\neq 1\). Let \(V^{\prime}\) denote the set of vertices \(v\) in \(G\) such that \(v\) is the centre of a bicoloured component \(H\cong K_{1,\ell}\) with \(\ell>p\). For each \(v\in V^{\prime}\), define \(C_{v}=\{H\in S:H\not\cong K_{1,1}\) and \(v\) is the centre of \(H\}\). **Claim 1:** For each \(v\in V^{\prime}\), \(\sum\limits_{H\in C_{v}}|E(H)|<\frac{p}{p+1}\sum\limits_{H\in C_{v}}|V(H)|\). Let \(v\in V^{\prime}\). 
Let \(J\) be the set of indices \(j\) such that \(v\) has exactly one neighbour in colour class \(V_{j}\). Let \(x=|J|\). By definition of \(V^{\prime}\), \(v\) has \(p+1\) or more neighbours in some colour class; therefore, \(x<p\) (since \(x+p+1\leq\deg_{G}(v)=2p\)). By definition of \(C_{v}\), for every neighbour \(w\) of \(v\) in \(\cup_{j\in J}V_{j}\), the edge \(vw\) is not counted in the sum \(\sum_{H\in C_{v}}|E(H)|\). On the other hand, the remaining \(2p-x\) edges incident on \(v\) are counted exactly once in the sum \(\sum_{H\in C_{v}}|E(H)|\). So, \(\sum_{H\in C_{v}}|E(H)|=2p-x\). No neighbour \(w\) of \(v\) in \(\cup_{j\in J}V_{j}\) is counted in the sum \(\sum_{H\in C_{v}}|V(H)|\). The remaining \(2p-x\) neighbours of \(v\) are counted exactly once in the sum \(\sum_{H\in C_{v}}|V(H)|\). Also, the vertex \(v\) is counted exactly \(p+1-x\) times in the sum \(\sum_{H\in C_{v}}|V(H)|\) (assuming \(v\in V_{i}\), \(v\) is in exactly one component of \(G[V_{i}\cup V_{k}]\) for each \(k\in\{0,1,\ldots,p+1\}\setminus(J\cup\{i\})\)). Therefore, \(\sum_{H\in C_{v}}|V(H)|=(2p-x)+(p+1-x)=3p-2x+1\). Since \(x<p\) and \(p>1\), we have \[\frac{\sum_{H\in C_{v}}|E(H)|}{\sum_{H\in C_{v}}|V(H)|}=\frac{2p-x}{3p-2x+1}< \frac{p}{p+1}\] because the inequality \((p+1)(2p-x)<p(3p-2x+1)\) simplifies to \((p-1)(p-x)>0\). This proves Claim 1 since \(v\in V^{\prime}\) is arbitrary. For distinct vertices \(u\) and \(v\) in \(V^{\prime}\), the set \(C_{u}\cap C_{v}\) is empty because each member of \(C_{u}\) has vertex \(u\) as its unique centre whereas each member of \(C_{v}\) has vertex \(v\) as its unique centre. We are now ready to construct the partition \(\mathscr{P}\) of \(S\). For each \(v\in V^{\prime}\), include \(C_{v}\) in \(\mathscr{P}\). For each \(H\in S\setminus\bigcup_{v\in V^{\prime}}C_{v}\), include \(\{H\}\) in \(\mathscr{P}\). Observe that for all \(H\in S\setminus\bigcup_{v\in V^{\prime}}C_{v}\), the bicoloured component \(H\) is isomorphic to \(K_{1,\ell}\) with \(\ell\leq p\); as a result, \(|E(H)|\leq\frac{p}{p+1}|V(H)|\) and equality holds only if \(H\cong K_{1,p}\) (as in Claim 1 of Theorem 1). By Claim 1, \(\sum_{H\in P}|E(H)|<\frac{p}{p+1}\sum_{H\in P}|V(H)|\) for each member \(P=C_{v}\) of \(\mathscr{P}\) (where \(v\in V^{\prime}\)). Thus, we have the following claim. **Claim 2:** For each \(P\in\mathscr{P}\), \(\sum_{H\in P}|E(H)|\leq\frac{p}{p+1}\sum_{H\in P}|V(H)|\) and equality holds only if every member of \(P\) is isomorphic to \(K_{1,p}\). **Claim 3:** Every bicoloured component of \(G\) is isomorphic to \(K_{1,p}\). Contrary to Claim 3, assume that there is a bicoloured component \(H^{\prime}\) of \(G\) not isomorphic to \(K_{1,p}\). Then, \(H^{\prime}\) is in some member \(P^{\prime}\) of \(\mathscr{P}\). By Claim 2, \(\sum_{H\in P^{\prime}}|E(H)|<\frac{p}{p+1}\sum_{H\in P^{\prime}}|V(H)|\). Since \(\sum_{H\in P}|E(H)|\leq\frac{p}{p+1}\sum_{H\in P}|V(H)|\) for each \(P\in\mathscr{P}\) and the inequality is strict for \(P^{\prime}\in\mathscr{P}\), we have \[\sum_{P\in\mathscr{P}}\sum_{H\in P}|E(H)|<\frac{p}{p+1}\sum_{P\in\mathscr{P}} \sum_{H\in P}|V(H)|.\] Since \(\cup_{P\in\mathscr{P}}\cup_{H\in P}H\) is a decomposition of \(G\), the sum on the left side is \(m\coloneqq|E(G)|\). Besides, the sum on the right side is \((p+1)n\) where \(n\coloneqq|V(G)|\) because each vertex of \(G\) is counted exactly \(p+1\) times (note that each vertex of \(G\) is in exactly \(p+1\) bicoloured components of \(G\)). Thus, the inequality simplifies to \(m<\frac{p}{p+1}(p+1)n\). 
That is, \(m<pn\), a contradiction because \(2m=(2p)n\) for every \(2p\)-regular graph. This proves Claim 3. **Claim 4:** For every colour class \(V_{i}\) and every vertex \(v\notin V_{i}\), \(v\) has either exactly \(p\) neighbours or exactly one neighbour in \(V_{i}\). In particular, \(v\) has at least one neighbour in \(V_{i}\). Let \(v\notin V_{i}\); that is, \(v\in V_{j}\) for some \(j\neq i\). By Claim 3, the component of \(G[V_{i}\cup V_{j}]\) containing \(v\) is a star \(H\cong K_{1,p}\). If \(v\) is the centre of \(H\), then \(v\) has exactly \(p\) neighbours in \(V_{i}\). Otherwise, \(v\) has exactly one neighbour in \(V_{i}\). This proves Claim 4. **Claim 5:** Each vertex \(v\) of \(G\), say with colour \(i\), has exactly \(p\) neighbours in some colour class \(V_{j}\) and exactly one neighbour in every other colour class \(V_{k}\), \(k\notin\{i,j\}\) (see Figure 1). We prove Claim 5 for \(i=0\) (proof is similar for other values of \(i\)). Let \(v\in V_{0}\). Neighbours of \(v\) are from the colour classes \(V_{1},\ldots,V_{p+1}\). Since \(\deg_{G}(v)=2p>p+1\), by pigeon-hole principle, \(v\) has at least two neighbours in some colour class \(V_{j}\). Without loss of generality, assume that \(j=1\). Since \(v\) has more than one neighbour in \(V_{1}\), \(v\) has exactly \(p\) neighbours in \(V_{1}\) by Claim 4. Let \(w_{1},\ldots,w_{p},x_{1},\ldots,x_{p}\) be the neighbours of \(v\) where \(w_{1},\ldots,w_{p}\in V_{1}\). The remaining \(p\) neighbours \(x_{1},\ldots,x_{p}\) of \(v\) are in \(\bigcup_{k=2}^{p+1}V_{k}\). Since \(v\) has at least one neighbour in each of the colour classes \(V_{2},\ldots,V_{p+1}\) (by Claim 4), \(v\) has exactly one neighbour in each of the colour classes \(V_{2},\ldots,V_{p+1}\). This proves Claim 5. **Claim 6:** All colour classes have the same cardinality: that is, \(|V_{i}|=\frac{|V|}{p+2}\) for every colour \(i\). We prove Claim 6 for \(i=0\) (proof is similar for other values of \(i\)). By Claim 4, every vertex \(w\in V\setminus V_{0}\) has either exactly \(p\) neighbours or exactly one neighbour in \(V_{0}\). Let \(V^{*}\) denote the set of vertices \(x\in V\setminus V_{0}\) such that \(x\) has exactly \(p\) neighbours in \(V_{0}\). In the sum \(\sum_{v\in V_{0}}|N_{G}(v)|\), each member of \(V^{*}\) is counted exactly \(p\) times (because each \(w\in V^{*}\) has exactly \(p\) neighbours in \(V_{0}\)) and every member of \(V\setminus(V_{0}\cup V^{*})\) is counted exactly once (because each \(w\in V\setminus(V_{0}\cup V^{*})\) has exactly one neighbour in \(V_{0}\)). Hence, \(\sum_{v\in V_{0}}|N_{G}(v)|=|V\setminus(V_{0}\cup V^{*})|+p|V^{*}|\), and thus \[\sum_{v\in V_{0}}|N_{G}(v)|=|V|-|V_{0}|-|V^{*}|+p|V^{*}|. \tag{1}\] By counting the number of edges between the sets \(V_{0}\) and \(V^{*}\), we show that \(|V_{0}|=|V^{*}|\). Consider an arbitrary vertex \(v\in V_{0}\). Let \(w_{1},\ldots,w_{p},x_{1},\ldots,x_{p}\) be the neighbours of \(v\). By Claim 5, \(v\) has exactly \(p\) neighbours in some colour class \(V_{j}\) and exactly one neighbour in every other colour class \(V_{k}\), \(k\notin\{0,j\}\). Without loss of generality, assume that \(w_{1},\ldots,w_{p}\in V_{1}\), and \(x_{r}\in V_{r+1}\) for \(1\leq r\leq p\) (see Figure 1). For \(1\leq r\leq p\), \(v\) is the unique neighbour of \(w_{r}\) in \(V_{0}\) and thus \(w_{r}\notin V^{*}\). In contrast, for \(1\leq r\leq p\), \(x_{r}\) is the unique neighbour of \(v\) in \(V_{r+1}\); hence, \(x_{r}\) must have \(p\) neighbours in \(V_{0}\) (due to Claim 3) and thus \(x_{r}\in V^{*}\). 
So, \(v\) has exactly \(p\) neighbours in \(V^{*}\), namely \(x_{1},\ldots,x_{p}\). Since \(v\in V_{0}\) is arbitrary, each vertex in \(V_{0}\) has exactly \(p\) neighbours in \(V^{*}\). As a result, the number of edges from \(V_{0}\) to \(V^{*}\) is equal to \(p|V_{0}|\). By definition of \(V^{*}\), each vertex in \(V^{*}\) has exactly \(p\) neighbours in \(V_{0}\), and hence the number of edges from \(V^{*}\) to \(V_{0}\) is equal to \(p|V^{*}|\). Therefore, we have \(p|V_{0}|=p|V^{*}|\) and thus \(|V_{0}|=|V^{*}|\). Since \(G\) is a \(2p\)-regular graph, \(\sum_{v\in V_{0}}|N_{G}(v)|=2p|V_{0}|\). Therefore, equation (1) implies \(2p|V_{0}|=|V|-|V_{0}|-|V_{0}|+p|V_{0}|\). That is, \(2p|V_{0}|=|V|+(p-2)|V_{0}|\) or \(|V_{0}|=|V|/(p+2)\). This proves Claim 6. Since all colour classes have the same cardinality, \(f\) is an equitable colouring. For each vertex \(v\) of \(G\), all \(p+2\) colours are present in the closed neighbourhood of \(v\) in \(G\) by Claim 4. Therefore, \(f\) is a fall colouring of \(G\). Figure 1: An arbitrary vertex \(v\in V_{i}\) and its neighbourhood in \(G\) (here, \(i=0\) and \(j=1\)). **Theorem 3**.: _Let \(G(V,E)\) be a \(2p\)-regular graph on \(n\) vertices where \(p\geq 2\). Then, \(G\) is \((p+2)\)-star colourable if and only if the vertex set of \(G\) can be partitioned into \((p+1)(p+2)\) sets \(V_{i}^{j}\) with indices \(i,j\in\{0,1,\ldots,p+1\}\) and \(i\neq j\) such that for all \(i\) and \(j\), each vertex in \(V_{i}^{j}\) has exactly \(p\) neighbours in \(\bigcup_{k\notin\{i,j\}}V_{j}^{k}\) and exactly one neighbour in \(V_{k}^{i}\) for each \(k\notin\{i,j\}\)\((\)see Figure 2\()\). If the vertex set of \(G\) can be partitioned into such sets \(V_{i}^{j}\), then all sets \(V_{i}^{j}\) have the same cardinality and thus \(n\) is divisible by \((p+1)(p+2)\)._ Proof.: First, we prove the characterization of \(2p\)-regular \((p+2)\)-star colourable graphs where \(p\geq 2\). _Necessary part:_ Suppose that \(G\) admits a \((p+2)\)-star colouring \(f\colon V\to\{0,1,\ldots,p+1\}\). By Theorem 2, every bicoloured component of \(G\) (under \(f\)) is isomorphic to \(K_{1,p}\). Moreover, by Claim 5 of Theorem 2, each vertex \(v\) of \(G\), say with colour \(i\), has exactly \(p\) neighbours in some colour class \(V_{j}\) and exactly one neighbour in every other colour class \(V_{k}\), \(k\notin\{i,j\}\). For every pair of distinct colours \(i\) and \(j\), let \(V_{i}^{j}\) denote the set of vertices \(x\in V_{i}\) such that \(x\) has exactly \(p\) neighbours in \(V_{j}\). Since each vertex \(v\in V_{i}\) has exactly \(p\) neighbours in some colour class, \(\{V_{i}^{j}\ :\ 0\leq j\leq p+1\text{ and }j\neq i\}\) is a partition of \(V_{i}\). Therefore, \(\{V_{i}^{j}\ :\ 0\leq i\leq p+1,\ 0\leq j\leq p+1,\text{ and }i\neq j\}\) is a partition of \(V=V(G)\). **Claim 1:** Each vertex in \(V_{i}^{j}\) has exactly \(p\) neighbours in \(\bigcup_{k\notin\{i,j\}}V_{j}^{k}\) and exactly one neighbour in \(V_{k}^{i}\) for each \(k\notin\{i,j\}\). We prove Claim 1 for \(i=0\) and \(j=1\) (the proof is similar for other values). Let \(v\in V_{0}^{1}\). By definition of set \(V_{0}^{1}\), \(v\) has \(p\) neighbours in \(V_{1}\). By Claim 5 of Theorem 2, \(v\) has exactly one neighbour in \(V_{k}\) for \(2\leq k\leq p+1\). Let \(w_{1},\ldots,w_{p},x_{1},\ldots,x_{p}\) be the neighbours of \(v\) where \(w_{1},\ldots,w_{p}\in V_{1}\) and \(x_{r}\in V_{r+1}\) for \(1\leq r\leq p\) (see Figure 1). Recall that every bicoloured component of \(G\) is isomorphic to \(K_{1,p}\). 
For \(1\leq r\leq p\), \(v\) is the unique neighbour of \(w_{r}\) in \(V_{0}\) and thus \(w_{r}\notin V_{1}^{0}\). Hence, \(v\) has \(p\) neighbours in \(V_{1}\setminus V_{1}^{0}=\bigcup_{k=2}^{p+1}V_{1}^{k}\). On the other hand, for \(1\leq r\leq p\), \(x_{r}\) is the unique neighbour of \(v\) in \(V_{r+1}\) and thus \(x_{r}\) must have \(p\) neighbours in \(V_{0}\) (if not, the bicoloured component of \(G\) containing edge \(vx_{r}\) is isomorphic to \(K_{1,1}\)); that is, \(x_{r}\in V_{r+1}^{0}\). So, for \(2\leq k\leq p+1\), \(v\) has a neighbour in \(V_{k}^{0}\). Therefore, \(v\) has exactly \(p\) neighbours in \(V_{1}\setminus V_{1}^{0}=\bigcup_{k=2}^{p+1}V_{1}^{k}\) and exactly one neighbour in \(V_{k}^{0}\) for \(2\leq k\leq p+1\). This proves Claim 1, and thus completes the proof of the necessary part.

Figure 2: Neighbours of an arbitrary vertex \(v\in V_{i}^{j}\) (here, \(i=0\) and \(j=1\)).

_Sufficient part:_ Suppose that the vertex set of \(G\) can be partitioned into \((p+1)(p+2)\) sets \(V_{i}^{j}\) with indices \(i,j\in\{0,1,\ldots,p+1\}\) and \(i\neq j\) such that for all \(i\) and \(j\), each vertex in \(V_{i}^{j}\) has exactly \(p\) neighbours in \(\bigcup_{k\not\in\{i,j\}}V_{j}^{k}\) and exactly one neighbour in \(V_{k}^{i}\) for each \(k\notin\{i,j\}\). We claim that the function \(f\) defined as \(f(v)=i\) for all \(v\in V_{i}^{j}\) is a \((p+2)\)-star colouring of \(G\). On the contrary, assume that there is a \(4\)-vertex path \(u,v,w,x\) in \(G\) bicoloured by \(f\). Without loss of generality, assume that \(f(u)=f(w)=0\) and \(f(v)=f(x)=1\). Since \(v\) is coloured \(1\) and it has two neighbours coloured \(0\), \(v\) has exactly \(p\) neighbours in \(V_{0}\) by Claim 4 of Theorem 2. Thus, \(v\in V_{1}^{0}\). Similarly, since \(w\) is coloured \(0\) and it has two neighbours coloured \(1\), \(w\in V_{0}^{1}\). By the condition on the sets \(V_{i}^{j}\), the vertex \(w\in V_{0}^{1}\) has exactly \(p\) neighbours in \(V_{1}\setminus V_{1}^{0}=\bigcup_{k=2}^{p+1}V_{1}^{k}\) and exactly one neighbour in \(V_{k}^{0}\) for \(2\leq k\leq p+1\). In particular, \(w\) has no neighbour in \(V_{1}^{0}\). We have a contradiction since the neighbour \(v\) of \(w\) is in \(V_{1}^{0}\). Hence, \(f\) is indeed a \((p+2)\)-star colouring of \(G\). This completes the proof of the sufficient part.

Finally, we prove that the condition on the sets \(V_{i}^{j}\) implies that these sets are of equal size.

**Claim 2:** For every pair of indices \(i\) and \(j\), \(|V_{i}^{j}|=\frac{|V|}{(p+1)(p+2)}\).

We prove Claim 2 for \(i=0\) and \(j=1\) (the proof is similar for other values). We know that \(G\) admits a \((p+2)\)-star colouring \(f:V\to\{0,1,\ldots,p+1\}\). Besides, the colour class \(V_{0}=\bigcup_{k\neq 0}V_{0}^{k}\) and the colour class \(V_{1}=\bigcup_{k\neq 1}V_{1}^{k}\). Consider the sets \(V_{0}^{1}\), \(V_{0}\setminus V_{0}^{1}\), \(V_{1}^{0}\) and \(V_{1}\setminus V_{1}^{0}\). By definition of \(V_{1}^{0}\), each vertex in \(V_{1}^{0}\) has exactly \(p\) neighbours in \(\bigcup_{k=2}^{p+1}V_{0}^{k}=V_{0}\setminus V_{0}^{1}\). We know that every component of \(G[V_{0}\cup V_{1}]\) is isomorphic to \(K_{1,p}\) (by Theorem 2). So, each vertex in \(V_{1}^{0}\) has exactly \(p\) neighbours in \(V_{0}\setminus V_{0}^{1}\), and each vertex in \(V_{0}\setminus V_{0}^{1}\) has exactly one neighbour in \(V_{1}^{0}\). Hence, \[\#\text{edges between }V_{1}^{0}\text{ and }V_{0}\setminus V_{0}^{1}\ =\ p\,|V_{1}^{0}|\ =\ |V_{0}\setminus V_{0}^{1}|\ =\ |V_{0}|-|V_{0}^{1}|. \tag{2}\]
Similarly, each vertex in \(V_{0}^{1}\) has exactly \(p\) neighbours in \(V_{1}\setminus V_{1}^{0}=\bigcup_{k=2}^{p+1}V_{1}^{k}\), and each vertex in \(V_{1}\setminus V_{1}^{0}\) has exactly one neighbour in \(V_{0}^{1}\). Thus, we have \[\#\text{edges between }V_{0}^{1}\text{ and }V_{1}\setminus V_{1}^{0}\ =\ p\,|V_{0}^{1}|\ =\ |V_{1}\setminus V_{1}^{0}|\ =\ |V_{1}|-|V_{1}^{0}|. \tag{3}\] We also know that \(|V_{0}|=|V_{1}|\) (by Theorem 2). Hence, by equations (2) and (3), \(p\,|V_{1}^{0}|+|V_{0}^{1}|=|V_{0}|=|V_{1}|=p\,|V_{0}^{1}|+|V_{1}^{0}|\). The equation \(p\,|V_{1}^{0}|+|V_{0}^{1}|=p\,|V_{0}^{1}|+|V_{1}^{0}|\) simplifies to \((p-1)|V_{1}^{0}|=(p-1)|V_{0}^{1}|\). As \(p\geq 2\), we have \(|V_{1}^{0}|=|V_{0}^{1}|\). Therefore, equation (3) implies \(p\,|V_{0}^{1}|=|V_{1}|-|V_{0}^{1}|\). That is, \(|V_{0}^{1}|=\frac{|V_{1}|}{p+1}\). Since all colour classes under \(f\) have the same cardinality \(\frac{|V|}{p+2}\), we have \(|V_{0}^{1}|=\frac{|V_{1}|}{p+1}=\frac{|V|}{(p+1)(p+2)}\). This proves Claim 2. Since \(|V_{0}^{1}|\) is an integer, \(n=|V|\) is divisible by \((p+1)(p+2)\).

The following corollary improves on the known lower bound \(\lceil(d+3)/2\rceil\)[19] for the star chromatic number of the \(d\)-regular hypercube (provided \(d\geq 3\)).

**Corollary 1**.: _Let \(G\) be a \(d\)-regular hypercube with \(d\geq 3\). Then, \(\chi_{s}(G)\geq\lceil\frac{d+5}{2}\rceil\)._

Proof.: The lower bound holds for every odd number \(d\geq 3\) since \(\lceil\frac{d+4}{2}\rceil=\lceil\frac{d+5}{2}\rceil\) and \(\chi_{s}(G)\geq\lceil\frac{d+4}{2}\rceil\) (see Theorem 1). Hence, it suffices to establish the lower bound for every even number \(d\geq 4\). Suppose that \(d=2p\) where \(p\geq 2\). To prove that \(\chi_{s}(G)\geq\lceil(d+5)/2\rceil=p+3\), it suffices to show that \(G\) does not admit a \((p+2)\)-star colouring. On the contrary, assume that \(G\) admits a \((p+2)\)-star colouring. By Theorem 3, \(n=|V(G)|\) is divisible by \((p+1)(p+2)\). Since \(p+1\) or \(p+2\) is an odd number greater than one, \(n\) is divisible by an odd number greater than one. Since the number of vertices in a hypercube is a power of two, we have a contradiction.

**Corollary 2**.: _Let \(G\) be a \(2p\)-regular \((p+2)\)-star colourable graph where \(p\geq 2\). Then, the following hold: \((i)\)\(G\) is \((\mathrm{diamond},K_{4})\)-free, \((ii)\)\(\alpha(G)>n/4\), \((iii)\)\(\chi(G)\leq 3\log_{2}(p+2)\), \((iv)\)\(G\) admits a \(P_{4}\)-decomposition, and \((v)\) if \(G\) contains no asteroidal triple, then \(G\) is 3-colourable._

The proof of Corollary 2 is deferred to the end of the section.

**Theorem 4**.: _For every \(p\geq 2\), there exists a unique \(2p\)-regular \((p+2)\)-star colourable graph \(G_{2p}\) on \((p+1)(p+2)\) vertices. Moreover, \(G_{2p}\) is vertex-transitive, edge-transitive and Hamiltonian for every \(p\geq 2\)._

Proof.: By Theorem 3, if a \(2p\)-regular graph \(G\) is \((p+2)\)-star colourable, then the number of vertices in \(G\) is divisible by \((p+1)(p+2)\), and thus \(G\) has at least \((p+1)(p+2)\) vertices. The following claim is a direct consequence of the characterization of \(2p\)-regular \((p+2)\)-star colourable graphs given in Theorem 3.
**Claim 1:** For \(p\geq 2\), a \(2p\)-regular graph \(G\) on \((p+1)(p+2)\) vertices is \((p+2)\)-star colourable if and only if the vertex set of \(G\) can be partitioned into \((p+1)(p+2)\) singleton sets \(V_{i}^{j}\) with indices \(i,j\in\{0,1,\ldots,p+1\}\) and \(i\neq j\) such that for all \(i\) and \(j\), the unique vertex in \(V_{i}^{j}\) is adjacent to the unique vertex in \(V_{j}^{k}\) and the unique vertex in \(V_{k}^{i}\) for each \(k\notin\{i,j\}\).

The definition of \(G_{2p}\) is motivated by Claim 1. The vertex set of \(G_{2p}\) is \(\{(i,j)\ :\ 0\leq i\leq p+1,\ 0\leq j\leq p+1,\ \text{and}\ i\neq j\}\). A vertex \((i,j)\) of \(G_{2p}\) is adjacent to a vertex \((k,\ell)\) if (i) \(k=j\) and \(\ell\notin\{i,j\}\), or (ii) \(k\notin\{i,j\}\) and \(\ell=i\). The graph \(G_{4}\) is displayed in Figure 3. It is easy to verify that the sets \(V_{i}^{j}\coloneqq\{(i,j)\}\) satisfy the condition in Claim 1. Therefore, \(G_{2p}\) is \((p+2)\)-star colourable.

Figure 3: (a) Graph \(G_{4}\), and (b) its plane drawing.

Figure 4: (a) A Hamiltonian cycle \(L_{4}\) of \(G_{4}\) containing edge \(\{(0,1),(3,0)\}\), and (b) the Hamiltonian cycle of \(G_{6}\) obtained by replacing edge \(\{(0,1),(3,0)\}\) of cycle \(L_{4}\) by the path \((0,1),(1,4),(4,0),(2,4),(4,1),(3,4),(4,2),(0,4),(4,3),(3,0)\).

Moreover, every \(2p\)-regular \((p+2)\)-star colourable graph \(G\) on \((p+1)(p+2)\) vertices is isomorphic to \(G_{2p}\) (label the unique vertex in \(V_{i}^{j}\) as \((i,j)\) for all \(i\) and \(j\)). By the definition of \(G_{2p}\), whether vertex \((i,j)\) is adjacent to vertex \((k,\ell)\) depends only on equality and inequality between indices \(i,j,k,\ell\). Hence, for every bijection \(h\) from \(\{0,1,\ldots,p+1\}\) to itself, relabelling each index \(i\in\{0,1,\ldots,p+1\}\) by \(h(i)\) in vertex labels of \(G_{2p}\) (e.g.: relabel \((0,1)\) as \((h(0),h(1))\)) gives \(G_{2p}\) itself. Thus, we have the following.

**Claim 2:** For every bijection \(h\) from \(\{0,1,\ldots,p+1\}\) to itself, the function \(\psi\colon V(G_{2p})\to V(G_{2p})\) defined as \(\psi((x,y))=(h(x),h(y))\) is an automorphism of \(G_{2p}\).

With the help of Claim 2, we show that \(G_{2p}\) is vertex-transitive and edge-transitive.

**Claim 3:**\(G_{2p}\) is vertex-transitive for each \(p\geq 2\).

To construct an automorphism \(\psi\) that maps a vertex \((i,j)\) to a vertex \((k,\ell)\), first choose a bijection \(h\) from \(\{0,1,\ldots,p+1\}\) to itself such that \(h(i)=k\) and \(h(j)=\ell\), and then define \(\psi\big{(}(x,y)\big{)}=\big{(}h(x),h(y)\big{)}\) for all \((x,y)\in V(G_{2p})\). This proves Claim 3.

**Claim 4:**\(G_{2p}\) is edge-transitive for each \(p\geq 2\).

For each vertex \((i,j)\) of \(G_{2p}\), each neighbour of \((i,j)\) in \(G_{2p}\) is either of the form \((j,k)\) for some \(k\notin\{i,j\}\) or \((k,i)\) for some \(k\notin\{i,j\}\). So, edges incident on \((i,j)\) are \(\{(i,j),(j,k)\}\) where \(k\notin\{i,j\}\) or \(\{(k,i),(i,j)\}\) where \(k\notin\{i,j\}\). As a result, each edge of \(G_{2p}\) is of the form \(\{(q,r),(r,s)\}\) where \(q,r,s\in\{0,1,\ldots,p+1\}\), \(q\neq r\), \(r\neq s\) and \(s\neq q\). To construct an automorphism \(\psi\) that maps an edge \(\{(i,j),(j,k)\}\) to an edge \(\{(q,r),(r,s)\}\), first choose a bijection \(h\) from \(\{0,1,\ldots,p+1\}\) to itself such that \(h(i)=q\), \(h(j)=r\) and \(h(k)=s\), and then define \(\psi\big{(}(x,y)\big{)}=\big{(}h(x),h(y)\big{)}\) for all \((x,y)\in V(G_{2p})\). This proves Claim 4.
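The explicit description of \(G_{2p}\) lends itself to a quick computational sanity check. The following Python sketch (ours, not part of the proof; the helper names `build_G2p` and `is_star_colouring` are our own) builds \(G_{2p}\) from the adjacency rule above, confirms that it is \(2p\)-regular on \((p+1)(p+2)\) vertices, and confirms that colouring each vertex \((i,j)\) by its first index \(i\) leaves no bicoloured \(4\)-vertex path.

```python
def build_G2p(p):
    """Build G_2p: vertices are pairs (i, j) with i != j, 0 <= i, j <= p+1;
    (i, j) is adjacent to (k, l) iff (k == j and l not in {i, j})
    or (k not in {i, j} and l == i)."""
    vertices = [(i, j) for i in range(p + 2) for j in range(p + 2) if i != j]
    edges = set()
    for (i, j) in vertices:
        for (k, l) in vertices:
            if (k == j and l not in (i, j)) or (k not in (i, j) and l == i):
                edges.add(frozenset(((i, j), (k, l))))
    return vertices, edges

def is_star_colouring(vertices, edges, colour):
    """Proper colouring with no bicoloured path on 4 vertices."""
    adj = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    if any(colour[u] == colour[v] for u in vertices for v in adj[u]):
        return False                       # not even a proper colouring
    for b in vertices:                     # enumerate paths a-b-c-d
        for a in adj[b]:
            for c in adj[b]:
                if c == a:
                    continue
                for d in adj[c]:
                    if d in (a, b):
                        continue
                    if len({colour[a], colour[b], colour[c], colour[d]}) == 2:
                        return False       # found a bicoloured P_4
    return True

for p in (2, 3):
    vertices, edges = build_G2p(p)
    degree = {v: 0 for v in vertices}
    for e in edges:
        for v in e:
            degree[v] += 1
    assert len(vertices) == (p + 1) * (p + 2)
    assert set(degree.values()) == {2 * p}                 # 2p-regular
    f = {(i, j): i for (i, j) in vertices}                 # colour (i, j) by i
    assert is_star_colouring(vertices, edges, f)           # a (p+2)-star colouring
```

The helpers `build_G2p` and `is_star_colouring` are reused in a later sketch.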
Next, we prove that \(G_{2p}\) is Hamiltonian for \(p\geq 2\). We employ induction on \(p\geq 2\). _Base case_ (\(p=2\)): Figure 3(a) exhibits a Hamiltonian cycle in \(G_{4}\). This proves the base case. _Induction step_ (\(p\geq 3\)): Assume that \(G_{2(p-1)}\) is Hamiltonian. Since \(G_{2(p-1)}\) is edge-transitive (see Claim 4), \(G_{2(p-1)}\) has a Hamiltonian cycle \(L_{2(p-1)}\) containing the edge \(\{(0,1),(p,0)\}\). In \(G_{2p}\), \(L_{2(p-1)}\) is a cycle, and the only vertices not in \(L_{2(p-1)}\) are \((0,p+1),(1,p+1),\ldots,(p,p+1)\), \((p+1,0),(p+1,1),\ldots,(p+1,p)\). Replacing the edge \(\{(0,1),(p,0)\}\) of the cycle \(L_{2(p-1)}\) in \(G_{2p}\) by the path \((0,1),(1,p+1),(p+1,0),(2,p+1),(p+1,1),\ldots,(p,p+1),(p+1,p-1),(0,p+1),(p+1,p),( p,0)\) gives a Hamiltonian cycle of \(G_{2p}\) (see Figure 4 for an example). This proves the induction step. Next, we show that the lower bound \(\lceil(d+4)/2\rceil\) for the star chromatic number of \(d\)-regular graphs established by Theorem 1 is attained for all \(d\geq 2\). **Theorem 5**.: _For all \(d\geq 2\), there exists a \(d\)-regular graph \(G\) with \(\chi_{s}(G)=\left\lceil\frac{d+4}{2}\right\rceil\)._ Proof.: By Theorem 1, \(\chi_{s}(G)\geq\lceil(d+4)/2\rceil\) for every \(d\)-regular graph \(G\) with \(d\geq 2\). Hence, to prove the theorem, it suffices to show that for every \(d\geq 2\), there exists a \(d\)-regular graph \(G\) with \(\chi_{s}(G)\leq\lceil(d+4)/2\rceil\). First, we consider the case when \(d\) is even, say \(d=2p\). If \(p=1\), then there exists a \(2\)-regular graph \(C_{4}\) such that \(\chi_{s}(C_{4})=3=p+2\). For \(p\geq 2\), there exists a \(2p\)-regular graph \(G_{2p}\) with \(\chi_{s}(G_{2p})\leq p+2\) by Theorem 4. So, for all \(p\geq 1\), there exists a \(2p\)-regular graph \(G\) with \(\chi_{s}(G)\leq p+2=\lceil(d+4)/2\rceil\). That is, for every even number \(d\geq 2\), there exists a \(d\)-regular graph with the star chromatic number at most \((d+4)/2\), and thus the lower bound \(\lceil(d+4)/2\rceil\) for the star chromatic number is attained for \(d\). Next, we consider the case when \(d\) is odd, say \(d=2p-1\) for some \(p\geq 2\). We know that \(G_{2p}\) is a \(2p\)-regular Hamiltonian graph on \((p+1)(p+2)\) vertices (see Theorem 4). Since \(G_{2p}\) is a Hamiltonian graph on an even number of vertices, \(G_{2p}\) admits a perfect matching \(M\) (pick alternate edges from a Hamiltonian cycle). Hence, \(H_{2p}\coloneqq G_{2p}-M\) is a \((2p-1)\)-regular \((p+2)\)-star colourable graph. So, \(\chi_{s}(H_{2p})\leq p+2=\lceil(d+4)/2\rceil\). This proves that the lower bound \(\lceil(d+4)/2\rceil\) for the star chromatic number of \(d\)-regular graphs is attained for every odd number \(d\geq 3\) as well. Next, we show that the structure of \(2p\)-regular \((p+2)\)-star colourable graphs proved in Theorem 3 can be expressed in the Locally Checkable Vertex Subset and Partitioning problems (LC-VSP) framework of Telle and Proskurowski [37]. For a fixed integer \(q\geq 1\) and a fixed \(q\times q\) matrix \(D_{q}\) each entry of which is a subset of \(\mathbb{Z}_{0}^{+}\coloneqq\{0,1,\dots\}\), the _\(\exists D_{q}\)-partition_ problem in the LC-VSP framework is the decision problem that takes a graph \(G\) as input and asks whether the vertex set of \(G\) can be partitioned into \(q\) sets \(U_{1},U_{2},\dots,U_{q}\) such that for every \(i\) and \(j\), each vertex \(v\in U_{i}\) satisfy \(|N_{G}(v)\cap U_{j}|\in D_{q}[i,j]\) (we write the \((i,j)\)-th entry of \(D_{q}\) as \(D_{q}[i,j]\)). 
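To make the \(\exists D_{q}\)-partition problem concrete, here is a minimal Python checker (our sketch, not an implementation from [37]): given a graph, a candidate partition, and a degree constraint matrix in which `None` stands for \(\mathbb{Z}_{0}^{+}\), it verifies the neighbourhood-count conditions. A toy check on the \(4\)-cycle follows.

```python
def satisfies_Dq_partition(adj, parts, D):
    """adj: dict vertex -> set of neighbours.
    parts: list of q pairwise disjoint vertex sets covering all vertices.
    D: q x q matrix; D[i][j] is None (no restriction, i.e. Z_0^+) or a set of
       allowed values for |N(v) & parts[j]| for every v in parts[i]."""
    q = len(parts)
    for i in range(q):
        for v in parts[i]:
            for j in range(q):
                allowed = D[i][j]
                if allowed is not None and len(adj[v] & parts[j]) not in allowed:
                    return False
    return True

# toy usage: the 4-cycle 0-1-2-3-0 with parts {0, 2} and {1, 3};
# every vertex must have 0 neighbours in its own part and 2 in the other part
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert satisfies_Dq_partition(adj, [{0, 2}, {1, 3}], [[{0}, {2}], [{2}, {0}]])
```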
By Theorem 3, a \(2p\)-regular graph \(G\) is \((p+2)\)-star colourable if and only if the vertex set of \(G\) can be partitioned into sets \(V_{i}^{j}\) with indices \(i,j\in\{0,1,\dots,p+1\}\) and \(i\neq j\) such that for all \(i\) and \(j\), each vertex in \(V_{i}^{j}\) has exactly one neighbour in \(V_{k}^{i}\) for each \(k\neq j\) and exactly \(p\) neighbours in \(\bigcup_{k\neq i}V_{j}^{k}\). We can rephrase the condition on sets \(V_{i}^{j}\) as follows: for all \(i\) and \(j\), each vertex \(v\) in \(V_{i}^{j}\) has exactly one neighbour in \(V_{k}^{i}\) for each \(k\neq j\), \(v\) has no neighbour in \(V_{j}^{i}\), and \(v\) has no neighbour in \(V_{k}^{\ell}\) for each \(k\neq j\) and \(\ell\neq i\). This formulation of the structure can be directly expressed as the \(\exists D_{q}\)-partition problem in the LC-VSP framework where \(q=(p+1)(p+2)\) and \(D_{q}\) is the \(q\times q\) matrix whose rows represent sets \(V_{i}^{j}\) and the entry of \(D_{q}\) at the intersection of the row for \(V_{i}^{j}\) and the column for \(V_{k}^{\ell}\) is \(\{1\}\) if \(k\neq j\) and \(\ell=i\), the entry is \(\mathbb{Z}_{0}^{+}\) if \(k=j\) and \(\ell\neq i\), and the entry is \(\{0\}\) in all other cases. For the special case \(p=2\), the matrix \(D_{q}\) is given below.

\[D_{12}=\begin{array}{c|cccccccccccc} & V_{0}^{1} & V_{0}^{2} & V_{0}^{3} & V_{1}^{0} & V_{1}^{2} & V_{1}^{3} & V_{2}^{0} & V_{2}^{1} & V_{2}^{3} & V_{3}^{0} & V_{3}^{1} & V_{3}^{2}\\ \hline V_{0}^{1} & \{0\} & \{0\} & \{0\} & \{0\} & \mathbb{Z}_{0}^{+} & \mathbb{Z}_{0}^{+} & \{1\} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\}\\ V_{0}^{2} & \{0\} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\} & \{0\} & \mathbb{Z}_{0}^{+} & \mathbb{Z}_{0}^{+} & \{1\} & \{0\} & \{0\}\\ V_{0}^{3} & \{0\} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\} & \{0\} & \mathbb{Z}_{0}^{+} & \mathbb{Z}_{0}^{+}\\ V_{1}^{0} & \{0\} & \mathbb{Z}_{0}^{+} & \mathbb{Z}_{0}^{+} & \{0\} & \{0\} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\} & \{1\} & \{0\}\\ V_{1}^{2} & \{1\} & \{0\} & \{0\} & \{0\} & \{0\} & \{0\} & \mathbb{Z}_{0}^{+} & \{0\} & \mathbb{Z}_{0}^{+} & \{0\} & \{1\} & \{0\}\\ V_{1}^{3} & \{1\} & \{0\} & \{0\} & \{0\} & \{0\} & \{0\} & \{0\} & \{1\} & \{0\} & \mathbb{Z}_{0}^{+} & \{0\} & \mathbb{Z}_{0}^{+}\\ V_{2}^{0} & \mathbb{Z}_{0}^{+} & \{0\} & \mathbb{Z}_{0}^{+} & \{0\} & \{1\} & \{0\} & \{0\} & \{0\} & \{0\} & \{0\} & \{0\} & \{1\}\\ V_{2}^{1} & \{0\} & \{1\} & \{0\} & \mathbb{Z}_{0}^{+} & \{0\} & \mathbb{Z}_{0}^{+} & \{0\} & \{0\} & \{0\} & \{0\} & \{0\} & \{1\}\\ V_{2}^{3} & \{0\} & \{1\} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\} & \{0\} & \{0\} & \mathbb{Z}_{0}^{+} & \mathbb{Z}_{0}^{+} & \{0\}\\ V_{3}^{0} & \mathbb{Z}_{0}^{+} & \mathbb{Z}_{0}^{+} & \{0\} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\} & \{0\}\\ V_{3}^{1} & \{0\} & \{0\} & \{1\} & \mathbb{Z}_{0}^{+} & \mathbb{Z}_{0}^{+} & \{0\} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\} & \{0\}\\ V_{3}^{2} & \{0\} & \{0\} & \{1\} & \{0\} & \{0\} & \{1\} & \mathbb{Z}_{0}^{+} & \mathbb{Z}_{0}^{+} & \{0\} & \{0\} & \{0\} & \{0\}\end{array}\]

The entry of the matrix \(D_{12}\) at the intersection of the first row (i.e., row for \(V_{0}^{1}\)) and the seventh column (i.e., column for \(V_{2}^{0}\)) is \(D_{12}[1,7]=\{1\}\). Allow us to explain the first row of \(D_{12}\) in detail.
Consider an arbitrary vertex \(v\) in \(V_{0}^{1}\). The first row contains two \(\{1\}\) entries: one for column \(V_{2}^{0}\) and one for column \(V_{3}^{0}\). These entries demand that \(v\) has exactly one neighbour in \(V_{2}^{0}\) and exactly one neighbour in \(V_{3}^{0}\). The first row also contains two \(\mathbb{Z}_{0}^{+}\) entries: one for column \(V_{1}^{2}\) and one for column \(V_{1}^{3}\). These entries put no restriction on the number of neighbours of \(v\) in sets \(V_{1}^{2}\) and \(V_{1}^{3}\). Every other entry of the first row is \(\{0\}\). These entries demand that \(v\) has no neighbour outside of \(V_{2}^{0}\cup V_{3}^{0}\cup V_{1}^{2}\cup V_{1}^{3}\). Hence, by the first row of \(D_{12}\), \(v\) has exactly one neighbour in \(V_{2}^{0}\), exactly one neighbour in \(V_{3}^{0}\), and exactly two neighbours in \(V_{1}^{2}\cup V_{1}^{3}\) (note that \(\deg_{G}(v)=4\) since \(G\) is a regular graph with degree \(2p=4\)). For each \(p\geq 2\), a \(2p\)-regular graph \(G\) is \((p+2)\)-star colourable if and only if \(G\) admits a \(D_{q}\)-partition; also, each entry of the degree constraint matrix \(D_{q}\) is either finite or co-finite. Thanks to results in [8, 9, 37], this proves that for each \(p\geq 2\), the problem of testing whether a \(2p\)-regular graph is \((p+2)\)-star colourable admits an FPT algorithm with the parameter either treewidth, cliquewidth, rankwidth, or booleanwidth (see also [34, 36]). In particular, for graph classes with bounded treewidth (resp. cliquewidth, rankwidth, or booleanwidth), the problem is polynomial-time solvable. Moreover, by results from [3], the problem is polynomial-time solvable in several other graph classes including interval graphs, permutation graphs, trapezoid graphs, convex graphs and Dilworth-\(k\) graphs. It is worth mentioning that the problem also fits in the framework of Gerber and Kobler [24] since every entry in \(D_{q}\) is a set of consecutive integers. Motivated by the structure of \(G_{2p}\), we define a subclass \(\mathscr{G}_{2p}\) of the class of \(2p\)-regular \((p+2)\)-star colourable graphs. A graph \(G\) belongs to \(\mathscr{G}_{2p}\) if and only if the vertex set of \(G\) can be partitioned into \((p+1)(p+2)\) sets \(V_{i}^{j}\) with indices \(i,j\in\{0,1,\ldots,p+1\}\) and \(i\neq j\) such that for all \(i\) and \(j\), each vertex in \(V_{i}^{j}\) has exactly one neighbour in \(V_{j}^{k}\) and exactly one neighbour in \(V_{k}^{i}\) for each \(k\notin\{i,j\}\). In other words, a graph \(G\) is in \(\mathscr{G}_{2p}\) if and only if \(G\) admits an \(E_{q}\)-partition where \(q=(p+1)(p+2)\) and \(E_{q}\) is the matrix obtained from \(D_{q}\) by replacing entries \(\mathbb{Z}_{0}^{+}\) by \(\{1\}\). Hence, \(2p\)-regular graphs in \(\mathscr{G}_{2p}\) are \((p+2)\)-star colourable. Observe that for each \(p\geq 2\), \(E_{q}\) is precisely the matrix obtained from the adjacency matrix of \(G_{2p}\) by replacing entries \(0\) by \(\{0\}\) and entries \(1\) by \(\{1\}\). This natural connection between graph \(G_{2p}\) and the class \(\mathscr{G}_{2p}\) can be expressed by the notion of locally bijective homomorphisms.
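For reference, the matrices \(D_{q}\) and \(E_{q}\) can be generated mechanically from the rules above. The short Python sketch below is ours (the helper name `build_constraint_matrix` is hypothetical, and `None` again encodes \(\mathbb{Z}_{0}^{+}\)); it checks that, for \(p=2\), the row for \(V_{0}^{1}\) has its \(\{1\}\) entries at columns \(V_{2}^{0}\) and \(V_{3}^{0}\) and its \(\mathbb{Z}_{0}^{+}\) entries at columns \(V_{1}^{2}\) and \(V_{1}^{3}\), as discussed above. Such a matrix can be fed to the `satisfies_Dq_partition` checker sketched earlier.

```python
def build_constraint_matrix(p, star_class=False):
    """Rows and columns are indexed by the pairs (i, j), i != j, in lexicographic
    order.  Entry for row (i, j) and column (k, l):
      {1}   if k != j and l == i,
      Z_0^+ (encoded as None) if k == j and l != i,
      {0}   otherwise.
    With star_class=True the Z_0^+ entries become {1}, which yields E_q."""
    index = [(i, j) for i in range(p + 2) for j in range(p + 2) if i != j]
    matrix = []
    for (i, j) in index:
        row = []
        for (k, l) in index:
            if k != j and l == i:
                row.append({1})
            elif k == j and l != i:
                row.append({1} if star_class else None)
            else:
                row.append({0})
        matrix.append(row)
    return index, matrix

index, D12 = build_constraint_matrix(2)
row_V01 = dict(zip(index, D12[0]))             # first row, i.e. the row for V_0^1
assert row_V01[(2, 0)] == {1} and row_V01[(3, 0)] == {1}
assert row_V01[(1, 2)] is None and row_V01[(1, 3)] is None
assert all(entry == {0} for col, entry in row_V01.items()
           if col not in {(2, 0), (3, 0), (1, 2), (1, 3)})
```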
Given a graph \(H\) with vertex set \(\{u_{1},u_{2},\ldots,u_{r}\}\), a graph \(G\) is said to have a _locally bijective homomorphism_ to \(H\) (also called an \(H\)-cover) if the vertex set of \(G\) can be partitioned into \(r\) sets \(U_{1},U_{2},\ldots,U_{r}\) such that the following hold: (i) \(u_{i}\) is adjacent to \(u_{j}\) in \(H\) implies that each vertex in \(U_{i}\) is adjacent to exactly one vertex in \(U_{j}\) in \(G\), and (ii) \(u_{i}\) is not adjacent to \(u_{j}\) in \(H\) implies that no vertex in \(U_{i}\) is adjacent to any vertex in \(U_{j}\) in \(G\)[20]. In other words, \(G\) has a locally bijective homomorphism to \(H\) if we can label the vertices of \(G\) by names of vertices in \(H\) such that the following hold: (i) each vertex of \(G\) labelled \(u\) (where \(u\in V(H)\)) is called a copy of \(u\) in \(G\), and (ii) for each \(u\in V(H)\) and each copy \(u^{(s)}\) of \(u\) in \(G\), \(\deg_{G}(u^{(s)})=\deg_{H}(u)\) and neighbours of \(u^{(s)}\) in \(G\) are exactly copies of neighbours of \(u\) in \(H\). Clearly, a graph \(G\) is in \(\mathscr{G}_{2p}\) if and only if \(G\) has a locally bijective homomorphism to \(G_{2p}\) (label members of \(V_{i}^{j}\) by \((i,j)\)). It is known that for every (simple) \(d\)-regular graph \(H\) with \(d\geq 3\), it is NP-complete to test whether an input graph \(G\) has a locally bijective homomorphism to \(H\)[20, Theorem 20]. Therefore, we have the following result by using \(H=G_{2p}\). **Observation 2**.: _For all \(p\geq 2\), it is NP-complete to test whether a \(2p\)-regular graph belongs to \(\mathscr{G}_{2p}\). _ This motivates the following problem. **Problem 1**.: For \(p\geq 2\), is it NP-complete to test whether a \(2p\)-regular graph is \((p+2)\)-star colourable? The special case \(p=2\) of Observation 2 says that it is NP-complete to test whether a \(4\)-regular graph is in \(\mathscr{G}_{4}\). The following is a decomposition result for members of \(\mathscr{G}_{4}\) (see supplementary material for proof). **Observation 3**.: _Every graph \(G\in\mathscr{G}_{4}\) can be decomposed into cycles of length divisible by three._ Theorem 5 showed that for all \(p\geq 2\), \(G_{2p}\) is a \(2p\)-regular \((p+2)\)-star colourable graph. The graph \(G_{4}\) is planar as shown in Figure 3 whereas \(G_{2p}\) is non-planar for \(p>2\) (because planar graphs are \(5\)-degenerate). Next, we construct infinitely many planar \(4\)-regular connected graphs that are \(4\)-star colourable (recall that a \(4\)-regular graph \(G\) is \(4\)-star colourable only if \(n=|V(G)|\) is divisible by twelve). **Theorem 6**.: _For every integer \(n\) divisible by twelve, there exists a planar 4-regular Hamiltonian graph on \(n\) vertices which is 4-star colourable._ Proof.: For every positive integer \(t\), we construct a planar \(4\)-regular Hamiltonian graph \(G^{(t)}\) on \(12t\) vertices which is \(4\)-star colourable. Recall that \(G_{4}\) is Hamiltonian by Theorem 4. Choose an edge \(vw\) of \(G_{4}\) which is part of a Hamiltonian cycle of \(G_{4}\). Choose a plane drawing of \(G_{4}\) such that edge \(vw\) appear in the outer face. The graph \(G^{(t)}\) is made of \(t\) copies \(H^{(0)},H^{(1)},\ldots,H^{(t-1)}\) of \(H\coloneqq G_{4}-vw\) and edges between them in the cyclic order. For each vertex \(u\) of \(G_{4}\), let us denote the copy of \(u\) in \(H^{(s)}\) by \(u^{(s)}\) for all \(s\in\mathbb{Z}_{t}\). For each \(s\in\mathbb{Z}_{t}\), add an edge \(v^{(s)}w^{(s+1)}\) where superscript \((s+1)\) is modulo \(t\). 
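A minimal computational sketch of this construction is given below (ours, not part of the proof). It reuses `build_G2p` and `is_star_colouring` from the earlier snippet and picks an arbitrary edge of \(G_{4}\) in place of the specific edge \(vw\) chosen in the proof; since \(G_{4}\) is edge-transitive this is enough for checking the degree count and the star colouring, though not for reproducing the particular plane drawing.

```python
def build_Gt(t, p=2):
    """t copies of H = G_{2p} - vw, linked cyclically by edges v^(s) -- w^(s+1).
    Vertices are (s, x) with s in Z_t and x a vertex of G_{2p}.
    Reuses build_G2p from the earlier sketch; vw is an arbitrary edge of G_{2p}."""
    base_vertices, base_edges = build_G2p(p)
    v, w = tuple(sorted(next(iter(base_edges))))
    copy_edges = [tuple(e) for e in base_edges if e != frozenset((v, w))]
    vertices = [(s, x) for s in range(t) for x in base_vertices]
    edges = set()
    for s in range(t):
        for (a, b) in copy_edges:
            edges.add(frozenset(((s, a), (s, b))))
        edges.add(frozenset(((s, v), ((s + 1) % t, w))))    # cyclic linking edge
    return vertices, edges

vertices, edges = build_Gt(3)                     # G^(3): 4-regular on 36 vertices
degree = {x: 0 for x in vertices}
for e in edges:
    for x in e:
        degree[x] += 1
assert set(degree.values()) == {4} and len(vertices) == 36
# labelling (s, x) by x is a locally bijective homomorphism to G_4, so
# colouring (s, (i, j)) by i is again a 4-star colouring:
f = {(s, (i, j)): i for (s, (i, j)) in vertices}
assert is_star_colouring(vertices, edges, f)      # checker from the earlier sketch
```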
Examples are exhibited in Figure 5. Since \(G^{(t)}\) is composed of \(t\) copies of the planar graph \(H\) and edges between those copies in the cyclic order, \(G^{(t)}\) is planar (where \(t\in\mathbb{N}\)). Next, we show that \(G^{(t)}\) is Hamiltonian for every \(t\in\mathbb{N}\). Since the edge \(vw\) of \(G_{4}\) is part of a Hamiltonian cycle of \(G_{4}\), there is a Hamiltonian path from \(w\) to \(v\) in the graph \(H=G_{4}-vw\) (one such path is highlighted in Figure 5a). So, for each \(s\in\mathbb{Z}_{t}\), the graph \(H^{(s)}\) has a Hamiltonian path from \(w^{(s)}\) to \(v^{(s)}\). For \(s\in\mathbb{Z}_{t}\), let \(w^{(s)},P^{(s)},v^{(s)}\) denote one such Hamiltonian path in \(H^{(s)}\) (where \(P^{(s)}\) stands for a segment of the path). Then, \((w^{(0)},P^{(0)},v^{(0)},w^{(1)},P^{(1)},v^{(1)},\ldots,w^{(t-1)},P^{(t-1)},v^ {(t-1)})\) is a Hamiltonian cycle in \(G^{(t)}\). So, \(G^{(t)}\) is Hamiltonian for every \(t\in\mathbb{N}\). Therefore, for each \(t\in\mathbb{N}\), \(G^{(t)}\) is a planar \(4\)-regular Hamiltonian graph on \(12t\) vertices. Finally, we show that \(G^{(t)}\) is \(4\)-star colourable for each \(t\in\mathbb{N}\). To prove this, it suffices to show that \(G^{(t)}\) is in \(\mathscr{G}_{4}\). Note that for each \(u\in V(G_{4})\) and each copy \(u^{(s)}\) of \(u\) in \(G^{(t)}\) (where \(0\leq s\leq t-1\)), \(\deg_{G^{(t)}}(u^{(s)})=\deg_{G_{4}}(u)\) and the neighbours of \(u^{(s)}\) in \(G^{(t)}\) are exactly copies of neighbours of \(u\) in \(G_{4}\). Therefore, labelling each vertex \(u^{(s)}\) of \(G^{(t)}\) by \(u\) for all \(u\in V(G_{4})\) and \(0\leq s\leq t-1\) gives a locally bijective homomorphism from \(G^{(t)}\) to \(G_{4}\) (see the alternate definition of locally bijective homomorphism on Page 14). Since there is a locally bijective homomorphism from \(G^{(t)}\) to \(G_{4}\), \(G^{(t)}\) is in \(\mathscr{G}_{4}\). In particular, \(G^{(t)}\) is \(4\)-star colourable. Figure 5: Graphs \(G^{(1)}\), \(G^{(2)}\) and \(G^{(3)}\) in Theorem 6. **Theorem 7**.: _Let \(p\geq 2\). For every integer \(n\) divisible by \((p+1)(p+2)\), there exists a planar \(2p\)-regular Hamiltonian graph on \(n\) vertices which is \((p+2)\)-star colourable._ The proof is similar to Theorem 6 and hence moved to supplementary material. Next, we provide proof of Corollary 2. **Corollary 2** (Restated).: _Let \(G\) be a \(2p\)-regular \((p+2)\)-star colourable graph where \(p\geq 2\). Then, the following hold: \((i)\)\(G\) is \((\mathrm{diamond},K_{4})\)-free, \((ii)\)\(\alpha(G)>n/4\), \((iii)\)\(\chi(G)\leq 3\log_{2}(p+2)\), \((iv)\)\(G\) admits a \(P_{4}\)-decomposition, and \((v)\) if \(G\) contains no asteroidal triple, then \(G\) is 3-colourable._ Proof.: Let \(f\) be a \((p+2)\)-star colouring of \(G\). By Theorem 2, every bicoloured component of \(G\) under \(f\) is isomorphic to \(K_{1,p}\). By Theorem 3, \(f\) induces a partition of the vertex set of \(G\) into sets \(V_{i}^{j}\) with indices \(i,j\in\{0,1,\ldots,p+1\}\) and \(i\neq j\) such that the following hold for every pair of indices \(i\) and \(j\): (i) each vertex in \(V_{i}^{j}\) has exactly \(p\) neighbours in \(\bigcup_{k\notin\{i,j\}}V_{j}^{k}\) and exactly one neighbour in \(V_{k}^{i}\) for each \(k\notin\{i,j\}\), and (ii) \(V_{i}=\bigcup_{j\neq i}V_{i}^{j}\) for every colour \(i\) (where \(V_{i}=f^{-1}(i)\)). **Claim 1:** If \(u\in V_{i}\), \(v\in V_{j}\) and \(uv\) is an edge in \(G\), then either \(u\in V_{i}^{j}\) or \(v\in V_{j}^{i}\). 
Suppose that \(u\in V_{i}\), \(v\in V_{j}\) and \(uv\) is an edge in \(G\). By Theorem 2, the component of \(G[V_{i}\cup V_{j}]\) containing edge \(uv\) is a star \(H\cong K_{1,p}\). So, either \(u\) or \(v\) is the centre of \(H\). If \(u\) is the centre of \(H\), then \(u\) has \(p\) neighbours in \(V_{j}\) and thus \(u\in V_{i}^{j}\). If \(v\) is the centre of \(H\), then \(v\) has \(p\) neighbours in \(V_{i}\) and thus \(v\in V_{j}^{i}\). This proves Claim 1. To prove that \(G\) is (diamond, \(K_{4}\))-free, it suffices to show that diamond is not a subgraph of \(G\). On the contrary, assume that \(G\) contains diamond as a subgraph; that is, there exist vertices \(a,b,x,y\) of \(G\) such that \(ax,ay,bx,by\) and \(xy\) are edges in \(G\). Without loss of generality, assume that \(x\in V_{0}\) and \(y\in V_{1}\). Since \(xy\) is an edge, either \(x\in V_{0}^{1}\) or \(y\in V_{1}^{0}\) by Claim 1. Without loss of generality, assume that \(x\in V_{0}^{1}\). Due to Claim 5 of Theorem 2, \(x\) has exactly \(p\) neighbours in \(V_{1}\) and exactly one neighbour in each of the colour classes \(V_{2},V_{3},\ldots,V_{p+1}\). In particular, the vertices \(a\) and \(b\) are in different colour classes (if not, \(a,b\in V_{i}\) for some \(i\in\{2,3,\ldots,p+1\}\) and hence \(x\) has two neighbours in \(V_{i}\); a contradiction). Hence, we may assume without loss of generality that \(a\in V_{2}\) and \(b\in V_{3}\). Since \(\{V_{i}^{j}\ :\ 0\leq i\leq p+1,\ 0\leq j\leq p+1,\) and \(i\neq j\}\) is a partition of \(V(G)\), no vertex of \(G\) is in two distinct sets \(V_{i}^{j}\) and \(V_{k}^{\ell}\) (where \(k\neq i\) or \(j\neq\ell\)). Since \(ax\) is an edge, either \(a\in V_{2}^{0}\) or \(x\in V_{0}^{2}\) by Claim 1. Since \(x\in V_{0}^{1}\), \(x\notin V_{0}^{2}\) and hence \(a\in V_{2}^{0}\). Since \(bx\) is an edge, either \(b\in V_{3}^{0}\) or \(x\in V_{0}^{3}\) by Claim 1. Since \(x\in V_{0}^{1}\), \(x\notin V_{0}^{3}\) and hence \(b\in V_{3}^{0}\). Since \(ay\) is an edge, either \(a\in V_{2}^{1}\) or \(y\in V_{1}^{2}\) by Claim 1. Since \(a\in V_{2}^{0}\), \(a\notin V_{2}^{1}\) and hence \(y\in V_{1}^{2}\). Since \(by\) is an edge, either \(b\in V_{3}^{1}\) or \(y\in V_{1}^{3}\) by Claim 1. Since \(b\in V_{3}^{0}\), \(b\notin V_{3}^{1}\) and hence \(y\in V_{1}^{3}\). Since \(y\in V_{1}^{2}\) and \(y\in V_{1}^{3}\), we have a contradiction to \(V_{1}^{2}\cap V_{1}^{3}=\emptyset\). Hence, \(G\) does not contain diamond as a subgraph, and thus \(G\) is \((\mathrm{diamond},K_{4})\)-free. This proves (i). Next, we show that the independence number \(\alpha(G)>n/4\). By Claim 1, if \(u\in V_{0}\) is adjacent to \(v\in V_{1}\), then either \(u\in V_{0}^{1}\) or \(v\in V_{1}^{0}\). Therefore, \((V_{0}\cup V_{1})\setminus(V_{0}^{1}\cup V_{1}^{0})\) is an independent set in \(G\). For the same reason, \(\bigcup_{i=0}^{2}\bigcup_{j=3}^{p+1}V_{i}^{j}=(V_{0}\cup V_{1}\cup V_{2}) \setminus(V_{0}^{1}\cup V_{0}^{2}\cup V_{1}^{0}\cup V_{1}^{2}\cup V_{2}^{0} \cup V_{2}^{1})\) is an independent set in \(G\). In general, for \(0<t<p+1\), the set \(I_{t}\coloneqq\bigcup_{i=0}^{t-1}\bigcup_{j=t}^{p+1}V_{i}^{j}\) is an independent set of \(G\) with cardinality \(\sum_{i=0}^{t-1}\sum_{j=t}^{p+1}|V_{i}^{j}|\). By Theorem 3, each set \(V_{i}^{j}\) has cardinality \(n/(p+1)(p+2)\) and thus \(|I_{t}|=\sum_{i=0}^{t-1}\sum_{j=t}^{p+1}\frac{n}{(p+1)(p+2)}=t(p+2-t)\frac{n}{(p +1)(p+2)}\) for \(0<t<p+1\). 
In particular, for \(t=\left\lceil\frac{p+2}{2}\right\rceil\), \(I_{t}\) is an independent set of size \(\left\lfloor\frac{p+2}{2}\right\rfloor\left\lceil\frac{p+2}{2}\right\rceil\frac{n}{(p+1)(p+2)}>\frac{n}{4}\). Hence, \(\alpha(G)>n/4\). This proves (ii). Next, we show that the chromatic number of \(G\) is at most \(3\log_{2}(p+2)\). From Theorem 3, we know that \(G\) admits a \(D_{(p+1)(p+2)}\)-partition, where the \(D_{q}\)-partition problem and the matrix \(D_{(p+1)(p+2)}\) are defined on Page 13. The next claim follows from the definition of the \(D_{q}\)-partition problem and the definition of the matrix \(D_{(p+1)(p+2)}\). Note that the definition of \(D_{(p+1)(p+2)}\) is valid for all \(p\geq 0\).

**Claim 2:** A graph \(G^{*}\) admits a \(D_{(p+1)(p+2)}\)-partition if and only if the vertex set of \(G^{*}\) can be partitioned into sets \(V_{i}^{j}\) with indices \(i,j\in\{0,1,\ldots,p+1\}\) and \(i\neq j\) such that for each pair \(i,j\) (with \(i\neq j\)) and each vertex \(v\in V_{i}^{j}\), \(v\) has exactly one neighbour in \(V_{k}^{i}\) for each \(k\notin\{i,j\}\), and every neighbour of \(v\) is either in \(V_{k}^{i}\) for some \(k\notin\{i,j\}\) or in \(V_{j}^{k}\) for some \(k\notin\{i,j\}\). Note that the notation \(k\notin\{i,j\}\) abbreviates \(k\in\{0,1,\ldots,p+1\}\setminus\{i,j\}\).

We prove by induction on \(p\) that every graph \(G^{*}\) which admits a \(D_{(p+1)(p+2)}\)-partition satisfies \(\chi(G^{*})\leq 3\log_{2}(p+2)\). Note that \(G^{*}\) is not necessarily a regular graph.

Base Case (\(p=0,1\)): If \(p=0\) and \(G^{*}\) admits a \(D_{(p+1)(p+2)}\)-partition, then \(V(G^{*})\) can be partitioned into two independent sets \(V_{0}^{1}\) and \(V_{1}^{0}\) such that there is no edge between \(V_{0}^{1}\) and \(V_{1}^{0}\). That is, \(G^{*}\) is the complement of a complete graph. Since \(G^{*}\) is \(1\)-colourable, the inequality \(\chi(G^{*})\leq 3\log_{2}(p+2)=3\) is true. If \(p=1\) and \(G^{*}\) admits a \(D_{(p+1)(p+2)}\)-partition, then \(V(G^{*})\) can be partitioned into six sets \(V_{i}^{j}\) with indices \(i,j\in\{0,1,2\}\) and \(i\neq j\) such that the number of neighbours of a vertex \(v\in V_{i}^{j}\) in the set \(V_{k}^{\ell}\) is determined by the matrix \(D_{6}\) given below.

\[D_{6}=\begin{array}{c|cccccc} & V_{0}^{1} & V_{0}^{2} & V_{1}^{0} & V_{1}^{2} & V_{2}^{0} & V_{2}^{1}\\ \hline V_{0}^{1} & \{\textbf{0}\} & \{\textbf{0}\} & \{0\} & \mathbb{Z}_{0}^{+} & \{1\} & \{0\}\\ V_{0}^{2} & \{\textbf{0}\} & \{\textbf{0}\} & \{1\} & \{0\} & \{0\} & \mathbb{Z}_{0}^{+}\\ V_{1}^{0} & \{0\} & \mathbb{Z}_{0}^{+} & \{\textbf{0}\} & \{\textbf{0}\} & \{0\} & \{1\}\\ V_{1}^{2} & \{1\} & \{0\} & \{\textbf{0}\} & \{\textbf{0}\} & \mathbb{Z}_{0}^{+} & \{0\}\\ V_{2}^{0} & \mathbb{Z}_{0}^{+} & \{0\} & \{0\} & \{1\} & \{\textbf{0}\} & \{\textbf{0}\}\\ V_{2}^{1} & \{0\} & \{1\} & \mathbb{Z}_{0}^{+} & \{0\} & \{\textbf{0}\} & \{\textbf{0}\}\end{array}\]

Clearly, \(V_{0}^{1}\cup V_{0}^{2}\), \(V_{1}^{0}\cup V_{1}^{2}\) and \(V_{2}^{0}\cup V_{2}^{1}\) are independent sets in \(G^{*}\). Since these three independent sets form a partition of \(V(G^{*})\), \(G^{*}\) is \(3\)-colourable. So, the inequality \(\chi(G^{*})\leq 3\leq 3\log_{2}(p+2)\) is true.

Induction Step (\(p\geq 2\)): Suppose that \(G^{*}\) admits a \(D_{(p+1)(p+2)}\)-partition.
By Claim 2, the vertex set of \(G^{*}\) can be partitioned into sets \(V_{i}^{j}\) with indices \(i,j\in\{0,1,\ldots,p+1\}\) and \(i\neq j\) such that for each pair \(i,j\) (with \(i\neq j\)) and each vertex \(v\in V_{i}^{j}\), the following hold: (i) \(v\) has exactly one neighbour in \(V_{k}^{i}\) for each \(k\in\{0,1,\ldots,p+1\}\setminus\{i,j\}\), and (ii) for every neighbour \(w\) of \(v\), either \(w\in V_{k}^{i}\) for some \(k\in\{0,1,\ldots,p+1\}\setminus\{i,j\}\) or \(w\in V_{j}^{k}\) for some \(k\in\{0,1,\ldots,p+1\}\setminus\{i,j\}\).

**Claim 3:** If \(u\in\bigcup_{q\neq i}V_{i}^{q}\), \(v\in\bigcup_{q\neq j}V_{j}^{q}\), and \(uv\) is an edge in \(G^{*}\), then \(u\in V_{i}^{j}\) or \(v\in V_{j}^{i}\).

On the contrary, assume that \(u\in V_{i}^{r}\), \(v\in V_{j}^{s}\), and \(uv\) is an edge in \(G^{*}\) where \(r,s\notin\{i,j\}\). Due to Claim 2, every neighbour of \(u\) is either in \(V_{k}^{i}\) or in \(V_{r}^{k}\) for some \(k\in\{0,1,\ldots,p+1\}\setminus\{i,r\}\). Since \(v\in V_{j}^{s}\) is a neighbour of \(u\) and \(s\neq i\), the only possibility is \(V_{j}^{s}=V_{r}^{k}\); that is, \(r=j\) and \(k=s\). This is a contradiction because \(r\notin\{i,j\}\). This proves Claim 3.

Due to Claim 3, \((\bigcup_{k\notin\{0,1\}}V_{0}^{k})\cup(\bigcup_{k\notin\{0,1\}}V_{1}^{k})\) is an independent set in \(G^{*}\). Similarly, as in the proof of \(\alpha(G)>n/4\), for each \(t\in\{1,2,\ldots,p\}\), the set \(I_{t}\coloneqq\bigcup_{i=0}^{t-1}\bigcup_{j=t}^{p+1}V_{i}^{j}\) is an independent set in \(G^{*}\). We partition the vertex set of \(G^{*}\) into five sets \(A,B,C,W_{1}\) and \(W_{2}\) defined as

\[A=\bigcup_{i=0}^{\left\lfloor\frac{p}{2}\right\rfloor}\ \bigcup_{j=\left\lfloor\frac{p}{2}\right\rfloor+1}^{p+1}V_{i}^{j},\qquad B=\bigcup_{i=\left\lceil\frac{p}{2}\right\rceil+1}^{p+1}\ \bigcup_{j=0}^{\left\lceil\frac{p}{2}\right\rceil}V_{i}^{j},\qquad W_{1}=\bigcup_{i=0}^{\left\lfloor\frac{p}{2}\right\rfloor}\ \bigcup_{\begin{subarray}{c}j=0\\ (j\neq i)\end{subarray}}^{\left\lfloor\frac{p}{2}\right\rfloor}V_{i}^{j},\qquad W_{2}=\bigcup_{i=\left\lceil\frac{p}{2}\right\rceil+1}^{p+1}\ \bigcup_{\begin{subarray}{c}j=\left\lceil\frac{p}{2}\right\rceil+1\\ (j\neq i)\end{subarray}}^{p+1}V_{i}^{j},\]

and \(C=\bigcup_{j\neq(p+1)/2}V_{(p+1)/2}^{j}\) if \(p\) is odd, and \(C=\emptyset\) otherwise (see Figure 6). Note that \(A\) is precisely the independent set \(I_{t}\) for \(t=\lfloor p/2\rfloor+1\). So, \(A\) is an independent set in \(G^{*}\). By similar arguments, \(B\) is an independent set in \(G^{*}\) (due to Claim 3). Since \(C\) is either the empty set or \(C=\bigcup_{j\neq(p+1)/2}V_{(p+1)/2}^{j}\), \(C\) is also an independent set. Since \(A,B\) and \(C\) are independent sets, \(\chi(G^{*})\leq\chi(G^{*}[W_{1}\cup W_{2}])+3\). We know that for each pair \(i,j\) (with \(i\neq j\)) and for each vertex \(v\) in \(V_{i}^{j}\) and each neighbour \(w\) of \(v\), either \(w\in V_{k}^{i}\) for some \(k\notin\{i,j\}\) or \(w\in V_{j}^{k}\) for some \(k\notin\{i,j\}\). In particular, if \(k\notin\{i,j\}\) and \(\ell\notin\{i,j\}\), then no vertex in \(V_{i}^{j}\) has a neighbour in \(V_{k}^{\ell}\). As a result, there is no edge between \(W_{1}\) and \(W_{2}\) in \(G^{*}\). Hence, we have \(\chi(G^{*}[W_{1}\cup W_{2}])=\max\{\chi(G^{*}[W_{1}]),\chi(G^{*}[W_{2}])\}\).

**Claim 4:**\(G^{*}[W_{1}]\) admits a \(D_{q_{0}}\)-partition with \(q_{0}=\lfloor\frac{p}{2}\rfloor(\lfloor\frac{p}{2}\rfloor+1)\).
We know that the following hold for each \(i,j\in\{0,1,\ldots,p+1\}\) with \(i\neq j\) and each vertex \(v\in V_{i}^{j}\): (i) \(v\) has exactly one neighbour in \(V_{k}^{i}\) for each \(k\in\{0,1,\ldots,p+1\}\setminus\{i,j\}\), and (ii) for each neighbour \(w\) of \(v\), either \(w\in V_{k}^{i}\) for some \(k\in\{0,1,\ldots,p+1\}\setminus\{i,j\}\) or \(w\in V_{j}^{k}\) for some \(k\in\{0,1,\ldots,p+1\}\setminus\{i,j\}\). Note that \(\{V_{i}^{j}\ :\ 0\leq i\leq\lfloor p/2\rfloor,\ 0\leq j\leq\lfloor p/2\rfloor, \text{ and }i\neq j\}\) is a partition of the vertex set of \(G^{*}[W_{1}]\). By the property of the sets \(V_{i}^{j}\) stated in the previous paragraph, the following hold in the graph \(G^{*}[W_{1}]\) for each \(i,j\in\{0,1,\ldots,\lfloor p/2\rfloor\}\) with \(i\neq j\) and each vertex \(v\) in \(V_{i}^{j}\): (i) \(v\) has exactly one neighbour in \(V_{k}^{i}\) for each \(k\in\{0,1,\ldots,\lfloor p/2\rfloor\}\setminus\{i,j\}\), and (ii) for every neighbour \(w\) of \(v\), either \(w\in V_{k}^{i}\) for some \(k\in\{0,1,\ldots,\lfloor p/2\rfloor\}\setminus\{i,j\}\) or \(w\in V_{j}^{k}\) for some \(k\in\{0,1,\ldots,\lfloor p/2\rfloor\}\setminus\{i,j\}\). Therefore, by Claim 2, \(G^{*}[W_{1}]\) admits a \(D_{q_{0}}\)-partition where \(q_{0}=\lfloor\frac{p}{2}\rfloor(\lfloor\frac{p}{2}\rfloor+1)\). This proves Claim 4. Thanks to Claim 4, by induction hypothesis, \(\chi(G^{*}[W_{1}])\leq 3\log_{2}(\lfloor\frac{p}{2}\rfloor+1)=3\log_{2}( \lfloor\frac{p+2}{2}\rfloor)\). Similarly, \(G^{*}[W_{2}]\) admits a \(D_{q_{0}}\)-partition and thus \(\chi(G^{*}[W_{2}])\leq 3\log_{2}(\lfloor\frac{p+2}{2}\rfloor)\). Figure 6: Examples of partitioning \(V(G^{*})\) into sets \(A,B,C,W_{1}\) and \(W_{2}\). So, \(\chi(G^{*}[W_{1}\cup W_{2}])=\max\{\chi(G^{*}[W_{1}]),\chi(G^{*}[W_{2}])\}\leq 3 \log_{2}(\lfloor\frac{p+2}{2}\rfloor)\). Therefore, \(\chi(G^{*})\leq 3\log_{2}(\lfloor\frac{p+2}{2}\rfloor)+3\leq 3(\log_{2}( \frac{p+2}{2})+1)=3\log_{2}(p+2)\). Thus, by mathematical induction, for all \(p\geq 0\), \(\chi(G^{*})\leq 3\log_{2}(p+2)\) if \(G^{*}\) admits a \(D_{(p+1)(p+2)}\)-partition. Since \(G\) is \(2p\)-regular and \((p+2)\)-star colourable, \(G\) admits a \(D_{(p+1)(p+2)}\)-partition by Theorem 3. Hence, \(\chi(G)\leq 3\log_{2}(p+2)\). This proves (iii). Next, we show that \(G\) admits a \(P_{4}\)-decomposition. Oksimets [33] proved that a \(2p\)-regular graph \(G^{*}\) admits a \(P_{4}\)-decomposition if and only if \(|E(G^{*})|\) is divisible by three. By Theorem 3, the number of vertices in \(G\) is divisible by \((p+1)(p+2)\). Hence, \(|E(G)|=np\) is divisible by \(p(p+1)(p+2)\) which is in turn divisible by three. Hence, \(G\) admits a \(P_{4}\)-decomposition. This proves (iv). Finally, we prove that \(G\) is \(3\)-colourable if \(G\) contains no asteroidal triple. Stacho established that every (diamond, \(K_{4}\))-free graph without asteroidal triple is \(3\)-colourable [35, Theorem 1.3]. From (i), we know that \(G\) is \((\text{diamond},K_{4})\)-free. This proves (v). ## 4 Star Colourings and Orientations Let \(G\) be an (undirected) graph. Recall that an _orientation_ of \(G\) is a directed graph obtained from \(G\) by assigning a direction to each edge of \(G\). Star colouring of \(G\) is known to be linked with orientations of \(G\). Albertson et al. 
[1] proved that a colouring \(f\) of \(G\) is a star colouring if and only if there exists an orientation \(\overrightarrow{G}\) of \(G\) such that edges in each bicoloured \(3\)-vertex path in \(\overrightarrow{G}\) are oriented towards the middle vertex. Nesetril and Mendez [31] characterized the star chromatic number of \(G\) in terms of orientations of \(G\) (see the next paragraph for details). Motivated by Claim 5 in proof of Theorem 2, we introduce a new type of orientation named colourful Eulerian orientation and reveal its connection to star colouring in even-degree regular graphs. For each orientation \(\overrightarrow{G_{i}}\) of \(G\), let us define \(G_{i}^{+}\) as the undirected graph with \(V(G_{i}^{+})=V(G)\) and \(E(G_{i}^{+})=E(G)\cup E_{i}^{+}\) where \(E_{i}^{+}=\{uv\ :\ u,v\in V(G),\ \text{and}\ \exists w\in N(u)\cap N(v)\) such that \((u,w)\notin E(\overrightarrow{G_{i}})\) or \((v,w)\notin E(\overrightarrow{G_{i}})\}\). In other words, \(G_{i}^{+}\) is obtained from \(G\) by adding edges \(uv\) whenever \(u\) and \(v\) have a common neighbour \(w\) such that at least one edge in path \(u,w,v\) is oriented away from the middle vertex \(w\). For every graph \(G\), the star chromatic number \(\chi_{s}(G)=\min_{i\in I}\chi(G_{i}^{+})\) where \(I\) is an index set and \(\{\overrightarrow{G}_{i}\ :\ i\in I\}\) is the set of all orientations of \(G\)[31, Corollary 3] (a different notation is used in [31]). Let us see a few definitions first. Recall that a colouring \(f\) of \(G\) is a star colouring if and only if there exists an orientation \(\overrightarrow{G}\) of \(G\) such that edges in each bicoloured \(3\)-vertex path in \(\overrightarrow{G}\) are oriented towards the middle vertex [1]. If \(f\) is a star colouring of \(G\), an _in-orientation of \(G\) induced by \(f\)_ is an orientation \(\overrightarrow{G}\) of \(G\) obtained by orienting edges in each bicoloured \(3\)-vertex path in \(G\) towards the middle vertex, and then orienting the remaining edges arbitrarily. The notion of in-orientation \(\overrightarrow{G}\) induced by \(f\) is the same as the notion of 'colored in-orientation' \((f,\overrightarrow{G})\) in [16]. If \(f\) is a star colouring of \(G\) and no bicoloured component of \(G\) under \(f\) is isomorphic to \(K_{1,1}\), then the in-orientation of \(G\) induced by \(f\) is unique (because every edge in \(G\) is part of a bicoloured \(3\)-vertex path). An orientation \(\overrightarrow{G}\) is an _Eulerian orientation_ if the number of in-neighbours of \(v\) equals the number of out-neighbours of \(v\) for every vertex \(v\) of \(\overrightarrow{G}\) (i.e., in-degree\((v)=\) out-degree\((v)\)\(\forall v\in V(\overrightarrow{G})\)) [29]. If \(G\) admits an Eulerian orientation, then clearly every vertex of \(G\) is of even degree. Conversely, if every vertex of \(G\) is of even degree, then \(G\) admits an Eulerian orientation [29]. Connected graphs \(G\) such that every vertex of \(G\) is of even degree is called an _Eulerian graph_ because \(G\) admits an Eulerian tour. Let \(G\) be an Eulerian graph. Then, \(G\) admits Eulerian orientations. 
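The fact just recalled, that a graph in which every vertex has even degree admits an Eulerian orientation, is constructive: orient each edge in the direction an Euler tour traverses it. A minimal Python sketch of this (ours, assuming a connected graph with all degrees even; the function name `eulerian_orientation` is our own) follows, using Hierholzer's algorithm.

```python
def eulerian_orientation(adj):
    """adj: dict vertex -> list of neighbours (each edge listed at both ends).
    Assumes a connected graph in which every vertex has even degree.
    Returns a set of directed edges (u, v) such that every vertex has
    in-degree equal to out-degree (an Eulerian orientation)."""
    remaining = {u: list(vs) for u, vs in adj.items()}   # unused edge slots
    stack, tour = [next(iter(adj))], []
    while stack:                                         # Hierholzer's algorithm
        u = stack[-1]
        if remaining[u]:
            v = remaining[u].pop()
            remaining[v].remove(u)                       # consume the edge {u, v}
            stack.append(v)
        else:
            tour.append(stack.pop())
    return {(tour[i], tour[i + 1]) for i in range(len(tour) - 1)}

# toy usage: the 4-cycle 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
indeg = {v: 0 for v in adj}
outdeg = {v: 0 for v in adj}
for (u, v) in eulerian_orientation(adj):
    outdeg[u] += 1
    indeg[v] += 1
assert all(indeg[v] == outdeg[v] for v in adj)
```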
We say that an Eulerian orientation \(\overrightarrow{G}\) of \(G\) is a _\(q\)-colourful Eulerian orientation_ if there exists a \(q\)-colouring \(f\) of \(G\) such that the following hold under \(f\) for every vertex \(v\) of \(\overrightarrow{G}\): (i) in-neighbours of \(v\) have the same colour, say colour \(c_{v}\), (ii) no out-neighbour of \(v\) has colour \(c_{v}\), and (iii) out-neighbours of \(v\) have pairwise distinct colours. An Eulerian orientation \(\overrightarrow{G}\) of \(G\) is said to be a _colourful Eulerian orientation_ if \(\overrightarrow{G}\) is a \(q\)-colourful Eulerian orientation for some integer \(q\). We remark that there exist Eulerian graphs which do not admit a colourful Eulerian orientation (see Theorem 9). The next theorem shows for all \(p\geq 2\), a \(2p\)-regular graph \(G\) is \((p+2)\)-star colourable if and only if \(G\) admits a \((p+2)\)-colourful Eulerian orientation. **Theorem 8**.: _Let \(p\geq 2\), and let \(G\) be a \(2p\)-regular graph. Then, \(G\) is \((p+2)\)-star colourable if and only if \(G\) admits a \((p+2)\)-colourful Eulerian orientation._ Proof.: Suppose that \(G\) admits a \((p+2)\)-star colouring \(f\colon V(G)\to\{0,1,\ldots,p+1\}\). By Theorem 2, every bicoloured component of \(G\) under \(f\) is isomorphic to \(K_{1,p}\). Hence, the in-orientation \(\overrightarrow{G}\) of \(G\) induced by \(f\) is unique. Also, for every bicoloured component \(H\cong K_{1,p}\) of \(G\), the edges in \(H\) are oriented by \(\overrightarrow{G}\) towards the centre of \(H\). We claim that \(\overrightarrow{G}\) is a \((p+2)\)-colourful Eulerian orientation of \(G\) with \(f\) as the underlying \((p+2)\)-colouring. To show this, it suffices to prove the following claim (recall that we denote the colour class \(f^{-1}(i)\) by \(V_{i}\) for every colour \(i\)). **Claim 1:** For each colour \(i\in\{0,1,\ldots,p+1\}\) and each vertex \(v\in V_{i}\), all \(p\) in-neighbours of \(v\) are in some colour class \(V_{j}\), and every other colour class \(V_{k}\), \(k\notin\{i,j\}\), contains exactly one out-neighbour of \(v\). We prove Claim 1 under the assumption \(i=0\) (the proof is similar for other values of \(i\)). Let \(v\in V_{0}\). By Claim 5 of Theorem 2, \(v\) has exactly \(p\) neighbours in some colour class \(V_{j}\) and exactly one neighbour in every other colour class \(V_{k}\), \(k\notin\{i,j\}\) (see Figure 7). Figure 7: Colours on the neighbourhood of an arbitrary vertex \(v\in V_{0}\) (here, \(j=1\)). Without loss of generality, assume that \(j=1\). Let \(w_{1},\ldots,w_{p},x_{1},\ldots,x_{p}\) be the neighbours of \(v\) in \(G\) where \(w_{1},\ldots,w_{p}\in V_{1}\) and \(x_{r}\in V_{r+1}\) for \(1\leq r\leq p\). The subgraph of \(G\) induced by the set \(\{v,w_{1},\ldots,w_{p}\}\) is a bicoloured component \(H\) of \(G\), and hence edges in \(H\) are oriented by \(\overrightarrow{G}\) towards the centre \(v\) of \(H\). So, \(w_{1},\ldots,w_{p}\) are in-neighbours of \(v\) (in \(\overrightarrow{G}\)). On the other hand, for \(1\leq r\leq p\), \(v\) has exactly one neighbour \(x_{r}\) in \(V_{r+1}\) and hence \(x_{r}\) has exactly \(p\) neighbours in \(V_{0}\). Since \(x_{r}\) together with its neighbours in \(V_{0}\) induce a bicoloured component with \(x_{r}\) as the centre, \(x_{r}\) is an out-neighbour of \(v\) (provided \(1\leq r\leq p\)). Therefore, the orientation of edges incident on \(v\) are as shown in Figure 8. This proves Claim 1. Therefore, \(\overrightarrow{G}\) is a \((p+2)\)-colourful Eulerian orientation of \(G\). 
Conversely, suppose that \(G\) admits a \((p+2)\)-colourful Eulerian orientation with a \((p+2)\)-colouring \(f\) as the underlying colouring. By the definition of a \((p+2)\)-colourful Eulerian orientation, each bicoloured component \(H\) of \(G\) under \(f\) consists of some vertex \(v\) and all \(p\) in-neighbours of \(v\); this implies that \(H\) is isomorphic to \(K_{1,p}.\) Since every bicoloured component of \(G\) under \(f\) is a star, \(f\) is a \((p+2)\)-star colouring of \(G\). This proves the converse part.

The next theorem shows that the presence of some subgraphs makes it impossible for a graph to admit a colourful Eulerian orientation.

**Theorem 9**.: _Let \(G\) be a graph that contains at least one of the graphs in Figure 9 as a subgraph. Then, \(G\) does not admit a colourful Eulerian orientation. In particular, if \(G\) is \(d\)-regular, then \(\chi_{s}(G)\geq\lceil(d+5)/2\rceil\)._

See supplementary material for the proof of Theorem 9 and a discussion of the complexity of colourful Eulerian orientations.

## 5 Hardness Results

A graph \(G\) is \(1\)-star colourable if and only if \(G\) is the complement of a complete graph. A graph \(G\) is \(2\)-star colourable if and only if \(G\) is a disjoint union of stars. Hence, \(k\)-Star Colourability is polynomial-time solvable for \(k\leq 2\). We prove that for all \(k\geq 3\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k\). We present hardness results on \(3\)-star colouring in Subsection 5.1 and results on \(k\)-star colouring with \(k\geq 4\) in Subsection 5.2.

Figure 8: Orientation of edges incident on the neighbourhood of an arbitrary vertex \(v\in V_{0}\).

### 3-Star Colouring

It is known that 3-Star Colourability is NP-complete for planar bipartite graphs [1]. We prove that the problem remains NP-complete when further restricted to graphs of maximum degree three and arbitrarily large girth. Let us start with a simple observation on 3-star colourings, which is quite useful in our reductions. By our convention, every 3-star colouring uses colours 0, 1 and 2.

**Observation 4**.: _Let \(f\) be a 3-star colouring of a graph \(G\). If \(u,v,w\) is a path in \(G\) which is bicoloured, say with colours 0 and 1, then every neighbour of \(w\) except \(v\) must be coloured 2. Moreover, if \(w\) has two neighbours \(x_{1}\) and \(x_{2}\) besides \(v\), then their neighbours \(y\neq w\) must be coloured \(f(v)\) for the same reason (see Figure 10)._

To make gadgets, we use the bipartite graph displayed in Figure 10(a); let us call it the gadget component (only some of its vertices are labelled; these labels are essential to the proof of Lemma 1). Lemma 1 shows that the gadget component admits a _unique_ 3-star colouring (recall that 'unique' in italics indicates "unique up to colour swaps").

**Lemma 1**.: _The colouring displayed in Figure 10(b) is the unique 3-star colouring of the gadget component. In particular, \(f(x)=f(y)=f(z)\) for every 3-star colouring \(f\) of the gadget component._

Figure 10: Under a 3-star colouring, colours are forced on neighbours of endpoints of bicoloured \(P_{3}\)'s.

Figure 9: Some obstructions to colourful Eulerian orientation.

Proof.: Let \(f\) be a \(3\)-star colouring of the gadget component. Without loss of generality, assume that \(f(w)=0\). First, we prove that \(f\) must use the same colour on vertices \(x,y\) and \(z\). Since only colours \(1\) and \(2\) are available for vertices \(x,y\) and \(z\), at least two of them should get the same colour.
Without loss of generality, assume that \(f(y)=f(z)=1\). Since \(y,w,z\) is a bicoloured \(P_{3}\), neighbours of \(y\) on the outer cycle must be coloured \(2\) due to Observation 4. Repeated application of Observation 4 reveals that colours are forced on vertices of the gadget component as shown in Figure 12. **Claim:**\(f(x)=1\). On the contrary, assume that \(f(x)=2\). If \(f(a_{3})=0\), then the bicoloured path \(a_{1},a_{2},a_{3}\) forces colour \(2\) at \(a_{4}\) by Observation 4; this is a contradiction because \(f(x)=2\). So, \(f(a_{3})=2\). Similarly, \(f(a_{6})=2\). Now, \(f(a_{4})\neq 1\) (if not, \(a_{2},a_{3},a_{4},x\) is a bicoloured \(P_{4}\)). Hence, \(f(a_{4})=0\). Similarly, \(f(a_{5})=0\). Then, \(a_{3},a_{4},x,a_{5}\) is a bicoloured \(P_{4}\); a contradiction (see Figure 13). This proves the claim. Therefore, \(f(x)=f(y)=f(z)\). Thus, by symmetry, the colouring in Figure 10(b) is the _unique_\(3\)-star colouring of the gadget component. For every construction in this paper, the output graph is made up of gadgets. For every gadget, only some of the vertices in it are allowed to have edges to vertices outside the gadget; we call these vertices as _terminals_. In diagrams, we draw a _circle around each terminal_. **Construction 1**.: _Input:_ A graph \(G\) of maximum degree four. _Output:_ A bipartite graph \(G^{\prime}\) of maximum degree three and girth eight. _Guarantee 1:_\(G\) is \(3\)-colourable if and only if \(G^{\prime}\) is \(3\)-star colourable. _Guarantee 2:_\(G^{\prime}\) has only \(O(n)\) vertices where \(n=|V(G)|\). _Guarantee 3:_ If \(G\) is planar, then \(G^{\prime}\) is planar (and the construction can be done in polynomial time). _Steps:_ First, replace each vertex \(v\) of \(G\) by a vertex gadget as shown in Figure 14. For each vertex \(v\) of \(G\), the vertex gadget for \(v\) has four terminals \(v_{1},v_{2},v_{3},v_{4}\) which accommodate the edges incident on \(v\) in \(G\) (each terminal takes at most one edge; order does not matter). Replacement of vertices by vertex gadgets converts each edge \(uv\) of \(G\) to an edge \(u_{i}v_{j}\) between two terminals (i.e., there exists a unique \(i\in\{1,2,3,4\}\) and a unique \(j\in\{1,2,3,4\}\) such that \(u_{i}v_{j}\) is an edge). Finally, replace each edge \(u_{i}v_{j}\) between two terminals by an edge gadget as shown in Figure 15 (the edge gadget is not symmetric; it does not matter which way we connect). Figure 16 shows an overview. The graph \(G^{\prime}\) is bipartite (small dots form one part and big dots form the other part in the bipartition; see Figure 16). Observe that the vertex gadget and the edge gadget have maximum degree three and girth eight. Since a terminal in \(G^{\prime}\) is shared by at most two gadgets in \(G^{\prime}\), \(G^{\prime}\) has maximum degree three. Since the distance between two terminals in \(G^{\prime}\) is at least eight, there is no cycle of length less than eight in \(G^{\prime}\). So, \(G^{\prime}\) has girth eight. Proof of Guarantee 1.: The following claim demonstrates how the vertex gadget and the edge gadget serve their respective roles. **Claim 1:** The colourings shown in Figure 17 are the _unique_\(3\)-star colouring of the vertex gadget and the edge gadget. In particular, under every \(3\)-star colouring, all four terminals of a vertex gadget must get the same colour whereas the terminals of an edge gadget must get different colours. Recall that Figure 10(b) exhibits the _unique_\(3\)-star colouring of the gadget component by Observation 1. 
This fixes colours on the gadget component within the vertex gadget Figure 14: Replacement of vertex by vertex gadget. (resp. edge gadget). We obtain Claim 1 by repeated application of Observation 4. The _unique_\(3\)-star colouring of the vertex gadget (resp. edge gadget) exhibited in Figure 17 ensures the following claim. **Claim 2:** If \(Q^{*}\) is a \(3\)-vertex path in a vertex gadget (resp. edge gadget) and \(Q^{*}\) contains a terminal of the gadget, then \(Q^{*}\) is tricoloured. Suppose that \(G\) admits a \(3\)-colouring \(f\). We produce a \(3\)-colouring \(f^{\prime}\) of \(G^{\prime}\) by assigning \(f^{\prime}(v_{i})=f(v)\) for each vertex \(v\) of \(G\) and \(1\leq i\leq 4\), and extending it to Figure 16: Construction of \(G^{\prime}\) from \(G\). Only vertices \(u,v\) and edge \(uv\) in \(G\) and corresponding gadgets in \(G^{\prime}\) are shown. Figure 17: The _unique_\(3\)-star colouring of the vertex gadget and the edge gadget. Figure 15: Replacement of edge between terminals by edge gadget. vertex gadgets and edge gadgets using the schemes in Figure 17. **Claim 3:**\(f^{\prime}\) is a 3-star colouring of \(G^{\prime}\). On the contrary, assume that there is a 4-vertex path \(Q\) in \(G^{\prime}\) bicoloured by \(f^{\prime}\). Since star colouring schemes are used on gadgets in \(G^{\prime}\), \(Q\) must contain vertices from two gadgets. As a result, the terminal shared by the two gadgets must be in \(Q\) and the segment of \(Q\) in one of the gadgets is a 3-vertex path \(Q^{*}\). Clearly, the shared terminal must be in \(Q^{*}\). Since \(Q^{*}\) is a 3-vertex path in a gadget and \(Q^{*}\) contains a terminal of the gadget, \(Q^{*}\) is tricoloured by Claim 2. This is a contradiction to the assumption that \(Q\) is bicoloured. This proves Claim 3. Therefore, if \(G\) is 3-colourable, then \(G^{\prime}\) is 3-star colourable. Conversely, suppose that \(G^{\prime}\) admits a 3-star colouring \(f^{\prime}\). By Claim 1, \(f^{\prime}\) must use (i) the same colour on terminals of each vertex gadget, and (ii) different colours on terminals of each edge gadget. Hence, the function \(f\) defined as \(f(v)=f^{\prime}(v_{1})\) for all \(v\in V(G)\) is a 3-colouring of \(G\). Proof of Guarantee 2.: Let us count the number of vertices and edges in \(G^{\prime}\). The vertex gadget has 29 vertices and 31 edges. The edge gadget has 29 vertices excluding the terminals, and 33 edges (let us count terminals as part of vertex gadgets). So, \(G^{\prime}\) has \(29n+29m\) vertices and \(31n+33m\) edges where \(n=|V(G)|\) and \(m=|E(G)|\). As \(\Delta(G)=4\), we have \(m\leq n\Delta(G)/2=2n=O(n)\). Therefore, \(G^{\prime}\) has only \(O(n)\) vertices and \(O(n)\) edges. Proof of Guarantee 3.: Suppose that \(G\) is planar. Fix a plane drawing of \(G\). For each vertex \(v\) of \(G\), the cyclic order of edges around \(v\) in \(G\) (usually called the rotation system at \(v\)) can be computed in time polynomial in the size of the input \(G\)[32]. Hence, it is possible to construct \(G^{\prime}\) in such a way that \(G^{\prime}\) is planar, and the construction still requires only time polynomial in the size of \(G\). Remark: If \(N=\Theta(n)\), \(g(n)=h(N)\) and \(h(N)=2^{o(N)}\) (resp. \(2^{o(\sqrt{N})}\)), then \(g(n)=2^{o(n)}\) (resp. \(2^{o(\sqrt{n})}\)). **Theorem 10**.: \(3\)_-Star Colourability(planar, bipartite, \(\Delta=3\), girth \(=8\)) is NP-complete, and the problem does not admit a \(2^{o(\sqrt{n})}\)-time algorithm unless ETH fails. 
Moreover, the problem \(3\)-Star Colourability(bipartite, \(\Delta=3\), girth \(=8\)) does not admit a \(2^{o(n)}\)-time algorithm unless ETH fails._ Proof.: We employ Construction 1 to establish a reduction from \(3\)-Colourability(planar, \(\Delta=4\)) to \(3\)-Star Colourability(planar, bipartite, \(\Delta=3\), girth \(=8\)). Let \(G\) be an instance of \(3\)-Colourability(planar, \(\Delta=4\)). From \(G\), construct an instance \(G^{\prime}\) of \(3\)-Star Colourability(planar, bipartite, \(\Delta=3\), girth \(=8\)) by Construction 1. By Guarantee 1 of Construction 1, \(G\) is 3-colourable if and only if \(G^{\prime}\) is 3-star colourable. Since \(G\) is planar, the graph \(G^{\prime}\) is planar by Guarantee 3 of Construction 1. By Guarantee 2 of Construction 1, the number of vertices in \(G^{\prime}\) is \(N=O(n)\) where \(n=|V(G)|\). Hence, \(N=\Theta(n)\). Since \(3\)-Colourability(planar, \(\Delta=4\)) is NP-complete [22] and the problem does not admit a \(2^{o(\sqrt{n})}\)-time algorithm unless ETH fails [4], \(3\)-Star Colourability(planar, bipartite, \(\Delta=3\), girth \(=8\)) is NP-complete, and the problem does not admit a \(2^{o(\sqrt{n})}\)-time algorithm unless ETH fails. Similarly, Construction 1 establishes a reduction from \(3\)-Colourability\((\Delta=4)\) to \(3\)-Star Colourability(bipartite, \(\Delta=3\), girth \(=8\)). Since \(3\)-Colourability(\(\Delta=4\)) does not admit a \(2^{o(n)}\)-time algorithm [13, Lemma 2.1] and \(|V(G^{\prime})|=\Theta(|V(G)|)\) in Construction 1 (see Guarantee 2), \(3\)-Star Colourability(bipartite, \(\Delta=3\), girth \(=8\)) does not admit a \(2^{o(n)}\)-time algorithm. Note that in Construction 1, a \(3\)-colouring \(f\) of \(G\) can be extended into a \(3\)-star colouring \(f^{\prime}\) of \(G^{\prime}\) in \(2^{n}\) ways where \(n=|V(G)|\) (since the colours on the terminals are fixed, each edge gadget has exactly one \(3\)-star colouring whereas two choices are available for each vertex gadget; e.g., swapping colour \(0\) with colour \(2\) is possible in Figure 17a). Therefore, the reduction in Theorem 10 is weakly parsimonious. Thus, we have the following theorem since it is #P-complete to count the number of \(3\)-colourings of a graph of maximum degree four [7]. **Theorem 11**.: _It is #P-complete to count the number of \(3\)-star colourings of a bipartite graph of maximum degree three and girth eight. _ The output graph \(G^{\prime}\) in Construction 1 has girth eight. We can modify Construction 1 to give \(G^{\prime}\) arbitrarily large girth. The modification required is to replace the gadget component by the new one displayed in Figure 18 below. For \(s=1\), the new gadget component is as shown in Figure 19. The graph produced by this modification has girth \(6s+8\). To prove that the new construction preserves the guarantees of Construction 1, it suffices to show that the new gadget component admits a _unique_\(3\)-star colouring similar to the _unique_\(3\)-star colouring of the old gadget component. Lemma 2 below proves this for \(s=1\); the proof is similar for higher values of \(s\). Figure 18: New gadget component. **Lemma 2**.: _For \(s=1\), the colouring shown in Figure 1(a) is the unique 3-star colouring of the new gadget component (see Figure 19 for the new gadget component)._ Proof.: Suppose \(f\) is a 3-star colouring of the new gadget component. Without loss of generality, assume that \(f(w)=0\). First, we prove that \(f\) must use the same colour on vertices \(x,y\) and \(z\). 
Since only colours 1 and 2 are available for vertices \(x,y\) and \(z\), at least two of them should get the same colour. Without loss of generality, assume that \(f(y)=f(z)=1\). Since \(y,w,z\) is a bicoloured \(P_{3}\), neighbours of \(y\) on the outer cycle must be coloured 2 due to Observation 4. Repeating application of Observation 4 reveals that colours are forced on the new gadget component as shown in Figure 1(b). We claim that \(f(x)=1\). To produce a contradiction, assume that \(f(x)=2\). Clearly, \(f(a_{7})=0\) or 1. First, we show that \(f(a_{7})=0\) leads to a contradiction. Assume that \(f(a_{7})=0\). Since \(w,x,a_{7}\) is a bicoloured \(P_{3}\), \(f(a_{6})=1\) and \(f(l_{4})=1\) (by Observation 4). Since \(l_{4},a_{7},a_{6}\) is a bicoloured \(P_{3}\), \(f(a_{5})=2\) and \(f(l_{3})=2\). Since \(l_{3},a_{6},a_{5}\) is a bicoloured \(P_{3}\), \(f(a_{4})=0\) and \(f(l_{2})=0\). Vertex \(a_{3}\) can get only colour 2. So, \(a_{3},a_{4},a_{5},l_{2}\) is a bicoloured \(P_{4}\). This contradiction proves that \(f(a_{7})\neq 0\). Therefore, \(f(a_{7})=1\). By symmetry, \(f(a_{8})=1\) as well. Since \(a_{8},x,a_{7}\) is a bicoloured \(P_{3}\), \(f(a_{6})=0\) and \(f(l_{4})=0\). Since \(l_{4},a_{7},a_{6}\) is a bicoloured \(P_{3}\), \(f(a_{5})=2\) and \(f(l_{3})=2\). Since \(l_{3},a_{6},a_{5}\) is a bicoloured \(P_{3}\), \(f(a_{4})=1\) and \(f(l_{2})=1\). Since \(l_{2},a_{5},a_{4}\) is a bicoloured \(P_{3}\), \(f(a_{3})=0\). But, then \(a_{4},a_{3},a_{2},a_{1}\) is a bicoloured \(P_{4}\). This contradiction proves that \(f(x)\neq 2\). Hence, \(f(x)=1\). Therefore, by symmetry, the colouring shown in Figure 1(a) is the _unique_ 3-star colouring of the new gadget component. The new vertex gadget (resp. edge gadget) is constructed from its old counterpart by replacing the old gadget component by the new gadget component (new gadgets are displayed as Figure 3 and Figure 4 in the supplementary material). It is easy to see that the new vertex gadget and the new edge gadget preserve the following properties of their old counterparts: (i) for every 3-star colouring of the vertex gadget, Figure 19: New gadget component when \(s=1\). its terminals should get the same colour, (ii) for every \(3\)-star colouring of the edge gadget, its terminals should get different colours, (iii) there exist a \(3\)-star colouring of the vertex gadget (resp. edge gadget) such that each \(P_{3}\) in it containing a terminal is tricoloured. Thus, we have the following theorems. **Theorem 12**.: _Let \(g\geq 8\). The problem \(3\)-Star Colourability\((\)planar, bipartite, \(\Delta=3\), girth \(\geq g)\) is NP-complete, and it does not admit a \(2^{o(\sqrt{n})}\)-time algorithm unless ETH fails. Moreover, the problem \(3\)-Star Colourability\((\)bipartite, \(\Delta=3\), girth \(\geq g)\) does not admit a \(2^{o(n)}\)-time algorithm unless ETH fails. _ **Theorem 13**.: _For all \(g\geq 8\), it is #P-complete to count the number of \(3\)-star colourings of a bipartite graph of maximum degree three and girth at least \(g\)._ It is known that for all \(k\geq 3\), testing whether a graph has a _unique_\(k\)-colouring is coNP-hard. As far as we know, there is no hardness result on _unique_ star colouring. We show that it is coNP-hard to check whether a graph has a _unique_\(3\)-star colouring. The decision problem Unique \(k\)-Colouring takes a graph \(G\) as input and asks whether \(G\) has a _unique_\(k\)-colouring. The problem Unique \(k\)-Star Colouring is defined likewise. 
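To make these definitions concrete, the following Python sketch (an illustration added for this exposition, not part of the reductions in this paper) tests whether a colouring is a star colouring (a proper colouring with no bicoloured path on four vertices) and decides by exhaustive enumeration whether a small graph has a _unique_ \(k\)-star colouring up to colour swaps. The search is exponential in the number of vertices, so it is only meant for very small examples; the choice of \(C_{4}\) at the end is just a small test case for illustration.

```python
from itertools import product

def is_star_colouring(adj, f):
    """adj maps each vertex to its set of neighbours; f maps each vertex to a colour.
    A star colouring is a proper colouring with no bicoloured path on four vertices."""
    for v in adj:
        if any(f[v] == f[w] for w in adj[v]):          # proper colouring
            return False
    for v in adj:                                      # look for a path u - v - w - x coloured a,b,a,b
        for w in adj[v]:
            for u in adj[v] - {w}:
                for x in adj[w] - {u, v}:
                    if f[u] == f[w] and f[v] == f[x]:
                        return False
    return True

def canonical(f, order):
    """Relabel colours by first appearance, so colourings equal up to colour swaps coincide."""
    relabel, out = {}, []
    for v in order:
        relabel.setdefault(f[v], len(relabel))
        out.append(relabel[f[v]])
    return tuple(out)

def star_colourings_up_to_swaps(adj, k):
    """All k-star colourings of a small graph, counted up to colour swaps."""
    verts = sorted(adj)
    found = set()
    for colours in product(range(k), repeat=len(verts)):
        f = dict(zip(verts, colours))
        if is_star_colouring(adj, f):
            found.add(canonical(f, verts))
    return found

# The 4-cycle C4 admits more than one 3-star colouring up to colour swaps,
# so it is a "no" instance of Unique 3-Star Colouring.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
sols = star_colourings_up_to_swaps(c4, 3)
print(len(sols), "3-star colourings up to colour swaps; unique:", len(sols) == 1)
```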
The problem Another \(k\)-Colouring is closely related to the problem Unique \(k\)-Colouring. The problem Another \(k\)-Colouring takes a graph \(G\) and a \(k\)-colouring \(f_{1}\) of \(G\) as input and asks whether \(G\) admits a \(k\)-colouring \(f_{2}\) of \(G\) which cannot be obtained from \(f_{1}\) by merely swapping colours. The problem Another \(k\)-Star Colouring is defined likewise. Let \(k\geq 3\). Dailey [14] established a reduction from \(k\)-Colourability to Another \(k\)-Colouring. So, Another \(k\)-Colouring is NP-hard. That is, given a graph \(G\) and a \(k\)-colouring of \(G\), it is NP-hard to test whether \(G\) has _another_\(k\)-colouring (i.e., \(G\) is a no instance of Unique \(k\)-Colouring). Hence, given a (\(k\)-colourable) graph \(G\), it is coNP-hard to check whether \(G\) has a _unique_\(k\)-colouring. Therefore, Unique \(k\)-Colouring is coNP-hard even when restricted to the class of \(k\)-colourable graphs. It is easy to observe that Dailey's construction provides a reduction from the problem \(3\)-Colourability\((\Delta=4)\) to Another \(3\)-Colouring\((\Delta=8)\). So, Another Figure 20: (a) A \(3\)-star colouring of new gadget component (\(s=1\)), and (b) Colours forced in the new gadget component (\(s=1\)). 3-Colouring(\(\Delta=8\)) is NP-complete. We prove that Another 3-Star Colouring is NP-complete for 2-degenerate bipartite graphs of maximum degree eight and arbitrarily large girth. As a result, Unique 3-Star Colouring is coNP-hard for the same class. Using the construction of Coleman and More [11], one can show that Another 3-Star Colouring is NP-complete for 2-degenerate bipartite graphs of maximum degree twenty-four. We utilize ideas from Construction 1 to reduce the degree bound to eight. **Construction 2**.: _Input:_ A graph \(G\) of maximum degree eight. _Output:_ A 2-degenerate bipartite graph \(G^{\prime}\) of maximum degree eight and girth eight. _Guarantee:_ The number of 3-colourings of \(G\) up to colour swaps equals the number of 3-star colourings of \(G^{\prime}\) up to colour swaps. _Steps:_ Replace each edge \(e=uv\) of \(G\) by an edge gadget as shown in Figure 21. Clearly, \(G^{\prime}\) is bipartite, \(\Delta(G^{\prime})=8\) and \(\operatorname{girth}(G^{\prime})=8\). Also, \(G^{\prime}\) is 2-degenerate because we can remove all vertices from \(G^{\prime}\) by repeatedly removing vertices of degree one or two. Proof of Guarantee.: Note that the edge gadget is identical to the edge gadget in Construction 1. In Construction 1, the following properties of the edge gadget are proved: (i) for every 3-star colouring of the gadget, the terminals get different colours, and every 3-vertex path containing a terminal is tricoloured, and (ii) the edge gadget has a _unique_ 3-star colouring, namely the scheme in Figure 16(b) (the scheme is repeated here as Figure 22 for convenience). **Claim 1:** For distinct colours \(i,j\in\{0,1,2\}\), an edge gadget with terminals \(u\) and \(v\) has exactly one 3-star colouring \(f\) such that \(f(u)=i\) and \(f(v)=j\). Let \(i,j\in\{0,1,2\}\) be distinct colours, and let \(c\) be the third colour (i.e., \(\{i,j,c\}=\{0,1,2\}\)). Using the scheme in Figure 22 and swapping colour 1 with colour \(i\) and colour \(0\) with colour \(j\) gives a 3-star colouring \(f\) of the edge gadget with \(f(u)=i\) and \(f(v)=j\). We need to prove that \(f\) is the unique 3-star colouring of the edge gadget with this property. 
Suppose that \(f^{*}\) is a 3-star colouring of the edge gadget such that \(f^{*}(u)=i\) and \(f^{*}(v)=j\). We know that the edge gadget has a _unique_ 3-star colouring (i.e., unique up to colour swaps; see Claim 1 in proof of Guarantee 1 in Construction 1). Hence, we can obtain \(f^{*}\) from \(f\) by merely swapping colours. Since \(f^{*}(u)=f(u)=i\) Figure 21: Replacement of edge by edge gadget in Construction 2. colour \(i\) is not swapped when we go from \(f\) to \(f^{*}\); that is, every vertex of the gadget with colour \(i\) under \(f\) gets colour \(i\) under \(f^{*}\) as well. Similarly, since \(f^{*}(v)=f(v)=j\), every vertex of the gadget with colour \(j\) under \(f\) gets colour \(j\) under \(f^{*}\) as well. Since \(c\) is the only remaining colour, every vertex of the gadget with colour \(c\) under \(f\) gets colour \(c\) under \(f^{*}\). Therefore, \(f^{*}=f\). This proves the uniqueness of \(f\) and thus Claim 1. We define a function \(\phi\) from the set of \(3\)-colourings of \(G\) to the set of \(3\)-star colourings of \(G^{\prime}\). Each \(3\)-colouring \(f\) of \(G\), \(f:V(G)\rightarrow\{0,1,2\}\), can be extended into a \(3\)-star colouring \(f^{\prime}\) of \(G^{\prime}\) by applying the \(3\)-star colouring scheme shown in Figure 22 on each edge gadget (colour swaps may be needed). Note that \(f^{\prime}\) will be a \(3\)-star colouring of \(G^{\prime}\) because (i) \(3\)-star colouring schemes are used on edge gadgets, and (ii) every \(3\)-vertex path containing a terminal is tricoloured by \(f^{\prime}\). The extension of \(f\) into a \(3\)-star colouring of \(G^{\prime}\) is unique because the edge gadget has exactly one \(3\)-star colouring once the colours on the terminals are fixed (see Claim 1). We define \(\phi\) as the function that maps each \(3\)-colouring \(f\) of \(G\) to the unique \(3\)-star colouring extension \(f^{\prime}\) of \(f\) into \(V(G^{\prime})\). Note that for every \(3\)-colouring \(f\) of \(G\), the restriction of \(\phi(f)\) into \(V(G)\) is precisely \(f\) (i.e., \(\phi(f)_{\mid V(G)}=f\)). The function \(\phi\) is one-one because \(\phi(f_{1})=\phi(f_{2})\) implies that \(f_{1}=\phi(f_{1})_{\mid V(G)}=\phi(f_{2})_{\mid V(G)}=f_{2}\). We claim that \(\phi\) is onto. Let \(f^{\prime}\) be a \(3\)-star colouring of \(G^{\prime}\). Then, \(f^{\prime}(u)\neq f^{\prime}(v)\) whenever there is an edge gadget between terminals \(u\) and \(v\) in \(G^{\prime}\) because \(f^{\prime}\) assigns different colours to terminals of each edge gadget. For every edge \(uv\) of \(G\), there is an edge gadget between terminals \(u\) and \(v\) in \(G^{\prime}\). Hence, \(f^{\prime}(u)\neq f^{\prime}(v)\) for every edge \(uv\) of \(G\). So, the restriction of \(f^{\prime}\) into \(V(G)\) is a \(3\)-colouring of \(G\). This proves that \(\phi\) is onto. So, \(\phi\) is a bijection from the set of \(3\)-colourings of \(G\) to the set of \(3\)-star colourings of \(G^{\prime}\). If \(f_{1}\) and \(f_{2}\) are two colourings of the same graph and \(f_{2}\) can be obtained from \(f_{1}\) by merely swapping colours, then we say that \(f_{1}\) and \(f_{2}\) are _equivalent under colour swaps_. If two \(3\)-colourings \(f_{1}\) and \(f_{2}\) of \(G\) are equivalent under colour swaps, then their images \(\phi(f_{1})\) and \(\phi(f_{2})\) are equivalent under colour swaps because they are the unique extensions of \(f_{1}\) and \(f_{2}\) respectively as \(3\)-star colourings of \(G^{\prime}\). 
Also, if two \(3\)-star colourings \(\phi(f_{1})\) and \(\phi(f_{2})\) are equivalent under colour swaps, then their pre-images \(f_{1}=\phi(f_{1})_{\mid V(G)}\) and \(f_{2}=\phi(f_{2})_{\mid V(G)}\) are equivalent under colour swaps. So, two \(3\)-colourings \(f_{1}\) and \(f_{2}\) of \(G\) are non-equivalent under colour swaps if and only if \(\phi(f_{1})\) and \(\phi(f_{2})\) are non-equivalent under colour swaps. Therefore, the number of \(3\)-colourings of \(G\) up to colour swaps is equal to the number Figure 22: The _unique_\(3\)-star colouring of the edge gadget. of \(3\)-star colourings of \(G^{\prime}\) up to colour swaps. Thanks to Construction 2, we have the following theorem. **Theorem 14**.: _For 2-degenerate bipartite graphs of maximum degree eight and girth eight, Another \(3\)-Star Colouring is NP-complete and Unique \(3\)-Star Colouring is coNP-hard._ Proof.: The reduction is from Another \(3\)-Colouring(\(\Delta=8\)). Let \((G,f)\) be an instance of the source problem. From \(G\), produce a graph \(G^{\prime}\) by Construction 2. In Construction 2, it is established that there is a bijection \(\phi\) from the set of \(3\)-colourings of \(G\) to the set of \(3\)-star colourings of \(G^{\prime}\). In particular, \(f^{\prime}=\phi(f)\) is a \(3\)-star colouring of \(G^{\prime}\). Recall that if \(f_{1}\) and \(f_{2}\) are two colourings of the same graph and \(f_{2}\) can be obtained from \(f_{1}\) by merely swapping colours, then we say that \(f_{1}\) and \(f_{2}\) are _equivalent under colour swaps_. By guarantee in Construction 2, the number of \(3\)-colourings of \(G\) up to colour swaps is equal to the number of \(3\)-star colourings of \(G^{\prime}\) up to colour swaps. In particular, we have the following. **Claim 1:**\(G\) has at least two \(3\)-colourings non-equivalent under colour swaps if and only if \(G^{\prime}\) has at least two \(3\)-star colourings non-equivalent under colour swaps. **Claim 2:**\((G,f)\) is a yes instance of Another \(3\)-Colouring if and only if \((G^{\prime},f^{\prime})\) is a yes instance of Another \(3\)-Star Colouring. Suppose that \((G,f)\) is a yes instance of Another \(3\)-Colouring. Hence, \(G\) admits a \(3\)-colouring not equivalent to \(f\). So, \(G\) has at least two \(3\)-colourings non-equivalent under colour swaps. By Claim 1, this implies that \(G^{\prime}\) has at least two \(3\)-star colourings non-equivalent under colour swaps. Therefore, \(G^{\prime}\) has a \(3\)-star colouring not equivalent to \(f^{\prime}\). That is, \((G^{\prime},f^{\prime})\) is a yes instance of Another \(3\)-Star Colouring. Conversely, suppose that \((G^{\prime},f^{\prime})\) is a yes instance of Another \(3\)-Star Colouring. Hence, \(G^{\prime}\) admits a \(3\)-star colouring not equivalent to \(f^{\prime}\). So, \(G^{\prime}\) has at least two \(3\)-star colourings non-equivalent under colour swaps. By Claim 1, this implies that \(G\) has at least two \(3\)-colourings non-equivalent under colour swaps. Therefore, \(G\) has a \(3\)-colouring not equivalent to \(f\). That is, \((G,f)\) is a yes instance of Another \(3\)-Colouring. This proves the converse part and thus Claim 2. Thanks to Claim 2, we have a reduction from Another \(3\)-Colouring(\(\Delta=8\)) to Another \(3\)-Star Colouring(\(2\)-degenerate, bipartite, \(\Delta(G)=8\), \(\operatorname{girth}(G)=8\)). 
Therefore, Another \(3\)-Star Colouring is NP-complete for \(2\)-degenerate bipartite graphs of maximum degree eight and girth eight, and thus Unique \(3\)-Star Colouring is coNP-hard for the same class. Recall that Theorem 12 improved on the girth of \(G^{\prime}\) in Construction 1 by replacing the gadget component by a new gadget component. Applying the same idea to Construction 2 gives the following result. **Theorem 15**.: _Let \(g\geq 8\) be a fixed integer. For 2-degenerate bipartite graphs of maximum degree eight and girth at least \(g\), Another \(3\)-Star Colouring is NP-complete and Unique \(3\)-Star Colouring is coNP-hard._ ### \(k\)-Star Colouring with \(k\geq 4\) For \(k\geq 4\), it is known that \(k\)-Star Colourability is NP-complete for bipartite graphs [11]. We prove that the problem remains NP-complete when further restricted to graphs of maximum degree \(k\). We employ Construction 3 to this end. The gadget component used in the construction is displayed in Figure 23. The set \(W_{1}\) is an independent set of cardinality \(k-2\). Similarly, \(W_{2}\), \(W_{3}\), \(U_{1}\), \(U_{2}\) and \(U_{3}\) are independent sets of cardinality \(k-3\). Also, for each \(j\in\{1,2,3\}\), every vertex in \(W_{j}\) is adjacent to vertices \(x_{j},y_{j},z_{j}\) and members of \(U_{j}\). In upcoming diagrams, the gadget component is drawn by the symbol in Figure 1(a). When \(k=5\), the gadget component is as in Figure 1(b). Note that the gadget component has maximum degree \(k\), and it is bipartite (small dots form one part and big dots form the other part; see Figure 1(b)). The following lemma shows the usefulness of the gadget component. **Lemma 3**.: _Every \(k\)-star colouring of the gadget component must use the same colour on vertices \(a,b\) and \(c\)._ Proof.: We start with a simple claim. **Claim 1:** Let \(f\) is a \(k\)-star colouring of a graph \(G\), and let \(w^{\prime}\) and \(w^{\prime\prime}\) be two vertices in \(G\) with \(k\) common neighbours. Then, \(f(w^{\prime})\neq f(w^{\prime\prime})\). Suppose that \(v_{1},v_{2},\ldots,v_{k}\) are \(k\) common neighbours of \(w^{\prime}\) and \(w^{\prime\prime}\) in \(G\). Clearly, the colour \(f(w^{\prime})\) is unavailable for vertices \(v_{1},v_{2},\ldots,v_{k}\). Since only \(k-1\) colours are available for vertices \(v_{1},v_{2},\ldots,v_{k}\), at least two of these vertices have the same colour, say, \(f(v_{1})=f(v_{2})\). Hence, we have \(f(w^{\prime})\neq f(w^{\prime\prime})\) (otherwise, \(w^{\prime},v_{1},w^{\prime\prime},v_{2}\) will be a bicoloured \(P_{4}\); a contradiction). This proves Claim 1. Figure 23: The gadget component in Construction 3. For each \(j\in\{1,2,3\}\), vertices in \(W_{j}\) are adjacent to \(x_{j},y_{j},z_{j}\) and members of \(U_{j}\). Let \(f\) be a \(k\)-star colouring of the gadget component. The following claim deals with \(W_{1}\), \(W_{2}\), and \(W_{3}\). **Claim 2:** For each \(j\in\{1,2,3\}\), vertices in \(W_{j}\) get pair-wise distinct colours under \(f\). Since every pair of vertices in \(W_{1}\) have \(k\) common neighbours (namely \(x_{1},y_{1},z_{1}\) and members of \(U_{1}\)), vertices in \(W_{1}\) get pair-wise distinct colours under \(f\) by Claim 1. A similar argument works for \(W_{2}\) and \(W_{3}\). This proves Claim 2. Thanks to Claim 2, we may assume without loss of generality that vertices in \(W_{1}\) are assigned a permutation of colours \(0,1,\ldots,k-3\) (by \(f\)). 
Therefore, only colours \(k-2\) and \(k-1\) are available for vertices \(x_{1},y_{1}\) and \(z_{1}\). Hence, at least two of vertices \(x_{1},y_{1},z_{1}\) get the same colour. By symmetry, we assume without loss of generality that \(f(y_{1})=f(z_{1})=k-2\). For each colour \(i\in\{0,1,\ldots,k-3\}\), there is a vertex \(w\in W_{1}\) coloured \(i\) so that \(y_{1},w,z_{1}\) is a \(3\)-vertex path coloured \(k-2,i,k-2\), and thus \(f(y_{2})\neq i\) (if not, path \(y_{2},y_{1},w,z_{1}\) is a bicoloured \(P_{4}\)). Thus, \(f(y_{2})\neq i\) for each \(i\in\{0,1,\ldots,k-3\}\). Also, \(f(y_{2})\neq k-2=f(y_{1})\) since \(y_{2}\) is adjacent to \(y_{1}\). So, \(f(y_{2})=k-1\). Similarly, \(f(y_{3})=k-1\) and \(f(z_{2})=f(z_{3})=k-1\). Since \(y_{2},y_{1},y_{3}\) is a \(3\)-vertex path coloured \(k-1,k-2,k-1\) and \(y_{2}\) is adjacent to every member of \(W_{2}\), no member of \(W_{2}\) is coloured \(k-2\). Also, no member \(w\) of \(W_{2}\) is coloured \(k-1\) because \(w\) is adjacent to \(x_{2}\) and \(f(x_{2})=k-1\). So, only colours \(0,1,\ldots,k-3\) are available for vertices in \(W_{2}\). Thanks to Claim 2 and the fact \(|W_{2}|=k-3\), exactly one colour from \(\{0,1,\ldots,k-3\}\) is missing in \(W_{2}\). Without loss of generality, we may assume that the missing colour is \(0\); that is, vertices in \(W_{2}\) are assigned a permutation of colours \(1,2,\ldots,k-3\). Now, for each colour \(i\in\{1,2,\ldots,k-3\}\), there is a vertex Figure 24: (a) The symbol for gadget component, and (b) the gadget component when \(k=5\). \(w\in W_{2}\) coloured \(i\) so that \(y_{2},w,z_{2}\) is a \(3\)-vertex path coloured \(k-1,i,k-1\); as a result, \(c\) cannot be coloured \(i\) (otherwise, path \(c,y_{2},w,z_{2}\) is a bicoloured \(P_{4}\)). Thus, \(f(c)\neq i\) for each \(i\in\{1,2,\ldots,k-3\}\). Besides, \(f(c)\neq k-2\) because \(y_{2},y_{1},y_{3}\) is a \(3\)-vertex path coloured \(k-1,k-2,k-1\) and \(c\) is a neighbour of \(y_{2}\) (if \(f(c)=k-2\), then path \(c,y_{2},y_{1},y_{3}\) is a bicoloured \(P_{4}\)). Therefore, \(f(c)=0\). By similar arguments, \(f(a)=0\) as well (\(\because a\) is a neighbour of \(y_{3}\)). To complete the proof of the lemma, it suffices to show that \(f(b)=0\). Consider the set \(W_{3}\) and an arbitrary vertex \(w\in W_{3}\). Vertex \(w\) is not coloured \(0\) since otherwise \(a,y_{3},w,z_{3}\) is a bicoloured \(P_{4}\). Also, \(w\) is not coloured \(k-2\) since \(z_{2},z_{1},z_{3}\) is \(3\)-vertex path coloured \(k-1,k-2,k-1\) and \(w\) is adjacent to \(z_{3}\). Thus, \(f(w)\neq 0\) and \(f(w)\neq k-2\). Also, \(f(w)\neq k-1=f(y_{3})\) since \(y_{3}\) is a neighbour of \(w\). Therefore, only colours \(1,2,\ldots,k-3\) are available for vertices in \(W_{3}\). Thanks to Claim 2 and the fact \(|W_{3}|=k-3\), vertices in \(W_{3}\) are assigned a permutation of colours \(1,2,\ldots,k-3\). Hence, for each colour \(i\in\{1,2,\ldots,k-3\}\), there is a \(3\)-vertex path from \(y_{3}\) to \(z_{3}\) coloured \(k-1,i,k-1\) so that vertex \(b\) cannot be coloured \(i\) (if not, there is a \(4\)-vertex path from \(y_{3}\) to \(b\) coloured \(k-1,i,k-1,i\)). Thus, \(f(b)\neq i\) for each \(i\in\{1,2,\ldots,k-3\}\). Moreover, \(b\) cannot be coloured \(k-2\) since \(z_{2},z_{1},z_{3}\) is a \(3\)-vertex path coloured \(k-1,k-2,k-1\) and \(z_{3}\) is adjacent to \(b\). Therefore, the only colour available at \(b\) is \(0\). This proves that \(f(a)=f(b)=f(c)\). **Construction 3**.: _Parameter:_ An integer \(k\geq 4\). _Input:_ A graph \(G\) of maximum degree \(2(k-1)\). 
_Output:_ A bipartite graph \(G^{\prime}\) of maximum degree \(k\). _Guarantee 1:_\(G\) is \(k\)-colourable if and only if \(G^{\prime}\) is \(k\)-star colourable. _Guarantee 2:_\(G^{\prime}\) has only \(O(n)\) vertices where \(n=|V(G)|\). _Steps:_ Replace each vertex \(v\) of \(G\) by a vertex gadget as shown in Figure 25. For each vertex \(v\) of \(G\), the vertex gadget for \(v\) has \(2(k-1)\) terminals \(v_{1},v_{2},\ldots,v_{2(k-1)}\) which accommodate the edges incident on \(v\) in \(G\) (each terminal takes at most one edge; order does not matter). Replacement of vertices by vertex gadgets converts each edge \(uv\) of \(G\) into an edge \(u_{i}v_{j}\) between terminals \(u_{i}\) and \(v_{j}\) of corresponding vertex gadgets (where \(i,j\in\{1,2,\ldots,2(k-1)\}\)). Finally, replace each edge \(u_{i}v_{j}\) between terminals by an edge gadget as shown in Figure 26. An example is available in the supplementary material. Observe that every vertex of \(G^{\prime}\) is within a gadget component. We know that the gadget component has maximum degree \(k\) and the 'corners' \(a,b,c\) of the gadget component has only two neighbours within the gadget component. Since the vertex gadget is a 'chain' of gadget components and \(k\geq 4\), the vertex gadget has maximum Figure 25: Replacement of vertex by vertex gadget. degree \(k\) and each terminal of the gadget has only two neighbours within the gadget. Also, the edge gadget has maximum degree \(k\) and each terminal of the gadget has only two neighbours within the gadget. So, each terminal in \(G^{\prime}\) has degree two or four. The graph \(G^{\prime}\) has maximum degree \(k\) because gadgets in \(G^{\prime}\) have maximum degree \(k\), terminals in \(G^{\prime}\) have degree at most four, and \(k\geq 4\). Proof of Guarantee 1.: Suppose that \(G\) admits a \(k\)-colouring \(f\colon V(G)\to\{0,1,\ldots,k-1\}\). We construct a \(k\)-colouring \(f^{\prime}\) of \(G^{\prime}\) as follows. For each gadget component within the vertex gadget for \(v\in V(G)\), use the scheme in Figure 27, but swap colours \(0\) and \(f(v)\). For each gadget component between terminals \(u_{i}\) and \(v_{j}\), choose two distinct colours \(p,q\in\{0,1,\ldots,k-1\}\setminus\{f(u),f(v)\}\) and apply the scheme in Figure 27, but swap colour \(0\) with colour \(p\) and swap colour \(k-1\) with colour \(q\). Clearly, \(f^{\prime}\) is a \(k\)-colouring. Figure 27: A \(k\)-star colouring scheme for the gadget component (for every vertex in \(U_{1}\cup U_{2}\cup U_{3}\), use colour \(k-2\)). Figure 26: Replacement of edge between terminals by edge gadget. **Claim:**\(f^{\prime}\) is a \(k\)-star colouring. Contrary to the claim, assume that there is a \(4\)-vertex path \(Q\) in \(G^{\prime}\) bicoloured by \(f^{\prime}\). By construction, the restriction of \(f^{\prime}\) to each gadget component is a \(k\)-star colouring. It is easy to verify that \(f^{\prime}\) restricted to each edge gadget is a \(k\)-star colouring. Therefore, either \(Q\) is within a vertex gadget, or \(Q\) contains vertices from a vertex gadget and an edge gadget. We show that the latter leads to a contradiction (the former can be ruled out by similar arguments). Suppose \(Q\) contains vertices from the vertex gadget for vertex \(v\in V(G)\) and from an edge gadget between two terminals \(u_{i}\) and \(v_{j}\) of \(G^{\prime}\). 
Clearly, \(Q\) must contain a bicoloured \(3\)-vertex path \(Q^{*}\) such that (i) \(Q^{*}\) is entirely within a gadget, and (ii) an endpoint of \(Q^{*}\) is a terminal of the gadget. Observe that \(Q^{*}\) cannot be in the vertex gadget because there is no bicoloured \(P_{3}\) in Figure 27 having a terminal as an endpoint. Similarly, the choice of colour \(q\) ensures that \(q\notin\{f(u),f(v)\}\), and thus \(Q^{*}\) cannot be in the edge gadget either. This contradiction proves the claim. Hence, \(G^{\prime}\) is \(k\)-star colourable. Conversely, suppose that \(G^{\prime}\) admits a \(k\)-star colouring \(f^{\prime}:V(G^{\prime})\to\{0,1,\ldots,k-1\}\). Thanks to Lemma 3, \(f^{\prime}\) must colour all terminals of a vertex gadget by the same colour (because the vertex gadget is a 'chain' of gadget components). Again by Lemma 3, for each edge gadget between terminals \(u_{i}\) and \(v_{j}\) in \(G^{\prime}\), \(f^{\prime}(b)=f^{\prime}(c)\) and hence \(f^{\prime}(u_{i})\neq f^{\prime}(v_{j})\) (if not, \(b,u_{i},c,v_{j}\) is a bicoloured \(P_{4}\)). So, all terminals of a vertex gadget have the same colour, and terminals of each edge gadget have different colours. Therefore, the function \(f\colon V(G)\to\{0,1,\ldots,k-1\}\) defined as \(f(v)=f^{\prime}(v_{1})\) for all \(v\in V(G)\), is a \(k\)-colouring of \(G\). Hence, \(G\) is \(k\)-colourable whenever \(G^{\prime}\) is \(k\)-star colourable. Proof of Guarantee 2.: Let us count the number of vertices in \(G^{\prime}\). Each gadget component has \((k-2)+5(k-3)+12=6k-5\) vertices. Each vertex gadget has \((2k-2)(6k-5)-(2k-1)=12k^{2}-24k+11\) vertices. Each edge gadget has \(6k-5\) vertices excluding the terminals (let us count terminals as part of vertex gadgets). So, \(G^{\prime}\) has \(n(12k^{2}-24k+11)+m(6k-5)=O(m+n)\) vertices where \(m=|E(G)|\) and \(n=|V(G)|\). As \(\Delta(G)=2(k-1)\), we have \(m\leq n\Delta(G)/2=n(k-1)=O(n)\). Therefore, \(G^{\prime}\) has only \(O(m+n)=O(n)\) vertices. Thanks to Construction 3, we have the following theorem since \(k\)-Colourability is NP-complete for graphs of maximum degree \(2(k-1)\) (in fact, NP-complete for line graphs of \(k\)-regular graphs [28]) and the problem does not admit a \(2^{o(n)}\)-time algorithm unless ETH fails (the latter can be observed from a reduction of Emden-Weinert et al. [17]; Theorem 3 in supplementary material presents a shorter alternate proof with the help of an alternate reduction). **Theorem 16**.: _For all \(k\geq 4\), \(k\)-Star Colourability(bipartite, \(\Delta=k\)) is NP-complete, and the problem does not admit a \(2^{o(n)}\)-time algorithm unless ETH fails. _ ## 6 Open Problems and Related Work Many problems related to star colouring are open even for the class of cubic graphs. Chen et al. [10] proved that cubic graphs are \(6\)-star colourable. On the other hand, Xie et al. [39] proved that cubic graphs are not \(3\)-star colourable. So, \(4\leq\chi_{s}(G)\leq 6\) for every cubic graph \(G\). Both bounds are tight because \(\chi_{s}(K_{4})=4\) and \(\chi_{s}(M_{8})=6\)[19] where \(M_{8}\) is the Wagner's graph (also called the 8-vertex Mobius ladder graph). Conjecture 12 of Almeter et al. [2] implies that \(M_{8}\) is the only cubic graph with star chromatic number six. **Conjecture 1** ([2]).: Every cubic graph except \(M_{8}\) is 5-star colourable. Conjecture 1 implies that 5-Star Colourability is in P for cubic graphs. **Problem 2**.: What is the complexity of 4-Star Colourability in cubic graphs? 
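As a computational companion to this discussion (again an illustrative sketch rather than part of the results above), the star chromatic number of a very small graph can be determined by brute force, using the same "no bicoloured \(P_{4}\)" test as in the earlier sketch. The script below can be used to check the values \(\chi_{s}(K_{4})=4\) and \(\chi_{s}(M_{8})=6\) quoted above, with the Wagner graph \(M_{8}\) built as an 8-cycle plus the four chords joining opposite vertices; the exhaustive search is exponential and may take on the order of a minute for \(M_{8}\) in pure Python.

```python
from itertools import product

def star_ok(edges, adj, f):
    """True if f (a list of colours indexed by vertex) is a proper colouring
    with no bicoloured path on four vertices."""
    if any(f[u] == f[v] for u, v in edges):
        return False
    for u, v in edges:                     # treat (u, v) as the middle edge of a path a-u-v-b
        for a in adj[u] - {v}:
            for b in adj[v] - {u, a}:
                if f[a] == f[v] and f[u] == f[b]:
                    return False
    return True

def star_chromatic_number(n, edges):
    """Exhaustive search; only feasible for graphs with a handful of vertices."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for k in range(1, n + 1):
        for colours in product(range(k), repeat=n):
            if star_ok(edges, adj, list(colours)):
                return k
    return n

# K4, and the Wagner graph M8: an 8-cycle 0..7 plus the chords {i, i+4}.
k4_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
m8_edges = [(i, (i + 1) % 8) for i in range(8)] + [(i, i + 4) for i in range(4)]
print("chi_s(K4) =", star_chromatic_number(4, k4_edges))   # 4, as noted above
print("chi_s(M8) =", star_chromatic_number(8, m8_edges))   # 6 by [19]; slow but finishes
```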
## Statements and Declarations ### Funding The first author is supported by SERB(DST), MATRICS scheme MTR/2018/000086. ### Conflict of interest The authors declare that they have no conflict of interest. ### Acknowledgement We thank Emil Jerabek and an anonymous referee for their valuable comments.
2309.07940
CvFormer: Cross-view transFormers with Pre-training for fMRI Analysis of Human Brain
In recent years, functional magnetic resonance imaging (fMRI) has been widely utilized to diagnose neurological disease, by exploiting the region of interest (RoI) nodes as well as their connectivities in human brain. However, most of existing works only rely on either RoIs or connectivities, neglecting the potential for complementary information between them. To address this issue, we study how to discover the rich cross-view information in fMRI data of human brain. This paper presents a novel method for cross-view analysis of fMRI data of the human brain, called Cross-view transFormers (CvFormer). CvFormer employs RoI and connectivity encoder modules to generate two separate views of the human brain, represented as RoI and sub-connectivity tokens. Then, basic transformer modules can be used to process the RoI and sub-connectivity tokens, and cross-view modules integrate the complement information across two views. Furthermore, CvFormer uses a global token for each branch as a query to exchange information with other branches in cross-view modules, which only requires linear time for both computational and memory complexity instead of quadratic time. To enhance the robustness of the proposed CvFormer, we propose a two-stage strategy to train its parameters. To be specific, RoI and connectivity views can be firstly utilized as self-supervised information to pre-train the CvFormer by combining it with contrastive learning and then fused to finetune the CvFormer using label information. Experiment results on two public ABIDE and ADNI datasets can show clear improvements by the proposed CvFormer, which can validate its effectiveness and superiority.
Xiangzhu Meng, Qiang Liu, Shu Wu, Liang Wang
2023-09-14T06:06:01Z
http://arxiv.org/abs/2309.07940v1
# CvFormer: Cross-view transFormers with Pre-training for fMRI Analysis of Human Brain ###### Abstract In recent years, functional magnetic resonance imaging (fMRI) has been widely utilized to diagnose neurological disease, by exploiting the region of interest (RoI) nodes as well as their connectivities in human brain. However, most of existing works only rely on either RoIs or connectivities, neglecting the potential for complementary information between them. To address this issue, we study how to discover the rich cross-view information in fMRI data of human brain. This paper presents a novel method for cross-view analysis of fMRI data of the human brain, called Cross-view transFormers (CvFormer). CvFormer employs RoI and connectivity encoder modules to generate two separate views of the human brain, represented as RoI and sub-connectivity tokens. Then, basic transformer modules can be used to process the RoI and sub-connectivity tokens, and cross-view modules integrate the complement information across two views. Furthermore, CvFormer uses a global token for each branch as a query to exchange information with other branches in cross-view modules, which only requires linear time for both computational and memory complexity instead of quadratic time. To enhance the robustness of the proposed CvFormer, we propose a two-stage strategy to train its parameters. To be specific, RoI and connectivity views can be firstly utilized as self-supervised information to pre-train the CvFormer by combining it with contrastive learning and then fused to finetune the CvFormer using label information. Experiment results on two public ABIDE and ADNI datasets can show clear improvements by the proposed CvFormer, which can validate its effectiveness and superiority. Functional MRI, Human Brain, Cross-view Modeling, Transformers, Self-supervised Learning. ## I Introduction With the rapid development of modern medical era, magnetic resonance imaging (MRI) [1] technologies have been proven to be a valuable tool in the investigation of neurological issues, particularly in the diagnosis of neurological disorders. In particular, functional magnetic resonance imaging (fMRI) [2] is one of the non-invasive techniques to temporal dynamics of blood oxygen level dependency (BOLD) [3] response. In recent years, fMRI data of human brain has been widely exploited to understand functional activities and organization of human brain. Recent research [4] suggests that functional connectivities [5] among brain regions of interest (RoI) are crucial in determining behavior, cognition, and brain dysfunction. In recent years, machine learning methods [6] have been widely used to build the corresponding diagnosis model. Among these works, shadow learning-based methods usually follow two stages, where brain networks are first processed to produce an embedding of the human brain, and then a classical classifier is utilized to divide the learned data into corresponding groups. However, these methods are prone to inducing substantial errors in the second stage if the brain features from the first stage are not reliable. With the established power of deep learning methods [7], most works based on Convolution Neural Networks (CNN) [8] and Graph Neural Networks (GNN) [9] have been widely developed to extract spatial, temporal and connective patterns of fMRI time series for brain disorder diagnose. 
For instance, BrainNetCNN [10] leverages the topological locality of brain networks to predict cognitive and motor developmental outcome scores for infants born preterm; BrainGNN [11] designs novel RoI-aware graph convolutional layers that leverage the topological and functional information of fMRI. Moreover, transformers [12] have been studied over different types of data, and there also exist several transform-based works for human brain, such as BRAINNETTF [13], which leverages the unique properties of brain network data to maximize the power of transformer-based models for brain network analysis. Unfortunately, limited brain data might result in erratic results of the above diagnosis models due to expensive costs of data acquisition. ### _Motivations_ Even though most existing methods have obtained promising performance for brain disorder diagnosis in certain situations, there still remain two important aspects that have yet to be thoroughly investigated comprehensively. 1. Different from single-view data, cross-view data [14] can reflect different properties of human brain, i.e. node-level and edge-level information of human brain. However, most existing methods mainly focus on RoIs or connectivity information in human brain, which limits their ability to fully exploit the diverse and complementary information available across multiple views. 2. There exist less transform-based works for fMRI analysis of human brain. The main reason is that large-scale brain data can be difficult to collect to train such models. Thus, it's necessary to provide a suitable manner to pre-train the transformer-based models when facing limited human brain data. ### _Contributions_ This paper proposes a cross-view transformers (CvFormer) to jointly model the human brain from two perspectives, namely regions of interest (RoIs) and their connectivity. CvFormer employs RoI and connectivity encoder modules to generate RoI and sub-connectivity tokens as two views. These tokens are processed through separate branches based on transformer modules. Then, RoI and sub-connectivity tokens can be purely fused by cross-view modules multiple times to complement each other. Notably, CvFormer uses a global token for each branch as a query to exchange information based on cross-view modules in an efficient manner, reducing the computational and memory complexity from a quadratic to a linear time complexity. Finally, we propose a two-stage strategy to train its parameters. We combine RoI and connectivity views with contrastive learning to pre-train the CvFormer and fuse cross-view information to finetune the CvFormer using label information. We evaluate the effectiveness of the proposed CvFormer on two neurological disorders datasets of ADNI and ABIDE. The major contributions of this paper can be summarized as follows: 1. CvFormer can simultaneously consider the diversity and complementary information between two views of human brain, leading to more distinguishable and informative brain representations. 2. A two-stage strategy is proposed to train the parameters of CvFormer, where RoI and connectivity views can be combined with contrastive learning to pre-train the CvFormer and fused to finetune the CvFormer. 3. Massive experimental results on ADNI and ABIDE datasets can validate the effectiveness and superiority of the proposed CvFormer, corresponding to Alzheimer's Disorder (AD) and Autism Spectrum Disorder (ASD). 
## II Method ### _Overall Architecture_ In this section, we introduced a novel cross-view transformers (CvFormer) model for brain disorder diagnosis, as shown in Fig. 1. The CvFormer model comprises three key components: (1) Tokens block contains two branches, i.e. RoIview and Connectivity-view token encoders, which generate the initial RoI-view and Connectivity-view tokens of human brains. (2) Cross-view transformer block contains two branches based on transformer encoder to process RoI-view and connectivity-view tokens, and leverages cross-view encoder to learn complementary information between views. (3) Pooling block is utilized to extract the global cross-view representations, corresponding to RoI and connectivity views. ### _Cross-View Transformer_ Cross-view transformer mainly consists of three following blocks. **Tokens block** is utilized to transform preprocessed fMRI data of human brain into cross-view tokens, consisting of RoI-view and connectivity-view token encoders. For RoI-view token encoder, given \(\hat{M}\) RoIs of human brain, the initial feature of each RoI usually can be represented by the time series or connectivity profile. Then, one linear layer is used to project the initial features to latent space. Adding it with positional embeddings, we can get RoI-view tokens as \[\left[\mathbf{CLS}_{R}^{(0)},\mathbf{RoI}_{1}^{(0)},\cdots,\mathbf{RoI}_{M}^{(0)}\right], \tag{1}\] where \(\mathbf{CLS}_{R}\) denotes the class token to exploit the global RoI-view information. \(\mathbf{RoI}_{i}\) is the token of \(i\)th RoI node of human brain. For connectivity-view token encoder, we first convert a connectivity network into a sequence of \(N\) patch tokens by dividing it with a certain patch size, where each patch token denotes the local connectivity pattern of human brain. Then, we linearly project each patch and add Fig. 1: Flowchart of CvFormer framework, consisting of three following parts, tokens block, cross-view transformer block, and pooling block. These blocks serve the purpose of generating cross-view representations, leveraging the complementary information between views, and ultimately pooling the global cross-view information for human brain representation, respectively. positional embeddings into connectivity-view tokens, which can be expressed as \[\left[\mathbf{CLS}_{C}^{(0)},\mathbf{SUB}_{1}^{(0)},\mathbf{SUB}_{2}^{(0)},\cdots,\mathbf{SUB}_{N }^{(0)}\right], \tag{2}\] where \(\mathbf{CLS}_{C}\) denotes the class token to exploit the global RoI-view information. \(\mathbf{SUB}_{i}\) is the token of \(i\)th local connectivity pattern of human brain. **Cross-view transformer block** is the core component of CvFormer, containing two transformer-based branches that encode rich information between RoI and connectivity views. Cross-view encoder is adopted in cross-view transformer block to enforce RoI and connectivity views to learn with each other. A transformer encoder is composed of a sequence of blocks where each block contains multi-headed self-attention (MSA) with a feed-forward network (FFN). For each head of MSA module, query matrix \(\mathbf{Q}\), key matrix \(\mathbf{K}\) and value matrix \(\mathbf{V}\) are generated by linearly projecting RoI-view or connectivity tokens layers. 
The scaled dot-product self-attention is applied on \(\mathbf{Q}\), \(\mathbf{K}\) and \(\mathbf{V}\) can be expressed as \[Attention(\mathbf{Q},\mathbf{K},\mathbf{V})=softmax(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k}} })\mathbf{V}, \tag{3}\] where \(d_{k}\) is the dimension of \(\mathbf{Q}\) and \(\mathbf{K}\). The softmax function is applied to each row of the attention score that reflects the relationship between \(\mathbf{Q}\) and \(\mathbf{K}\). FFN contains two-layer linear layers with expanding ratio \(r\) at the hidden layer, and one GELU non-linearity is applied after the first linear layer. Layer normalization (LN) is applied before every block, and residual shortcuts are applied after every block. For RoI-view tokens \(\mathbf{X}_{R}^{(l)}=\left[\mathbf{CLS}_{C}^{(0)},\mathbf{SUB}_{1}^{(0)},\mathbf{SUB}_{2}^{(0 )},\cdots,\mathbf{SUB}_{N}^{(0)}\right]\), the processing of the \(l\)th transformer encoder of RoI-view branch can be expressed as follows: \[\mathbf{X}_{R}^{(l)}=LN(\mathbf{X}_{R}^{(l)}+MSA(\mathbf{X}_{R}^{(l)})), \tag{4}\] \[\mathbf{X}_{R}^{(l+1)}=LN(\mathbf{X}_{R}^{(l)}+FFN(\mathbf{X}_{R}^{(l)})). \tag{5}\] Similarly, we can also get the processing of the \(l\)th transformer encoder of connectivity-view branch following by Eq.(4)-Eq.(5). To integrate complement information between RoI and connectivity views, cross-view encoder is proposed to enhance the key information of human brain. Generally, the CLS token already learns global information among all tokens in such view, interacting with context tokens at the other branch helps to integrate cross-view information. Thus, the basic idea of cross-view encoder is to involve the CLS token of one branch and context tokens of the other branch. That is, we first utilize the CLS token at each branch as one query to exchange information among context tokens from the other branch by MSA. Then, FFN is applied after cross-view fusion. LN is applied before every block, and residual shortcuts after every block. For each head of MSA of cross-view encoder, its processing can be expressed as follows: \[\mathbf{CLS}_{R}^{(l+1)}=Attention(\mathbf{CLS}_{R}^{(l)},Content_{C}^{(l)},Content_{C }^{(l)}), \tag{6}\] \[\mathbf{CLS}_{C}^{(l+1)}=Attention(\mathbf{CLS}_{C}^{(l)},Content_{R}^{(l)},Content_{ R}^{(l)}). \tag{7}\] where \(Content_{C}^{(l)}=[\mathbf{SUB}_{1}^{(l)},\cdots,\mathbf{SUB}_{N}^{(l)}]\) and \(Content_{R}^{(l)}=[\mathbf{RoA}_{1}^{(l)},\cdots,\mathbf{RoA}_{M}^{(l)}]\). After fusing cross-view information, the CLS token interacts with its own patch tokens again at the next transformer encoder, where it is able to pass the learned information from the other branch to its own patch tokens, to enrich the representation of each patch token. Therefore, cross-view encoder can effectively preserve the interactive information between RoI and connectivity views. Besides, the computation and memory complexity of generating the attention map in cross-view encoder are linear rather than quadratic time. **Pooling block** is proposed to obtain the global cross-view information via pooling operation after the processing of cross-view transformer blocks. To be specific, the pooling operation employs the CLS token as final embedding for each branch. That, we can get the global tokens \(\mathbf{CLS}_{R}\) and \(\mathbf{CLS}_{C}\) for ROI and connectivity views, as shown in Fig. 1. ### _Two-stage Training_ To enhance the robustness of the proposed CvFormer, we propose a two-stage training strategy to learn its parameters. 
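Before turning to the two training stages, the cross-view exchange of Eqs. (6) and (7) can be made concrete with a minimal PyTorch-style sketch. The module structure, tensor shapes, and the use of `nn.MultiheadAttention` below are illustrative assumptions rather than the authors' released implementation; the per-branch transformer encoders and the FFN applied after the cross-view fusion are omitted for brevity.

```python
import torch
import torch.nn as nn

class CrossViewEncoder(nn.Module):
    """Sketch of Eqs. (6) and (7): the CLS token of one branch attends, as the
    sole query, over the content tokens of the other branch."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn_r = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_c = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_r = nn.LayerNorm(dim)
        self.norm_c = nn.LayerNorm(dim)

    def forward(self, roi_tokens, conn_tokens):
        # roi_tokens:  (B, 1 + M, D) = [CLS_R, RoI_1, ..., RoI_M]
        # conn_tokens: (B, 1 + N, D) = [CLS_C, SUB_1, ..., SUB_N]
        cls_r, content_r = roi_tokens[:, :1], roi_tokens[:, 1:]
        cls_c, content_c = conn_tokens[:, :1], conn_tokens[:, 1:]
        upd_r, _ = self.attn_r(cls_r, content_c, content_c)   # Eq. (6): CLS_R queries connectivity tokens
        upd_c, _ = self.attn_c(cls_c, content_r, content_r)   # Eq. (7): CLS_C queries RoI tokens
        cls_r = self.norm_r(cls_r + upd_r)                    # residual shortcut + LayerNorm
        cls_c = self.norm_c(cls_c + upd_c)
        # Each attention map has a single query row (1 x N or 1 x M), hence linear complexity.
        return torch.cat([cls_r, content_r], dim=1), torch.cat([cls_c, content_c], dim=1)

# Toy usage with assumed sizes: 90 RoI tokens, 36 connectivity patches, 64-dim embeddings.
enc = CrossViewEncoder(dim=64)
roi = torch.randn(2, 1 + 90, 64)
conn = torch.randn(2, 1 + 36, 64)
roi2, conn2 = enc(roi, conn)
print(roi2.shape, conn2.shape)   # torch.Size([2, 91, 64]) torch.Size([2, 37, 64])
```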
For the first stage, the RoI and connectivity views are used as self-supervised signals to pre-train the CvFormer with contrastive learning. For the second stage, the cross-view information is fused to finetune the CvFormer with label information. #### II-C1 Self-supervised Pre-training Through the processing of CvFormer, we obtain the global tokens \(\mathbf{CLS}_{R}\) and \(\mathbf{CLS}_{C}\) for the RoI and connectivity views. For any brain instance, its embedding generated in the RoI view (or connectivity view) is treated as the anchor, its embedding generated in the other view forms the positive sample, and the embeddings of the other instances in the two views are naturally regarded as negative samples. In this way, we employ a contrastive objective that distinguishes the embeddings of the same instance in the two views from the embeddings of other instances. Mirroring the InfoNCE objective in our self-supervised pre-training setting, we define the objective over the positive pairs \(\{(\mathbf{u}_{i},\mathbf{v}_{i})\}\) as \[\mathbf{Loss}_{CL}=-\sum_{i=1}^{N}log\frac{exp(\theta(\mathbf{u}_{i},\mathbf{v}_{i})/\tau)}{exp(\theta(\mathbf{u}_{i},\mathbf{v}_{i})/\tau)+neg(i)} \tag{8}\] where \(neg(i)=\sum_{k\neq i}\left(exp(\theta(\mathbf{u}_{i},\mathbf{v}_{k})/\tau)+exp(\theta(\mathbf{u}_{k},\mathbf{v}_{i})/\tau)\right)\) and \(\tau\) is a temperature parameter. Here \(\theta(\mathbf{u}_{i},\mathbf{v}_{i})=s(h(\mathbf{u}_{i}),h(\mathbf{v}_{i}))\), where \(s(\cdot,\cdot)\) is the cosine similarity and \(h(\cdot)\) is a multi-layer perceptron (MLP). #### II-C2 Fine-tuning After the processing of the cross-view transformer blocks, the CLS token is used as the final embedding of each branch. We adopt an MLP to produce the probability scores of the target labels for the two views, and then fuse these scores as the final output for brain disorder diagnosis. When fine-tuning the whole CvFormer, we combine the cross-entropy and contrastive losses into the final loss function, which can be formulated as \[\mathbf{Loss}=\mathbf{Loss}_{CE}+\lambda\mathbf{Loss}_{CL} \tag{9}\] where \(\lambda>0\) is a hyper-parameter that balances the cross-entropy and contrastive loss terms. Finally, we use the stochastic gradient descent (SGD) method to update all weight parameters. ## III Experiments ### _Experiment Setting_ **Datasets.** To validate the effectiveness of the proposed CvFormer, extensive experiments are performed on two brain disorder diagnosis tasks. Specifically, the Alzheimer's Disease Neuroimaging Initiative (ADNI)1 dataset contains the neuroimaging data of 128 patients, consisting of 54 Early Mild Cognitive Impairment (EMCI), 38 Late Mild Cognitive Impairment (LMCI), and 34 Alzheimer's Disease (AD) subjects. The Autism Brain Imaging Data Exchange (ABIDE)2 dataset contains 1112 subjects, comprising structural and resting-state fMRI data along with the corresponding phenotypic information. Footnote 1: [http://www.adni-info.org](http://www.adni-info.org).
Footnote 2: [http://preprocessed-connectomes-project.org/abide/download.html](http://preprocessed-connectomes-project.org/abide/download.html) **Preprocessing.** For the ABIDE dataset, we downloaded the preprocessed rs-fMRI time-series data of the four largest sites (UM, NYU, USM, UCLA) from the preprocessed ABIDE release generated with the Configurable Pipeline for the Analysis of Connectomes (CPAC), with band-pass filtering (0.01 - 0.1 Hz), no global signal regression, and parcellation of each brain into 90 RoIs with the Automated Anatomical Labeling (AAL) atlas. For the ADNI dataset, we applied the standard preprocessing procedure to the original fMRI data following [15], and each brain was likewise parcellated into 90 RoIs using the AAL atlas. For both the ABIDE and ADNI datasets, the mean time series of the RoIs were used to compute the functional connectivity network (FCN) as the correlation matrix. The FCN was then used as the RoI-view features, where each row of the FCN represents a node of the brain network. The connectivity-view features were obtained by drawing an edge between each pair of RoIs whose correlation is larger than the 70-th percentile of the correlation values. **Baselines.** We evaluate the effectiveness of the proposed CvFormer in classifying brain networks by comparing it with the following baselines: Principal Component Analysis (PCA) [16], Support Vector Machine (SVM) [17], Multilayer Perceptron (MLP), Graph Convolutional Networks (GCN) [18], BrainGNN [11], BrainNetCNN [10], Graphormer [19], and BRAINNETTF [13]. **Implementation Details.** For all datasets, we randomly select 70% of the samples as training samples, 10% as validation samples, and the remaining 20% as test samples at each iteration. We repeat this validation process ten times for all methods and use classification accuracy as the evaluation metric. The experiments are performed on a server running Ubuntu Linux 18.04 64-bit with 56 CPUs (Xeon E5-2660 v4) at 3.2 GHz and 8 GPUs (NVIDIA TITAN Xp). The experimental settings of the baselines follow the methodology described in the corresponding original papers. Finally, the results of the experiments are aggregated and summarized. ### _Evaluation Results and Analysis_ As illustrated in Table I, the proposed CvFormer outperforms the other methods in most situations. The main reason is that the cross-view representation of the human brain provides more comprehensive information than single-view approaches (RoI-based or connectivity-based methods). Additionally, the cross-view encoder effectively captures complementary information from the RoI and connectivity views, further enhancing the performance of CvFormer. Among the single-view works, we observe that transformer-based methods achieve remarkable results for brain disorder diagnosis, demonstrating the modeling power of transformers for human brains. Compared with deep learning-based methods, traditional methods are unable to deliver comparable results, suggesting that deep learning-based techniques are more effective at automatically discovering meaningful information. However, all of these methods are limited by their reliance on single-view information and fail to fully utilize the rich information in the human brain, which is necessary prior knowledge for the subsequent diagnosis of neurological disorders.
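To make the preprocessing step described above concrete, the following NumPy sketch builds, from the mean RoI time series of one subject, the correlation-based FCN used as RoI-view features and a thresholded connectivity view. The array names, shapes, and the choice to take the 70-th percentile over off-diagonal entries only are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def build_views(roi_timeseries):
    """roi_timeseries: (T, 90) array of mean BOLD signals for the 90 AAL RoIs.
    Returns the correlation matrix (FCN, used as RoI-view features) and a binary
    connectivity view keeping correlations above the 70-th percentile."""
    fcn = np.corrcoef(roi_timeseries.T)                   # (90, 90); row i = RoI-view feature of node i
    off_diag = fcn[~np.eye(fcn.shape[0], dtype=bool)]
    threshold = np.percentile(off_diag, 70)
    connectivity = (fcn > threshold).astype(np.float32)   # edge between RoIs with high correlation
    np.fill_diagonal(connectivity, 0.0)                   # no self-loops
    return fcn, connectivity

# Toy example with random data standing in for a preprocessed scan.
ts = np.random.randn(200, 90)
fcn, conn = build_views(ts)
print(fcn.shape, conn.shape, int(conn.sum()))
```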
### _Ablation Analysis_ We perform an extensive ablation study on the ADNI dataset to evaluate the impact of individual components and to assess the effectiveness of our proposed CvFormer approach with varying architectural specifications. #### Iii-C1 Effects of Sub-modules We have conducted extensive experiments on the ADNI dataset to evaluate the impact of the sub-modules of CvFormer. As demonstrated by the results presented in Table II, the proposed CvFormer exhibited superior performance when simultaneously utilizing these sub-modules. In comparison to utilizing a single RoI or connectivity view, two views provide richer information than a single view. Despite the use of a transformer encoder to combine information between the two views, the model failed to achieve better performance without the cross-view encoder. Consequently, the integration of both the RoI-view and connectivity-view branches leads to the highest accuracy, indicating that these two branches learn complementary features. #### Iii-B2 Effects of Pre-training We evaluate the impact of self-supervised pre-training for the proposed CvFormer on the ADNI dataset. As demonstrated by the results presented in Table III, CvFormer obtains more promising performance when trained with self-supervised pre-training. The main reason is that training transformer-based models usually depends on large-scale brain data. In comparison to directly training the CvFormer, this manner initializes more suitable parameters for limited human brain data. Therefore, self-supervised pre-training is a necessary and valuable way to initialize transformer-based models when facing limited human brain data. #### Iii-B3 Effects of Hyper-parameter \(\lambda\) To validate the effect of the hyper-parameter \(\lambda\) in eq. (9), we conduct experiments with different settings on the ADNI dataset. As shown in Table IV, CvFormer obtains stable results in most settings. We can readily find that the proposed CvFormer obtains the best performance when \(\lambda=0.1\). More importantly, there exists a wide range of suitable values for the hyper-parameter \(\lambda\). ## IV Conclusion This paper proposes a novel transformer model for modeling human brains, called CvFormer. CvFormer is comprised of three main components: the tokens block, the cross-view transformer block, and the fusion block. Unlike previous related works, the tokens block is capable of producing both RoI-view and connectivity-view tokens, as opposed to just one. The cross-view transformer block, which is the core component of CvFormer, provides two branches based on Transformer modules to process RoI and connectivity tokens. It also implements a cross-view encoder to facilitate the exchange of complementary information between the two views. The final representation of the human brain is obtained through the fusion block, which generates cross-view global tokens. In this way, CvFormer can simultaneously take into account the diverse and complementary information from both views of the human brain, producing distinct and informative brain representations. Moreover, we propose a two-stage strategy to train its parameters. We combine RoI and connectivity views with contrastive learning to pre-train the CvFormer and fuse cross-view information to fine-tune the CvFormer using label information.
The experimental results validate the effectiveness of CvFormer.
2309.11844
Constructing the Hyper-Kamiokande Computing Model in the Build Up to Data Taking
Hyper-Kamiokande is a next-generation multi-purpose neutrino experiment with a primary focus on constraining CP-violation in the lepton sector. It features a diverse science programme that includes neutrino oscillation studies, astrophysics, neutrino cross-section measurements, and searches for physics beyond the standard model, such as proton decay. Building on its predecessor, Super-Kamiokande, the Hyper-Kamiokande far detector has a total volume approximately 5 times larger and is estimated to collect nearly 2PB of data per year. The experiment will also include both on- and off-axis near detectors, including an Intermediate Water Cherenkov Detector. To manage the significant demands relating to the data from these detectors, and the associated Monte Carlo simulations for a range of physics studies, an efficient and scalable distributed computing model is essential. This model leverages Worldwide LHC Grid computing infrastructure and utilises the GridPP DIRAC instance for both workload management and for file cataloguing. In this report we forecast the computing requirements for the Hyper-K experiment, estimated to reach around 35PB (per replica) and 8,700 CPU cores (~ 100,000 HS06) by 2036. We outline the resources, tools, and workflow in place to satisfy this demand.
Sophie King
2023-09-21T07:36:35Z
http://arxiv.org/abs/2309.11844v1
# Constructing the Hyper-Kamiokande Computing Model in the Build Up to Data Taking ###### Abstract Hyper-Kamiokande is a next-generation multi-purpose neutrino experiment with a primary focus on constraining CP-violation in the lepton sector. It features a diverse science programme that includes neutrino oscillation studies, astrophysics, neutrino cross-section measurements, and searches for physics beyond the standard model, such as proton decay. Building on its predecessor, Super-Kamiokande, the Hyper-Kamiokande far detector has a total volume approximately 5 times larger and is estimated to collect nearly 2 PB of data per year. The experiment will also include both on- and off-axis near detectors, including an Intermediate Water Cherenkov Detector. To manage the significant demands relating to the data from these detectors, and the associated Monte Carlo simulations for a range of physics studies, an efficient and scalable distributed computing model is essential. This model leverages Worldwide LHC Grid computing infrastructure and utilises the GridPP DIRAC instance for both workload management and for file cataloguing. In this report we forecast the computing requirements for the Hyper-K experiment, estimated to reach around 35 PB (per replica) and 8,700 CPU cores (\(\sim\)100,000 HS06) by 2036. We outline the resources, tools, and workflow in place to satisfy this demand. Sophie King et al. ## 1 Introduction The Hyper-Kamiokande (Hyper-K) experiment [1; 2] is the successor to the highly successful and accomplished Super-Kamiokande (SK) [5; 6] and T2K (Tokai-to-Kamioka) [3; 4] experiments. Currently under construction, the Hyper-K far detector (HKFD) is a 258 kton underground water Cherenkov detector. The inner detector will house 20,000 inward-facing photomultiplier tubes (PMTs), 50 cm in diameter, along with a thousand multi-PMTs that each contain nineteen 8 cm PMTs. This region will be encompassed by the outer detector, which is designed to be instrumented with a few thousand 8 cm PMTs to veto incoming backgrounds. This amounts to tens of thousands of readout channels, and a post-trigger rate of around 5 TB/day of data to be transferred and stored off-site, setting the scale for Hyper-K storage requirements. Hyper-K will search for CP-violation in the lepton sector through the study of neutrino oscillations in an accelerator-based long-baseline neutrino oscillation configuration. For this purpose, in addition to the far detector, Hyper-K features several near detectors, which will constrain systematic uncertainties relating to the flux and neutrino interaction models and measure the neutrino beam properties, as well as performing cross-section measurements and searches for new physics. Hyper-K is also building the Intermediate Water Cherenkov Detector (IWCD), providing a near detector that utilises the same technology and target as the far detector, as well as the ability to profile the beam across a continuous range of off-axis angles. The computing predictions that cover the raw and processed data needs of Hyper-K detectors, as well as the Monte Carlo (MC) production samples covering the signal, background and control samples needed for all physics analyses, are presented in Section 2. The infrastructure, tools and workflow that define the Hyper-K computing model, constructed to meet these needs in an organised and efficient manner, is defined in Section 3. 
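As a rough check on the scale quoted above, the following back-of-envelope arithmetic (illustrative only) converts the ~5 TB/day post-trigger rate into a yearly raw-data volume and the storage implied by the multi-replica policy described in Section 2.1.

```python
daily_rate_tb = 5.0                      # post-trigger far-detector rate quoted above
yearly_pb = daily_rate_tb * 365 / 1000   # ~1.8 PB/year, consistent with "nearly 2 PB of data per year"
raw_replicas = 3                         # raw-data replica policy: one copy in Japan, two elsewhere
print(f"Raw data per year: {yearly_pb:.1f} PB "
      f"({yearly_pb * raw_replicas:.1f} PB including all raw-data replicas)")
```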
## 2 Hyper-K Computing Forecasts The Hyper-K far detector is currently under construction, and the experiment will start taking data in 2027. The computing forecasts were split into two stages: the construction period, from 2023 up till the end of 2026\({}^{1}\); and the first ten years of operation, from the start of 2027 to the end of 2036\({}^{2}\). Internally within the Hyper-K Computing Group, a detailed year-by-year prediction has been forecast for each detector and for each sample type, with input from the Computing, Software, DAQ, Calibration and Physics working groups. However, due to the large uncertainties on these estimates - stemming both from the current phase of rapid software development within Hyper-K, which will stabilize over the next couple of years, and from aspects of the DAQ that are still being finalised - we present only the 2023 and 2036 points, and use a straight line to convey the order of magnitude while avoiding the impression of year-to-year precision. The detailed breakdown is available upon request to the computing and funding agencies that support Hyper-K. Footnote 1: It should be noted that construction began in 2021, but these estimates cover the period starting in 2023 as this is the point where our MC production campaigns are starting to ramp up to a more significant level. Footnote 2: It is expected that Hyper-K will continue for an additional 10 years till the end of 2046. A _very_ rough prediction for the end of this period can be considered by doubling the 2036 estimate. ### Aggregated CPU and Storage Projections The Hyper-K computing requirements for raw data storage as well as both data and MC production, across all detectors, are totalled and presented as a function of time in Figure 1. The left axis indicates storage, which is projected to reach around 35 PB for a single replica of each file. The policy is to maintain three replicas of raw data, one in Japan and two in different countries, and two replicas of MC. The right axis represents the number of CPU cores and is predicted to reach nearly 8,700 cores (\(\sim\)100,000 HS06). Throughout the construction period, the computing estimates revolve solely around the MC simulations that are generated for physics sensitivity studies, detector design optimization, validation of trigger algorithms, and calibration studies. During this period, the IWCD MC production dominates the CPU demand and the HKFD MC dominates the storage. During the operational phase, the storage and CPU associated with raw data storage, data (re)processing, calibration and event reduction/pre-selection contribute to the computing requirements. While the HKFD raw data quickly dominates the storage needs of the experiment, accounting for around 70% of the total, the IWCD MC production continues to dominate the CPU and is responsible for around 70% of the total. ### Resource-intensive tasks In this section we comment on the tasks that drive the CPU and storage estimates during the operational phase of the experiment. This helps us to understand the best way to meet these demands and consider areas where we have potential to improve efficiency. **CPU usage for Water Cherenkov reconstruction:** The CPU requirements of the IWCD MC production, which dominates both the construction and operational periods of the experiment, are due to the reconstruction stage.
While the two Water Cherenkov detectors (HKFD, IWCD) use the same reconstruction algorithm, the close proximity of the IWCD to the neutrino beam results in a much larger number of beam events and hence requires MC samples with higher statistics. The current reconstruction algorithm, fiTQun \cite{bib:fitqun}, was successfully introduced for SK to reduce the fiducial volume while also improving reconstruction capabilities, and is also used within both T2K and Hyper-K. It uses a maximum-likelihood based algorithm that computes the likelihood for different particle topologies along with their reconstructed kinematics. Efforts are being made to improve reconstruction processing times, both by improving the efficiency of the existing method, and by investigating machine learning based techniques, which seek to improve not only speed but also reconstruction capabilities [8, 9]. **Raw data storage:** The vast size of the far detector, coupled with tens of thousands of readout channels, yields a predicted (post-trigger) raw data rate of nearly 2 PB/year to be transferred and stored off-site. While this estimate assumes a basic level of data rejection due to triggering, it is a slightly conservative number and has the potential to decrease once the triggering is finalised and the collaboration reaches agreement about what data to store long-term. The majority of this data will be archived to tape and brought onto disk upon request; regularly accessed data samples, based on triggers and reduction/pre-selection cuts, will be replicated to disk storage for continued ease of access. Figure 1: The total computing forecasts, covering all the Hyper-K detectors, considering both data and MC needs, and for all signal, background and control/calibration samples. Note that the storage estimate is per replica. ### Future Projections As previously mentioned, machine learning based techniques are under investigation to improve both processing speeds and the performance of kinematic reconstruction and particle identification. Other stages of production for which machine learning methods are being explored include simulation and calibration, albeit with lower priority. These developments could significantly alter the Hyper-K computing needs, introducing a need for GPU-based jobs within the computing workflow and resource allocations. These innovations could drastically reduce processing times, though it is expected that some level of likelihood-based processing would remain in conjunction with the new methods, at least for validation purposes. While these considerations are under internal discussion, they are not included in the current computing projections. As the timeline and performance metrics become more refined, we plan to update the computing forecasts accordingly. The current computing estimates focus solely on the needs concerning the storage and production of real data and MC simulations. Internal discussions between the computing and analysis groups are underway to understand the computing requirements of the different high-level analysis frameworks. As neutrino physics enters a systematically dominated era, statistical analysis in increasingly high-dimensional space can no longer be performed on small institute clusters. Physics groups are instead turning to HPC clusters, with many of these analysis frameworks utilising GPU acceleration.
As the data grows in size and the complexity of these analyses increases, resource limitations can lead to the need to introduce approximations that improve speed but decrease accuracy, or to compromise on which studies can be performed. Preparing results for significant conferences can also impose strict deadlines, resulting in large spikes in demand. Integrating analysis level tasks into the computing framework can help to prioritise them at the expense of less urgent tasks, and including the computing requirements in negotiations with computing and funding bodies can help to better plan for and guarantee these needs. In future iterations, the computing projections and workflows for computationally demanding analysis will be presented alongside the needs of real data and MC simulation. ## 3 The Hyper-K Computing model Hyper-K adopts a tiered system for computing, similar to that of the Worldwide LHC Computing Grid (WLCG), and benefits from much of its infrastructure and tools. To meet specific requirements of Hyper-K jobs and data management, we utilize community-based tools which we integrate with custom Hyper-K software. This is discussed in Section 3.1, while the tier definitions are outlined in Section 3.2. ### Computing Tools, Production Workflow and Data Management #### 3.1.1 Distributed Computing with GridPP DIRAC The DIRAC (Distributed Infrastructure with Remote Agent Control) project [10] is a software framework that interfaces users with computing resources. It offers a pilot-based Workload Management System (WMS) for job submission, which can connect to grid, cloud and batch systems. DIRAC also provides a Data Management System (DMS) for cataloging file replication and metadata in the Dirac File Catalogue (DFC), along with the associated end-user tools to manage this. Hyper-K uses both of these services, provided by the GridPP instance of DIRAC hosted at Imperial College London [13]. This is a multi-virtual organisation (multi-VO) service that Hyper-K accesses through the hyperk.org VO. All T1 and T2 Hyper-K storage is managed through the DIRAC DMS and DFC. Efficient data transfer between sites is possible through third party services such as FTS3. For bulk data operations Hyper-K uses FTS3 with the Dirac Request Management System (RMS), which provides a scheduling and monitoring service. At the time of writing, Hyper-K uses DIRAC for job submission to access both allocated and opportunistic resources at UK (RAL, ICL and other T2 sites), Italian (INFN) and French (IN2P3) grid sites. Through the JENNIFER2 project (funded by the Horizon 2020 programme3) and collaboration between the T2K, Hyper-K and Belle II [14] experiments, Hyper-K connected cloud resources hosted by INFN and IN2P3 to the GridPP DIRAC instance using the virtual machine manager VCYCLE [15] to generate DIRAC pilot based VMs. Following from this work, a cloud demonstrator was deployed that integrates token based authentication in the Openstack module of VCYCLE, allowing European Grid Infrastructure (EGI) cloud resources to be accessed by DIRAC [16]. This demonstrates the versatile nature of DIRAC, facilitating the integration of cloud resources into the central pool for Hyper-K resources. Footnote 3: The JENNIFER2 project is funded under the Horizon2020 program of the European Union as a Marie Sklodowska Curie Action of the RISE program: MSCA-RISE-2018 call, project JENNIFER2, GA 822070.
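As a hedged illustration (not Hyper-K's actual production tooling), the sketch below shows how a single job could be defined and submitted through the python-based DIRAC API that the Hyper-K computing tools wrap (see Section 3.1.3); the job name, executable and sandbox file names are placeholders.

```python
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("hk_mc_example")                      # placeholder job name
job.setExecutable("run_simulation.sh")            # placeholder wrapper script
job.setInputSandbox(["run_simulation.sh", "config.mac"])
job.setOutputSandbox(["std.out", "std.err"])

result = Dirac().submitJob(job)                   # returns a dict containing the DIRAC job ID
print(result)
```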
#### 3.1.2 Software Distribution All Hyper-K production is done with version controlled containers, which ensures consistent and reproducible results. Hyper-K uses the CERN Virtual Machine File System (CVMFS) [17] configured for EGI and hosted by RAL (UK), where these containers are distributed as singularity sandboxes. #### 3.1.3 Hyper-K Computing tools The Hyper-K computing tools are written in python and designed to be platform-independent, such that a coherent set of tools and job definitions can be used across multiple resources, with a different wrapper written for each type of backend. For submission to DIRAC-linked sites, the tools utilize the python-based DIRAC API. Similarly, for all DFC data management, the Hyper-K tools integrate the DIRAC API to make use of the DMS and RMS tools for both single and bulk file operations and metadata management. The DIRAC software may be installed locally, accessed on CVMFS or containerised with the Hyper-K tools, such that the minimum requirement from a site that wishes to use the full extent of the Hyper-K computing package is that it has opened the necessary ports to access the GridPP DIRAC server4. Footnote 4: Though note that access to the DIRAC server is not necessary to use all aspects of the Hyper-K computing tools. ### Computing infrastructure and Tiers #### 3.2.1 Grid/DIRAC Tiers This section outlines the Hyper-K tiers that are connected to the hyperk.org VO through the GridPP instance of DIRAC. While these are primarily grid resources, they may also include cloud and batch systems, as mentioned in Section 3.1.1. **Tier-0 (T0)** sites are where raw data from the detectors is initially archived. This is the KEK Central Computer System (KEKCC) for the near detectors, and a dedicated computing system at the Kamioka Observatory for the far detector; both are based in Japan, close to the associated detector sites. Raw data processing of each detector should be performed at the corresponding T0 site, although DIRAC-linked resources may be used as an over-spill if required, e.g. if a large amount of data needs to be reprocessed at short notice. It is planned that 3,000 cores will be dedicated to this purpose at the Kamioka Observatory. This is around double what is estimated for real-time processing of the HKFD, to allow for any reprocessing needs. For the near detectors, the CPU required for data processing is a small fraction of their MC production CPU needs. However, to ensure data processing and reprocessing can be performed in a timely manner and at short notice, the number of cores allocated is based on being able to reprocess a year's worth of beam data in around 2 months. This will initially be set at around 1,000 cores, with the requirement increasing as more data is collected. When these machines are not in use for data processing, these resources can be allocated for other purposes such as calibration or MC production. **Tier-1 (T1)** sites will hold copies of the raw and processed data, as well as MC production batches. They will also generate and process most of the MC simulations. The countries that are Tier-1 during the construction phase, and intend to maintain this status throughout operation, are France, Italy and the UK. **Tier-2 (T2)** sites provide disk storage that is mostly utilised in a temporary manner based on demand.
T2 sites can support the MC production campaigns with CPU, or may be used for specific tasks, especially in instances where the disk is utilized such that jobs can access the files locally. Some or all of the countries that are T1 will also provide resources at T2 sites. Other countries that are looking to secure grid resources for Hyper-K are Canada, Japan, Poland, Sweden and Switzerland. #### 3.2.2 Standalone Tiers The Hyper-K computing model is not confined to DIRAC-linked resources; some countries will contribute 'standalone' computing resources such as local HTC and HPC clusters, cloud resources, and client-server storage solutions5. Footnote 5: Local clusters have the potential to be connected to DIRAC via SSH, but most will likely be treated as standalone. Cloud resources may also be added to DIRAC; whether Hyper-K connects them to DIRAC or treats them as standalone will depend on both the purpose of these resources and any technical considerations for a given site. **Standalone Compute-1 (SC1)** sites provide CPU/GPU at a similar level of service as T1; an example is the Kamioka Observatory computing system (which will provide both SC1-CPU and T0-Storage services), or the resources allocated to Hyper-K through The Digital Research Alliance of Canada. **Standalone Compute-2 (SC2)** resources are defined as sites that commit to providing the resources and person power to manage a specific computing service task; these are typically institute clusters, and the Hyper-K computing group does not interact directly with these resources, only with the collaborator assigned to the task. **Standalone Storage-3 (SS3)** storage is any non-DIRAC/non-grid storage that is accessible to all Hyper-K collaborators. It may be used for analysis files or general-purpose file sharing. The KCL (UK) hosted Nextcloud service is an example of this. ## 4 Summary The Hyper-Kamiokande experiment has a rich and diverse physics programme, enabled by its suite of specialised, yet multi-purpose, detectors. This gives rise to substantial computational and storage demands, met by significant infrastructure and pledges contributed by France, Italy, Japan, and the UK. To coherently manage these contributions, and to incorporate a heterogeneous pool of resources from additional sites and platforms going forward, a centralised, efficient, and well-coordinated computing model is essential. This is supported and realised by a combination of community software and services, notably DIRAC and the services provided by GridPP, and dedicated Hyper-K computing tools. As a result, the Hyper-K computing group can enable measurements based on the latest data to be published in an efficient and timely manner, while ensuring the long-term preservation of data and metadata. ## 5 Acknowledgements The author would like to thank the computing services and support provided by the following. The Digital Research Alliance of Canada, GridPP, INFN-CNAF, the Centre de Calcul de l'IN2P3, KEKCC, and Kamioka Observatory (ICRR, The University of Tokyo) for computing resources and services. The European Union's Horizon 2020 Research and Innovation Programme for funding the JENNIFER2 project that supported part of this work. The Belle II collaboration, in particular S. Pardi, for collaboration on the cloud demonstrator. The GridPP Collaboration and Imperial College London for the GridPP DIRAC services, in particular the support from D. Bauer and S. Fayer. The CVMFS taskforce at UKRI RAL, funded by EGI.
2309.05831
Studying Accuracy of Machine Learning Models Trained on Lab Lifting Data in Solving Real-World Problems Using Wearable Sensors for Workplace Safety
Porting ML models trained on lab data to real-world situations has long been a challenge. This paper discusses porting a lab-trained lifting identification model to the real-world. With performance much lower than on training data, we explored causes of the failure and proposed four potential solutions to increase model performance
Joseph Bertrand, Nick Griffey, Ming-Lun Lu, Rashmi Jha
2023-09-11T21:17:10Z
http://arxiv.org/abs/2309.05831v1
Studying Accuracy of Machine Learning Models Trained on Lab Lifting Data in Solving Real-World Problems Using Wearable Sensors for Workplace Safety ###### Abstract Back pain can be a leading cause of occupation-induced disability and results in large amounts of lost productivity annually [1]. Most occupational back pain is caused by roles requiring repetitive tasks such as heavy lifting of objects. There are many ergonomic guidelines for lifting with the goal of reducing the risk of back pain, but it is difficult for workers to consistently follow these guidelines in all situations. Many may be unable to lift in the required manner due to poor workstation designs or physical limitations [2]. Monitoring a lifting workstation can ensure compliance with safe lifting techniques as well as aid in determining if workers are consistently lifting in an unsafe manner to perform the work the job requires. However, this type of monitoring requires significant overhead and doesn't provide sufficient data to create an objective methodology to assess risk. An automated system would allow users to classify risk with minimal overhead and provide real-time feedback to workers in an attempt to reduce long-term risk. Both lift classification and lift detection are forms of human activity recognition (HAR), which has been extensively studied in machine learning under various sensor modalities [3]. HAR systems have been successful in video-based deployments [4]. However, automatic lift assessment would require cameras to be placed in every lift location, which is prohibitively expensive and raises concerns for the privacy of the workers. Furthermore, it is impractical for temporary workstations such as construction zones and contract work. In contrast, Inertial Measurement Unit (IMU) sensors are wearable, can detect motion regardless of workplace location, and don't have the privacy concerns associated with cameras. IMU sensors are widespread in common consumer devices, such as smart watches and phones, and therefore provide a significant advantage for HAR problems. Previous work was done to develop a model that could identify a lifting event from a subset of laboratory-gathered data with an F1 score of 97% [5]. F1 is the harmonic mean of precision and recall and is commonly used to evaluate binary classifiers. The evaluation data was randomly sampled using K-fold Cross Validation. The model trained was able to reliably identify lifting and non-lifting events from data gathered under the same conditions as its training data. However, when this model was applied to a real-world environment (data collection in the lab and the real world is discussed in section II), the F1 score dropped to 32.8%, showing that while the model could reliably identify lifting events within a dataset environment, it failed to identify more general lifting events. Poor generalization performance is not uncommon, and there is a significant amount of general guidance published [6]. However, most of this work pertains to datasets where the capture device used does not significantly affect the data. For example, when creating an object recognition model from images, the camera settings do not majorly change the underlying dataset, since all reasonable cameras produce clear images, either with or without the relevant recognition object. However, in HAR models, the location of the IMU sensor majorly changes the data, and it is not currently possible to standardize IMU data from sensors placed in different locations on the body.
This makes it unfeasible to use some common techniques for improving model performance, such as transfer learning and generalizing training data from public datasets. Therefore, adjustments to previous techniques were developed to make them suitable for HAR model applications. ## II Background ### _Data Collection_ To gather data, each subject had six IMU sensors attached to various locations on their body. One sensor on each wrist, one on the upper-right thigh, one on the upper back, one on the upper-right arm, and one on the waist (figure 1). Each IMU sensor gathered accelerometer and gyroscopic data at a sampling rate of 25hz. #### Ii-A1 Lab Data Lab data was gathered in Phase 1 & 2 of the NIOSH research (IRB approved study protocol: 16-DART-05XP). Therefore, it may be referred to as phase 1&2 data throughout this paper. To gather the data, each subject started in the center of a room. They would then walk forward and pick up a small, wired box (this denoted the start of the lift). They would then turn and walk with the box across the room and set the box down on a table. Setting the box down denoted the end of the lifting event. After they set the box down, they would turn and walk back to the center of the room, where the trial would stop. NIOSH recorded videos of each trial and used a motion capture system (MoCap) to accurately track the location of the box. The start and end lift times were determined from the motion capture system. To identify the start and stop lift time for training and testing the IMU sensor model, code was written to synchronize the IMU data with the MoCap data. The time of the lift, in hh:MM:ss:ms format, was stored, from MoCap data, as metadata for each lift trial and each IMU sensor data point had a Unix Epoch timestamp associated with it. To synchronize the two, code was written to extract the time of day from the Unix timestamp, then using the time of day from the metadata, and the knowledge that the sampling rate of the IMUs was 25HZ, we could calculate which IMU frame the lift started and ended. The calculations and code were manually verified. #### I-A2 Real-World Data Real-world data was gathered in Phase 3 of the NIOSH research (IRB approved study protocol: 16-DART-05XP). Therefore, it may be referred to as phase 3 data throughout this paper. Phase 3 data was gathered in the field at a light-industrial plant. Subjects wore the sensors in the same arrangements as they did in the phase 1&2 data. Two Sony Handycam digital cameras were placed at two angles of view (side and front view) of the subject to allow the NIOSH team to review and manually label the beginning and ending of each lift. The labeled lifts were used to assess the model in the evaluation stage (See section V). There were some points where certain sensors were non-functional. That data was removed from the dataset to maintain consistency with the expected six-sensor data format. To perform model evaluation, the team sampled a small segment of phase 3 data and used subjective analysis to determine if the model was performing well. As model development continued, the team decided to go back and label the phase 3 dataset so models could be compared objectively. After balancing the dataset there were approximately 950 seconds of data used for evaluation. 
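The sketch below illustrates the timestamp synchronization described in the Lab Data subsection above: the MoCap lift time (hh:MM:ss:ms, time of day) is matched to an IMU frame index using the Unix epoch timestamp of the first IMU sample and the 25 Hz sampling rate. The variable names, and the assumption that both clocks refer to the same day and time zone, are ours for illustration.

```python
from datetime import datetime

SAMPLING_HZ = 25

def lift_frame_index(lift_time_str, first_imu_epoch_s):
    """lift_time_str: 'hh:MM:ss:ms' from the MoCap metadata.
    first_imu_epoch_s: Unix epoch timestamp (seconds) of the first IMU sample."""
    h, m, s, ms = (int(x) for x in lift_time_str.split(":"))
    lift_seconds = h * 3600 + m * 60 + s + ms / 1000.0
    start = datetime.fromtimestamp(first_imu_epoch_s)          # local time of day assumed
    imu_start_seconds = (start.hour * 3600 + start.minute * 60
                         + start.second + start.microsecond / 1e6)
    return round((lift_seconds - imu_start_seconds) * SAMPLING_HZ)
```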
### _Identification Model Training_ For the purposes of this paper, the beginning of the lifting motion (BOL) was defined by when the object that the subject was lifting began to move, and the end of the lifting motion (EOL) was defined by when the subject had fully lifted the object and had either placed it down or brought it close to their body. We defined lifting with these parameters because the majority of the risk associated with the lifting motion is within these two time intervals [7]. The lifting labeling in the Phase 1&2 training data differed slightly from this definition. For the NIOSH labeling, two researchers reviewed the trials and identified the frame number where the grid started to move (Beginning of Lift) and where the grid was set down completely by two hands (End of Lift). If there was any doubt, the researchers would discuss and reach an agreement on the final frame [8]. In order to fit the NIOSH labeling to the definition of lifting used in this project, the EOL was adjusted to be 1.52 seconds after the BOL. This time was chosen by reviewing the lifting videos and finding the average time it took for a subject to complete the core lifting motion. For the evaluation dataset, the beginning of lift and end of lift labels provided by NIOSH matched our lifting definition very closely, so no adjustments were necessary. To train the lift identification model, the phase 1&2 dataset was divided equally into two categories, lifting events and non-lifting events. Since the raw dataset contained more non-lifting events than lifting events, the dataset was balanced randomly to include roughly the same number of both classes. The lifting event was a 1.2 second window, starting from the beginning of lift index provided by NIOSH. 1.2 seconds was the approximate average lifting time based on examining videos of the subject's lifts.
Fig. 1: Placement of IMU sensors on subjects
Everything outside that 1.2 second window was considered to be a non-lifting event. There were also several files provided which contained no lifting events, and instead contained events such as walking, sitting, or jumping. Those files were also considered to be non-lifting events. Once a balanced dataset was generated, the model was trained using Keras's .fit method. The hyperparameters batch size, feature length, epochs, and validation split were varied in an attempt to find the combination that provided the best performance. A total of 3456 models were trained, using a combination of various predefined hyperparameter combinations. All models had the same network architecture, consisting of an input layer, followed by an LSTM layer of width 128, followed by two dense layers of width 5, and then a dense output layer of size 1 to provide the final result. This model architecture was evaluated on phase 1&2 data and had performance parity with previous work [5]. Each model was then evaluated on phase 3 data. The evaluation consisted of obtaining the accuracy, F1, and confusion matrix from each model. The results were cataloged to create the results data seen below in the results section. ## III Challenge Points ### _Data Investigation_ #### Iii-A1 Data Offset Results from the base lab-trained model were promising and produced a very high accuracy [5]. When it was discovered the model performed poorly on real-world data, the first step was to perform a detailed examination of the training data (Phase 1&2).
It was discovered that several subjects had systematic offsets between the IMU time and the labeling time. This caused the predictions to be delayed by up to several seconds (see figures 2 and 3). Upon investigation, it appears that the IMU clock drifts out of sync with the camera clock over time. The degree of offset varies from subject to subject, but typically is consistent across trials that occurred back-to-back. This error was less pronounced on the original lab-trained model, likely because the size of the sliding window was 25 frames (1 second) [5]. However, when shrinking the size of the sliding window, the issue becomes much more pronounced. The incorrect offsets were adjusted manually and trials without the necessary information to fix manually were removed from the dataset. The model was retrained on Phase 1&2 and re-evaluated on Phase 3 data. The model showed a noticeable improvement in evaluation and the optimal size of the sliding window changed from 25 frames to 10 frames.
Fig. 2: Typical lifting predictions on Phase 1&2 data
Fig. 3: Offset predictions on Phase 1&2 data. The model is consistently and systematically predicting several seconds too early, implying a fault in the dataset.
#### Iii-A2 Incorrect Sensor Placement Further into the dataset investigation process it was discovered that the back sensor had been incorrectly placed on the subjects' back in several phase 1&2 trials. Fortunately, the incorrect orientation was consistent, meaning that we could use data from other sensors to infer if the sensor was incorrectly placed and restructure the data mathematically to emulate the correct sensor placement. The resulting sensor data was compared against a correctly placed sensor on the same subject performing a similar motion and was confirmed to be effective. Typically, it would be advisable to remove any abnormal data. However, given our very small dataset we decided to invest time in fixing the data instead of removing it. ### _Non-Representative Training Data_ Another challenge we faced porting this model onto real-world data was going from specific laboratory-controlled movement to general real-world movement. The Phase 1&2 non-lifting dataset consisted primarily of sitting, standing, walking, and jumping. In the real world, workers are performing much more complex non-lifting motions. This causes the model to struggle to differentiate between these complex movements and a lifting event. For example, the model frequently predicts a lifting event when a worker is utilizing a control panel. Machine learning models learn their classifiers by identifying features that exist when the classifier is present, but don't exist when the classifier is not. If the training data is not fully representative of the deployment environment, the model may assume certain feature combinations always correlate with the classifier when in fact they do not. In our situation, it seems the model highly correlates lifting with a subject having their arms extended out in front of them. This is a reasonable and true correlation in Phase 1&2 but is not true in the more general Phase 3 data. ### _Poor Model Generalization_ As mentioned before, due to our relatively small and controlled training dataset, our model struggled to interpret the complex motions in phase 3 data. One technique we used to understand _how_ the model interpreted the data was saliency mapping. Typically, neural network models are thought of as "black boxes", where input data is fed into the model and outputs are generated.
The user has no concept of how the model processes the input data to provide the outputs. This makes it very challenging to debug poorly performing models. Saliency mapping is a technique which uses the weights between neurons as well as an input to create a heatmap of the input features that have the most significance on the output [9]. Historically, saliency maps are applied to images where the heatmap can be overlaid on the input image. However, saliency maps can be used on non-image data as well. For this project, we applied a saliency map to our IMU data. See figure 4. Each row is a 'frame' in our dataset (representing 1/25th of a second) and each column is a feature from an IMU sensor. The darker squares represent less contribution, while the lighter squares represent more. As we can see in figure 4, this model uses a single point in our input data as a major indicator of whether a lifting event is occurring or not. Unless we expect a single feature to effectively represent our class, this is typically a bad sign as it implies the model did not generalize well. Here, we can see that the upper arm gyroscope feature is the especially salient feature in our input data. Seeing this should raise a key question "Why is this such a salient feature in our training dataset?" Upon further investigation, we discovered that the only time a subject's hands are out in front of them is when they are lifting. Therefore, using the arm gyro is very effective on the lab data, but as we know, there are frequently times in real-world situations where the arms are extended in front of the body that aren't lifting movements (sitting at a desk, using machinery, etc.). There are several effective ways to resolve this issue. The first and most intuitive is to simply gather additional training data to address this edge case. Unfortunately, gathering additional test data was not financially feasible for this project, so we had to resort to more creative methods. ## IV Methodology With these three challenge-areas identified we investigated four potential solutions to resolve these issues. ### _Synthetic IMU Data_ One idea was to create additional data using lifting data gathered in other public studies. This was originally deemed not feasible as there is no standard placement of IMU sensors. Therefore, the data from other studies utilizing IMUs would not match up with the data we gathered for our study. However, many studies utilize Motion Capture data (mocap) to capture the movements of subjects. The AMASS dataset [10] is a unified database of 15 MoCap datasets containing over 40 hours of motion data. BABEL [11] builds on the AMASS dataset by labeling frames of each sequence with the action occurring. Additionally, [12] outlines a methodology to synthesize IMU data from the AMASS dataset by placing virtual IMU sensors on the mesh surface. The combination of frame-by-frame data and synthetic IMU data generation makes this dataset a feasible avenue for incorporating additional data. ### _Transfer Learning_ Another idea was to use transfer learning. Transfer learning is where a model is trained on a large, generalized dataset relating to its final classifier, then is specifically trained on a small amount of data. A common example of this is a voice assistant that responds only to a specific person. The voice assistant is trained on general voice recognition, then later uses a few samples of the user's specific voice to identify that voice uniquely. 
This idea doesn't work well for IMU data because there are no standard IMU placement locations. Models and datasets that are publicly available do not match our IMU placements, and therefore are not fit for transfer learning in the traditional sense. However, we can still implement a small transfer learning style procedure in this model. Instead of using external data to augment our dataset, we use our lab dataset as the "general" dataset, then a few examples of lifting and non-lifting from a real-world environment. This solution wouldn't fix the lapses in the lab dataset, but it would help the model adjust to any personal bias in lifting technique. ### _Filtering_ Filtering data (or preprocessing) is a common machine learning technique. The idea here was that filtering the data would mask the gaps in our training set and force the model to learn more generally. To do this, we tested two types of filters: Extended Kalman filters and Mahony filters. Extended Kalman filters (EKFs) are a modification of standard Kalman filters, both of which have seen widespread use in IMUs. Both operate fundamentally the same way. Kalman filters first "predicts" the current state of the sensor using a state transition model, then "updates" using a sensor reading, derived from an observation model. This predict-update cycle allows for handling of sensor noise. In particular, it can assist in the handling of gyroscope drift. EKFs differ from conventional Kalman filters by using non-linear, differentiable functions as part of the model, rather than linear models. An EKF filter is readily available in the AHRS Python library. Mahony filters (available in the same library) were also tested, despite being less widely used, for a more comprehensive evaluation of the performance of filtering. ### _Sensor Removal_ Removing sensors from the dataset seems very un-intuitive. We'd expect that providing the model with more data would allow it to identify correlations that we humans may not see, and while that can be true, in our situation, it actually ends up hurting performance more than helping. With a small dataset, we found that removing features not deemed salient to lifting (by referencing NIOSH equation [7]) improved performance significantly. This process must be done carefully, as removing data limits the capabilities of our model. In our situation, our baseline accuracy was not acceptable for production deployment, so we were willing to sacrifice some sensitivity to gain specificity. It's most important to identify high-risk lifts and since high-risk lifts often have pronounced trunk flexion, we created a set of sensors we felt best captured the motions of high-risk lifting. Models were retrained on their specific sets of sensors and reevaluated on the entire real-world dataset (not just high-risk events). ## V Results and Discussion All results are from the model evaluated on Phase 3 data. The 'Evaluation data' is a subset of the Phase 3 data. ### _Synthetic IMU Data_ The authors of [12] have provided their tooling for generating synthetic IMU data. However, while the tooling provides a simulation of accelerometer data, gyroscopic data is provided as orientations, requiring modifications to provide angular velocities as in the NIOSH data set. Gyroscopic data is vital in capturing key lifting characteristics; therefore, reconciling the two formats is essential for incorporating additional data. 
Unfortunately, the necessary modifications proved too far outside our area of experience to incorporate any synthetic data into our training set. ### _Transfer Learning_ Our transfer learning strategy did not produce a significant improvement in accuracy or F1. Typically, transfer learning models are trained on much more data ahead of time and have very good performance on general class detection/identification ahead of the transfer learning stage. In our process, the model did not have a general understanding of what a lifting event was, so finishing training with a small sample of real-world data did not improve performance because the model was already poorly generalized. Simply put, transfer learning works best from taking a powerful generalized model and applying it to a specific sub-set of its original purpose (i.e., written digit recognition). Our model was not a well-generalized model, so our specific transfer training was unsuccessful. ### _Filtering_ See Figure 5. It was found that both EKF and Mahony filters resulted in slightly lower median accuracy and F1 scores on training data, and notably lower median accuracy and F1 scores on evaluation data. Furthermore, Figure 6 shows that even the models with the best F1 scores with filtering applied are outperformed by the median model with no filtering. It was noted in [5] that filtering the NIOSH data set was considered but ultimately not used due to both better performance with raw data and reduced pre-processing time; although it is unclear what filtering methods were evaluated, the results we have obtained seem to align with these preliminary findings. One additional source of reduced performance from the EKF filter may be from un-optimized filter parameters. Kalman filters incorporate noise variances of the sensors. The default noise levels of the AHRS library were used in testing due to unfamiliarity with the details of the sensors in the NIOSH data set. Similarly, default values were used for gain parameters of the Mahony filter. Fig. 4: IMU Saliency Map. Lighter colors: higher saliency, darker colors: lower saliency. Columns: IMU channels (gyro or accelerometer) and rows: time steps (1/25th of a second) ### _Sensor Removal_ Removing sensor channels produced excellent results. The best model trained with only wrist and back sensors had an F1 score 8% higher than the top performing model trained with all sensors. See figure 7 for a breakdown of the different core channel combinations and their associated metrics. Hiding data from a Machine Learning model typically does not produce better results, however in this situation it did because the sensors we removed were the _least_ representative of real-world lifting motions. In real-world lifting environments, subjects are constantly moving their arms around as part of the normal working environment, however in our training data, non-lifting working motions were not included in the dataset. Therefore, when the model learned our lifting class, it used features in the other sensors that were not unique to lifting generally, but were in our training set. Removing these sensor channels forced the model to learn a more generalized definition of a lifting event from the remaining sensors. HAR IMU models have come a long way since their original development. We can design models to identify repetitive consistent laboratory-like data reliably using multiple different model architectures. 
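For reference, the snippet below is a hedged sketch of how the EKF and Mahony filters from the AHRS Python library could be applied, with the default parameters as in the experiments above, to one sensor's gyroscope and accelerometer streams; the array and file names are placeholders.

```python
import numpy as np
from ahrs.filters import EKF, Mahony

gyr = np.load("upper_arm_gyro.npy")   # (N, 3) angular rates in rad/s, placeholder file
acc = np.load("upper_arm_acc.npy")    # (N, 3) accelerations in m/s^2, placeholder file

ekf_attitudes = EKF(gyr=gyr, acc=acc, frequency=25.0).Q        # (N, 4) orientation quaternions
mahony_attitudes = Mahony(gyr=gyr, acc=acc, frequency=25.0).Q  # (N, 4) orientation quaternions
```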
However, the vision of utilizing a laboratory trained lifting model on real-world data is still largely out of reach. Non-standard IMU placements cause limited datasets which make it challenging to incorporate all reasonable movements in training. The authors of [12] have successfully emulated IMU sensor data from motion-capture data by placing virtual sensors on models, which could significantly expand the reach of future training data sets. Furthermore, the authors have provided their tooling for generating synthetic IMU data, although modifications may be necessary to reconcile differences between synthetic and real-world data sets. However, even with limited training data, we are still able to achieve significant insight with saliency mapping techniques. We were able to apply saliency mapping to our IMU model and see that the model was fixating on a feature not vitally significant to lifting. From this learning, we were able to target our solutions to resolve this challenge and achieved significant results for our efforts. Our baseline balanced Phase 3 F1 was 32.8%. By perfecting training data, resolving sensor issues, and performing a targeted removal of non-salient sensors, we were able to increase our typical F1 scores up to 55%, with our top model nearly doubling our baseline with a F1 of 64.66% Unfortunately, even with these substantial increases to performance, we still fall short of the 90+% necessary for deployment. However, with more work in synthetic data future work could improve performance enough to make this solution viable for deployment. Fig. 5: Results from filtering. No filtering provides the highest accuracy and F1 scores on the evaluation dataset Fig. 6: Heatmap results from filtering. There are no models that perform exceptionally better than the mean for any given filter technique Fig. 7: Results from removing sensors. Using only the wrist and back sensors provided the best F1 and accuracy scores on the Phase 3 dataset, showing significant improvement over training with all sensors ## VI Conclusion Outlined below are three key research considerations informed from this project: 1. Training Data is very important * Thoroughly ensure the accuracy of your training set and insure it is representative of your deployment environment. If the dataset isn't representative, it will be very challenging to develop a successful model. * Researching ways to augment datasets: Larger training sets, as long as they're representative, will help your model generalize better. Augmenting data with synthetic data or publicly available datasets is a powerful way to increase your model's robustness. 2. Studying model's saliency: Having a sense of how your model interprets data is vital to understanding how it will behave in new environments. It can also inform modifications to make ahead of time to improve the chances of success. 3. Considering transfer learning: Transfer learning is a great way to adjust a model slightly to remove user bias. While transfer learning was unsuccessful on this particular project, transfer learning can provide a "personal touch" to a model that can significantly improve its reliability on a specific user. ## VII Acknowledgments This work was funded by CDC/NIOSH. The authors would like to thank Marie Hayden, Menekse Barim and Dwight Werren for their support in creating the datasets. 
Findings and conclusions in this report are those of the authors and do not necessarily represent the official positions of NIOSH or the Centers for Disease Control and Prevention (CDC). The mention of any company or product does not constitute endorsement by NIOSH or the CDC.
2310.20522
Tight bounds on adjacency labels for monotone graph classes
A class of graphs admits an adjacency labeling scheme of size $b(n)$, if the vertices in each of its $n$-vertex graphs can be assigned binary strings (called labels) of length $b(n)$ so that the adjacency of two vertices can be determined solely from their labels. We give tight bounds on the size of adjacency labels for every family of monotone (i.e., subgraph-closed) classes with a well-behaved growth function between $2^{O(n \log n)}$ and $2^{O(n^{2-\delta})}$ for any $\delta > 0$. Specifically, we show that for any function $f: \mathbb N \to \mathbb R$ satisfying $\log n \leqslant f(n) \leqslant n^{1-\delta}$ for any fixed $\delta > 0$, and some~sub-multiplicativity condition, there are monotone graph classes with growth $2^{O(nf(n))}$ that do not admit adjacency labels of size at most $f(n) \log n$. On the other hand, any such class does admit adjacency labels of size $O(f(n)\log n)$. Surprisingly this tight bound is a $\Theta(\log n)$ factor away from the information-theoretic bound of $\Omega(f(n))$. The special case when $f = \log$ implies that the recently-refuted Implicit Graph Conjecture [Hatami and Hatami, FOCS 2022] also fails within monotone classes. We further show that the Implicit Graph Conjecture holds for all monotone \emph{small} classes. In other words, any monotone class with growth rate at most $n!\,c^n$ for some constant $c>0$, admits adjacency labels of information-theoretic order optimal size. In fact, we show a more general result that is of independent interest: any monotone small class of graphs has bounded degeneracy.We conjecture that the Implicit Graph Conjecture holds for all hereditary small classes.
Édouard Bonnet, Julien Duron, John Sylvester, Viktor Zamaraev, Maksim Zhukovskii
2023-10-31T15:00:42Z
http://arxiv.org/abs/2310.20522v3
# Tight Bounds on Adjacency Labels for Monotone Graph Classes ###### Abstract A class of graphs admits an adjacency labeling scheme of size \(b(n)\), if the vertices in each of its \(n\)-vertex graphs can be assigned binary strings (called labels) of length \(b(n)\) so that the adjacency of two vertices can be determined solely from their labels. We give tight bounds on the size of adjacency labels for every family of monotone (i.e., subgraph-closed) classes with a well-behaved growth function between \(2^{O(n\log n)}\) and \(2^{O(n^{2-\delta})}\) for any \(\delta>0\). Specifically, we show that for any function \(f:\mathbb{N}\to\mathbb{R}\) satisfying \(\log n\leqslant f(n)\leqslant n^{1-\delta}\) for any fixed \(\delta>0\), and some sub-multiplicativity condition, there are monotone graph classes with growth \(2^{O(nf(n))}\) that do not admit adjacency labels of size at most \(f(n)\log n\). On the other hand, any such class does admit adjacency labels of size \(O(f(n)\log n)\). Surprisingly, this tight bound is a \(\Theta(\log n)\) factor away from the information-theoretic bound of \(\Omega(f(n))\). The special case when \(f=\log\) implies that the recently-refuted Implicit Graph Conjecture [Hatami and Hatami, FOCS 2022] also fails within monotone classes. ## Introduction A _class_ of graphs is a set of graphs which is closed under isomorphism. For a class of graphs \(\mathcal{X}\) we denote by \(\mathcal{X}_{n}\) the set of graphs in \(\mathcal{X}\) with vertex set \([n]\). The function \(n\mapsto|\mathcal{X}_{n}|\) is called the _speed_ of \(\mathcal{X}\). A _coding_ of graphs is a representation of graphs by words in the binary alphabet \(\{0,1\}\). One of the main considerations with graph representations is their succinctness; clearly, any representation of \(n\)-vertex graphs in a class \(\mathcal{X}\) would require at least \(\lceil\log|\mathcal{X}_{n}|\rceil\) bits for some graphs in \(\mathcal{X}_{n}\). Another consideration is whether the representation is global or local. Standard graph representations, such as adjacency matrix or adjacency lists, are examples of _global_ representations, where a graph is stored in a single data structure that needs to be accessed in order to query some information about the graph, e.g., adjacency between a pair of vertices. By contrast, in _local_ graph representations, the encoding of a graph is distributed over its vertices in such a way that the queries can be answered by looking only into the local information associated with the vertices involved in the query. In this work we are concerned with local graph representations for adjacency queries, i.e., queries that given two vertices answer whether they are adjacent or not. Let \(\mathcal{X}\) be a class of graphs and \(b:\mathbb{N}\to\mathbb{N}\) be a function. A _\(b(n)\)-bit adjacency labeling scheme_ (or simply _\(b(n)\)-bit labeling scheme_) for \(\mathcal{X}\) is a pair (encoder, decoder) of algorithms where for any \(n\)-vertex graph \(G\in\mathcal{X}_{n}\) the encoder assigns binary strings, called _labels_, of length \(b(n)\) to the vertices of \(G\) such that the adjacency between any pair of vertices can be inferred by the decoder only from their labels. We note that the decoder depends on the class \(\mathcal{X}\), but not on the graph \(G\). The function \(b(\cdot)\) is the _size_ of the labeling scheme. Adjacency labeling schemes were introduced by Kannan, Naor, and Rudich [12, 13], and independently by Muller [14] in the late 1980s and have been actively studied since then.
Adjacency labeling schemes are closely related to induced universal graphs, which we will refer to simply as universal graphs. For a function \(u:\mathbb{N}\to\mathbb{N}\), a _universal graph sequence_ or simply _universal graph_ of size \(u(n)\) is a sequence of graphs \((U_{n})_{n\in\mathbb{N}}\) such that for every \(n\in\mathbb{N}\) the graph \(U_{n}\) has at most \(u(n)\) vertices and every \(n\)-vertex graph in \(\mathcal{X}\) is an induced subgraph of \(U_{n}\). It was observed in [12] that for a class of graphs the existence of a \(b(n)\)-bit labeling scheme is equivalent to the existence of a universal graph of size \(2^{b(n)}\). The binary word, obtained by concatenating labels of the vertices of a graph \(G\in\mathcal{X}_{n}\) assigned by an adjacency labeling scheme, uniquely determines graph \(G\). Thus, a \(b(n)\)-bit labeling scheme cannot represent more than \(2^{nb(n)}\) graphs on \(n\) vertices, and therefore, if \(\mathcal{X}\) admits a \(b(n)\)-bit labeling scheme, then \(|\mathcal{X}_{n}|\leqslant 2^{nb(n)}\). This implies a lower bound of \(\frac{\log|\mathcal{X}_{n}|}{n}\) on the size \(b(n)\) of any adjacency labeling scheme for \(\mathcal{X}\). A natural and important question is: which classes admit an adjacency labeling scheme of a size that matches this information-theoretic lower bound? We say that a graph class \(\mathcal{X}\) admits an _implicit representation_, if it admits an _order optimal_ adjacency labeling scheme, i.e., if \(\mathcal{X}\) has a \(b(n)\)-bit labeling scheme, where \(b(n)=O(\frac{\log|\mathcal{X}_{n}|}{n})\). Equivalently, \(\mathcal{X}\) admits an implicit representation if \(\mathcal{X}\) has a universal graph of size \(2^{O(\frac{\log|\mathcal{X}_{n}|}{n})}\). For example, the class \(\mathcal{A}\) of all graphs admits an implicit representation, because \(|\mathcal{A}_{n}|=2^{\binom{n}{2}}=2^{\Theta(n^{2})}\) and \(b(n)=O(\frac{\log|\mathcal{A}_{n}|}{n})=O(n)\), and one can easily design an \(O(n)\)-bit labeling scheme for \(\mathcal{A}\), e.g., by assigning to each vertex of a graph an \((n+\lceil\log n\rceil)\)-bit label consisting of the row in an adjacency matrix of the graph corresponding to the vertex and the index of that row. However, not every class admits an implicit representation. The following example is due to Muller [14] (see also [15]). Let \(\mathcal{Y}\) be the class of graphs in which the number of edges does not exceed the number of vertices. It is easy to estimate that \(|\mathcal{Y}_{n}|=2^{O(n\log n)}\). To show that this class does not admit an implicit representation, consider an arbitrary \(n\)-vertex graph \(G\). Obviously, \(G\) does not necessarily belong to \(\mathcal{Y}\), but after adding \(n^{2}-n\) isolated vertices to \(G\), we obtain a graph \(H\) on \(n^{2}\) vertices that belongs to \(\mathcal{Y}\). Now, if an \(O(\log n)\)-bit labeling scheme for \(\mathcal{Y}\) existed, then the \(O(\log n^{2})\)-bit adjacency labels for \(H\) could be used as \(O(\log n)\)-bit adjacency labels for \(G\). Since, \(G\) was chosen arbitrarily, this is in contradiction with the lower bound of \(\frac{\log|\mathcal{A}_{n}|}{n}=\Omega(n)\) on the size of any labeling scheme for the class \(\mathcal{A}\) of all graphs. The crucial property used in the above example is that by adding isolated vertices to a graph not in \(\mathcal{Y}\), one can obtain a graph in \(\mathcal{Y}\). 
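To make the encoder/decoder contract concrete, the following is a minimal sketch (in Python, not part of the paper) of the \((n+\lceil\log n\rceil)\)-bit labeling scheme for the class \(\mathcal{A}\) of all graphs described above: each label stores a vertex's index in some fixed ordering together with its row of the adjacency matrix, and the decoder answers an adjacency query by reading a single bit. The function names and representation choices are ours and serve purely as an illustration of the definition.

```python
import math

def encode(adj):
    """Label each vertex of an n-vertex graph (adjacency matrix `adj`) with
    ceil(log2 n) bits for its index followed by the n bits of its matrix row."""
    n = len(adj)
    idx_bits = max(1, math.ceil(math.log2(n)))
    return [format(v, f"0{idx_bits}b") + "".join(str(adj[v][u]) for u in range(n))
            for v in range(n)]

def decode(label_u, label_v, n):
    """Decide adjacency from the two labels alone; the decoder may depend on the
    class (here, on n) but not on the particular graph."""
    idx_bits = max(1, math.ceil(math.log2(n)))
    v = int(label_v[:idx_bits], 2)   # index of the second vertex
    row_u = label_u[idx_bits:]       # adjacency row stored by the first vertex
    return row_u[v] == "1"

# Usage on a path with 4 vertices: labels have 2 + 4 = 6 bits each.
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
labels = encode(adj)
assert decode(labels[0], labels[1], 4) and not decode(labels[0], labels[2], 4)
```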
Using more familiar terminology, one would say that class \(\mathcal{Y}\) is not _hereditary_, i.e., it is not closed under vertex removal or, equivalently, under taking induced subgraphs. Many natural graph classes (e.g., forests, planar graphs, bipartite graphs, geometric intersection graphs) are hereditary. It turns out that finding a hereditary graph class that does not admit an implicit representation is a non-trivial question. The first instance of this question was asked by Kannan, Naor, and Rudich [14] for _factorial classes_ (i.e., graph classes \(\mathcal{X}\) with the speed \(|\mathcal{X}_{n}|=2^{O(n\log n)}\)), which was later stated by Spinrad [15] in the form of a conjecture, that became known as the _Implicit Graph Conjecture_. \((\mathit{IGC})\): Any hereditary graph class of at most factorial speed admits an \(O(\log n)\)-bit labeling scheme. This question remained open for over 30 years until a recent breakthrough by Hatami and Hatami [13]. They showed that, for any \(\delta>0\), there exists a hereditary factorial class that does not admit a labeling scheme of size \(n^{1/2-\delta}\), which is very far from the information-theoretic lower bound of \(\Omega(\log n)\). This result leaves wide open the question of characterizing factorial hereditary graph classes that admit an implicit representation (see e.g. [13] for more discussion). Factorial hereditary classes form an important family, as many classes of theoretical or practical interest are factorial (e.g., forests, planar graphs, disk graphs, graphs of bounded twin-width). However, as was noted by Spinrad [15], there is nothing that prevents one from considering implicit representability of other hereditary graph classes. Spinrad [15] raised this as the _Generalized Implicit Graph Question_, which we restate using the terminology of our paper as follows. **Question 1** ([15]).: _Which hereditary graph classes admit implicit representations?_ The answer to this question is known for classes with \(|\mathcal{X}_{n}|=2^{\Omega(n^{2})}\), and for _subfactorial_ graph classes, i.e., classes \(\mathcal{X}\) with \(|\mathcal{X}_{n}|=2^{o(n\log n)}\). Indeed, for the latter classes, it is known that they have at most exponential speed, i.e., \(|\mathcal{X}_{n}|=2^{O(n)}\)[1], and also admit \(O(1)\)-bit labeling schemes [16]. For the former classes, the \(O(n)\)-bit labeling scheme mentioned above for the class \(\mathcal{A}\) of all graphs is an order optimal labeling scheme. In fact, in this regime, _asymptotically optimal_ (up to the second-order term) labeling schemes are available. For the class of all graphs, such results (in the language of universal graphs) were available since 1965 [17, 18]. For proper hereditary graph classes \(\mathcal{X}\) with the speed \(2^{\Omega(n^{2})}\), by the Alekseev-Bollobas-Thomason theorem [1, 2], their speed is \(|\mathcal{X}_{n}|=2^{(1-1/k(\mathcal{X}))n^{2}/2+o(n^{2})}\), where \(k(\mathcal{X})\) is an integer greater than 1. Recently, Bonamy, Esperet, Groenland, and Scott showed [1] that all such classes have asymptotically optimal adjacency labeling schemes of size \((1-1/k(\mathcal{X}))n/2+o(n)\). For the classes in the intermediate range, i.e., the classes with the speed between \(2^{\Omega(n\log n)}\) and \(2^{o(n^{2})}\) the picture is much less understood (see Figure 1). Most known information is concentrated on the lower extreme of the range, i.e., around factorial speed, which was promoted by the Implicit Graph Conjecture. 
Factorial graph classes from certain families are known to admit implicit representations: proper minor-closed graph classes [1], graph classes of bounded degeneracy (equivalently, of bounded arboricity) [14], clique-width [13, 15] (see also [1]), and twin-width [1] all admit implicit representations. The only lower bound witnessing (non-constructively) factorial classes that do not admit an implicit representation is the above-mentioned result by Hatami and Hatami [11]. A notable family of hereditary graph classes where Question 1 remains open is the _small_ graph classes, i.e., classes \(\mathcal{X}\) with \(|\mathcal{X}_{n}|\leqslant c^{n}n!\) for some constant \(c\). These classes encompass only the bottom part of the factorial layer and include proper minor-closed classes [13], and more generally, classes of bounded twin-width [1]. However, it is still unknown if all such classes admit an implicit representation (see [1] for more details on implicit representation of small classes). Alon showed [1] that every hereditary graph class \(\mathcal{X}\) with \(|\mathcal{X}_{n}|=2^{o(n^{2})}\) admits an \(n^{1-\delta}\)-bit labeling scheme for some \(\delta>0\). ### Our contribution In this paper, we study Question 1 for _monotone_ graph classes, i.e., graph classes that are closed under taking subgraphs. Monotone graph classes form a subfamily of hereditary graph classes. The following result shows that any monotone class with non-decreasing speed admits a labeling scheme of size at most \(O(\log n)\) away from the information-theoretic lower bound. **Proposition 1.1**.: _Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be a non-decreasing function. Then, any monotone class of graphs \(\mathcal{X}\) with the speed \(|\mathcal{X}_{n}|=2^{O(nf(n))}\) admits an adjacency labeling scheme of size \(O(f(n)\log n)\)._ This upper bound is an easy consequence of an estimation of the number of edges in graphs from monotone classes combined with a standard labeling scheme for \(k\)-degenerate graphs [13], i.e., graphs in which every induced subgraph contains a vertex of degree at most \(k\). Our main result shows that this upper bound is attained by some monotone classes. Before stating the result formally we must briefly introduce a family of non-decreasing functions we call "decent". Roughly speaking, on some domain \([s,\infty)\), decent functions are sub-multiplicative, i.e., \(f(xy)\leqslant f(x)f(y)\), and slow-growing, that is \(\log x\leqslant f(x)\leqslant x^{1-\delta}\) for some constant \(\delta\in(0,1)\), see Definition 2.4 for the formal definition of decent functions. **Theorem 1.2**.: _Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be a decent function. Then, there exists a monotone graph class \(\mathcal{X}\) with speed \(|\mathcal{X}_{n}|=2^{O(nf(n))}\) that does not admit a universal graph of size at most \(2^{f(n)\log n}\). Equivalently, \(\mathcal{X}\) admits no adjacency labeling scheme of size at most \(f(n)\log n\)._ Theorem 1.2 is the main contribution of the paper, and it gives the existence of monotone classes requiring labels whose size is a \(\log n\)-factor above the information-theoretic lower bound. In particular this shows that Proposition 1.1 is tight. A special case of Theorem 1.2 (when \(f(x)=\log x\)) implies that the Implicit Graph Conjecture does not hold even for monotone graph classes. Combining this observation with Proposition 1.1 gives the following result. 
**Corollary 1.3**.: _For any constant \(c>0\), there are factorial monotone classes that do not admit a \((c\log^{2}n)\)-bit labeling scheme, while any factorial monotone class admits an \(O(\log^{2}n)\)-bit labeling scheme._ This result (more generally Theorem 1.2 and Proposition 1.1) gives the first example of tight bounds for families of graph classes that do not admit order optimal adjacency labeling schemes. Chandoo [1] observed that the proof of the refutation of the IGC by Hatami and Hatami [11] implies that the family of factorial classes cannot be "described" by a countable set of factorial classes. Using the same ideas, we establish the following result from our proof for monotone classes. **Theorem 1.4**.: _Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be any decent function, and \(\mathbb{X}\) be any countable set of graph classes, each with speed at most \(2^{nf(n)\log n}\). Then, there exists a monotone graph class \(\mathcal{X}\) of speed \(2^{O(nf(n))}\) such that there does not exist a \(\mathcal{D}\in\mathbb{X}\) with \(\mathcal{X}\subseteq\mathcal{D}\)._ This shows that monotone classes are complex in the sense that they cannot be covered by a countably infinite family of classes which grow slightly faster, even if these classes are not restricted to being hereditary (thus also non-monotone). Figure 1: A \(\checkmark\) indicates that all classes of the given type have an implicit representation, a \(\times\) shows that they do not, and a ? signals that the question is open. The upper and lower bounds (denoted UB and LB respectively) are stated up to constants which may depend on the class. ### Proof outline and techniques In this section we outline the proofs of our main results. Monotone classes that do not admit implicit representations. Recall that, roughly speaking1, a function \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) is decent if \(\log x\leqslant f(x)\leqslant x^{1-\delta}\) for some constant \(\delta\in(0,1)\), and \(f\) is sub-multiplicative, i.e. \(f(xy)\leqslant f(x)\cdot f(y)\), for all \(x,y\) in the domain. Our approach is inspired by the refutation of the IGC by Hatami and Hatami [10]. Namely, for any decent function \(f\), we expose so many _monotone_ classes of speed \(2^{nf(n)}\) that there are not enough universal graphs of size \(2^{f(n)\log n}\) to capture all of them. The approach involves several key ingredients: Footnote 1: The formal definition of decent (Definition 2.4) is more general and depends on three parameters \(\delta,C,s\). For this proof sketch it suffices to work with the simplified (informal) definition above which only has one parameter \(\delta\). 1. Estimation of the number of sets of graphs of fixed cardinality representable by universal graphs. A set of graphs \(\mathcal{M}\) is _representable_ by a universal graph \(U\), if every graph in \(\mathcal{M}\) is an induced subgraph of \(U\). A direct estimation shows that the number of sets of cardinality \(k_{n}:=\lceil 2^{\sqrt{nf(n)}}\rceil\) of \(n\)-vertex graphs that are representable by a \(u_{n}\)-vertex universal graph, with \(u_{n}:=2^{f(n)\log n}\) is at most \[2^{u_{n}^{2}}\cdot u_{n}^{nk_{n}}=2^{2^{2f(n)\log n}+k_{n}\cdot nf(n)\log n}.\] (1) 2. Notion of \(f\)_-good graphs_. We will construct our monotone classes of speed \(2^{nf(n)}\) by taking the monotone closure of an appropriately chosen set of graphs. 
The monotonicity and the speed of target classes impose a natural restriction on the number of edges in graphs that can be used in such constructions. To explain, let \(\mathcal{X}\) be a monotone class with \(|\mathcal{X}_{n}|\leqslant 2^{nf(n)}\). Since \(\mathcal{X}\) is closed under taking subgraphs, if \(\mathcal{X}\) contains an \(n\)-vertex graph with \(m\) edges, then \(\mathcal{X}\) contains at least \(2^{m}\) labeled \(n\)-vertex graphs. This, together with the speed assumption, imply that for any \(G\in\mathcal{X}\) and \(k\), every subgraph of \(G\) on \(k\) vertices contains at most \(kf(k)\) edges. This restriction, however, is not strong enough for our purposes. Indeed, while each graph with the above property contributes to the monotone closure an appropriate number of subgraphs at every _level_ (i.e., on every number of vertices), we build our desired classes by taking the monotone closure of _infinitely_ many of such graphs, and this can result in some levels having too many graphs. To overcome this difficulty, we introduce the notion of \(f\)_-good_ graphs, which are \(n\)-vertex graphs in which the number of edges in every \(k\)-vertex subgraph is at most \(kf(k)\) if \(k>\sqrt{n}\), and at most \(\frac{kf(k)}{\log k}\) if \(2\leqslant k\leqslant\sqrt{n}\). The latter condition allows us to guarantee that all small enough subgraphs of the graphs from a set that we close under taking subgraphs belong to a _fixed_ monotone class of speed \(2^{nf(n)}\), namely, the class of all \(n\)-vertex graphs in which every \(k\)-vertex subgraph has at most \(\frac{kf(k)}{\log k}\) edges for every \(2\leqslant k\leqslant n\). 3. Construction of monotone classes of speed \(2^{nf(n)}\) from sets of \(f\)-good graphs. We show that for any sequence \((\mathcal{M}_{n})_{n\in\mathbb{N}}\), where \(\mathcal{M}_{n}\) is a set of \(f\)-good \(n\)-vertex graphs of cardinality \(k_{n}\), the monotone closure \(\operatorname{Mon}(\cup_{n\in\mathbb{N}}\mathcal{M}_{n})\) has speed at most \(2^{nf(n)}\). 4. Lower bound on the number of sets of cardinality \(k_{n}\) of \(f\)-good \(n\)-vertex graphs. We show that for any \(\gamma>1\), there exists some \(c:=c(\gamma,\delta)>0\) such that for every \(n\in\mathbb{N}\) there are at least \(2^{(\gamma\delta/2-o(1))\cdot nf(n)\log n}\) many unlabeled \(cf\)-good \(n\)-vertex graphs. Thus, the number of sets of cardinality \(k_{n}\) of \(cf\)-good \(n\)-vertex graphs is at least \[2^{k_{n}\cdot(\gamma\delta/2-o(1))\cdot nf(n)\log n}.\] (2) By setting \(\gamma=4/\delta\) and recalling that \(k_{n}=\lceil 2^{\sqrt{nf(n)}}\rceil\), we show that (2) is larger than (1). Therefore, there exists a monotone class \(\operatorname{Mon}(\cup_{n\in\mathbb{N}}\mathcal{M}_{n})\) of speed \(2^{nf(n)}\) that is not representable by a universal graph of size \(2^{f(n)\log n}\). Many \(f\)-good graphs.A core step in the above approach is to show that for any \(\gamma>1\), there exists some \(c:=c(\gamma,\delta)>0\) such that the number of \(n\)-vertex \(cf\)-good graphs grows as \(2^{(\gamma\delta/2-o(1))\cdot nf(n)\log n}\). To do so, we show that a random graph \(G_{n}\sim G(n,\gamma f(n)/n)\) is \(cf\)-good with high probability (_w.h.p._). It is in this step we really need to use the sub-multiplicativity property of decent functions, as we need to relate the magnitude of \(f\) at two different parts of its domain. In particular, to show that w.h.p. 
\(G_{n}\) is \(cf\)-good, we apply a first moment bound to show there are no "large" \(k\)-vertex subgraphs of \(G_{n}\) with more than \(ckf(k)\) edges, and "small" ones with more than \(ckf(k)/\log k\) edges. Observe that the number of edges \(\xi\) in a given \(k\)-vertex subgraph has expectation \(\binom{k}{2}\frac{\gamma f(n)}{n}\). Thus, for "large" subgraphs, the probability that \(\xi\) is constant factor larger than \(ckf(k)\) decays with exponent \(\propto-f(k)\cdot\ln\frac{nf(k)}{kf(n)}\) by the Chernoff bound. From this we see that unless \(f(k)/f(n)>k/n\), then the bound fails. Sub-multiplicativity helps us here as it allows us to say \(f(n)=f(k\cdot(n/k))\leqslant f(k)\cdot f(n/k)\), moderate-growth then bounds the term \(f(n/k)\). A similar issue occurs for "small" subgraphs. From the explanation above it may seem that needing such tight control over the ratio of \(f(k)\) to \(f(n)\) for all \(k\leqslant n\) is an artefact of our proof, however some "smoothness" condition on the function is necessary. To see this, consider a function \(f:\mathbb{N}\to\mathbb{R}_{\geqslant 0}\) such that \(f(n)=\log n\), if \(n\) is odd, and \(f(n)=\sqrt{n}\), if \(n\) is even. Then, for any \(c>0\), and large enough even \(n\), \(G(n,f(n)/n)\) will not be \(cf\)-good as the restriction on the subgraphs with odd number of vertices is far too stringent. Sub-multiplicativity was the most natural and broad condition we could find to combat this issue, and we show in Lemma 2.5 that many common functions growing at a suitable rate satisfy this. It would be interesting to see if sub-multiplicativity can be replaced with something more general. We also used sub-multiplicativity in step 3 above (which corresponds to Lemma 4.1) to bound the speed of \(\operatorname{Mon}(\cup_{n\in\mathbb{N}}\mathcal{M}_{n})\), however it is possible some less stringent property can be used there. A matching upper bound on the size of adjacency labels.We show that for any non-decreasing function \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\), any monotone class with speed \(2^{O(nf(n))}\) admits an \(O(f(n)\log n)\)-bit labeling scheme. This follows from an easy observation that any such class is \(O(f(n))\)-degenerate, followed by a standard \(O(k\log n)\)-bit labeling scheme for \(k\)-degenerate graphs. One consequence of this upper bound is that our random graph result is tight: for any \(p\in\omega(f(n)/n)\) and \(c\geqslant 0\), a random graph \(G_{n}\sim G(n,p)\) is not \(cf\)-good w.h.p. ### Discussion A natural question arising from our work is to characterize monotone classes that admit an implicit representation, i.e., an adjacency labeling scheme of order optimal size. Motivated by the Implicit Graph Conjecture, of particular interest is the case of factorial classes. **Question 2**.: _Which monotone factorial graph classes admit an \(O(\log n)\)-bit labeling scheme?_ An analogous question is completely understood for constant-size _adjacency sketches_ (a probabilistic version of adjacency labeling schemes) that were studied in [11, 12, 13]. The importance of constant-size adjacency sketches is that they can be derandomized to \(O(\log n)\)-bit adjacency labels [12, 13]. Thus, if a class admits constant-size adjacency sketches, then it admits an \(O(\log n)\)-bit labeling scheme. Though, the converse is not always true. Esperet, Harms, and Kupavskii showed [1] that a monotone class admits constant-size adjacency sketches if and only if it has bounded degeneracy. 
This result may suggest that bounded degeneracy also characterizes monotone classes that admit \(O(\log n)\)-bit labeling schemes. This, however, is not the case, as the class of subgraphs of hypercubes is monotone, has unbounded degeneracy, and admits an \(O(\log n)\)-bit labeling scheme [1]. Recall that Question 1 (first raised in [14]), asks which hereditary graph classes admit implicit representations. A prominent instance of Question 1 is whether every small class (i.e., class \(\mathcal{X}\) with \(|\mathcal{X}_{n}|\leqslant c^{n}\cdot n!\) for some constant \(c>0\)), or even every monotone small class, admits an implicit representation. It is known that for any \(\kappa>0\) there is a small monotone class which does not admit a \((\kappa\log n)\)-bit labeling scheme [10]. In particular, some small classes admit no asymptotically optimal labeling scheme. However, this result does not rule out the existence of an order optimal labeling scheme for each small class of graphs. ### Organization The rest of the paper is organized as follows. In Sections 2.1 and 2.2, we cover some common notation, definitions and lemmas. In Section 2.3 we introduce two key concepts used in our proofs. Firstly, we give the notion of _\(f\)-good_ graphs, which are the building blocks for the monotone classes used to prove our main result. Secondly, we formally define _decent_ functions which describe the speeds of these monotone graph classes, before concluding Section 2.3 with some natural examples of decent functions. In Section 3, we prove a result about random graphs which is the main technical ingredient of our lower bound. In Section 4, we establish the lower and upper bounds on labeling schemes for monotone classes, along with the result on the complexity of monotone graph classes. We conclude the paper in Section 5 with a discussion and some open problems. ## 2 Preliminaries ### Standard definitions and notation We let \([n]\) denote the set \(\{1,\ldots,n\}\) of natural numbers, and use \(\ln^{c}x\) as a shorthand for \((\ln x)^{c}\). We take \(\mathbb{R}_{\geqslant 0}\) to denote the set of non-negative real numbers. We use \(X\sim\mathcal{D}\) to denote that the random variable \(X\) has distribution \(\mathcal{D}\). We say that a sequence of events \((A_{n})\) holds _with high probability (w.h.p.)_ if \(\mathbb{P}\left[\,A_{n}\,\right]\to 1\) as \(n\to\infty\).. Graphs.We consider finite undirected graphs, without loops or multiple edges. Given a graph \(G\), we write \(V(G)\) for its vertex set, and \(E(G)\) for its edge set. A graph \(H\) is a _subgraph_ of \(G\) if \(V(H)\subseteq V(G)\) and \(E(H)\subseteq E(G)\). Thus, \(H\) can be obtained from \(G\) by vertex and edge deletions. The graph \(H\) is an _induced subgraph_ of \(G\) if \(V(H)\subseteq V(G)\), and \(E(H)\) consists exactly of the edges in \(E(G)\) with both endpoints in \(V(H)\). In that case, \(H\) can be obtained from \(G\) by vertex deletions only. In the usual way, for a set of vertices \(U\subseteq V(G)\), we denote by \(G[U]\) the induced subgraph of \(G\) with the set of vertices \(U\). We denote by \(e(G)\) the number of edges in \(G\) When we refer to an \(n\)-vertex graph \(G\) as _labeled_, we mean that the vertex set of \(G\) is \([n]\), and we distinguish two different labeled graphs even if they are isomorphic. In contrast, if we refer to \(G\) as _unlabeled_ graph, its vertices are indistinguishable and two isomorphic graphs correspond to the same unlabeled graph. 
Graph classes.A graph class is _hereditary_ if it is closed under taking induced subgraphs, and it is _monotone_ if it closed under taking subgraphs. For a set \(\mathcal{X}\) of graphs we let \(\operatorname{Her}(\mathcal{X})\) denote the hereditary closure of \(\mathcal{X}\), i.e., the inclusion-wise minimal hereditary class that contains \(\mathcal{X}\); and \(\operatorname{Mon}(\mathcal{X})\) denote the monotone closure of \(\mathcal{X}\), i.e., the minimal monotone class that contains \(\mathcal{X}\). ### Useful lemmas We use standard notation \(G(n,p)\) to denote the distribution on \(n\)-vertex graphs where each edge is included independently with probability \(p\), and \(G(n,m)\) to denote the uniform distribution on \(n\)-vertex graphs with \(m\) edges, see (for example) [10]. The following lemma allows us to transfer results from one graph model to another. **Lemma 2.1**.: _Let \(\mathcal{P}\) be any graph property (i.e., graph class) and \(0\leqslant p\leqslant 1\) satisfy \(p\binom{n}{2}\to\infty\) and \(\binom{n}{2}-p\binom{n}{2}\to\infty\) and \(m=\left\lceil p\binom{n}{2}\right\rceil\). Then, for \(G_{n}\sim G(n,m)\) and \(G_{n}^{\prime}\sim G(n,p)\), we have_ \[\mathbb{P}\left[\,G_{n}\in\mathcal{P}\,\right]\leqslant 10\sqrt{m}\cdot\mathbb{P} \left[\,G_{n}^{\prime}\in\mathcal{P}\,\right].\] Lemma 2.1 follows by a very minor adaption of [13, Lemma 3.2], the only difference is a ceiling in the number of edges, which makes no difference in the proof. We will make use of the following version of the Chernoff bound (see [1, Theorem A.1.15]), where \(\operatorname{Bin}(N,p)\) denotes the binomial distribution with parameters \(N\) and \(p\). **Lemma 2.2** (Chernoff bound).: _Let \(\xi\sim\operatorname{Bin}(N,p)\), \(\mu=Np\), and \(a,t>0\). Then,_ \[\mathbb{P}(\xi>(1+a)\mu)\leqslant\left(\frac{e^{a}}{(1+a)^{1+a}}\right)^{\mu} \leqslant\exp\left(-(1+a)\mu\cdot\ln\frac{1+a}{e}\right).\] ### Good graphs and decent functions **Definition 2.3** (\(f\)-good).: Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be a function. An \(n\)-vertex graph \(G\) is \(f\)_-good_ if the number of edges in any subgraph on \(k\) vertices is bounded from above by \[\begin{cases}\frac{k\cdot f(k)}{\log k}&\text{ if }2\leqslant k\leqslant\sqrt{n} \\ k\cdot f(k)&\text{ if }\sqrt{n}<k\leqslant n\end{cases}.\] We observe that \(f\)-goodness is a monotone property, i.e., if a graph \(G\) is \(f\)-good, then so is any of its subgraphs. Indeed, moving the threshold (between the first and the second, more relaxed, upper bound) from \(\sqrt{n}\) down to a smaller value may only help in satisfying these bounds. **Definition 2.4** (\((\delta,C,s)\)-_decent_).: For constants \(\delta\in(0,1)\), \(C\geqslant 1\) and \(s\geqslant 2\), we say that a non-decreasing function \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) is \((\delta,C,s)\)-_decent_ if the following properties hold **(Moderate-growth):**: \(\log x\leqslant f(x)\leqslant C\cdot x^{1-\delta}\) holds for every \(x\in[s,\infty)\), **(Sub-multiplicativity):**: \(f(xy)\leqslant C\cdot f(x)\cdot f(y)\) holds for any \(x,y\in[s,\infty)\). We say that a function \(f\) is _decent_ if there exist some constants \(\delta\in(0,1)\), \(C\geqslant 1\), and \(s\geqslant 2\) such that \(f\) is \((\delta,C,s)\)-decent. We now give some natural examples of decent functions. **Lemma 2.5**.: _For any fixed \(\alpha>0,\beta\geqslant 1,\gamma\geqslant 1\) and \(d\in(0,1)\), the following functions are decent:_ 1. \(f(x)=\alpha x^{d}\)_,_ 2. 
\(f(x)=\exp\left(\alpha\cdot\ln^{d}x\right)\)_,_ 3. \(f(x)=\exp\left(\beta\cdot\ln^{\gamma}(\log x)\right)\)_,_ 4. \(f(x)=\beta\cdot g(x)\)_, where_ \(g(x)\) _is decent._ 5. \(f(x)=g(x)\cdot h(x)\)_, where_ \(g(x),h(x)\) _are decent and_ \(g(x)\cdot h(x)\) _is moderately-growing._ Proof.: For \((i)\), if we set \(s_{1}:=\left(\frac{2}{d\max\{\alpha,1\}}\right)^{2/d}\) then we have \[f(s)=\alpha\cdot\left(\frac{2}{d\max\{\alpha,1\}}\right)^{2}\geqslant\frac{2 }{d}\cdot\frac{2}{d\max\{\alpha,1\}}\geqslant\frac{2}{d}\cdot\log\frac{2}{d \max\{\alpha,1\}}=\log s.\] Furthermore, there exists some constant \(s_{2}:=s_{2}(\alpha,d)\) such that \(\frac{\alpha x^{d}}{\log x}\) is increasing for all \(x\geqslant s_{2}\), establishing moderate-growth on \([\max\{s_{1},s_{2}\},\infty)\) with \(C=\alpha\) and \(\delta=1-d\). Observe also that \(f(xy)=\frac{1}{\alpha}\cdot f(x)f(y)\), and thus \(f(x)=\alpha x^{d}\) is \(\left(1-d,\max\left\{\alpha,\frac{1}{\alpha}\right\},\max\{s_{1},s_{2}\} \right)\)-decent. For \((ii)\), moderate-growth holds for \(C=1\), any fixed \(\delta\in(0,1)\), and sufficiently large \(s\geqslant 2\). For \(x,y\geqslant 0\) let \(g_{x}(y)=(x+y)^{d}-x^{d}-y^{d}\) and observe that \(g_{x}(0)=0\) and \(g_{x}^{\prime}(y)=d(x+y)^{d-1}-dy^{d-1}\leqslant 0\). Consequently, \(g_{x}(y)\leqslant 0\) for all \(x,y\geqslant 0\), or equivalently \((x+y)^{d}\leqslant x^{d}+y^{d}\). This implies that \(f\) is sub-multiplicative as \[f(xy)=\exp\left(\alpha(\ln x+\ln y)^{d}\right)\leqslant\exp\left(\alpha((\ln x) ^{d}+(\ln y)^{d})\right)=f(x)\cdot f(y).\] For \((iii)\), it will be useful to show that \[\ln^{\gamma}(x+y)\leqslant\ln^{\gamma}x+\ln^{\gamma}y,\qquad\text{for all }x,y\in[e^{ \gamma},\infty). \tag{3}\] To prove (3), we first observe that the function \(g(x)=\frac{\ln^{\gamma}x}{x}\) is non-increasing for \(x\in[e^{\gamma},\infty)\). This follows since \(g\) is differentiable when \(x\neq 0\) and \(g^{\prime}(x)=\frac{(\gamma-\ln x)\ln^{\gamma-1}x}{x^{2}}<0\) for all \(x>e^{\gamma}>1\). Thus (3) follows from this observation, since for any \(x,y\in[e^{\gamma},\infty)\) we have \[\ln^{\gamma}(x+y)=x\cdot\frac{\ln^{\gamma}(x+y)}{x+y}+y\cdot\frac{\ln^{\gamma }(x+y)}{x+y}\leqslant x\cdot\frac{\ln^{\gamma}x}{x}+y\cdot\frac{\ln^{\gamma}y} {y}=\ln^{\gamma}x+\ln^{\gamma}y.\] We now see that \(f\) is sub-multiplicative for any \(x,y\in[2^{e^{\gamma}},\infty)\) as by (3) we have \[f(xy)=\exp\left(\beta\cdot\ln^{\gamma}(\log x+\log y)\right)\leqslant\exp \left(\beta\cdot(\ln^{\gamma}(\log x)+\ln^{\gamma}(\log y))\right)=f(x)\cdot f (y).\] Since \(\gamma\geqslant 1\) and \(\beta\geqslant 1\), \(f\) is also moderately-growing for a sufficiently large \(s\). For \((iv)\), if \(g\) is \((\delta_{g},C_{g},s_{g})\)-decent, then it is easy to check that \(\beta g\) is \((\delta_{g},\beta C_{g},s_{g})\)-decent. For \((v)\), let \(g\) be \((\delta_{g},C_{g},s_{g})\)-decent and \(h\) be \((\delta_{h},C_{h},s_{h})\)-decent, and \(f(x):=g(x)\cdot h(x)\). As \(\log x\leqslant f(x)\leqslant C^{\prime}x^{1-\delta^{\prime}}\) for some \(\delta^{\prime}\in(0,1),C^{\prime}>0\), and \(s^{\prime}\geqslant 2\), by assumption, it remains to show sub-multiplicativity. For any \(x,y\in[\max\{s_{g},s_{h}\},\infty)\) we have \[f(xy)=g(xy)\cdot h(xy)\leqslant C_{g}g(x)g(y)\cdot C_{h}h(x)h(y)\leqslant C_{g }\cdot C_{h}\cdot f(x)f(y),\] and thus \(f\) is \((\delta^{\prime},\max\{C^{\prime},C_{g}\cdot C_{h}\},\max\{s^{\prime},s_{g},s _{h}\})\)-decent. 
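As an informal companion to Definition 2.4 and Lemma 2.5, the short sketch below numerically spot-checks the moderate-growth and sub-multiplicativity inequalities for two of the example functions on a finite grid. The constants `delta`, `C`, `s` and the grid are chosen by us for illustration; this is only an empirical sanity check of the inequalities at a few points, not a replacement for the proofs above.

```python
import math

TOL = 1e-9  # slack for floating-point comparisons

def is_decent_on_grid(f, delta, C, s, xs):
    """Check log2(x) <= f(x) <= C*x**(1-delta) and f(x*y) <= C*f(x)*f(y)
    for all grid points x, y >= s (Definition 2.4, restricted to the grid)."""
    pts = [x for x in xs if x >= s]
    growth_ok = all(math.log2(x) <= f(x) + TOL and f(x) <= C * x ** (1 - delta) + TOL
                    for x in pts)
    submult_ok = all(f(x * y) <= C * f(x) * f(y) + TOL for x in pts for y in pts)
    return growth_ok and submult_ok

xs = [2 ** k for k in range(2, 12)]

# Item (i) of Lemma 2.5 with alpha = 1, d = 1/2, i.e. f(x) = sqrt(x); constants hand-picked.
print(is_decent_on_grid(lambda x: math.sqrt(x), delta=0.5, C=1.0, s=16, xs=xs))   # True

# Item (ii) with alpha = 1, d = 1/2, i.e. f(x) = exp(sqrt(ln x)); a larger s is needed here.
print(is_decent_on_grid(lambda x: math.exp(math.log(x) ** 0.5), delta=0.5, C=1.0, s=64, xs=xs))  # True
```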
## 3 Growth of the number of edges in subgraphs of \(G(n,p)\) **Theorem 3.1**.: _Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be \((\delta,C,s)\)-decent for some constants \(\delta\in(0,1)\), \(C\geqslant 1\), and \(s\geqslant 2\). Then, for any fixed \(\gamma>1\), there exists \(c:=c(\delta,C,s,\gamma)>0\) such that, for large \(n\),_ \[\mathbb{P}\left[\,G(n,\gamma f(n)/n)\text{ is not }(cf)\text{-good }\right] \leqslant n^{-2}.\] Proof.: Let \(p:=p(n)=\gamma f(n)/n\), and let \(c_{1},c_{2}\) be sufficiently large constants (depending on \(\gamma\)) fixed later. Let \(\mathcal{E}_{1,k}\) (respectively \(\mathcal{E}_{2,k}\)) be the event that there are no subgraphs of size \(k\) with more than \(c_{1}kf(k)/\log k\) edges (respectively \(c_{2}kf(k)\) edges). Observe that if \(c=\max\{c_{1},c_{2},\binom{s}{2}\}\), then \[\{G(n,p)\text{ is not }(cf)\text{-good}\}\subseteq\left(\bigcup_{k=s}^{\lfloor \sqrt{n}\rfloor}\neg\mathcal{E}_{1,k}\right)\cup\left(\bigcup_{k=\lfloor\sqrt {n}\rfloor+1}^{n}\neg\mathcal{E}_{2,k}\right). \tag{4}\] Let \(k\) denote the number of vertices in a subgraph, and thus \(\xi\sim\text{Bin}\left(\binom{k}{2},p\right)\) denotes the number of edges in a given \(k\)-vertex subgraph. The expectation of \(\xi\) is \[\mu:=\binom{k}{2}p=\frac{\gamma}{2}\cdot\frac{k(k-1)f(n)}{n}.\] On the other hand, the number of ways to select a \(k\)-vertex subgraph is \[\binom{n}{k}\leqslant\left(\frac{en}{k}\right)^{k}=\exp\left(k\ln\frac{n}{k}+k \right)\leqslant\exp\left(2k\ln n\right). \tag{5}\] Our strategy will be to bound the probability of the events on the right-hand side of (4) using the union and Chernoff bounds. We begin by considering events of the form \(\mathcal{E}_{2,k}\) and thus can assume that \(\lfloor\sqrt{n}\rfloor+1\leqslant k\leqslant n\). Observe that since \(f\) is sub-multiplicative, non-decreasing, and moderately-growing, we have \[\frac{f(k)}{f(n)}=\frac{f(k)}{f\left(\frac{n}{k}\cdot k\right)}\geqslant\frac {f(k)}{C\cdot f(\frac{n}{k})\cdot f(k)}\geqslant\frac{f(k)}{C\cdot f(\frac{ sn}{k})\cdot f(k)}\geqslant\frac{f(k)}{C^{2}\cdot(\frac{sn}{k})^{1-\delta} \cdot f(k)}\geqslant\frac{k}{C^{2}s\cdot n}. \tag{6}\] If we now fix \[c_{2}=C^{2}s\cdot e^{2}\cdot\gamma>6, \tag{7}\] then by (6) we have \[\frac{2c_{2}nf(k)}{e\gamma(k-1)f(n)}=\frac{2C^{2}se\cdot nf(k)}{(k-1)f(n)} \geqslant\frac{2ek}{k-1}>e. \tag{8}\] So, applying Chernoff bound (Lemma 2.2) with \(1+a=\frac{c_{2}kf(k)}{\mu}=\frac{2c_{2}nf(k)}{\gamma(k-1)f(n)}\) gives \[\mathbb{P}(\xi>c_{2}kf(k)) \leqslant\exp\left(-(1+a)\mu\cdot\ln\frac{1+a}{e}\right)\] \[=\exp\left(-c_{2}kf(k)\cdot\ln\frac{2c_{2}nf(k)}{e\gamma(k-1)f(n )}\right)\] \[\overset{(\ref{eq:C2})}{\leqslant}\exp\left(-c_{2}kf(k)\right)\] \[\overset{(\ref{eq:C2})}{\leqslant}\exp\left(-6kf(k)\right). \tag{9}\] Thus, by (5), (9), the union bound, and as \(f(k)\geqslant\log k>\ln k\), we have \[\mathbb{P}\left(\bigcup_{k=\lfloor\sqrt{n}\rfloor+1}^{n}\neg\mathcal{E}_{2, k}\right)\leqslant\sum_{k=\lfloor\sqrt{n}\rfloor+1}^{n}\exp\left(2k\ln n \right)\cdot\exp\left(-6kf(k)\right)\leqslant\sum_{k=\lfloor\sqrt{n} \rfloor+1}^{n}k^{-k}\leqslant\exp(-\sqrt{n}). \tag{10}\] We now treat events of the form \(\mathcal{E}_{1,k}\), and thus we can assume that \(s\leqslant k\leqslant\lfloor\sqrt{n}\rfloor\). Observe that for any fixed constant \(d>0\) and sufficiently large \(n\) we have \(\frac{n^{2/3}}{k(\log k)^{d}}\geqslant s\) as \(k\leqslant\sqrt{n}\). 
Thus, by sub-multiplicativity, and moderate-growth we have \[f\left(\frac{n^{2/3}}{(\log k)^{d}}\right) =f\left(\frac{n^{2/3}}{k(\log k)^{d}}\cdot k\right)\] \[\leqslant C\cdot f\left(\frac{n^{2/3}}{k(\log k)^{d}}\right)\cdot f \left(k\right)\] \[\leqslant C^{2}\cdot\left(\frac{n^{2/3}}{k(\log k)^{d}}\right)^{1- \delta}\cdot f(k)\] \[\leqslant C^{2}\cdot\frac{n^{2/3}}{k(\log k)^{d}}\cdot f(k).\] Similarly, by sub-multiplicativity and moderate-growth, we have \[f(n) =f\left(\frac{n^{2/3}}{(\log k)^{d}}\cdot n^{1/3}(\log k)^{d}\right)\] \[\leqslant C\cdot f\left(\frac{n^{2/3}}{(\log k)^{d}}\right)\cdot f \left(n^{1/3}(\log k)^{d}\right)\] \[\leqslant C^{2}\cdot f\left(\frac{n^{2/3}}{(\log k)^{d}}\right) \cdot n^{(1-\delta)/3}(\log k)^{(1-\delta)\cdot d}.\] If we set \(d=1/\delta>0\) then the two bounds above give \[\frac{f(k)}{f(n)}\geqslant\frac{f\left(\frac{n^{2/3}}{(\log k)^{d}}\right) \cdot\frac{k(\log k)^{d}}{C^{2}n^{2/3}}}{C^{2}\cdot f\left(\frac{n^{2/3}}{( \log k)^{d}}\right)\cdot n^{(1-\delta)/3}(\log k)^{(1-\delta)\cdot d}}=\frac{k (\log k)^{\delta d}}{C^{4}n^{1-\delta/3}}=\frac{k\log k}{C^{4}n}\cdot n^{ \delta/3}. \tag{11}\] Foreseeing the need for the constant \(15\) later on, we now set \[c_{1}=e\cdot 15\cdot C^{4}\gamma/\delta. \tag{12}\] We now set \(1+a:=\frac{c_{1}kf(k)}{\mu\cdot\log k}\), which by (11) satisfies \[1+a=\frac{c_{1}kf(k)}{\mu\cdot\log k}=\frac{2c_{1}nf(k)}{\gamma(k-1)f(n)\log k }\geqslant\frac{2c_{1}k}{\gamma(k-1)C^{4}}\cdot n^{\delta/3}>e\cdot n^{\delta /3}. \tag{13}\] As before, Chernoff bound (Lemma 2.2) with this \(1+a\) gives \[\mathbb{P}\left(\xi>\frac{c_{1}kf(k)}{\log k}\right) \leqslant\exp\left(-\frac{c_{1}kf(k)}{\log k}\cdot\ln\frac{1+a}{ e}\right)\] \[\overset{\eqref{eq:C1}}{\leqslant}\exp\left(-\frac{c_{1}kf(k)}{ \log k}\cdot\frac{\delta}{3}\ln n\right)\] \[\overset{\eqref{eq:C1}}{\leqslant}\exp\left(-c_{1}k\cdot\frac{ \delta}{3}\ln n\right)\] \[\overset{\eqref{eq:C1}}{\leqslant}\exp\left(-5k\ln n\right), \tag{14}\] where \((*)\) follows as \(f(k)\geqslant\log k\) by moderate-growth. Thus, by (5), (14), and the union bound, \[\mathbb{P}\left(\bigcup_{k=s}^{\lfloor\sqrt{n}\rfloor}\neg\mathcal{E}_{1,k} \right)\leqslant\sum_{k=s}^{\lfloor\sqrt{n}\rfloor}\exp\left(2k\ln n\right) \cdot\exp\left(-5k\ln n\right)\leqslant\sqrt{n}\cdot n^{-3s}\leqslant n^{-5}. \tag{15}\] The result follows by taking \(c=\max\{c_{1},c_{2},\binom{s}{2}\}\), (4), and the union bound over (10) and (15). We can now use this result to bound the number of \(cf\)-good graphs from below. **Lemma 3.2**.: _Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be \((\delta,C,s)\)-decent for some constants \(\delta\in(0,1)\), \(C\geqslant 1\), and \(s\geqslant 2\). Then, for any fixed \(\gamma>1\), there exists some \(c:=c(\gamma,\delta,C,s)>0\) such that for every \(n\in\mathbb{N}\) there are at least \(2^{(\gamma\delta/2-o(1))\cdot nf(n)\log n}\) many unlabeled \((cf)\)-good \(n\)-vertex graphs._ Proof.: Let \(m:=\left\lceil\frac{\gamma(n-1)f(n)}{2}\right\rceil\) and \(G_{n}\sim G\big{(}n,m\big{)}\). Observe that by Theorem 3.1 and Lemma 2.1, there exists some fixed \(c>0\) such that for sufficiently large \(n\) \[\mathbb{P}\left[\,G_{n}\text{ is }(cf)\text{-good}\,\right]\geqslant 1-10 \sqrt{\left\lceil\frac{\gamma(n-1)f(n)}{2}\right\rceil}\cdot n^{-2}=1-o(1). 
\tag{16}\] The number of labeled graphs in the support of \(G\big{(}n,m\big{)}\) is \(\binom{\binom{n}{2}}{m}\), which satisfies \[\binom{\binom{n}{2}}{m}\geqslant\left(\frac{\binom{n}{2}}{m}\right)^{m}\geqslant\left(\frac{n}{\gamma f(n)}\right)^{\frac{\gamma(n-1)f(n)}{2}}=2^{\frac{\gamma}{2}\cdot(n-1)f(n)\cdot(\log n-\log(\gamma f(n)))}.\] By (16), a \(1-o(1)\) fraction of these labeled graphs are \((cf)\)-good. Furthermore, there are at most \(n!\leqslant n^{n}\) labelings of a given unlabeled graph. Thus, the number of unlabeled \(n\)-vertex \((cf)\)-good graphs is bounded from below by \[(1-o(1))\cdot\frac{1}{n^{n}}\cdot 2^{\frac{\gamma}{2}\cdot(n-1)f(n)\cdot(\log n-\log(\gamma f(n)))}=2^{\frac{\gamma}{2}\cdot nf(n)\cdot(\log n-\log(f(n))-O(1))}\] \[\geqslant 2^{\frac{\gamma}{2}\cdot nf(n)\cdot(\log n-(1-\delta)\log(n)-O(1))}\] \[=2^{(\delta\gamma/2-o(1))\cdot nf(n)\log n},\] as claimed, since \(\log n\leqslant f(n)\leqslant Cn^{1-\delta}\) by moderate-growth. ## 4 Tight bounds on labeling schemes for monotone factorial classes We begin in Section 4.1 with a lemma which is useful for bounding the speed when constructing monotone classes with no implicit representation. This is then used to prove our lower bound in Section 4.2. Finally, in Section 4.3 we give a matching upper bound on labeling schemes for monotone classes; this follows from [10] and is included mainly for completeness. ### Construction of monotone tiny classes We begin with a lemma showing that, for a decent function \(f\), we can create monotone classes from the union of many \(f\)-good graphs and still maintain control over the speed. The proof follows the broad idea of [14, Claim 3.1]. **Lemma 4.1**.: _Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be \((\delta,C,s)\)-decent for some constants \(\delta\in(0,1)\), \(C\geqslant 1\), and \(s\geqslant 2\). Let \(c>0\) be a constant, and, for every \(n\in\mathbb{N}\), let \(M_{n}\) be any set of \((cf)\)-good unlabeled \(n\)-vertex graphs satisfying \(|M_{n}|\leqslant\left\lceil 2^{\sqrt{nf(n)}}\right\rceil\). Then the speed of \(\mathcal{X}:=\operatorname{Mon}(\cup_{n\in\mathbb{N}}M_{n})\) is \(2^{O(nf(n))}\)._ Proof.: Let \(\mathcal{Y}:=\operatorname{Her}(\cup_{n\in\mathbb{N}}M_{n})\). Note that \(\mathcal{X}=\operatorname{Mon}(\mathcal{Y})\). We first estimate the speed of \(\mathcal{Y}\). For an \(n\)-vertex graph \(G\in\mathcal{Y}\), let \(N\) be the smallest integer such that \(G\) is an induced subgraph of a graph \(H\in M_{N}\). We split the proof over two cases: \((i)\): \(N\geqslant n^{2}\), and \((ii)\): \(N<n^{2}\). _Case (i)_: Since \(H\) is a \((cf)\)-good \(N\)-vertex graph and \(G\) is its \(n\)-vertex induced subgraph, where \(n\leqslant\sqrt{N}\), it follows from Definition 2.3 that \(G\) must have at most \(g(n):=cnf(n)/\log n\) many edges. The number of such graphs is at most \[\binom{\binom{n}{2}}{g(n)}\leqslant\left(\frac{n^{2}e}{g(n)}\right)^{g(n)}=2^{g(n)\cdot\log\frac{n^{2}e}{g(n)}}=2^{c\frac{nf(n)}{\log n}\cdot\log\frac{n^{2}e}{g(n)}}=2^{O(nf(n))},\] and so \(\mathcal{Y}\) contains \(2^{O(nf(n))}\) many \(n\)-vertex labeled graphs each of which is an induced subgraph of a graph in \(M_{N}\) for some \(N\) with \(n\leqslant\sqrt{N}\). _Case (ii)_: For this case, we simply use the fact that any \(H\in M_{N}\) has at most \(N^{n}\) many \(n\)-vertex induced subgraphs. 
Thus, the number of \(n\)-vertex labeled graphs in \(\mathcal{Y}\) each of which is an induced subgraph of a graph in \(M_{N}\) for some \(N\) with \(N<n^{2}\) is bounded from above by \[n!\cdot\sum_{N=n}^{n^{2}}N^{n}\cdot|M_{N}| \leqslant n!\cdot\sum_{N=n}^{n^{2}}N^{n}\cdot\left\lceil 2^{\sqrt{ Nf(N)}}\right\rceil\] \[\leqslant n!\cdot n^{2}\cdot(n^{2})^{n}\cdot\left\lceil 2^{\sqrt{n^{2} f(n^{2})}}\right\rceil\] \[\leqslant 2^{O(n\log n)}\cdot\left\lceil 2^{\sqrt{C}nf(n)}\right\rceil\] \[=2^{O(nf(n))},\] where in the last inequality we used sub-multiplicativity of \(f\), and in the final equality we used the fact that \(f(x)\geqslant\log x\). Thus, \(|\mathcal{Y}_{n}|=2^{O(nf(n))}\). Now, since every \(n\)-vertex labeled graph in \(\mathcal{X}\) is a subgraph of an \(n\)-vertex labeled graph in \(\mathcal{Y}\), and, due to \((cf)\)-goodness, every graph in \(\mathcal{Y}_{n}\) has at most \(2^{cnf(n)}\)\(n\)-vertex subgraphs, we conclude that \(|\mathcal{X}_{n}|\leqslant|\mathcal{Y}_{n}|\cdot 2^{cnf(n)}=2^{O(nf(n))}\). ### Lower bound We can now show the main result of the paper, which we recall for convenience. **Theorem 1.2**.: _Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be a decent function. Then, there exists a monotone graph class \(\mathcal{X}\) with speed \(|\mathcal{X}_{n}|=2^{O(nf(n))}\) that does not admit a universal graph of size at most \(2^{f(n)\log n}\). Equivalently, \(\mathcal{X}\) admits no adjacency labeling scheme of size at most \(f(n)\log n\)._ Proof.: By assumption \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) is \((\delta,C,s)\)-decent for some constants \(\delta\in(0,1)\), \(C\geqslant 1\), and \(s\geqslant 2\). We will construct a monotone class (via the probabilistic method) with the speed \(2^{O(nf(n))}\) that does not admit a universal graph of size \(u_{n}:=2^{f(n)\log n}\). Fix \(\gamma:=4/\delta>1\) and let \(c:=c(\gamma,\delta,C,s)>0\) be the satisfying constant from Theorem 3.1 corresponding to this choice of \(\gamma\). Let \(k_{n}:=\left\lceil 2^{\sqrt{nf(n)}}\right\rceil\). The number of distinct \(u_{n}\)-vertex graphs is at most \(2^{u_{n}^{2}}\) and the number of \(n\)-vertex induced subgraphs of a fixed \(u_{n}\)-vertex graph is at most \(\binom{u_{n}}{n}\). Hence the number of collections of \(k_{n}\) graphs on \(n\) vertices that are induced subgraphs of a \(u_{n}\)-vertex (universal) graph is at most \[2^{u_{n}^{2}}\cdot\binom{\binom{u_{n}}{n}}{k_{n}}\leqslant 2^{u_{n}^{2}}\cdot u _{n}^{k_{n}\cdot n}. \tag{17}\] On the other hand, from Lemma 3.2, the number of different collections of \(n\)-vertex \((cf)\)-good graphs of cardinality \(k_{n}\) is at least \[\binom{2^{(\gamma\delta/2-o(1))\cdot nf(n)\log n}}{k_{n}}\geqslant\left(\frac{2^{ (\gamma\delta/2-o(1))\cdot nf(n)\log n}}{k_{n}}\right)^{k_{n}}=2^{k_{n}\cdot( \gamma\delta/2-o(1))\cdot nf(n)\log n}, \tag{18}\] as \(\log k_{n}=O(\sqrt{nf(n)})=o(nf(n)\log n)\). By taking logarithms, we can see that for sufficiently large \(n\) the upper bound (17) is smaller than the lower bound (18). In particular, taking the logarithm of (17) gives \[\log\left(2^{u_{n}^{2}}\cdot u_{n}^{k_{n}\cdot n}\right) =u_{n}^{2}+k_{n}\cdot n\log u_{n}\] \[=2^{2f(n)\log n}+k_{n}\cdot nf(n)\log n\] \[=(1+o(1))\cdot k_{n}\cdot nf(n)\log n,\] as \(k_{n}:=\left\lceil 2^{\sqrt{nf(n)}}\right\rceil=\omega(2^{2f(n)\log n})\). 
However, since \(\gamma=4/\delta\), the logarithm of (18) is \[\log\left(2^{k_{n}\cdot(\gamma\delta/2-o(1))\cdot nf(n)\log n}\right) =k_{n}\cdot(\gamma\delta/2-o(1))\cdot nf(n)\log n\] \[=(2-o(1))\cdot k_{n}\cdot nf(n)\log n.\] Thus, for any sufficiently large \(n\), there exists a collection \(M_{n}\) of \(k_{n}\)\((cf)\)-good \(n\)-vertex graphs that are not representable by any universal graph of size at most \(u_{n}=2^{f(n)\log n}\). Consequently, by Lemma 4.1, the speed of \(\mathcal{X}:=\mathrm{Mon}(\cup_{n}M_{n})\) is \(|\mathcal{X}_{n}|=2^{O(nf(n))}\) and \(\mathcal{X}\) does not admit a universal graph of size at most \(2^{f(n)\log n}\). ### Upper bound In this section we prove the following upper bound on labeling schemes for monotone classes. **Proposition 1.1**.: _Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be a non-decreasing function. Then, any monotone class of graphs \(\mathcal{X}\) with the speed \(|\mathcal{X}_{n}|=2^{O(nf(n))}\) admits an adjacency labeling scheme of size \(O(f(n)\log n)\)._ Before proving this we will recall that a graph is \(k\)-degenerate if every induced subgraph has a vertex of degree at most \(k\). We will need the following folklore bound which we include for completeness [10]. **Lemma 4.2**.: _The class of \(k\)-degenerate graphs has a \(k\lceil\log n\rceil\)-bit adjacency labeling scheme._ Proof.: For any \(k\)-degenerate graph \(G\) on \(n\)-vertices, we first order vertices so that each vertex has at most \(k\) neighbours appearing after it in the ordering. This can be done greedily since each subgraph has a vertex of degree at most \(k\). One can then assign each vertex a label consisting of its place in the order, followed by the places of the at most \(k\) neighbour vertices following it in the ordering. We can now use this bound, together with an observation relating the degeneracy and speed of a monotone class, to prove Proposition 1.1. Proof of Proposition 1.1.: Let \(\mathcal{X}\) be a monotone class with at most \(2^{Cnf(n)}\) labeled \(n\)-vertex graphs for every \(n\). If an \(n\)-vertex graph \(G\in\mathcal{X}\) has \(m\) edges, then \(\mathcal{X}\) contains at least \(2^{m}\) labeled \(n\)-vertex graphs, as every subgraph of \(G\) also belongs to \(\mathcal{X}\) due to monotonicity. This implies that every \(n\)-vertex graph \(G\) in \(\mathcal{X}\) contains at most \(Cnf(n)\) edges, and hence, has a vertex of degree at most \(2Cf(n)\). Due to monotonicity of \(f\), the same is true for every subgraph of \(G\). Indeed, if \(H\) is a \(k\)-vertex subgraph of \(G\), then, since \(H\) belongs to \(\mathcal{X}\), the number of edges in \(H\) is at most \(Ckf(k)\leqslant Ckf(n)\), and therefore \(H\) has a vertex of degree at most \(2Cf(n)\). Thus, every \(n\)-vertex graph in \(\mathcal{X}\) is \(2Cf(n)\)-degenerate, and Lemma 4.2 implies that \(\mathcal{X}\) admits a \(2Cf(n)\lceil\log n\rceil\)-bit labeling scheme. ### Complexity of monotone classes The following result shows that monotone classes are complex in the sense that they cannot be "described" by even a countable number of classes of a slightly larger speed. The proof of this theorem follows the exact same idea as [10, Lemma 2.4], also see [11, Theorem 1.2] for the proof of a similar theorem in the context of tiny/small classes. **Theorem 1.4**.: _Let \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) be any decent function, and \(\mathbb{X}\) be any countable set of graph classes, each with speed at most \(2^{nf(n)\log n}\). 
Then, there exists a monotone graph class \(\mathcal{X}\) of speed \(2^{O(nf(n))}\) such that there does not exist a \(\mathcal{D}\in\mathbb{X}\) with \(\mathcal{X}\subseteq\mathcal{D}\)._ Proof.: By assumption \(f:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) is \((\delta,C,s)\)-decent for some constants \(\delta\in(0,1)\), \(C\geqslant 1\), and \(s\geqslant 2\). By Lemma 3.2 there exists some \(c:=c(\delta,C,s)>0\) such that if \(\mathcal{G}_{n}\) is the set of unlabeled \((cf)\)-good \(n\)-vertex graphs, then \(|\mathcal{G}_{n}|\geqslant 2^{(2-o(1))nf(n)\log n}\) for every \(n\in\mathbb{N}\). Let \(k_{n}:=\left\lceil 2^{\sqrt{nf(n)}}\right\rceil\). Then, by Lemma 4.1, for every \(n\in\mathbb{N}\) and \(M_{n}\subseteq\mathcal{G}_{n}\) satisfying \(|M_{n}|\leqslant k_{n}\), the speed of \(\mathcal{X}:=\operatorname{Mon}(\cup_{n\in\mathbb{N}}M_{n})\) is \(2^{O(nf(n))}\). Thus, if we wish to build such a class \(\mathcal{X}\), there are \[\binom{|\mathcal{G}_{n}|}{k_{n}}\geqslant\left(\frac{|\mathcal{G}_{n}|}{k_{n}}\right)^{k_{n}}\geqslant 2^{(2-o(1))\cdot k_{n}nf(n)\log n}, \tag{19}\] ways of selecting the set \(M_{n}\subseteq\mathcal{G}_{n}\). Let \(\mathbb{X}=(\mathcal{D}^{i})_{i\in\mathbb{N}}\) be any countable collection of classes, satisfying \(|\mathcal{D}^{i}_{n}|\leqslant 2^{nf(n)\log n}\) for each \(n\in\mathbb{N}\) and \(i\in\mathbb{N}\). Any class \(\mathcal{D}^{i}\in\mathbb{X}\) contains at most \(2^{k_{n}\cdot nf(n)\log n}\) different sets of \(n\)-vertex graphs with size \(k_{n}\). By (19), there is some constant \(N_{0}\) such that for all \(n\geqslant N_{0}\) this is less than the number of choices of sets \(M_{n}\subseteq\mathcal{G}_{n}\) with size \(k_{n}\). Thus, given any such set \(\mathbb{X}\), for each \(n\in\mathbb{N}\) we can take some \(M_{n}\subseteq\mathcal{G}_{n+N_{0}}\) such that \(M_{n}\not\subseteq\mathcal{D}^{n}\); since every \(M_{n}\) is contained in \(\mathcal{X}:=\operatorname{Mon}(\cup_{n\in\mathbb{N}}M_{n})\), the class \(\mathcal{X}\) is not contained in any \(\mathcal{D}\in\mathbb{X}\). ## 5 Conclusions Our main result shows that for any 'decent' function \(f\) we can find a monotone class of graphs \(\mathcal{X}\) with speed \(|\mathcal{X}_{n}|=2^{O(nf(n))}\) for which any labeling scheme requires a multiplicative factor \(\log n\) more bits than the information-theoretic lower bound. Furthermore, we gave an upper bound on the size of labeling schemes for any monotone class with non-decreasing speed which matches our lower bound, up to a constant. In Section 1.3, we discussed some natural open problems arising from this work. We conclude the paper with a more technical (yet natural) question of whether the conditions (moderate-growth and sub-multiplicativity) of 'decent' can be relaxed. Due to the discussion in the introduction, the moderate-growth condition is essentially necessary. It is not so clear to what extent the sub-multiplicativity condition is necessary. However, if one is to follow our method, some notion of global "smoothness" is required to prove Theorem 3.1; see the discussion under the heading "Many \(f\)-good graphs" in Section 1.2 for more details. **Acknowledgments.** We are grateful to Nathan Harms for valuable feedback on the early version of this paper. This work has been supported by Research England funding to enhance research culture, by the Royal Society (IES\(\backslash\)R1\(\backslash\)231083), by the ANR projects TWIN-WIDTH (ANR-21-CE48-0014) and Digraphs (ANR-19-CE48-0013), and also the EPSRC project EP/T004878/1: Multilayer Algorithmics to Leverage Graph Structure.
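For completeness, here is a small Python sketch (ours, not from the paper) of the folklore \(k\lceil\log n\rceil\)-bit scheme behind Lemma 4.2 and Proposition 1.1: greedily peel off minimum-degree vertices to get an ordering in which every vertex has at most \(k\) later neighbours, and let each label record the vertex's position plus the positions of those later neighbours. Labels are kept as tuples of integers rather than packed bit strings, so the code illustrates the idea rather than the exact bit count.

```python
def degeneracy_order(adj):
    """For a k-degenerate graph given as {vertex: set of neighbours}, repeatedly
    remove a vertex of minimum remaining degree; in the returned order every
    vertex has at most k neighbours appearing after it."""
    live = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while live:
        v = min(live, key=lambda u: len(live[u]))
        order.append(v)
        for u in live[v]:
            live[u].discard(v)
        del live[v]
    return order

def encode(adj):
    """Label = (position in the order, positions of the at most k later neighbours)."""
    pos = {v: i for i, v in enumerate(degeneracy_order(adj))}
    return {v: (pos[v], sorted(pos[u] for u in adj[v] if pos[u] > pos[v])) for v in adj}

def adjacent(label_u, label_v):
    """Two vertices are adjacent iff one of them lists the other's position."""
    (pu, later_u), (pv, later_v) = label_u, label_v
    return pv in later_u or pu in later_v

# Usage on a 4-cycle (2-degenerate): each label stores at most 2 later neighbours.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
labels = encode(adj)
assert adjacent(labels[0], labels[1]) and not adjacent(labels[0], labels[2])
```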
2303.18185
Nehari manifold approach for fractional Kirchhoff problems with extremal value of the parameter
In this work we study the following nonlocal problem \begin{equation*} \left\{ \begin{aligned} M(\|u\|^2_X)(-\Delta)^s u&= \lambda {f(x)}|u|^{\gamma-2}u+{g(x)}|u|^{p-2}u &&\mbox{in}\ \ \Omega, u&=0 &&\mbox{on}\ \ \mathbb R^N\setminus \Omega, \end{aligned} \right. \end{equation*} where $\Omega\subset \mathbb R^N$ is open and bounded with smooth boundary, $N>2s, s\in (0, 1), M(t)=a+bt^{\theta-1},\;t\geq0$ with $ \theta>1, a\geq 0$ and $b>0$. The exponents satisfy $1<\gamma<2<{2\theta<p<2^*_{s}=2N/(N-2s)}$ (when $a\neq 0$) and $2<\gamma<2\theta<p<2^*_{s}$ (when $a=0$). The parameter $\lambda$ involved in the problem is real and positive. The problem under consideration has nonlocal behaviour due to the presence of nonlocal fractional Laplacian operator as well as the nonlocal Kirchhoff term $M(\|u\|^2_X)$, where $\|u\|^{2}_{X}=\iint_{\mathbb R^{2N}} \frac{|u(x)-u(y)|^2}{\left|x-y\right|^{N+2s}}dxdy$. The weight functions $f, g:\Omega\to \mathbb R$ are continuous, $f$ is positive while $g$ is allowed to change sign. In this paper an extremal value of the parameter, a threshold to apply Nehari manifold method, is characterized variationally for both degenerate and non-degenerate Kirchhoff cases to show an existence of at least two positive solutions even when $\lambda$ crosses the extremal parameter value by executing fine analysis based on fibering maps and Nehari manifold.
P. K. Mishra, V. M. Tripathi
2023-03-31T16:29:07Z
http://arxiv.org/abs/2303.18185v1
# Nehari manifold approach for fractional Kirchhoff problems with extremal value of the parameter ###### Abstract. In this work we study the following nonlocal problem \[\begin{cases}M(\|u\|_{X}^{2})(-\Delta)^{s}u=\lambda f(x)|u|^{\gamma-2}u+g(x)|u|^ {p-2}u&\text{in}\ \ \Omega,\\ u=0&\text{on}\ \ \mathbb{R}^{N}\setminus\Omega,\end{cases}\] where \(\Omega\subset\mathbb{R}^{N}\) is open and bounded with smooth boundary, \(N>2s,s\in(0,1),M(t)=a+bt^{\theta-1},\ t\geq 0\) with \(\theta>1,a\geq 0\) and \(b>0\). The exponents satisfy \(1<\gamma<2<2\theta<p<2_{s}^{*}=2N/(N-2s)\) (when \(a\neq 0\)) and \(2<\gamma<2\theta<p<2_{s}^{*}\) (when \(a=0\)). The parameter \(\lambda\) involved in the problem is real and positive. The problem under consideration has nonlocal behaviour due to the presence of nonlocal fractional Laplacian operator as well as the nonlocal Kirchhoff term \(M(\|u\|_{X}^{2})\), where \(\|u\|_{X}^{2}=\iint_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2s}}dxdy\). The weight functions \(f,g:\Omega\rightarrow\mathbb{R}\) are continuous, \(f\) is positive while \(g\) is allowed to change sign. In this paper an extremal value of the parameter, a threshold to apply Nehari manifold method, is characterized variationally for both degenerate and non-degenerate Kirchhoff cases to show an existence of at least two positive solutions even when \(\lambda\) crosses the extremal parameter value by executing fine analysis based on fibering maps and Nehari manifold. Key words and phrases:Nehari manifold, variational methods, extremal parameter, concave-convex, multiplicity 2010 Mathematics Subject Classification: 35R11, 35A15, 49J35 ## 1. Introduction In the paper, we study the following nonlocal problem ( \[P_{\lambda}\] ) \[\begin{cases}M(\|u\|_{X}^{2})(-\Delta)^{s}u=\lambda f(x)|u|^{\gamma-2}u+g(x)|u |^{p-2}u&\text{in}\ \ \Omega,\\ u=0&\text{on}\ \ \mathbb{R}^{N}\setminus\Omega,\end{cases}\] where \(\|u\|_{X}^{2}=\iint_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2s}}dxdy\), \(\Omega\subset\mathbb{R}^{N}\) is open and bounded with smooth boundary, \(N>2s,s\in(0,1),M(t)=a+bt^{\theta-1},\ t\geq 0\) with \(p/2>\theta>1,a\geq 0\) and \(b>0\). The exponents satisfy \(1<\gamma<2<2\theta<p<2_{s}^{*}\) (when \(a\neq 0\)) and \(2<\gamma<2\theta<p<2_{s}^{*}\) (when \(a=0\)), where \(2_{s}^{*}=2N/(N-2s)\) is fractional Sobolev critical exponent. Here \((-\Delta)^{s}\) is well know fractional Laplacian operator which is defined, up to a normalization constant, as \[(-\Delta)^{s}\varphi(x)=\int_{\mathbb{R}^{N}}\frac{2\varphi(x)-\varphi(x+y)- \varphi(x-y)}{|y|^{N+2s}}dy,\quad x\in\mathbb{R}^{N},\] for any \(\varphi\in C_{0}^{\infty}(\Omega)\). The weight functions \(f,g:\Omega\rightarrow\mathbb{R}\) are continuous and satisfy following assumptions: * \(f\in C(\Omega)\cap L^{\infty}(\Omega)\) and there exists some \(f_{0}>0\) such that \(f(x)>f_{0}\) for all \(x\) near \(\partial\Omega\), * \(g\in C(\Omega)\cap L^{\infty}(\Omega)\) with \(0\not\equiv g^{+}=\max\{0,g(x)\}\). The Kirchhoff term is called degenerate if \(a=0\) and non-degenerate otherwise. The problem is non-local due to the presence of the operator formed by the nonlocal Kirchhoff term and fractional Laplacian operator. The nonlocal nature of the operator does not allow us to compare the equation in \((P_{\lambda})\) pointwise. The problem \((P_{\lambda})\) has a variational structure and the suitable functional space (inspired from [22]) to look for solutions can be \[X=\big{\{}u\in H^{s}(\mathbb{R}^{N}):\ u=0\ \text{a.e. 
in}\ \mathbb{R}^{N}\setminus\Omega\big{\}},\] where \(H^{s}(\mathbb{R}^{N})\) is the fractional Sobolev space (see [17] for more details). We recall that \(X\) is a Hilbert space endowed with the following norm \[\left\|u\right\|_{X}:=\left(\iint_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2s}}dxdy\right)^{1/2}.\] **Definition 1.1**.: _A function \(u\in X\) is said to be a (weak) solution of \((P_{\lambda})\) if for every \(\phi\in X\)_ \[(a+b\|u\|_{X}^{2(\theta-1)})\iint_{\mathbb{R}^{2N}}\frac{(u(x)-u(y))(\phi(x)-\phi(y))}{|x-y|^{N+2s}}dxdy-\lambda\int_{\Omega}fu^{\gamma-1}\phi dx-\int_{\Omega}gu^{p-1}\phi dx=0.\] To study the problem via variational methods, we define the associated energy functional \(\mathcal{E}_{\lambda}:X\to\mathbb{R}\) as \[\mathcal{E}_{\lambda}(u)=\frac{a}{2}\|u\|_{X}^{2}+\frac{b}{2\theta}\|u\|_{X}^{2\theta}-\frac{\lambda}{\gamma}\int_{\Omega}f|u|^{\gamma}dx-\frac{1}{p}\int_{\Omega}g|u|^{p}dx.\] One can see that \(\mathcal{E}_{\lambda}\) is of class \(C^{1}(X)\) and, in the light of Definition 1.1 above, solutions of \((P_{\lambda})\) can be viewed as critical points of \(\mathcal{E}_{\lambda}\). The class of problems under consideration has been widely studied in the recent past because of its vast applications in many areas of science. The above class of problems can be seen as the stationary state of the following problem \[\begin{cases}u_{tt}-M\left(\int_{\Omega}|\nabla u|^{2}dx\right)\Delta u=F(x,u)&\text{in}\ \ \Omega,\\ u=0&\text{on}\ \ \partial\Omega,\end{cases}\] which was first introduced by Kirchhoff to deal with free transversal oscillations of elastic strings (see [14]). The term \(M\) measures the tension in the string caused by any change in the length of the string during vibration and is directly proportional to the Sobolev norm of the displacement of the string. The degenerate case arises if the initial tension of the string is zero, which is a physically very realistic model. We refer the interested reader to the survey [19] for recent advances in nonlocal Kirchhoff problems. The class of nonlinearities under consideration in this paper is referred to as concave-convex nonlinearities and has attracted many researchers since the pioneering work of Ambrosetti et al. in [1], where the authors established a global multiplicity result. Work on this class of nonlinearities can be found in [2, 4, 6, 25, 26] and the references therein, for both local and nonlocal problems. In particular, [4] and [2] contain the study of the quasilinear and fractional counterparts of [1], respectively. Precisely, the problem involving the non-local fractional operator of the type \[(-\Delta)^{s}u=\lambda fu^{q-1}+gu^{p-1},\ u>0,\ \text{ in}\ \Omega,\ \ u=0\ \text{on}\ \mathbb{R}^{n}\setminus\Omega,\] has been studied in [10] (see [25, 26] for \(s=1\)) for the existence of at least two solutions when \(1<q<2\) and \(1<p<2^{*}_{s}-1\) via the Nehari manifold technique introduced by Pokhozhaev (see [11]) and Nehari (see [15, 16]). The multiplicity results obtained in [10] (also in [25, 26]) rely heavily on the fact that the nonlinearity exhibits the concave-convex behaviour, which is essential for the fibering maps to have two critical points. In this paper, we show that the degenerate fractional Kirchhoff operator allows us to break this structure of concave-convex nonlinearity and still obtain multiple solutions via the Nehari manifold technique. Another class of nonlocal problems involves the operator formed by the fusion of the Kirchhoff term and the fractional Laplacian. 
In [9] (see the Appendix) authors have given a motivation for this class of problem by modelling a fractional Kirchhoff problem coming out of string vibrations. These Kirchhoff counterparts have also occupied the enough space in the literature due to its vast applications in applied science, see for reference [3, 5, 8, 9, 18, 20, 21, 27] and references therein. We also refer readers to [7, 18] for non-degenerate structure of Kirchhoff term where results have been established with a control on the parameter \(\lambda<\Lambda\) for some small \(\Lambda>0\) via Nehari manifold minimization argument. The control on the parameter \(\lambda\) is natural as for \(\lambda\geq\Lambda\) it is not obvious to show that the constrained minimizers obtained in different decompositions of Nehari set are the critical points of the associated energy functional of the problem because \(\mathcal{N}_{\lambda}\) is no longer a manifold. In this paper, inspired from the work of [13, 23], we have included this delicate case in our study when \(\lambda\geq\lambda_{a,b}^{*}\) where \(\lambda_{a,b}^{*}\) is termed as extremal parameter introduced in [12] while studying generalized Rayleigh quotient. The value \(\lambda_{a,b}^{*}\) is extremal in the sense that the Nehari set will no longer remain a manifold if the parameter \(\lambda\) crosses this threshold \(\lambda_{a,b}^{*}\). By overcoming the above technical difficulty, for \(\epsilon>0\) sufficiently small, we have shown existence of at least two positive solutions by constrained minimization on Nehari manifold based on fibering analysis for \(\lambda\in(0,\lambda_{a,b}^{*}+\epsilon)\). The work of this paper is divided in the following order. In section [2], we have provided framework of Nehari setup and section [3] includes some technical results together with our main result of the paper. In section [4], [5] and [6], we have shown the existence of minimizers for the energy functional when \(\lambda<\lambda_{a,b}^{*},\lambda=\lambda_{a,b}^{*}\) and \(\lambda>\lambda_{a,b}^{*}\) respectively. We have attempted to address both the degenerate and non-degenerate Kirchhoff cases collectively and distinguished whenever it's required. ## 2. Nehari manifold structure The main objective of this paper is to look for critical points of \(\mathcal{E}_{\lambda}\) over \(X\). The critical points of \(\mathcal{E}_{\lambda}\) can be obtained via minimization of \(\mathcal{E}_{\lambda}\). 
Since \(\mathcal{E}_{\lambda}\), fails to be bounded from below in \(X\) and therefore, we constrained the minimization problem in a proper subset of \(X\), namely Nehari manifold (a natural constraint as it contains all the critical points of \(\mathcal{E}_{\lambda}\)) defined as \[\mathcal{N}_{\lambda}=\left\{u\in X\setminus\{0\}:D_{u}\mathcal{E}_{\lambda}(u) (u)=0\right\}.\] It can be observed that \(\mathcal{N}_{\lambda}\) is a disjoint union of the following sets, \[\mathcal{N}_{\lambda}^{+}=\{u\in\mathcal{N}_{\lambda}: D_{uu}\mathcal{E}_{\lambda}(u)(u,u)>0\},\mathcal{N}_{\lambda}^{-}=\{u\in \mathcal{N}_{\lambda}:D_{uu}\mathcal{E}_{\lambda}(u)(u,u)<0\},\] \[\mathcal{N}_{\lambda}^{0}=\{u\in\mathcal{N}_{\lambda}: D_{uu}\mathcal{E}_{\lambda}(u)(u,u)=0\}.\] Consider the fibering functions \(\psi_{\lambda,u}:[0,\infty)\rightarrow\mathbb{R}\) for every \(u\in X\), defined as \(\psi_{\lambda,u}(t)=\mathcal{E}_{\lambda}(tu)\), that is, \[\psi_{\lambda,u}(t)=\frac{at^{2}}{2}\|u\|_{X}^{2}+\frac{bt^{2\theta}}{2\theta} \|u\|_{X}^{2\theta}-\frac{\lambda t^{\gamma}}{\gamma}\int_{\Omega}f|u|^{\gamma }dx-\frac{t^{p}}{p}\int_{\Omega}g|u|^{p}dx.\] Note that \(u\in\mathcal{N}_{\lambda}\) if and only if \(\psi_{\lambda,u}^{\prime}(1)=0\) and, in general, \(tu\in\mathcal{N}_{\lambda}\) if and only if \(\psi_{\lambda,u}^{\prime}(t)=0\). Therefore, we can read Nehari submanifolds in terms of \(t=1\) being local maxima or local minima or saddle point of \(\psi_{\lambda,u}(t)\) as follows \[\mathcal{N}_{\lambda}^{+}=\{u\in\mathcal{N}_{\lambda}:\psi_{ \lambda,u}^{\prime\prime}(1)>0\},\,\mathcal{N}_{\lambda}^{-}=\{u\in\mathcal{N }_{\lambda}:\psi_{\lambda,u}^{\prime\prime}(1)<0\},\] \[\mathcal{N}_{\lambda}^{0}=\{u\in\mathcal{N}_{\lambda}:\psi_{ \lambda,u}^{\prime\prime}(1)=0\}.\] The presence of sign changing weights as well as the nature of Kirchhoff term can govern the behaviour of fibering functions and therefore we split the discussion based on the sign of the integral involving sign changing weight \(g(x)\) and the nature of Kirchhoff term. For, we define the following \[\mathcal{C}^{+}=\{u\in X:\int_{\Omega}g|u|^{p}dx>0\}\;\;\text{and}\;\;\mathcal{C}^{- }=\{u\in X:\int_{\Omega}g|u|^{p}dx\leq 0\}.\] In the next section we give a characterization of extremal parameter \(\lambda_{a,b}^{*}\) for the both degenerate and non-degenerate Kirchhoff term. ## 3. Characterization of extremal parameter For \(u\in\mathcal{C}^{+}\), by solving following system of two nonlinear equations in variables \(t\) and \(\lambda\) \[\left\{\begin{aligned} a\|tu\|_{X}^{2}+b\|tu\|_{X}^{2\theta}- \lambda\int_{\Omega}f|tu|^{\gamma}dx-t^{p}\int_{\Omega}g(x)|u|^{p}dx& =0,\\ 2a\|tu\|_{X}^{2}+2b\theta\|tu\|_{X}^{2\theta}-\lambda\gamma\int_ {\Omega}f|tu|^{\gamma}dx-p\ t^{p}\int_{\Omega}g(x)|u|^{p}dx&=0, \end{aligned}\right. \tag{3.1}\] we can find \((t_{a,b}(u),\lambda_{a,b}(u))\) uniquely. 
In fact, by eliminating \(\lambda\) from the above equations we can see \(t_{a,b}(u)\) as the unique zero of the following scalar function \[m_{u}(t)=a(2-\gamma)t^{2}\|u\|_{X}^{2}+b(2\theta-\gamma)t^{2\theta}\|u\|_{X}^{2\theta}-(p-\gamma)t^{p}\int_{\Omega}g(x)|u|^{p}dx,\] and hence we have a unique \(\lambda_{a,b}(u)\), given as \[\lambda_{a,b}(u)=\frac{a(p-2)\|t_{a,b}(u)u\|_{X}^{2}+b(p-2\theta)\|t_{a,b}(u)u\|_{X}^{2\theta}}{(p-\gamma)\int_{\Omega}f|t_{a,b}(u)u|^{\gamma}dx}.\] In the degenerate Kirchhoff case, we can get an explicit expression for the solution of the above system (3.1) as \[t_{0,1}(u):=t(u)=\left(\frac{(2\theta-\gamma)\|u\|_{X}^{2\theta}}{(p-\gamma)\int_{\Omega}g(x)|u|^{p}dx}\right)^{\frac{1}{p-2\theta}}\] and \[\lambda_{0,1}(u):=\lambda(u)=\left(\frac{p-2\theta}{p-\gamma}\frac{\|u\|_{X}^{2\theta}}{\int_{\Omega}f|u|^{\gamma}dx}\right)\left(\frac{2\theta-\gamma}{p-\gamma}\frac{\|u\|_{X}^{2\theta}}{\int_{\Omega}g(x)|u|^{p}dx}\right)^{\frac{2\theta-\gamma}{p-2\theta}}.\] For \(a\geq 0,b>0\) define the extremal value \[\lambda_{a,b}^{*}=\inf_{X\cap\mathcal{C}^{+}}\lambda_{a,b}(u). \tag{3.2}\] In particular, when \(a=0,b=1\), we denote \(\lambda^{*}=\lambda_{0,1}^{*}\). The following proposition is a direct consequence of the compact fractional Sobolev embeddings. **Proposition 3.1**.: _The function \(\lambda_{a,b}(u)\) is continuous, 0-homogeneous and unbounded from above. Moreover, \(\lambda_{a,b}^{*}>0\) and there exists \(u\in\mathcal{C}^{+}\) such that \(\lambda_{a,b}^{*}=\lambda_{a,b}(u)\)._ **Remark 3.1**.: _The function \(\lambda_{a,b}(u)\) is also weakly lower semicontinuous. For the degenerate case, this follows directly from the weak lower semi-continuity of the norm. For the non-degenerate case, let \(u_{n}\rightharpoonup u\) in \(\mathcal{C}^{+}\). If \(\|u_{n}\|_{X}\to\|u\|_{X}\), then from the uniqueness of the solution of the system in (3.1), we get \(t(u_{n})\to t(u)\) and \(\lambda(u_{n})\to\lambda(u)\). If not, then we have \(\|u\|_{X}<\liminf_{n\to\infty}\|u_{n}\|_{X}\). Therefore, \(0=\psi_{\lambda_{a,b},u}^{\prime}(t(u))<\liminf_{n\to\infty}\psi_{\lambda_{a,b},u_{n}}^{\prime}(t(u)),\) i.e., \(\psi_{\lambda,u_{n}}^{\prime}(t(u))>0\) for large values of \(n\). Thus the graph of \(\psi_{\lambda_{a,b},u_{n}}\) will be of the type of fig 2(b) for large values of \(n\). Equivalently, \(\lambda_{a,b}(u)<\lambda_{a,b}(u_{n})\) for large \(n\), or \(\lambda_{a,b}(u)\leq\liminf_{n\to\infty}\lambda_{a,b}(u_{n})\)._ In the following proposition, we show that the Nehari decompositions are non-empty. **Proposition 3.2**.: _For a given \(u\in X\) and \(a>0,b>0\) with \(1<\gamma<2\) or \(a=0,b>0\) with \(2<\gamma<2\theta\) there are the following two cases:_ * _If_ \(u\in\mathcal{C}^{-}\)_, then for any_ \(\lambda>0\)_, there exists a unique_ \(t^{+}_{\lambda}(u)>0\) _such that_ \(t^{+}_{\lambda}u\in\mathcal{N}^{+}_{\lambda}\)_._ * _If_ \(u\in\mathcal{C}^{+}\)_, then we have the following three situations._ * _If_ \(0<\lambda<\lambda_{a,b}(u)\)_, there exist unique_ \(0<t^{+}_{\lambda}(u)<t^{-}_{\lambda}(u)\) _such that_ \(t^{+}_{\lambda}u\in\mathcal{N}^{+}_{\lambda}\) _and_ \(t^{-}_{\lambda}u\in\mathcal{N}^{-}_{\lambda}\)_._ * _If_ \(\lambda=\lambda_{a,b}(u)\)_, then there exists a unique_ \(t^{0}_{\lambda}(u)>0\) _such that_ \(t^{0}_{\lambda}u\in\mathcal{N}^{0}_{\lambda}\)_._ * _If_ \(\lambda>\lambda_{a,b}(u)\)_, then_ \(tu\not\in\mathcal{N}_{\lambda}\) _for any_ \(t>0\)_. In particular,_ \(u\not\in\mathcal{N}_{\lambda}\)_._ Proof.: The proof is divided into the degenerate and non-degenerate cases as follows. 
**Case 1:** (Degenerate Kirchhoff case) To see the graph of the fibering map \(\psi_{\lambda,u}\), define a map \(\Phi_{u}:\mathbb{R}^{+}\to\mathbb{R}\) such that \[\Phi_{u}(t)=t^{2\theta-\gamma}\|u\|_{X}^{2\theta}-t^{p-\gamma}\int_{\Omega}g|u|^{p}dx,\] which, when \(u\in\mathcal{C}^{+}\), has a unique critical point, obtained by solving \(\Phi^{\prime}_{u}(t)=0\), where \[\Phi^{\prime}_{u}(t)=(2\theta-\gamma)t^{2\theta-\gamma-1}\|u\|_{X}^{2\theta}-(p-\gamma)t^{p-\gamma-1}\int_{\Omega}g|u|^{p}dx.\] Moreover, as \(\lim_{t\to 0^{+}}\Phi_{u}(t)=0\) and \(\lim_{t\to\infty}\Phi_{u}(t)=-\infty\), the graph of \(\Phi_{u}\) will be as shown in fig 1(b). When \(u\in\mathcal{C}^{-}\), \(\Phi^{\prime}_{u}(t)>0\) for all \(t\geq 0\), hence \(\Phi_{u}\) has no critical point, leading to the graph shown in fig 1(a). **Case 2:** (Non-degenerate Kirchhoff case) For a given \(u\in X\setminus\{0\}\), define a map \(\Phi_{u}:\mathbb{R}^{+}\to\mathbb{R}\) such that \[\Phi_{u}(t)=at^{2-\gamma}\|u\|_{X}^{2}+bt^{2\theta-\gamma}\|u\|_{X}^{2\theta}-t^{p-\gamma}\int_{\Omega}g|u|^{p}dx.\] Note that \(tu\in\mathcal{N}_{\lambda}\) (or \(t\) is a critical point of \(\psi_{\lambda,u}\)) if and only if the following equation has a scalar solution in \(t\): \[\Phi_{u}(t)=\lambda\int_{\Omega}f|u|^{\gamma}dx. \tag{3.3}\] In order to look for solutions of the above scalar equation (3.3), we analyze the behaviour of \(\Phi_{u}(t)\). Indeed, \[\Phi^{\prime}_{u}(t)=a(2-\gamma)t^{1-\gamma}\|u\|_{X}^{2}+b(2\theta-\gamma)t^{2\theta-\gamma-1}\|u\|_{X}^{2\theta}-(p-\gamma)t^{p-\gamma-1}\int_{\Omega}g|u|^{p}dx.\] When \(u\in\mathcal{C}^{-}\), \(\Phi^{\prime}_{u}(t)>0\) for all \(t\geq 0\) and hence (3.3) has a unique solution for every \(\lambda>0\). Consequently, there is a unique projection of \(u\), namely \(t_{\lambda}^{+}u\in\mathcal{N}_{\lambda}^{+}\) (since at critical points of \(\psi_{\lambda,u}\) one has \(\psi^{\prime\prime}_{\lambda,u}(t)=t^{\gamma-1}\Phi^{\prime}_{u}(t)\)). In order to see the behaviour of \(\Phi_{u}(t)\) when \(u\in\mathcal{C}^{+}\), we can rewrite \(\Phi^{\prime}_{u}(t)=t^{1-\gamma}\mathcal{H}_{u}(t)\), where \[\mathcal{H}_{u}(t)=a(2-\gamma)\|u\|_{X}^{2}+b(2\theta-\gamma)t^{2\theta-2}\|u\|_{X}^{2\theta}-(p-\gamma)t^{p-2}\int_{\Omega}g|u|^{p}dx.\] Then \[\mathcal{H}^{\prime}_{u}(t)=b(2\theta-2)(2\theta-\gamma)t^{2\theta-3}\|u\|_{X}^{2\theta}-(p-2)(p-\gamma)t^{p-3}\int_{\Omega}g|u|^{p}dx.\] Thus one can observe that there exists a unique critical point \(t^{*}>0\) such that \(\mathcal{H}^{\prime}_{u}(t^{*})=0\), where \[t^{*}=\left(\frac{b(2\theta-2)(2\theta-\gamma)\|u\|_{X}^{2\theta}}{(p-2)(p-\gamma)\int_{\Omega}g|u|^{p}dx}\right)^{\frac{1}{p-2\theta}}.\] When \(u\in\mathcal{C}^{+}\), \(\mathcal{H}^{\prime}_{u}(t)>0\) as \(t\to 0^{+}\) and \(\mathcal{H}^{\prime}_{u}(t)\to-\infty\) as \(t\to\infty\). Since \(\mathcal{H}_{u}(t)>0\) as \(t\to 0^{+}\) and \(\mathcal{H}_{u}(t)\to-\infty\) as \(t\to\infty\), there exists a unique \(t_{*}>t^{*}>0\) such that \(\mathcal{H}_{u}(t_{*})=0\). Therefore \(\Phi_{u}(t)\) has a global maximum at the unique point \(t=t_{*}\). Consequently, equation (3.3) has exactly two solutions under suitable control on \(\lambda\). In other words, there are unique projections of \(u\), namely \(t_{\lambda}^{+}u\) and \(t_{\lambda}^{-}u\), such that \(t_{\lambda}^{+}u\in\mathcal{N}_{\lambda}^{+}\) and \(t_{\lambda}^{-}u\in\mathcal{N}_{\lambda}^{-}\) for \(\lambda<\lambda_{a,b}(u)\). The result of this proposition is depicted in fig 2 above. 
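Before turning to the figures, it is worth making the threshold role of \(\lambda_{0,1}(u)\) explicit in the degenerate case; the following short computation is a sketch added for clarity, using only the definitions above. The unique critical point of \(\Phi_{u}\) is exactly \(t(u)=t_{0,1}(u)\), and its maximal value is \[\max_{t>0}\Phi_{u}(t)=\Phi_{u}(t(u))=\frac{p-2\theta}{p-\gamma}\,t(u)^{2\theta-\gamma}\|u\|_{X}^{2\theta}=\lambda_{0,1}(u)\int_{\Omega}f|u|^{\gamma}dx.\] Hence, for \(u\in\mathcal{C}^{+}\), equation (3.3) admits exactly two solutions precisely when \(\lambda\int_{\Omega}f|u|^{\gamma}dx<\max_{t>0}\Phi_{u}(t)\), i.e. when \(\lambda<\lambda_{0,1}(u)\), exactly one when \(\lambda=\lambda_{0,1}(u)\), and none when \(\lambda>\lambda_{0,1}(u)\), in agreement with Proposition 3.2.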
Observe that fig 2\((a)\) corresponds to the part \((i)\) and fig 2\((b)-2(d)\) are related with parts \((ii)(a)-(c)\) respectively. **Remark 3.2**.: _It is clear in light of the above results, that \(\mathcal{N}_{\lambda}^{-}\neq\emptyset\) and \(\mathcal{N}_{\lambda}^{+}\neq\emptyset\) for all \(\lambda>0\). Moreover for \(0<\lambda<\lambda_{a,b}^{*}\), \(\mathcal{N}_{\lambda}^{0}=\emptyset\). By setting, \(N(u)=D_{u}\mathcal{E}_{\lambda}(u)u\), we have \(D_{u}N(u)(u)=D_{uu}\mathcal{E}_{\lambda}(u)(u,u)+D_{u}\mathcal{E}_{\lambda}(u) u=D_{uu}\mathcal{E}_{\lambda}(u)(u,u)\neq 0\) as \(u\in\mathcal{N}_{\lambda}\) and \(\mathcal{N}_{\lambda}^{0}=\emptyset\) for \(\lambda<\lambda_{a,b}^{*}\). Therefore from implicit function theorem one can view \(\mathcal{N}_{\lambda}\) as \(C^{1}\) manifold of codimension one when \(\lambda<\lambda_{a,b}^{*}\). But once \(\mathcal{N}_{\lambda}^{0}\neq\emptyset\), we _can have a \(u\in\mathcal{N}_{\lambda}\) such that \(D_{uu}\mathcal{E}_{\lambda}(u)(u,u)=0\), in that case the set \(\mathcal{N}_{\lambda}\) will be no longer a manifold. In coming section we will see that Nehari decomposition \(\mathcal{N}_{\lambda}^{0}\neq\emptyset\) when \(\lambda\) crosses the extremal value._ The main results of this paper are stated in the form of the following theorems. **Theorem 3.1**.: _Let \(g<0\) near \(\partial\Omega\), \(M\) be defined as \(M(t)=a+bt^{\theta-1}\) with \(\theta>1\) satisfying \(2\theta<p<2_{s}^{*}\) and \(a>0\) then for \(\gamma\in(1,2)\) there exist at least two positive solutions for the problem \((P_{\lambda})\) for \(\lambda\in(0,\lambda_{a,b}^{*}+\epsilon)\), where \(\epsilon>0\) is sufficiently small._ In the sequel for the degenerate Kirchhoff case we established the multiplicity results even when the non-linearity looses the concave-convex behaviour with no control on the sign of \(g\) near boundary. Precisely, we have the following result. **Theorem 3.2**.: _Let \(M(t)=t^{\theta-1},\theta>1\) satisfying \(2\theta<p<2_{s}^{*}\), then for \(\gamma\in(2,2\theta)\) there exist two positive solutions for \((P_{\lambda})\) when \(\lambda\in(0,\lambda^{*}+\bar{\epsilon})\), where \(\bar{\epsilon}>0\) is sufficiently small._ **Remark 3.3**.: _The results obtained in this paper contributes to the literature in the following way. We have complimented the work of [10, 23] for nonlocal fractional Kirchhoff problem and the work of [10] has further extended for the parameter lying beyond the extremal parameter value. Even when the nonlinearity lacks the sublinear-superlinear combination, we have shown multiplicity results for degenerate Kirchhoff case._ Observe that, from Proposition 3.2, \(u\) may has its projection in \(\mathcal{N}_{\lambda}^{-}\) only if \(u\in\mathcal{C}^{+}\) but \(\mathcal{N}_{\lambda}^{+}\) can have projections of \(u\) irrespective of \(u\) lying in \(\mathcal{C}^{+}\) or not. We require this distinction on later stages while studying the minimization problems. Therefore, for each \(\lambda>0\), we define the following sets noting this distinction \[\hat{\mathcal{N}}_{\lambda} =\left\{u\in X\setminus\left\{0\right\}:u\in\mathcal{C}^{+}\text { and }\lambda<\lambda_{a,b}(u)\right\},\] \[\hat{\mathcal{N}}_{\lambda}^{+} =\left\{u\in X\setminus\left\{0\right\}:u\in\mathcal{C}^{-} \right\}. 
\tag{3.4}\] **Remark 3.4**.: _One can observe that when \(u\in\hat{\mathcal{N}}_{\lambda}\cup\hat{\mathcal{N}}_{\lambda}^{+}\) then \(tu\in\hat{\mathcal{N}}_{\lambda}\cup\hat{\mathcal{N}}_{\lambda}^{+}\) as \(\lambda_{a,b}(u)\) is \(0-\) homogeneous and thus we can say that the set \(\hat{\mathcal{N}}_{\lambda}\cup\hat{\mathcal{N}}_{\lambda}^{+}\) represents a cone generated by \(\mathcal{N}_{\lambda}^{+}\cup\mathcal{N}_{\lambda}^{-}\), that is,_ \[\hat{\mathcal{N}}_{\lambda}\cup\hat{\mathcal{N}}_{\lambda}^{+}=\{tu:t>0,u\in \mathcal{N}_{\lambda}^{+}\cup\mathcal{N}_{\lambda}^{-}\}.\] To prove our main result, for all \(\lambda>0\), let us consider the following minimization problems \[\hat{\mathcal{J}}_{\lambda}^{-}=\inf\{\mathcal{J}_{\lambda}^{-}(u):u\in \mathcal{N}_{\lambda}^{-}\}\ \ \text{and}\ \ \ \hat{\mathcal{J}}_{\lambda}^{+}=\inf\{\mathcal{J}_{\lambda}^{+}(u):u\in \mathcal{N}_{\lambda}^{+}\},\] where functions \(\mathcal{J}_{\lambda}^{-}:\hat{\mathcal{N}}_{\lambda}\to\mathbb{R}\) and \(\mathcal{J}_{\lambda}^{+}:\hat{\mathcal{N}}_{\lambda}\cup\hat{\mathcal{N}}_{ \lambda}^{+}\to\mathbb{R}\) are defined as follows \[\mathcal{J}_{\lambda}^{-}(u)=\mathcal{E}_{\lambda}(t_{\lambda}^{-}(u)u)\ \ \text{and}\ \ \ \mathcal{J}_{\lambda}^{+}(u)=\mathcal{E}_{\lambda}(t_{\lambda}^{+}(u)u).\] We have the following observations about \(\mathcal{J}_{\lambda}^{\pm}\) as \(1<\gamma<p\) and \(2<2\theta<p\). **Proposition 3.3**.: _The functional \(\mathcal{J}_{\lambda}^{\pm}\) are coercive on Nehari set._ The following lemma is about the behaviour of energy functional with respect to the parameter \(\lambda\). **Lemma 3.1**.: _Let \(u\in X\setminus\left\{0\right\}\). Let \(I\) be an open interval in \(\mathbb{R}^{+}\) such that \(t_{\lambda}^{\pm}\) are well defined for all \(\lambda\in I\). Then,_ * _the functions_ \(I\ni\lambda\to t_{\lambda}^{\pm}(u)\) _are_ \(C^{1}\)_. Moreover,_ \(I\ni\lambda\to t_{\lambda}^{-}(u)\) _is decreasing while_ \(I\ni\lambda\to t_{\lambda}^{+}(u)\) _is increasing._ * _the functions_ \(I\ni\lambda\to\mathcal{J}_{\lambda}^{\pm}(u)\) _are_ \(C^{1}\) _and decreasing._ Proof.: The proof follows from implicit function theorem. **Remark 3.5**.: _When \(I=(0,\lambda^{*}_{a,b})\), both the claims in Lemma 3.1 remain true, independent of \(u\in X\)._ In order to prove our main result we have designed the following three sections dealing with the case \(\lambda<\lambda^{*}_{a,b},\lambda=\lambda^{*}_{a,b}\) and \(\lambda>\lambda^{*}_{a,b}\) respectively. ## 4. **Existence of solutions when \(0<\lambda<\lambda^{*}_{a,b}\)** In order to prove Theorem 3.1 first we have proved existence of at least two solutions of \((P_{\lambda})\) for \(\lambda\in(0,\lambda^{*}_{a,b})\). For that, we begin by showing that the functional \(\mathcal{E}_{\lambda}\) achieves its minimizers in the Nehari decomposition \(\mathcal{N}^{-}_{\lambda}\) and \(\mathcal{N}^{+}_{\lambda}\) for all \(\lambda\in(0,\lambda^{*}_{a,b})\) for both degenerate and non-degenerate cases. We close this section by verifying that these minimizers are the solutions to the problem in \((P_{\lambda})\). **Lemma 4.1**.: _For each \(0<\lambda<\lambda^{*}_{a,b}\), there exists \(v_{\lambda}\in\mathcal{N}^{+}_{\lambda}\) such that \(\mathcal{J}^{+}_{\lambda}(v_{\lambda})=\hat{\mathcal{J}}^{+}_{\lambda}\)._ Proof.: Let \(\{v_{n}\}\subset\mathcal{N}^{+}_{\lambda}\) be an arbitrary minimizing sequence for \(\mathcal{J}^{+}_{\lambda}\), that is \(\mathcal{J}^{+}_{\lambda}(v_{n})\to\hat{\mathcal{J}}^{+}_{\lambda}\). 
First we claim that \(\{v_{n}\}\) is bounded. In fact, when \(v_{n}\in\mathcal{N}^{+}_{\lambda}\) using Holder inequality, we have \[b(p-2\theta)\|v_{n}\|_{X}^{2\theta}<a(p-2)\|v_{n}\|_{X}^{2}+b(p-2\theta)\|v_{n }\|_{X}^{2\theta}<\lambda(p-\gamma)S^{-\frac{\gamma}{2}}\|f\|_{\frac{2\pi}{2 ^{2}_{s}-\gamma}}\|v_{n}\|_{X}^{\gamma}, \tag{4.1}\] implies that sequence \(\{v_{n}\}\) is bounded. Moreover, one can observe that when \(v\in\mathcal{N}^{-}_{\lambda}\), we have \[\mathcal{E}_{\lambda}(v) =a\left(\frac{1}{2}-\frac{1}{p}\right)\|v_{n}\|_{X}^{2}+b\left( \frac{1}{2\theta}-\frac{1}{p}\right)\|v_{n}\|_{X}^{2\theta}-\left(\frac{1}{ \gamma}-\frac{1}{p}\right)\lambda\int_{\Omega}f|v_{n}|^{\gamma}dx\] \[<a(p-2)\left(\frac{\gamma-2}{2p\gamma}\right)\|v\|_{X}^{2}+b(p-2 \theta)\left(\frac{\gamma-2\theta}{2p\theta\gamma}\right)\|v\|_{X}^{2\theta}.\] For for non-degenerate case (when \(\gamma\in(1,2)\)), we get \(\mathcal{E}_{\lambda}(v)<0.\) and hence, \(\hat{\mathcal{J}}^{+}_{\lambda}<0\). For degenerate case (when \(\gamma\in(2,2\theta)\)), we get \[\mathcal{E}_{\lambda}(v)<(p-2\theta)\left(\frac{\gamma-2\theta}{2p\theta \gamma}\right)\|v\|_{X}^{2\theta}<0,\] and hence \(\hat{\mathcal{J}}^{+}_{\lambda}<0\). Moreover, \(\hat{\mathcal{J}}^{+}_{\lambda}>-\infty\) from definition of \(\mathcal{E}_{\lambda}\). Thus up to a subsequence, \(v_{n}\rightharpoonup v_{\lambda}\geq 0\) in \(X\). Let us prove \(v_{\lambda}\not\equiv 0\). If \(v_{\lambda}\equiv 0\), then \(0=\mathcal{E}_{\lambda}(v_{\lambda})\leq\liminf\limits_{n\to\infty}\mathcal{E }_{\lambda}(v_{n})=\hat{\mathcal{J}}^{+}_{\lambda}<0\), a contradiction and hence, \(v_{\lambda}\not\equiv 0\). Observe that, irrespective of \(v_{\lambda}\in\mathcal{C}^{\pm}\), there exists \(t^{+}_{\lambda}(v)>0\) such that \(t^{+}_{\lambda}(v_{\lambda})v_{\lambda}\in\mathcal{N}^{+}_{\lambda}\) and \(\psi_{\lambda,v_{\lambda}}\) is decreasing in \((0,t^{+}_{\lambda}(v_{\lambda}))\) with \(\psi^{\prime}_{\lambda,v_{\lambda}}(t^{+}_{\lambda}(v_{\lambda}))=0\). Our claim is \(v_{n}\to v_{\lambda}\) in \(X\). Suppose on contrary \(v_{n}\rightharpoonup v\) then \(v_{n}\to v\) in \(L^{q}(\Omega)\) where \(q\in[1,2^{*}_{s})\) and \(|v_{n}(x)|\leq h(x)\) a.e. in \(\Omega\) for some \(h(x)\in L^{q}(\Omega)\). Under the assumptions \((F)\) and \((G)\), using Holder inequality and Lebesgue dominated convergence theorem, we obtain \[\int_{\Omega}f(|v_{n}|^{\gamma}-|v_{\lambda}|^{\gamma})\to 0,\ \ \int_{\Omega}g(x)|v_{n}(x)|^{p}\to\int_{\Omega}g(x)|v_{\lambda}(x)|^{p}.\] From weak lower semicontinuity of norm, \(\|v_{\lambda}\|_{X}<\liminf\nolimits_{n\to\infty}\|v_{n}\|_{X}\). 
Using all this information, we have \[\liminf\limits_{n\to\infty}\psi^{\prime}_{\lambda,v_{n}}(t^{+}_{ \lambda}(v_{\lambda}))=\liminf\limits_{n\to\infty}\left[a(t^{+}_{\lambda})(v_{ \lambda})\|v_{n}\|_{X}^{2}+b(t^{+}_{\lambda})^{2\theta-1}(v_{\lambda})\|v_{n }\|_{X}^{2\theta}\right]\] \[-\liminf\limits_{n\to\infty}\left[(t^{+}_{\lambda}(v_{\lambda}))^{ \gamma-1}\lambda\int_{\Omega}f|v_{n}|^{\gamma}dx+(t^{+}_{\lambda}(v_{\lambda}))^ {p-1}\int_{\Omega}g(x)|u_{n}(x)|^{p}\right]\] \[>a(t^{+}_{\lambda})(v_{\lambda})\|v_{\lambda}\|_{X}^{2}+b(t^{+}_{ \lambda})^{2\theta-1}(v_{\lambda})\|v_{\lambda}\|_{X}^{2\theta}-(t^{+}_{ \lambda}(v_{\lambda}))^{\gamma-1}\lambda\int_{\Omega}f|v_{\lambda}|^{\gamma}dx\] \[-(t^{+}_{\lambda}(v_{\lambda}))^{p-1}\int_{\Omega}g(x)|u_{ \lambda}(x)|^{p}\] \[=\psi^{\prime}_{\lambda,v_{\lambda}}(t^{+}_{\lambda}(v_{\lambda}))=0.\] Therefore, \(\psi^{\prime}_{\lambda,v_{n}}(t^{+}_{\lambda}(v_{\lambda}))>0\), for sufficiently large \(n\). Since \(v_{n}\in\mathcal{N}^{+}_{\lambda}\), by possible fibering map it is easy to see that \(\psi^{\prime}_{\lambda,v_{n}}(t)<0\) for \(0<t<1\) and \(\psi^{\prime}_{\lambda,v_{n}}(1)=0\) (\(t^{+}_{\lambda}(v_{n})=1\)), therefore \(t^{+}_{\lambda}(v_{\lambda})>1=t^{+}_{\lambda}(v_{n})\) for large \(n\). Thus, we have \[\hat{\mathcal{J}}^{+}_{\lambda}\leq\mathcal{E}_{\lambda}((t^{+}_{\lambda}(v_{ \lambda})v_{\lambda}))=\mathcal{J}^{+}_{\lambda}(v_{\lambda})<\liminf_{n\to \infty}\mathcal{J}^{+}_{\lambda}(v_{n})=\hat{\mathcal{J}}^{+}_{\lambda},\] which is an absurd. Hence \(v_{n}\to v_{\lambda}\) in \(X\). As a consequence of strong convergence, we have \[\lim_{n\to\infty}\psi^{{}^{\prime}}_{\lambda,v_{n}}(1)=\psi^{{}^{\prime}}_{ \lambda,v_{\lambda}}(1)=0\ \ \text{and}\ \ \lim_{n\to\infty}\psi^{{}^{\prime\prime}}_{\lambda,v_{n}}(1)=\psi^{{}^{ \prime\prime}}_{\lambda,v_{\lambda}}(1)\geq 0,\] and since \(\mathcal{N}^{0}_{\lambda}=\emptyset\) for \(0<\lambda<\lambda^{*}_{a,b}\), we have \(v_{\lambda}\in\mathcal{N}^{+}_{\lambda}\) and \(\mathcal{J}^{+}_{\lambda}(v_{\lambda})=\hat{\mathcal{J}}^{+}_{\lambda}\). **Lemma 4.2**.: _For each \(0<\lambda<\lambda^{*}_{a,b}\), there exists \(u_{\lambda}\in\mathcal{N}^{-}_{\lambda}\) such that \(\mathcal{J}^{-}_{\lambda}(u_{\lambda})=\hat{\mathcal{J}}^{-}_{\lambda}\)._ Proof.: The proof is similar to Lemma 4.1. In order to show that the minimizers obtained in Lemma 4.1 and Lemma 4.2 are the weak solutions of \((P_{\lambda})\), we make use of Theorem 2.3 of [6] as for \(\lambda<\lambda_{a,b}\), the set \(\mathcal{N}^{0}_{\lambda}=\emptyset\). Therefore the above non-trival minimizers of \(\mathcal{E}_{\lambda}\) are critical points of \(\mathcal{E}_{\lambda}\) in \(X\), equivalently weak solutions of \((P_{\lambda})\). ## 5. Existence of solutions when \(\lambda=\lambda^{*}_{a,b}\) In this section, we consider the case when the parameter \(\lambda\) takes the extremal value, that is, \(\lambda=\lambda^{*}_{a,b}\). Contrary to the case of \(\lambda<\lambda^{*}_{a,b}\), the set \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\) is non-empty. Hence the study of \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\) becomes important. In fact, we also explore the intersection of solution set of \((P_{\lambda^{*}_{a,b}})\) with \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\) in this section. **Proposition 5.1**.: _The set \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\neq\emptyset\) and \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}=\{u\in\mathcal{N}_{\lambda^{*}_{a,b}}\cap \mathcal{C}^{+}:\lambda_{a,b}(u)=\lambda^{*}_{a,b}\}\). 
Moreover, \(\mathcal{N}^{0}_{\lambda}\neq\emptyset\) for all \(\lambda\geq\lambda^{*}_{a,b}\)._ Proof.: From Proposition 3.1, the minimization problem in (3.2) has a solution in \(X_{0}\cap\mathcal{C}^{+}\), that is, \(\lambda_{a,b}(u)=\lambda^{*}_{a,b}\) for some \(u\in X\cap\mathcal{C}^{+}\). Using Proposition 3.2 there exists \(t^{0}_{\lambda^{*}_{a,b}}=t^{0}_{\lambda^{*}_{a,b}}(u)>0\) such that \(t^{0}_{\lambda^{*}_{a,b}}u\in\mathcal{N}^{0}_{\lambda^{*}_{a,b}=\lambda_{a,b}(u)}\) and hence it's nonempty. Since \(\lambda_{a,b}(u)\) is unbounded above, therefore for all \(\lambda\geq\lambda^{*}_{a,b}\) there exists \(u\in\mathcal{C}^{+}\) such that \(\lambda=\lambda_{a,b}(u)\) and by definition of \(\lambda_{a,b}(u)\), \(t(u)u\in\mathcal{N}^{0}_{\lambda_{a,b}(u)=\lambda}\), hence \(\mathcal{N}^{0}_{\lambda_{a,b}}\neq\emptyset\) for all \(\lambda\geq\lambda^{*}_{a,b}\). Next we are going to prove an important Lemma which will help us to conclude that the minimizers of the energy functional are not in \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\), in spite of \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\) being non-empty. **Proposition 5.2**.: _For any \(a>0\) and \(b>0\), the problem \((P_{\lambda^{*}_{a,b}})\) has no solution in \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\)._ Proof.: In order to give a proof of this Proposition, we first prove that for each \(u\in\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\), we have \[-2(a+\theta b\|u\|_{X}^{2\theta-2})(-\Delta)^{s}u-\lambda^{*}_{a,b}\gamma f|u|^{ \gamma-2}u-pg|u|^{p-2}u=0.\] In fact, for \(t(u)u\in\mathcal{N}_{\lambda}\), we have \[at^{2}\|u\|_{X}^{2}+bt^{2\theta}\|u\|_{X}^{2\theta}-\lambda t^{\gamma}\int_{ \Omega}f|u|^{\gamma}dx-t^{p}\int_{\Omega}g|u|^{p}dx=0.\] Differentiating above equation with respect to \(u\) then for all \(w\in X\), we \[t^{\prime}(u)(2at(u)\|u\|_{X}^{2}+2\theta b(t(u))^{2\theta-1}\|u\|_{X}^{2\theta }-\lambda(u)\gamma(t(u))^{\gamma-1}\int_{\Omega}f|u|^{\gamma}dx-p(t(u))^{p-1} \int_{\Omega}g|u|^{p}dx)\] \[-2a(t(u))^{2}(-\Delta)^{s}uw-2\theta b(t(u))^{2\theta}\|u\|_{X}^{2 \theta-2}(-\Delta)^{s}uw\] \[-\lambda(u)\gamma(t(u))^{\gamma}\int_{\Omega}f|u|^{\gamma-2}uwdx-( t(u))^{p}p\int_{\Omega}g|u|^{p-2}uwdx=0.\] As \(u\in\mathcal{N}^{0}_{\lambda^{*}_{a,b}=\lambda(u)}\) implies \(t(u)=1\), we get \[-2(a+\theta b\|u\|_{X}^{2\theta-2})(-\Delta)^{s}u-\lambda^{*}_{a,b}\gamma f|u| ^{\gamma-2}u-pg|u|^{p-2}u=0.\] Now if \(u\in X\) solves the problem \((P_{\lambda^{*}_{a,b}})\), we have \[-(a+b\|u\|_{X}^{2\theta-2})(-\Delta)^{s}u-\lambda^{*}_{a,b}f|u|^{\gamma-2}u-g| u|^{p-2}u=0.\] Now solving last two equations by eliminating \(\|u\|_{X}^{2\theta-2}\), we get \[-2a(\theta-1)(-\Delta)^{s}u=\lambda^{*}_{a,b}(2\theta-\gamma)f|u|^{\gamma-2}u +(2\theta-p)g|u|^{p-2}u,\] which implies, \[-(-\Delta)^{s}u=\lambda^{*}_{a,b}\frac{2\theta-\gamma}{2a(\theta-1)}f|u|^{ \gamma-2}u+\frac{2\theta-p}{2a(\theta-1)}g|u|^{p-2}u.\] Therefore near \(\partial\Omega\), we have \(-(-\Delta)^{s}u\geq 0\) leading to \[\lambda^{*}_{a,b}\gamma f|u|^{\gamma-2}u+pg|u|^{p-2}u =-\left(2a+2b\theta\|u\|_{X}^{2\theta-2}\right)(-\Delta)^{s}u\] \[\geq-(2a+2b\|u\|_{X}^{2\theta-2})(-\Delta)^{s}u\] \[=2\lambda^{*}_{a,b}f|u|^{\gamma-2}u+2g|u|^{p-2}u\] or \[\left(\frac{p-2}{2-\gamma}\right)g|u|^{p-\gamma}\geq\lambda^{*}_{a,b}f>0.\] Now using \(u(x)\to 0\) as \(x\to\partial\Omega\), left hand side of above inequality goes to zero while right hand side is bounded away from zero near boundary in the light of the assumption \((F)\) to give an absurd. 
**Remark 5.1**.: _In degenerate Kirchhoff case, after elimination of \((-\Delta)^{s}\) term, we get_ \[g|u|^{p-\gamma}=\lambda^{*}\frac{2\theta-\gamma}{p-2\theta}f. \tag{5.1}\] _Now, using \((F)\) and choosing \(x\) very close to boundary of \(\Omega\), one can observe that left hand side of (5.1) goes to 0, while the right hand side of (5.1) is finite and bounded away from zero, which gives us an absurd._ As a consequence of above Proposition, we have the following corollary. **Corollary 5.1**.: _The set \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\) is compact._ In order to find solutions of \((P_{\lambda^{*}})\) we take a sequence \(\lambda_{n}\) such that \(\lambda_{n}\uparrow\lambda^{*}_{a,b}\). As \((P_{\lambda_{n}})\) has solutions \(u_{\lambda_{n}}\in\mathcal{N}^{-}_{\lambda_{n}}\) and \(v_{\lambda_{n}}\in\mathcal{N}^{+}_{\lambda_{n}}\) we expect strong limits of these solution sequences to give rise to solutions of \((P_{\lambda^{*}_{a,b}})\). Based on the definition in equation (3.4), we have the following couple of propositions which are required to study the limiting behaviour in case of \(\lambda\uparrow\lambda^{*}_{a,b}\). **Proposition 5.3**.: _There holds,_ \[\overline{\mathcal{N}^{+}_{\lambda^{*}_{a,b}}\cup\mathcal{N}^{-}_{\lambda^{*} _{a,b}}}=\mathcal{N}^{+}_{\lambda^{*}_{a,b}}\cup\mathcal{N}^{-}_{\lambda^{*}_ {a,b}}\cup\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\cup\{0\}.\] Proof.: The proof of this proposition follows by taking a sequence \(v_{n}\in\mathcal{N}^{-}_{\lambda^{*}_{a,b}}\) converging strongly to \(v\) in \(X\) and using compact sobolev embeddings. **Remark 5.2**.: _Remark 3.4 together with Proposition 5.3 leads to the following observation_ \[\overline{\hat{\mathcal{N}}_{\lambda^{*}_{a,b}}\cup\hat{\mathcal{N}}^{+}_{\lambda^ {*}_{a,b}}}=\hat{\mathcal{N}}_{\lambda^{*}_{a,b}}\cup\hat{\mathcal{N}}^{+}_{ \lambda^{*}_{a,b}}\cup\{tu:t>0,t\in\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\}\cup\{0\}.\] Now we have necessary mathematical background to show the existence of at least two positive solutions of the problem \((P_{\lambda^{*}_{a,b}})\) taking advantage of multiplicity of solutions \((P_{\lambda})\) when \(\lambda<\lambda^{*}_{a,b}\) via limiting behaviour. Define the map \(t_{\lambda^{*}_{a,b}}:\overline{\hat{\mathcal{N}}_{\lambda^{*}_{a,b}}}\setminus \{0\}\to\mathbb{R}\) as \[t_{\lambda^{*}_{a,b}}(u)=\begin{cases}t^{-}_{\lambda^{*}_{a,b}}(u)&u\in\hat{ \mathcal{N}}_{\lambda^{*}_{a,b}}\\ t^{0}_{\lambda^{*}_{a,b}}(u)&\text{otherwise}\end{cases}\] and \(s_{\lambda^{*}_{a,b}}:\overline{\hat{\mathcal{N}}_{\lambda^{*}_{a,b}}\cup\hat {\mathcal{N}}^{+}_{\lambda^{*}_{a,b}}}\to\mathbb{R}\) by \[s_{\lambda^{*}_{a,b}}(v)=\begin{cases}t^{+}_{\lambda^{*}_{a,b}}(v)&v\in\hat{ \mathcal{N}}_{\lambda^{*}_{a,b}}\cup\hat{\mathcal{N}}^{+}_{\lambda^{*}_{a,b}} \\ t^{0}_{\lambda^{*}_{a,b}}(v)&\text{otherwise}.\end{cases}.\] The following result can be adopted from [23]. **Proposition 5.4**.: _The following conclusions hold:_ * _The functions_ \(s_{\lambda^{*}_{a,b}}\) _and_ \(t_{\lambda^{*}_{a,b}}\) _are continuous. 
Moreover, once_ \(u\notin\hat{\mathcal{N}}^{+}_{\lambda^{*}_{a,b}}\)_, we have_ \[\lim_{\lambda\uparrow\lambda^{*}_{a,b}}t^{-}_{\lambda}(u)=t_{\lambda^{*}_{a,b} }(u),\ \ \lim_{\lambda\uparrow\lambda^{*}_{a,b}}t^{+}_{\lambda}(u)=s_{\lambda^{*}_{a,b} }(u)\] _and_ \[\lim_{\lambda\uparrow\lambda^{*}_{a,b}}\mathcal{E}_{\lambda}(t^{-}_{\lambda}( u)u)=\mathcal{E}_{\lambda^{*}_{a,b}}(t_{\lambda^{*}_{a,b}}(u)u),\ \ \lim_{\lambda\uparrow\lambda^{*}_{a,b}}\mathcal{E}_{\lambda}(t^{+}_{\lambda}(u)u )=\mathcal{E}_{\lambda^{*}_{a,b}}(s_{\lambda^{*}_{a,b}}(u)u).\] * _The set_ \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\) _has empty interior in_ \(\mathcal{N}_{\lambda^{*}_{a,b}}\)_._ Now we are ready to discuss following minimization problems \[\hat{\mathcal{E}}^{-}_{\lambda^{*}_{a,b}}=\inf\{\mathcal{E}_{\lambda^{*}_{a,b} }(t_{\lambda^{*}_{a,b}}(u)u):u\in\mathcal{N}^{-}_{\lambda^{*}_{a,b}}\cup \mathcal{N}^{0}_{\lambda^{*}_{a,b}}\},\] and \[\hat{\mathcal{E}}^{+}_{\lambda^{*}_{a,b}}=\inf\{\mathcal{E}_{\lambda^{*}_{a,b} }(s_{\lambda^{*}_{a,b}}(v)v):v\in\mathcal{N}^{+}_{\lambda^{*}_{a,b}}\cup \mathcal{N}^{0}_{\lambda^{*}_{a,b}}\}.\] From Proposition 5.4 it follows that \(\hat{\mathcal{J}}^{\pm}_{\lambda^{*}_{a,b}}=\hat{\mathcal{E}}^{\pm}_{\lambda^{ *}_{a,b}}\). Next we observe behaviour of \((0,\lambda^{*}_{a,b}]\ni\lambda\mapsto\hat{\mathcal{J}}^{\pm}_{\lambda}\) in following proposition when \(\lambda\uparrow\lambda^{*}_{a,b}\). **Proposition 5.5**.: _The function \(\lambda\mapsto\hat{\mathcal{J}}^{\pm}_{\lambda}\) is decreasing for all \(\lambda\in(0,\lambda^{*}_{a,b}].\) Moreover,_ \[\lim_{\lambda\uparrow\lambda^{*}_{a,b}}\hat{\mathcal{J}}^{\pm}_{\lambda}=\hat {\mathcal{J}}^{\pm}_{\lambda^{*}_{a,b}}.\] Proof.: We know from Lemma 3.1 that \(\mathcal{J}^{\pm}_{\lambda}\) is decreasing on \((0,\lambda^{*}_{a,b})\), from there we can conclude that \(\hat{\mathcal{J}}^{\pm}_{\lambda}\) is decreasing on \((0,\lambda^{*}_{a,b})\). To show that \(\mathcal{J}^{\pm}_{\lambda}\) is decreasing on \((0,\lambda^{*}_{a,b}]\) take \(\lambda\in(0,\lambda^{*}_{a,b})\) arbitrary. Then for all \(u\in\mathcal{N}^{-}_{\lambda^{*}_{a,b}}\cup\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\) using decreasing behaviour of \(\mathcal{J}^{-}_{\lambda}\) and Proposition 5.4, we have \[\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}=\hat{\mathcal{E}}^{-}_{ \lambda^{*}_{a,b}}\leq\mathcal{E}_{\lambda^{*}_{a,b}}(t_{\lambda^{*}_{a,b}}(u)u) =\lim_{\Lambda\downarrow\lambda^{*}_{a,b}}\mathcal{E}_{\Lambda}(t ^{-}_{\Lambda}(u)u)=\lim_{\Lambda\downarrow\lambda^{*}_{a,b}}\mathcal{J}^{-}_{ \Lambda}(u)\] \[<\lim_{\Lambda\downarrow\lambda^{*}_{a,b}}\mathcal{J}^{-}_{ \lambda^{*}_{a,b}}(u)=\mathcal{J}^{-}_{\lambda^{*}_{a,b}}(u)<\mathcal{J}^{-}_{ \lambda}(u),\] and hence \(\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{-}\leq\hat{\mathcal{J}}_{\lambda}^{-}\). Similarly it follows that \(\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{+}\leq\hat{\mathcal{J}}_{\lambda}^{+}\). To prove remaining part take \(\lambda_{n}\in(0,\lambda_{a,b}^{*}]\) such that \(\lambda_{n}\uparrow\lambda_{a,b}^{*}\). Then \(\hat{\mathcal{J}}_{\lambda_{n}}^{-}\geq\hat{\mathcal{J}}_{\lambda_{a,b}^{-}}^ {-}\). Assume on contrary \(\hat{\mathcal{J}}_{\lambda_{n}}^{-}\to J>\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^ {-}\) and suppose that there exists \(\delta>0\) such that, \(J-\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{-}\geq\delta\). 
Choose \(\beta>0\) such that \(2\beta<\delta\) and \(u(\beta)\in\mathcal{N}_{\lambda_{a,b}^{*}}^{-}\) such that \(\mathcal{J}_{\lambda_{a,b}^{*}}^{-}(u(\beta))-\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{-}\leq\beta\). Now, using the continuity of \(\mathcal{J}_{\lambda}^{-}\) when \(\lambda<\lambda_{a,b}^{*}\), we have \[0\leq\mathcal{J}_{\lambda_{n}}^{-}(u(\beta))-\mathcal{J}_{\lambda_{a,b}^{*}}^{-}(u(\beta))\leq\beta\] for large values of \(n\). Thus, \[\hat{\mathcal{J}}_{\lambda_{n}}^{-}\leq\mathcal{J}_{\lambda_{n}}^{-}(u(\beta))\leq\mathcal{J}_{\lambda_{a,b}^{*}}^{-}(u(\beta))+\beta\leq\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{-}+2\beta\leq J-\delta+2\beta<J.\] Therefore, letting \(n\to\infty\), we obtain \(J<J\), which is absurd. Hence \(J=\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{-}\). Similarly, we can show that \(\lim_{\lambda\uparrow\lambda_{a,b}^{*}}\hat{\mathcal{J}}_{\lambda}^{+}=\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{+}\). **Lemma 5.1**.: _There exist \(u_{\lambda_{a,b}^{*}}\in\mathcal{N}_{\lambda_{a,b}^{*}}^{-}\) and \(v_{\lambda_{a,b}^{*}}\in\mathcal{N}_{\lambda_{a,b}^{*}}^{+}\) such that \(\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{-}=\mathcal{J}_{\lambda_{a,b}^{*}}^{-}(u_{\lambda_{a,b}^{*}})\) and \(\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{+}=\mathcal{J}_{\lambda_{a,b}^{*}}^{+}(v_{\lambda_{a,b}^{*}})\)._ Proof.: We already know from Lemmas 4.1 and 4.2 that there exist \(u_{\lambda_{n}}\in\mathcal{N}_{\lambda_{n}}^{-}\) and \(v_{\lambda_{n}}\in\mathcal{N}_{\lambda_{n}}^{+}\) such that \(\mathcal{J}_{\lambda_{n}}^{-}(u_{\lambda_{n}})=\hat{\mathcal{J}}_{\lambda_{n}}^{-}\) and \(\mathcal{J}_{\lambda_{n}}^{+}(v_{\lambda_{n}})=\hat{\mathcal{J}}_{\lambda_{n}}^{+}\) respectively when \(\lambda_{n}<\lambda_{a,b}^{*}\) for each \(n\in\mathbb{N}\). To show the existence of minimizers for \(\mathcal{E}_{\lambda_{a,b}^{*}}\) we pass to the limit \(\lambda_{n}\uparrow\lambda_{a,b}^{*}\) in \(\hat{\mathcal{J}}_{\lambda_{n}}^{\pm}\). We know that for each \(\lambda_{n}<\lambda_{a,b}^{*}\) we can get a sequence \(\{u_{\lambda_{n}}\}\subset\mathcal{N}_{\lambda_{n}}^{-}\) such that \(\hat{\mathcal{J}}_{\lambda_{n}}^{-}=\mathcal{J}_{\lambda_{n}}^{-}(u_{\lambda_{n}})\). Moreover, \(\{u_{\lambda_{n}}\}\) is bounded; otherwise, if \(\|u_{\lambda_{n}}\|_{X}\to\infty\), then \(\infty>\lim\hat{\mathcal{J}}_{\lambda_{n}}^{-}\geq\infty\), which is absurd. Therefore, up to a subsequence, we get \[\left\{\begin{array}{ll}u_{\lambda_{n}}\rightharpoonup u_{\lambda_{a,b}^{*}}&\text{ in }X,\\ u_{\lambda_{n}}\to u_{\lambda_{a,b}^{*}}&\text{ in }L^{q}(\Omega)\ \forall\ q\in[1,2^{*}_{s}),\\ u_{\lambda_{n}}\to u_{\lambda_{a,b}^{*}}&\text{ a.e. in }\Omega,\end{array}\right.\] with \(u_{\lambda_{a,b}^{*}}\geq 0\). Our claim is that \(u_{\lambda_{a,b}^{*}}\not\equiv 0\) and \(u_{\lambda_{a,b}^{*}}\in\mathcal{C}^{+}\). Otherwise, we would have \(\|u_{\lambda_{n}}\|_{X}\to 0\), which is absurd. 
Now, since \(u_{\lambda_{n}}\) is a solution of \((P_{\lambda_{n}})\), for all \(\phi\in X\) we have \[(a+b\|u_{\lambda_{n}}\|_{X}^{2(\theta-1)})\iint_{\mathbb{R}^{2N}}\frac{(u_{\lambda_{n}}(x)-u_{\lambda_{n}}(y))(\phi(x)-\phi(y))}{|x-y|^{N+2s}}dxdy-\lambda_{n}\int_{\Omega}fu_{\lambda_{n}}^{\gamma-1}\phi dx-\int_{\Omega}gu_{\lambda_{n}}^{p-1}\phi dx=0.\] Taking \(u_{\lambda_{n}}-u_{\lambda_{a,b}^{*}}\) as a test function in the above equation and supposing \(\|u_{\lambda_{n}}\|_{X}\to\alpha>0\), we get \[(a+b\|u_{\lambda_{n}}\|_{X}^{2(\theta-1)})\iint_{\mathbb{R}^{2N}}\frac{|u_{\lambda_{n}}(x)-u_{\lambda_{n}}(y)|^{2}}{|x-y|^{N+2s}}dxdy-(a+b\|u_{\lambda_{n}}\|_{X}^{2(\theta-1)})\iint_{\mathbb{R}^{2N}}\frac{(u_{\lambda_{n}}(x)-u_{\lambda_{n}}(y))(u_{\lambda_{a,b}^{*}}(x)-u_{\lambda_{a,b}^{*}}(y))}{|x-y|^{N+2s}}dxdy-\lambda_{n}\int_{\Omega}fu_{\lambda_{n}}^{\gamma-1}(u_{\lambda_{n}}(x)-u_{\lambda_{a,b}^{*}}(x))dx-\int_{\Omega}gu_{\lambda_{n}}^{p-1}(u_{\lambda_{n}}-u_{\lambda_{a,b}^{*}})(x)dx=0.\] Using the weak lower semi-continuity of norms, we get \[(a+b\alpha^{2(\theta-1)})(\lim_{n\to\infty}\|u_{\lambda_{n}}\|_{X}^{2}-\|u_{\lambda_{a,b}^{*}}\|_{X}^{2})=0.\] Therefore, \(\lim_{n\to\infty}\|u_{\lambda_{n}}\|_{X}^{2}=\|u_{\lambda_{a,b}^{*}}\|_{X}^{2}\) (because \(a+b\alpha^{2\theta-2}>0\)), which implies that \(u_{\lambda_{n}}\to u_{\lambda_{a,b}^{*}}\) in \(X\). From the strong convergence, we have \[\psi^{\prime}_{\lambda_{a,b}^{*},u_{\lambda_{a,b}^{*}}}(1)=\lim_{n\to\infty}\psi^{\prime}_{\lambda_{n},u_{\lambda_{n}}}(1)=0\ \ \text{and}\ \ \psi^{\prime\prime}_{\lambda_{a,b}^{*},u_{\lambda_{a,b}^{*}}}(1)=\lim_{n\to\infty}\psi^{\prime\prime}_{\lambda_{n},u_{\lambda_{n}}}(1)\leq 0.\] Also, when \(u_{\lambda_{n}}\in\mathcal{N}_{\lambda_{n}}^{-}\), we have \[0<b(2\theta-\gamma)\|u_{\lambda_{a,b}^{*}}\|_{X}^{2\theta}=b(2\theta-\gamma)\lim_{n\to\infty}\|u_{\lambda_{n}}\|_{X}^{2\theta}\leq(p-\gamma)\int_{\Omega}gu^{p}_{\lambda_{a,b}^{*}}\,dx,\] thus \(u_{\lambda_{a,b}^{*}}\in\mathcal{N}_{\lambda_{a,b}^{*}}^{-}\cup\mathcal{N}_{\lambda_{a,b}^{*}}^{0}.\) From the strong convergence of \(\{u_{\lambda_{n}}\}\) and Proposition 5.5, we have \[\mathcal{E}_{\lambda_{a,b}^{*}}(u_{\lambda_{a,b}^{*}})=\lim_{n\to\infty}\mathcal{E}_{\lambda_{n}}(u_{\lambda_{n}})=\lim_{\lambda_{n}\uparrow\lambda_{a,b}^{*}}\hat{\mathcal{J}}_{\lambda_{n}}^{-}=\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{-}.\] We claim that \(u_{\lambda_{a,b}^{*}}\not\in\mathcal{N}_{\lambda_{a,b}^{*}}^{0}\). If not, we get a contradiction from Proposition 5.2. Therefore \(u_{\lambda_{a,b}^{*}}\) is a constrained minimizer of \(\mathcal{E}_{\lambda_{a,b}^{*}}\) forced to be in \(\mathcal{N}_{\lambda_{a,b}^{*}}^{-}\). In order to obtain the second minimizer \(v_{\lambda_{a,b}^{*}}\) of the energy functional \(\mathcal{E}_{\lambda_{a,b}^{*}}\), we take a sequence \(\{v_{\lambda_{n}}\}\subset\mathcal{N}_{\lambda_{n}}^{+}\) with \(\lambda_{n}\uparrow\lambda_{a,b}^{*}\) and, following the same arguments as above, we get the second minimizer \(v_{\lambda_{a,b}^{*}}\) of \(\mathcal{E}_{\lambda_{a,b}^{*}}\) such that \[\mathcal{E}_{\lambda_{a,b}^{*}}(v_{\lambda_{a,b}^{*}})=\lim_{n\to\infty}\mathcal{E}_{\lambda_{n}}(v_{\lambda_{n}})=\lim_{n\to\infty}\hat{\mathcal{J}}_{\lambda_{n}}^{+}=\hat{\mathcal{J}}_{\lambda_{a,b}^{*}}^{+},\] forced to be in \(\mathcal{N}_{\lambda_{a,b}^{*}}^{+}\) by the same argument as above. This completes the proof. 
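For the reader's convenience, we also record the elementary identities behind the membership tests used above and in (4.1); this short verification is added here as a sketch and follows directly from the definitions of \(\psi_{\lambda,u}\) and \(\mathcal{N}_{\lambda}\). If \(u\in\mathcal{N}_{\lambda}\), then \(\psi^{\prime}_{\lambda,u}(1)=0\) allows one to eliminate either \(\lambda\int_{\Omega}f|u|^{\gamma}dx\) or \(\int_{\Omega}g|u|^{p}dx\) from \(\psi^{\prime\prime}_{\lambda,u}(1)\), giving \[\psi^{\prime\prime}_{\lambda,u}(1)=a(2-\gamma)\|u\|_{X}^{2}+b(2\theta-\gamma)\|u\|_{X}^{2\theta}-(p-\gamma)\int_{\Omega}g|u|^{p}dx=(p-\gamma)\lambda\int_{\Omega}f|u|^{\gamma}dx-a(p-2)\|u\|_{X}^{2}-b(p-2\theta)\|u\|_{X}^{2\theta}.\] In particular, \(u\in\mathcal{N}^{-}_{\lambda}\) if and only if \(a(2-\gamma)\|u\|_{X}^{2}+b(2\theta-\gamma)\|u\|_{X}^{2\theta}<(p-\gamma)\int_{\Omega}g|u|^{p}dx\), which is the inequality used for \(u_{\lambda_{a,b}^{*}}\) in the proof above, while \(u\in\mathcal{N}^{+}_{\lambda}\) if and only if \(a(p-2)\|u\|_{X}^{2}+b(p-2\theta)\|u\|_{X}^{2\theta}<(p-\gamma)\lambda\int_{\Omega}f|u|^{\gamma}dx\), which is exactly the bound exploited in (4.1).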
In this case, i.e., \(\lambda=\lambda_{a,b}^{*}\), the non-trivial minimizers \(u_{\lambda_{a,b}^{*}},v_{\lambda_{a,b}^{*}}\) obtained in Lemma 5.1 cannot lie in \(\mathcal{N}_{\lambda_{a,b}^{*}}^{0}\), by Proposition 5.2, and hence, again from Theorem 2.3 of [6], they are non-trivial weak solutions of \((P_{\lambda_{a,b}^{*}})\). ## 6. **Existence of solutions when \(\lambda>\lambda_{a,b}^{*}\)** Since \(\mathcal{N}_{\lambda_{a,b}^{*}}^{0}\neq\emptyset\), the minimizers in \(\mathcal{N}_{\lambda}^{\pm}\) may not be critical points of the associated energy functional of the problem \((P_{\lambda})\). Therefore, we look for minimizers of the associated energy functional over subsets of \(\mathcal{N}_{\lambda_{a,b}^{*}}^{+}\) and \(\mathcal{N}_{\lambda_{a,b}^{*}}^{-}\) (defined below) which are strictly separated from \(\mathcal{N}_{\lambda_{a,b}^{*}}^{0}\). Subsequently, the projections of these minimizers lying in \(\mathcal{N}_{\lambda}^{\pm}\) for \(\lambda\in(\lambda_{a,b}^{*},\lambda_{a,b}^{*}+\epsilon)\), for sufficiently small \(\epsilon>0\), turn out to be the desired critical points. **Proposition 6.1**.: _Let \(0<c^{+}<c^{-}\), \(a\geq 0,b>0\) and \(\lambda_{n}\downarrow\lambda_{a,b}^{*}\)._ * _if_ \(u_{n}\in\mathcal{N}_{\lambda_{a,b}^{*}}^{-}\) _which satisfy_ \(c^{+}\leq\|u_{n}\|_{X}\leq c^{-}\) _for all_ \(n\in\mathbb{N}\) _and_ \[a(t_{\lambda_{n}}^{-}(u_{n}))^{2}\|u_{n}\|_{X}^{2}+b(2\theta-1)(t_{\lambda_{n}}^{-}(u_{n}))^{2\theta}\|u_{n}\|_{X}^{2\theta}-\lambda_{n}(\gamma-1)(t_{\lambda_{n}}^{-}(u_{n}))^{\gamma}\int_{\Omega}f|u_{n}|^{\gamma}dx-(p-1)(t_{\lambda_{n}}^{-}(u_{n}))^{p}\int_{\Omega}g|u_{n}|^{p}dx\to 0\] _as_ \(n\to\infty\)_, then_ \(\text{dist}(u_{n},\mathcal{N}_{\lambda_{a,b}^{*}}^{0})\to 0\)_._ * _if_ \(v_{n}\in\mathcal{N}_{\lambda_{a,b}^{*}}^{+}\) _which satisfy_ \(c^{+}\leq\|v_{n}\|_{X}\leq c^{-}\) _for all_ \(n\in\mathbb{N}\) _and_ \[a(t_{\lambda_{n}}^{+}(v_{n}))^{2}\|v_{n}\|_{X}^{2}+b(2\theta-1)(t_{\lambda_{n}}^{+}(v_{n}))^{2\theta}\|v_{n}\|_{X}^{2\theta}-\lambda_{n}(\gamma-1)(t_{\lambda_{n}}^{+}(v_{n}))^{\gamma}\int_{\Omega}f|v_{n}|^{\gamma}dx-(p-1)(t_{\lambda_{n}}^{+}(v_{n}))^{p}\int_{\Omega}g|v_{n}|^{p}dx\to 0\] _as_ \(n\to\infty\)_, then_ \(\text{dist}(v_{n},\mathcal{N}^{0}_{\lambda^{*}_{a,b}})\to 0\)_._ Proof.: \((i)\) For \(u_{n}\in\mathcal{N}^{-}_{\lambda^{*}_{a,b}}\), we have \(\int_{\Omega}g|u_{n}|^{p}dx\geq c^{+}\). Our claim is \(\int_{\Omega}f|u_{n}|^{\gamma}dx\geq c^{+}\). Therefore, from Proposition 3.2, for large \(n\) there exist two critical points \(t^{-}_{\lambda_{n}}=t^{-}_{\lambda_{n}}(u_{n})\) and \(t^{+}_{\lambda_{n}}=t^{+}_{\lambda_{n}}(u_{n})\) of \(\psi_{\lambda_{n},u_{n}}\) satisfying \(t^{+}_{\lambda_{n}}(u_{n})<t^{-}_{\lambda_{n}}(u_{n})\) and such that \(t^{+}_{\lambda_{n}}u_{n}\in\mathcal{N}^{+}_{\lambda_{n}}\) and \(t^{-}_{\lambda_{n}}u_{n}\in\mathcal{N}^{-}_{\lambda_{n}}\). That is, \[\begin{cases}a(t^{-}_{\lambda_{n}})^{2}\|u_{n}\|^{2}_{X}+b(t^{-}_{\lambda_{n}})^{2\theta}\|u_{n}\|^{2\theta}_{X}-\lambda_{n}(t^{-}_{\lambda_{n}})^{\gamma}\int_{\Omega}f|u_{n}|^{\gamma}dx-(t^{-}_{\lambda_{n}})^{p}\int_{\Omega}g(x)|u_{n}|^{p}dx=0,\\ \\ a(t^{-}_{\lambda_{n}})^{2}\|u_{n}\|^{2}_{X}+b(2\theta-1)(t^{-}_{\lambda_{n}})^{2\theta}\|u_{n}\|^{2\theta}_{X}-\lambda_{n}(\gamma-1)(t^{-}_{\lambda_{n}})^{\gamma}\int_{\Omega}f|u_{n}|^{\gamma}dx-(p-1)(t^{-}_{\lambda_{n}})^{p}\int_{\Omega}g(x)|u_{n}|^{p}dx=o(1),\\ \\ a(t^{+}_{\lambda_{n}})^{2}\|u_{n}\|^{2}_{X}+b(t^{+}_{\lambda_{n}})^{2\theta}\|u_{n}\|^{2\theta}_{X}-\lambda_{n}(t^{+}_{\lambda_{n}})^{\gamma}\int_{\Omega}f|u_{n}|^{\gamma}dx-(t^{+}_{\lambda_{n}})^{p}\int_{\Omega}g(x)|u_{n}|^{p}dx=0.\end{cases} \tag{6.1}\] Solving the first and third equations of (6.1) for \(\int_{\Omega}f|u_{n}|^{\gamma}dx\) and \(\int_{\Omega}g(x)|u_{n}|^{p}dx\), substituting these values into the second equation of (6.1) and using simple calculus, we get \[\begin{split} a\|u_{n}\|^{2}_{X}(t^{-}_{\lambda_{n}})^{2}&\left(1-(\gamma-1)\left(\frac{1-\left(\frac{t^{+}_{\lambda_{n}}}{t^{-}_{\lambda_{n}}}\right)^{2-p}}{1-\left(\frac{t^{+}_{\lambda_{n}}}{t^{-}_{\lambda_{n}}}\right)^{\gamma-p}}\right)-(p-1)\left(\frac{1-\left(\frac{t^{+}_{\lambda_{n}}}{t^{-}_{\lambda_{n}}}\right)^{2-\gamma}}{1+\left(\frac{t^{+}_{\lambda_{n}}}{t^{-}_{\lambda_{n}}}\right)^{p-\gamma}}\right)\right)\\ &+b\|u_{n}\|^{2\theta}_{X}(t^{-}_{\lambda_{n}})^{2\theta}\left((2\theta-1)-(\gamma-1)\left(\frac{1-\left(\frac{t^{+}_{\lambda_{n}}}{t^{-}_{\lambda_{n}}}\right)^{2\theta-p}}{1-\left(\frac{t^{+}_{\lambda_{n}}}{t^{-}_{\lambda_{n}}}\right)^{\gamma-p}}\right)\right.\\ &\left.-(p-1)\left(\frac{1-\left(\frac{t^{+}_{\lambda_{n}}}{t^{-}_{\lambda_{n}}}\right)^{2\theta-\gamma}}{1-\left(\frac{t^{+}_{\lambda_{n}}}{t^{-}_{\lambda_{n}}}\right)^{p-\gamma}}\right)\right)=o(1).\end{split} \tag{6.2}\] Since both terms on the left-hand side of the above equation are positive and their sum converges to zero, each of them converges to zero separately. Suppose \(t^{-}_{\lambda_{n}}\to\alpha\) and \(t^{+}_{\lambda_{n}}\to\beta\). Taking the limit \(n\to\infty\) in (6.2) we get \(t^{-}_{\lambda_{n}}(u_{n})/t^{+}_{\lambda_{n}}(u_{n})\to 1\), i.e. \(\alpha=\beta\), as \(1\) is the only zero of \[m(t)=(2p-2\theta-2)t^{\gamma-p}+(\gamma-p)t^{2-p}+(\gamma-p)t^{2\theta-p}+2-2\gamma+2\theta.\] Once \(t^{+}_{\lambda_{n}}u_{n}\in\mathcal{N}^{+}_{\lambda_{n}}\), from (4.1) we have \(\int_{\Omega}f|u_{n}|^{\gamma}\geq c\). Thus \[a\|\alpha u_{n}\|^{2}_{X}+b\|\alpha u_{n}\|^{2\theta}_{X}-\lambda^{*}_{a,b}\int_{\Omega}f|\alpha u_{n}|^{\gamma}dx-\int_{\Omega}g(x)|\alpha u_{n}|^{p}dx=o(1),\] \[a\|\alpha u_{n}\|^{2}_{X}+b(2\theta-1)\|\alpha u_{n}\|^{2\theta}_{X}-\lambda^{*}_{a,b}(\gamma-1)\int_{\Omega}f|\alpha u_{n}|^{\gamma}dx-(p-1)\int_{\Omega}g(x)|\alpha u_{n}|^{p}dx=o(1).\] Solving these two we get \[\frac{a(p-2)\|\alpha u_{n}\|_{X}^{2}+b(p-2\theta)\|\alpha u_{n}\|_{X}^{2\theta}}{(p-\gamma)\int_{\Omega}f|\alpha u_{n}|^{\gamma}dx}=\lambda_{a,b}^{*}+o(1),\] \[\frac{a(\gamma-2)\|\alpha u_{n}\|_{X}^{2}+b(\gamma-2\theta)\|\alpha u_{n}\|_{X}^{2\theta}}{(\gamma-p)\int_{\Omega}g(x)|\alpha u_{n}|^{p}dx}=1+o(1).\] Therefore, from the expression of \(\lambda(u)\), we have \[\lambda(\alpha u_{n})=\lambda(u_{n})=(1+o(1))^{\frac{2\theta-\gamma}{p-2\theta}}(\lambda^{*}+o(1))\] leading to \(\lambda(u_{n})\to\lambda^{*}\). Hence \(u_{n}\) is a bounded minimizing sequence for \(\lambda^{*}\). 
Now, by following same argument as in proof of Proposition 3.1, up to a subsequence, we obtain \(u_{n}\to u\in\mathcal{N}^{0}_{\lambda^{*}}\) and thus \(\mathrm{dist}(u_{n},\mathcal{N}^{0}_{\lambda^{*}})\to 0\) as \(n\to\infty\). Similarly from expression of \(\lambda_{a,b}(u_{n})\), we have \(\lambda_{a,b}(u_{n})\to\lambda_{a,b}^{*}\) and thus \(\mathrm{dist}(u_{n},\mathcal{N}^{0}_{\lambda_{a,b}^{*}})\to 0\). \((ii)\) Follows similarly from item \((i)\). Consider the sets \[\mathcal{N}^{-}_{\lambda_{a,b}^{*},d,c^{-}}=\{u\in\mathcal{N}^{-}_{\lambda_{ a,b}^{*}}:\mathrm{dist}(u,\mathcal{N}^{0}_{\lambda_{a,b}^{*}})\geq d,\|u\|_{X} \leq c^{-}\},\] and \[\mathcal{N}^{+}_{\lambda_{a,b}^{*},d,c^{+}}=\{v\in\mathcal{N}^{+}_{\lambda_{ a,b}^{*}}:\mathrm{dist}(v,\mathcal{N}^{0}_{\lambda_{a,b}^{*}})\geq d,\|v\|_{X} \geq c^{+}\},\] where \(c^{\pm}\) and \(d\) are positive constants. In light of above proposition one can observe that elements of \(\mathcal{N}^{\pm}_{\lambda,d,c^{\pm}}\) can be projected over \(\mathcal{N}^{\pm}_{\lambda}\) when \(\lambda\downarrow\lambda_{a,b}^{*}\) i.e., if \(u\in\mathcal{N}^{\pm}_{\lambda,d,c^{\pm}}\) there exists \(\epsilon>0\) such that \(u\in\hat{\mathcal{N}}_{\lambda}\) or in \(\hat{\mathcal{N}}_{\lambda}\cup\hat{\mathcal{N}}^{+}_{\lambda}\) for all \(\lambda\in(\lambda_{a,b}^{*},\lambda_{a,b}^{*}+\epsilon)\). Also it is important to observe that \(\mathrm{dist}(\mathcal{M}^{\pm}_{\lambda_{a,b}^{*}},\mathcal{N}^{0}_{\lambda_{ a,b}^{*}})>0\), where \[\mathcal{M}^{\pm}_{\lambda}=\{u\in\mathcal{N}^{\pm}_{\lambda}:\mathcal{J}^{\pm}_{\lambda}(u)=\hat{\mathcal{J}}^{\pm}_{ \lambda}\}.\] In fact, \(\mathrm{dist}(\mathcal{M}^{\pm}_{\lambda_{a,b}^{*}},\mathcal{N}^{0}_{\lambda_ {a,b}^{*}})\to 0\) we can get a contradiction to the fact that no solution of \((P_{\lambda_{a,b}^{*}})\) can be in \(\mathcal{N}^{0}_{\lambda_{a,b}^{*}}\). Clearly the sets \(\mathcal{M}^{\pm}_{\lambda}\neq\emptyset\) for all \(\lambda\in(0,\lambda_{a,b}^{*}]\). Now define \[d^{+}_{\lambda_{a,b}^{*}}=\mathrm{dist}(\mathcal{M}^{+}_{\lambda_{a,b}^{*}}, \mathcal{N}^{0}_{\lambda_{a,b}^{*}})\ \ \text{and}\ \ d^{-}_{\lambda_{a,b}^{*}}=\mathrm{dist}(\mathcal{M}^{-}_{\lambda_{a,b}^{*}}, \mathcal{N}^{0}_{\lambda_{a,b}^{*}}).\] Now choose \(c^{-}_{\lambda_{a,b}^{*}}<c^{-}\) such that \(\|u\|_{X}\leq c^{-}_{\lambda_{a,b}^{*}}\) for all \(u\in\mathcal{M}^{-}_{\lambda_{a,b}^{*}}\) and \(d^{-}\in(0,d^{-}_{\lambda_{a,b}^{*}})\). With such controls for all \(\lambda\in(\lambda_{a,b}^{*},\lambda_{a,b}^{*}+\epsilon)\) we will study following minimization problem \[\hat{\mathcal{J}}^{-}_{\lambda,d^{-},c^{-}}=\inf\{\mathcal{J}^{-}_{\lambda}(u): u\in\mathcal{N}^{-}_{\lambda_{a,b}^{*},d^{-},c^{-}}\}. \tag{6.3}\] In a similar way, we define with the choice of \(d^{+}<d^{+}_{\lambda_{a,b}^{*}}\) and \(c^{+}<c^{+}_{\lambda_{a,b}^{*}}\) where \(\|u\|_{X}\geq c^{+}_{\lambda_{a,b}^{*}}\) for all \(u\in\mathcal{M}^{+}_{\lambda_{a,b}^{*}}\), \[\hat{\mathcal{J}}^{+}_{\lambda,d^{+},c^{+}}=\inf\{\mathcal{J}^{+}_{\lambda}(v): v\in\mathcal{N}^{+}_{\lambda_{a,b}^{*},d^{+},c^{+}}\}\] for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\). With such choice of \(c^{\pm},d^{\pm}\) we can observe that \(\mathcal{M}^{-}_{\lambda^{*}_{a,b}}\subset\mathcal{N}^{-}_{\lambda^{*}_{a,b},d^ {-},c^{-}}\) as, if \(u\in\mathcal{M}^{-}_{\lambda^{*}_{a,b}}\) we have \(\mathrm{dist}(u,\mathcal{N}^{0}_{\lambda^{*}_{a,b}})=d^{-}_{\lambda^{*}}>d^{-}\) implies \(u\in\mathcal{N}^{-}_{\lambda^{*}_{a,b},d^{-},c^{-}}\). 
Similarly \(\mathcal{M}^{+}_{\lambda^{*}_{a,b}}\subset\mathcal{N}^{+}_{\lambda^{*}_{a,b},d ^{+},c^{+}}\). **Lemma 6.1**.: _For the above choices of \(c^{-},d^{-}\) there exits \(\epsilon^{-}>0\) such that the functional \(\mathcal{J}^{-}_{\lambda}\) constrained to \(\mathcal{N}^{-}_{\lambda^{*}_{a,b},d^{-},c^{-}}\) has a minimizer \(u(\lambda)\in\mathcal{N}^{-}_{\lambda^{*}_{a,b},d^{-},c^{-}}\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon^{-})\)._ Proof.: For each \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\), take a minimizing sequence \(u_{n}(\lambda)\in\mathcal{N}^{-}_{\lambda^{*}_{a,b},d^{-},c^{-}}\) for \(\hat{\mathcal{J}}^{-}_{\lambda,d^{-},c^{-}}\). From Proposition 6.1, we have \(u_{n}(\lambda)\in\hat{\mathcal{N}}_{\lambda}\) and we can get \(\delta<0\) such that \((t^{-}_{\lambda}(u_{n}(\lambda)))^{2}\psi_{\lambda,u_{n}}(t^{-}_{\lambda}(u_{n }(\lambda)))<\delta\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\). Since \(u_{n}(\lambda)\) is bounded, hence up to a subsequence \(u_{n}(\lambda)\rightharpoonup u(\lambda)\) for some \(u(\lambda)\not\equiv 0\) in \(X\). We target to show that, the weak limit \(u(\lambda)\) belongs to \(\hat{\mathcal{N}}_{\lambda}\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon^{-})\). As \(u_{n}(\lambda)\in\hat{\mathcal{N}}_{\lambda}\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon^{-})\), there exists unique scalar \(t^{-}_{\lambda}(u_{n}(\lambda))>0\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon^{-})\). Moreover, since \(t^{-}_{\lambda}(u_{n}(\lambda))<t^{-}_{\lambda^{*}_{a,b}}(u_{n}(\lambda))=1\) we get, \(t^{-}_{\lambda}(u_{n}(\lambda))\to t(\lambda)\in(0,1)\). We claim that there exists \(\epsilon^{-}>0\) such that \(u(\lambda)\in\hat{\mathcal{N}}_{\lambda}\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon^{-})\). Suppose on contrary \(u(\lambda)\notin\hat{\mathcal{N}}_{\lambda}\), thus there exists sequence \(\lambda_{k}\downarrow\lambda^{*}_{a,b}\) such that \(u(\lambda_{k})\notin\hat{\mathcal{N}}_{\lambda_{k}}\) for large value of \(k\). To get contradiction for this fact, assuming a minimizing sequence \(u_{n,k}\equiv t^{-}_{\lambda_{k}}(u_{n}(\lambda_{k}))u_{n}(\lambda_{k})\) for \(\hat{\mathcal{J}}^{-}_{\lambda_{k},d^{-},c^{-}}\) for large \(k\in\mathbb{N}\) (using (6.3)) and we show that it is also a minimizing sequence for \(\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}\). In the squeal we first prove following claim. _Claim-_ When \(\lambda\downarrow\lambda^{*}_{a,b}\) function \(\hat{\mathcal{J}}^{-}_{\lambda,d^{-},c^{-}}\) is decreasing and \[\lim_{\lambda\downarrow\lambda^{*}_{a,b}}\hat{\mathcal{J}}^{-}_{\lambda,d^{-},c ^{-}}=\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}. \tag{6.4}\] In order to prove this claim, using decreasing behaviour of \(\mathcal{J}^{-}_{\lambda}(u)\) from Proposition (3.1) \((ii)\) for all \(u\in\hat{\mathcal{N}}^{-}_{\lambda^{*}_{a,b},d^{-},c^{-}}\), we have \(\hat{\mathcal{J}}^{-}_{\lambda,d^{-},c^{-}}\leq\hat{\mathcal{J}}^{-}_{\lambda^ {\prime},d^{-},c^{-}}\), where \(\lambda^{*}_{a,b}<\lambda^{\prime}<\lambda<\lambda^{*}_{a,b}+\epsilon\). 
Also when \(u_{\lambda^{*}_{a,b}}\in\mathcal{M}^{-}_{\lambda^{*}_{a,b}}\) we have \(\hat{\mathcal{J}}^{-}_{\lambda,d^{-},c^{-}}\leq\mathcal{J}^{-}_{\lambda}(u_{ \lambda^{*}_{a,b}})<\mathcal{J}^{-}_{\lambda^{*}_{a,b}}(u_{\lambda^{*}_{a,b}})= \hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\). To prove (6.4) suppose on contrary that there exists a sequence \(\lambda_{n}\downarrow\lambda^{*}_{a,b}\) or \(\lambda_{n}\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\) for large \(n\), such that \[\lim_{n\to\infty}\hat{\mathcal{J}}^{-}_{\lambda_{n},d^{-},c^{-}}=\mathcal{J}< \hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}.\] Also from equation (6.3) we can have a sequence \(\{u_{k}\}\subset\mathcal{N}^{-}_{\lambda^{*}_{a,b},d^{-},c^{-}}\) such that for a given \(\epsilon>0\) there exists \(n_{0},k_{0}\in\mathbb{N}\) such that \[|\mathcal{J}^{-}_{\lambda_{n}}(u_{k})-\hat{\mathcal{J}}^{-}_{\lambda_{n},d^{-},c ^{-}}|<\frac{\epsilon}{2},\ \ \text{for all }n\geq n_{0},\ k\geq k_{0}. \tag{6.5}\] Using continuity of \(t^{-}_{\lambda}\) for \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\) following Lemma 3.1\((i)\), we have \[|\mathcal{J}^{-}_{\lambda_{n}}(u_{k})-\mathcal{J}^{-}_{\lambda^{*}_{a,b}}(u_{k})| <\frac{\epsilon}{2},\ \ \text{for all }n\geq n_{1}. \tag{6.6}\] Taking \(n\geq n_{2}=\max\{n_{0},n_{1}\}\) and using (6.5) and (6.6), we get \[\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}-\hat{\mathcal{J}}^{-}_{ \lambda_{n},d^{-},c^{-}} <\mathcal{J}^{-}_{\lambda^{*}_{a,b}}(u_{k})-\hat{\mathcal{J}}^{-}_{ \lambda_{n},d^{-},c^{-}}\] \[\leq|\mathcal{J}^{-}_{\lambda^{*}_{a,b}}(u_{k})-\mathcal{J}^{-}_{ \lambda_{n}}(u_{k})|+|\mathcal{J}^{-}_{\lambda_{n}}(u_{k})-\hat{\mathcal{J}}^{-}_{ \lambda_{n},d^{-},c^{-}}|\] \[\leq\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon,\] implies, \[\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}<\hat{\mathcal{J}}^{-}_{\lambda_{n},d^{-},c^ {-}}+\epsilon\ \text{for all }n\geq n_{2}.\] Therefore, \(\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}\leq\mathcal{J}\) as \(n\to\infty\) a contradiction. Thus \(\lim_{\lambda\downarrow\lambda^{*}_{a,b}}\hat{\mathcal{J}}^{-}_{\lambda,d^{-},c^ {-}}=\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}\), which complete the proof of the claim. As a consequence of this claim, we have \[|\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}-\mathcal{J}^{-}_{\lambda_{k}}(u_{n, k})|\leq|\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}-\hat{\mathcal{J}}^{-}_{ \lambda_{k},d^{-},c^{-}}|+|\mathcal{J}^{-}_{\lambda_{k}}(u_{n,k})-\hat{\mathcal{ J}}^{-}_{\lambda_{k},d^{-},c^{-}}|\to 0 \tag{6.7}\] as \(n\to\infty\) followed by \(k\to\infty\). Therefore, up to a subsequence, \(u_{n,k}\rightharpoonup u\) in \(X\setminus\{0\}\). Now we claim that \(u_{n,k}\to u\) in \(X\setminus\{0\}\). Suppose on contrary, by weak lower semi continuity of norm, we get \[\liminf_{n,k\to\infty}\psi^{\prime}_{\lambda_{k},u_{n,k}}(t_{\lambda^{*}_{a,b }}(u))>\psi^{\prime}_{\lambda^{*}_{a,b},u}(t_{\lambda^{*}_{a,b}}(u))=0.\] Hence \(t_{\lambda^{*}_{a,b}}(u)<t^{-}_{\lambda_{k}}(u_{n,k})\) for sufficiently large \(n,k\). Therefore, from (6.7), we get \[\mathcal{E}_{\lambda^{*}_{a,b}}(t_{\lambda^{*}_{a,b}}(u)u) <\liminf_{n,k\to\infty}\mathcal{E}_{\lambda_{k}}(t_{\lambda^{*}_{ a,b}}(u)u_{n,k}\] \[<\liminf_{n,k\to\infty}\mathcal{E}_{\lambda_{k}}(t^{-}_{\lambda_{ k}}(u_{n,k}))u_{n,k}=\hat{\mathcal{J}}^{-}_{\lambda^{*}_{a,b}}\] which is an absurd, therefore \(u_{n,k}\to u\) in \(X\setminus\{0\}\). 
Now, \(u_{n,k}-u(\lambda)\rightharpoonup u(\lambda_{k})-u(\lambda)\), implies \[\|u(\lambda_{k})-u(\lambda)\|_{X}\leq\liminf_{n\to\infty}\|u_{n,k}-u(\lambda) \|_{X},\] for large \(k\). Thus \(u(\lambda_{k})\to u(\lambda)\in\mathcal{N}^{-}_{\lambda^{*}_{a,b},d^{-},c^{-}}\) and consequently \(u(\lambda_{k})\in\hat{\mathcal{N}}_{\lambda_{k}}\) for large \(k\), which is a contradiction. Thus \(u_{n}(\lambda)\to u(\lambda)\) in \(X\) and \(u(\lambda)\in\hat{\mathcal{N}}_{\lambda}\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon^{-})\). Subsequently, \(t^{-}_{\lambda}(u_{n}(\lambda))u_{n}(\lambda)\to t(\lambda)u(\lambda)\) and \(u(\lambda)\in\mathcal{N}^{-}_{\lambda^{*}_{a,b},d^{-},c^{-}}\) with \[\mathcal{J}^{-}_{\lambda,d^{-},c^{-}}=\mathcal{J}^{-}_{\lambda}(u(\lambda))\] that is, \(u(\lambda)\) is minimizer for \(\mathcal{J}^{-}_{\lambda,d^{-},c^{-}}\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon^{-})\). **Lemma 6.2**.: _For the above choices of \(c^{+},d^{+}\) there exits \(\epsilon^{+}>0\) such that the functional \(\mathcal{J}^{+}_{\lambda}\) constrained to \(\mathcal{N}^{+}_{\lambda^{*}_{a,b},d^{+},c^{+}}\) has a minimizer \(v(\lambda)\in\mathcal{N}^{+}_{\lambda^{*}_{a,b},d^{+},c^{+}}\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon^{+})\)._ Proof.: The proof of the Lemma is similar to Lemma 6.1 above. Now we choose \(\epsilon=\min\{\epsilon^{-},\epsilon^{+}\}\) and rename the minimizers obtained above as \(u_{\lambda}=u(\lambda)\) and \(v_{\lambda}=v(\lambda)\) for all \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\). For \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\) the minimization problem posed in \(\mathcal{N}^{\pm}_{\lambda^{*}_{a,b},d^{\pm},c^{\pm}}\) ensures that minimizers \(u_{\lambda}\) and \(v_{\lambda}\) are separated from \(\mathcal{N}^{0}_{\lambda^{*}_{a,b}}\). Therefore, for \(\epsilon>0\) sufficiently small chosen above, \(u_{\lambda}\in\mathcal{N}^{-}_{\lambda}\) and \(v_{\lambda}\in\mathcal{N}^{+}_{\lambda}\) for \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\). Finally, invoking Theorem 2.3 of [6], we get nontrivial weak solutions of \((P_{\lambda})\) for \(\lambda\in(\lambda^{*}_{a,b},\lambda^{*}_{a,b}+\epsilon)\). **Proof of Theorem 3.1 and Theorem 3.2 :** The proof of Theorem 3.1 and Theorem 3.2 follows by combining the arguments of previous sections. Eventually, we get the nontrivial weak solutions of \((P_{\lambda})\) for \(\lambda\in(0,\lambda^{*}_{a,b}+\epsilon)\). Next, we show that the weak solutions are positive. In fact, due to the presence of nonlocal fractional operator, we have \(\|u\|\neq\|u|\parallel\) in \(X\). As a consequence, \(\mathcal{E}_{\lambda}(|u|)\neq\mathcal{E}_{\lambda}(u)\). To overcome this situation, we define the perturbed problem with nonlinearity as \(\lambda f(x)(u^{+})^{\gamma-1}+g(x)(u^{+})^{p-1}\) and corresponding energy functional as follows \[\mathcal{E}^{+}_{\lambda}(u):=\frac{a}{2}\|u\|_{X}^{2}+\frac{b}{2\theta}\|u\|_{ X}^{2\theta}-\frac{\lambda}{\gamma}\int_{\Omega}f(u^{+})^{\gamma}dx-\frac{1}{p}\int_{ \Omega}g(u^{+})^{p}dx.\] It is easy to see that the critical points of \(\mathcal{E}_{\lambda}\) are also the critical points of \(\mathcal{E}^{+}_{\lambda}\). Consequently the weak solutions of the perturbed problem. 
Now, testing the solutions of the perturbed problem with the test function \(\phi=u^{-}\) and using \[(u(x)-u(y))(u^{-}(x)-u^{-}(y))\leq-|u^{-}(x)-u^{-}(y)|^{2},\] we get \(\|u^{-}\|_{X}^{2}=0\); thus \(u\) is a non-negative solution of \((P_{\lambda})\). The positivity of the solutions follows via the maximum principle (see [24], Proposition 2.2.8), which completes the proof. ## Acknowledgments The research of the first author is supported by the Science and Engineering Research Board, Govt. of India, grant SRG/2021/001076. ## Conflict of interest The authors declare that they have no conflict of interest.
2309.03281
Gamma-Ray Lines in 15 Years of Fermi-LAT Data: New Constraints on Higgs Portal Dark Matter
Monoenergetic $\gamma$-ray spectral lines are among the cleanest signatures of dark matter annihilation. We analyze 15 years of Fermi-LAT data, find no spectral lines, and place strong constraints on dark matter annihilation to monoenergetic $\gamma$-rays. Additionally, we produce the first double-line analysis of the coupled signals from $\gamma\gamma$ and $Z \gamma$ lines, which proves particularly powerful for dark matter masses above $\sim150$~GeV. From our constraints on a double-line feature, we investigate and constrain some minimal models where the Galactic Center Excess (GCE) can be fit by dark matter annihilation through the Higgs boson into Standard Model particles.
Pedro De La Torre Luque, Juri Smirnov, Tim Linden
2023-09-06T18:00:30Z
http://arxiv.org/abs/2309.03281v2
# Gamma-Ray Lines in 15 Years of Fermi-LAT Data: New Constraints on Higgs Portal Dark Matter ###### Abstract Monoenergetic \(\gamma\)-ray spectral lines are among the cleanest signatures of dark matter annihilation. We analyze 15 years of Fermi-LAT data, find no spectral lines, and place strong constraints on dark matter annihilation to monoenergetic \(\gamma\)-rays. Additionally, we produce the first double-line analysis of the coupled signals from \(\gamma\gamma\) and \(Z\gamma\) lines, which proves particularly powerful for dark matter masses above \(\sim 150\) GeV. From our constraints on a double-line feature, we investigate and constrain some minimal models where the Galactic Center Excess (GCE) can be fit by dark matter annihilation through the Higgs boson into Standard Model particles. + Footnote †: preprint: LTH-1335 ## I Introduction Models that generate the observed dark matter (DM) abundance through thermal freeze-out provide one of the most compelling explanations for the cosmological evolution of our universe [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. Fortunately, scenarios dominated by a \(2\to 2\) process (e.g. \(\chi\chi\to\text{SM}\,\text{SM}\)) also provide us with a precise, testable target for DM annihilation searches [20; 21]. The thermal freeze-out mechanism does not generically predict the Standard Model (SM) final states or branching ratios, and thus it is common to examine DM models dominated by tree-level annihilations to different standard-model particle states, such as \(b\bar{b}\), \(\tau^{+}\tau^{-}\), \(W^{+}W^{-}\) or other leptonic and hadronic pairs. However, these models are simplified for two reasons. First, DM annihilation may include tree-level couplings to a number of final states, with branching ratios that depend on the decay widths of the intermediate particles. Second, in addition to tree-level processes, there are guaranteed loop-level processes. Some of these final states, like those that produce \(\gamma\gamma\) or \(\text{Z}\gamma\) lines, may be more detectable than tree-level annihilation processes despite their subdominant branching fractions. While the two-photon channel leads to a mono-energetic line at \(E_{\gamma}=m_{\rm DM}\), the \(Z\gamma\) channel is kinematically accessible at DM masses of \(m_{\rm DM}>m_{Z}/2\) and leads to final state photons with energies centered around \[E_{Z\gamma}=E_{\gamma\gamma}\left(1-\left(\frac{m_{Z}}{2E_{\gamma\gamma}}\right)^{2}\right)\,. \tag{1}\] The spectrum has an intrinsic width due to the finite life-time of the \(Z\) boson, which is given by [22; 23] \[\Gamma_{Z\gamma}\approx\frac{\Gamma_{Z}m_{Z}}{2\sqrt{3}E_{\gamma\gamma}}<\frac{\Gamma_{Z}}{\sqrt{3}}\approx 1.2\text{ GeV}\,. \tag{2}\] This width leads to an effectively monochromatic signal at the current Fermi-LAT energy resolution. Making use of this fact, we propose a double-line search that further increases our experimental sensitivity. The relationship between the branching ratios to all final states depends on the mediator choice. The simplest and most predictive scenario is DM coupling through the Higgs portal, a singlet operator \(H^{\dagger}H\) [26; 27; 28; 29; 30; 31; 32; 33]. In this case, the branching ratios to all SM final states are entirely fixed by the well-known properties of the Higgs boson. Thus, combining collider-grade accuracy with the freeze-out condition entirely fixes our signal expectation for a given DM mass.
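As a quick numerical illustration of Eqs. (1) and (2), the short sketch below is our own addition (not part of the paper); it only assumes the PDG values \(m_Z\approx 91.19\) GeV and \(\Gamma_Z\approx 2.50\) GeV, and evaluates the position and intrinsic width of the \(Z\gamma\) line for a few representative \(\gamma\gamma\) line energies:

```python
import math

M_Z = 91.19       # Z boson mass in GeV (PDG value; our assumption, not quoted above)
GAMMA_Z = 2.4952  # Z boson total width in GeV (PDG value; our assumption)

def e_zgamma(e_gg):
    """Centroid of the Z-gamma line, Eq. (1)."""
    return e_gg * (1.0 - (M_Z / (2.0 * e_gg)) ** 2)

def width_zgamma(e_gg):
    """Intrinsic width of the Z-gamma line, Eq. (2)."""
    return GAMMA_Z * M_Z / (2.0 * math.sqrt(3.0) * e_gg)

for e_gg in (60.0, 100.0, 150.0, 300.0):  # gamma-gamma line energies in GeV
    print(f"E_gg = {e_gg:6.1f} GeV -> E_Zg = {e_zgamma(e_gg):6.1f} GeV, "
          f"width = {width_zgamma(e_gg):4.2f} GeV")
```

For \(E_{\gamma\gamma}=150\) GeV this gives \(E_{Z\gamma}\approx 136\) GeV with an intrinsic width of a few hundred MeV, i.e. well below the LAT energy resolution at these energies, consistent with treating the \(Z\gamma\) feature as a second effectively monochromatic line.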
In this work, we reanalyse existing Fermi-LAT data, choosing CLEAN events from 180 months of the PASS8 data [34], and perform both single- and double-line searches for annihilating DM. We find that double-line analyses have a superior constraining power, especially at large DM masses. Furthermore, we show that our limits on Higgs-mediated DM annihilation are in tension with Higgs-portal interpretations of the GCE at masses near the \(m_{H}/2\) resonance [33; 35]. Fig. 1 shows the limits of our single- and double-line analyses on the \(\left\langle\sigma v\right\rangle_{\gamma\gamma}\) annihilation cross section, and compares our results with previous work [24; 25]. Our analysis provides stronger constraints, particularly at large \(\gamma\)-ray energies. Figure 1: The results of our model-independent single-line (blue solid) and Higgs-portal double-line (red solid) analysis, given an NFW profile in ROI41. Single-line analyses by the Fermi-LAT collaboration (2015) [24](gray solid) and a recent result by Ref. [25] (magenta dashed) are shown for comparison. ## II Methodology and analysis ### Gamma-ray datasets The Fermi-LAT is a pair-conversion telescope that measures \(\gamma\)-rays with energies between \(\sim\)20 MeV and \(\sim\)1 TeV [34]. In this paper, we use \(\sim\)180 months of data spanning from 2008-08-04 to 2023-07-20 selecting CLEAN events from the PASS8 data. We include events from good quality time intervals and remove periods when the LAT was operating at rocking angles \(\theta_{r}>52\deg\) ((DATA_QUAL\(>0\)) && (LAT_CONFIG==1) && ABS(ROCK_ANGLE)\(<52\)). We also apply the zenith-angle cut \(\theta_{z}<90\deg\), to avoid contamination from the Earth limb. We limit our analysis to EDISP3 events \(\textit{evtype}=512\)), which have the best energy reconstruction (hence, best energy resolution), to minimize uncertainties relating to instrumental energy dispersion. We employ the P8R3_CLEAN_V3 version of the instrument response functions. The extraction of Fermi-LAT data and calculation of exposure maps is performed using the most up-to-date version of the ScienceTools[36] (2.0.8; released on 01/20/2021). In addition, we performed different consistency checks of our analyses using PASS7 and PASS8 front- and back-converted events, which allows us to compare our results with those obtained by the Fermi-LAT Collaboration [37]. We extract data spanning from \(10\) GeV to \(300\) GeV in \(130\) logarithmically uniform energy bins, which constitute an energy resolution \(\Delta E/E\sim 3\%\) (which is better by at least a factor of two than the intrinsic energy resolution of EDISP3 events). Closely following previous Fermi analyses, we divide our data into three regions of interest (ROIs), which are spherical regions of \(3\), \(16\), and \(41\) degrees around the Galactic center, focusing on the inner regions studied by the Fermi collaboration in Refs. [24; 37] (see also [38] for a similar approach). For each ROI, the regions of \(68\%\) containment around known sources are subtracted in every dataset following the _4FGL_DR2_ catalog, except for the ROI3 region, where no source subtraction is done. We also mask the galactic plane in regions with \(|b|<5^{\circ}\) and \(|l|>6^{\circ}\) as in Refs. [24; 37]. ### Single and double line analyses We closely follow the strategy used by the Fermi collaboration in Ref. [37] and search for spectral lines by performing maximum likelihood fits in each of our ROIs and in \(88\) sliding energy intervals (the sliding window technique from Ref. 
[39]) from \(10\) GeV to \(300\) GeV. For each interval, we fit the count spectrum in an energy window that surrounds the central energy with a width of \(\Delta E\geq 3\times\sigma_{E}(E)\)1 (where \(\sigma_{E}(E)\) is the half width of the \(68\%\) exposure-weighted energy resolution of each dataset (See [40]), assuming that the background is described by a power-law and adding a line-like signal of free amplitude, which is smeared due to the energy resolution of the Fermi-LAT (Following [41]). We use a likelihood function described by a Poisson distribution in the number of events at every energy window: Footnote 1: We repeated the analysis for window sizes between \(2\) and \(6\times\sigma_{E}(E)\) and found that the chosen width has almost no effect on our results. \[\mathcal{L}^{ROI}=\prod_{i}\frac{e^{-n(E_{i},E_{i}^{\prime})}\times n(E_{i}, E_{i}^{\prime})^{N_{i}(E_{i})}}{N(E_{i})!}, \tag{3}\] where \(N_{i}\) is the observed number of events at energy \(E_{i}\) in each dataset (ROI), and \(n_{i}\) is the expected number of counts at energy \(E^{\prime}\) that are reconstructed at energy \(E\) by the instrument. Under the null hypothesis, there is no line-like signal and the expected number of counts is found by fitting the data to a power-law describing the background emission (\(n_{i}=n_{bkg}\)). In the alternative hypothesis, the number of counts is described as \(n_{i}=n_{sig}D_{eff}+n_{bkg}\), where \(D_{eff}\) is the energy dispersion matrix that allow us to account for the energy reconstruction of the events by the LAT, and which was obtained using the _gtdrm_ Fermitool. It is important to remark that, within the energy range where we perform this analysis, the systematic uncertainties in the spectral reconstruction are expected to be negligible compared to the statistical uncertainties in the photon count, as reported by Refs. [24; 37]. This assumption does not hold at much lower energies, which would require more complex modeling of Fermi-LAT responses, as in Ref. [24]. The inclusion of systematic uncertainties would only slightly weaken our bounds at low energies, leaving the main conclusions of this manuscript unchanged. To perform the fits we rely on the Markov Chain Monte Carlo (MCMC) package _Emcee_[42], since this technique is more robust than conventional optimizers and less prone to finding false local minima. This analysis produces probability distribution functions for every parameter in the fit, which are used to estimate the credible intervals of each parameter and the DM limits. Since the best-fit number of signal events that we obtain is very low, we use the Feldman-Cousins (FC) method [43]2 to ensure that we are not mis-evaluating the confidence intervals and, hence, the limits. Using the FC method produces roughly the same upper-limits as the MCMC algorithm, except when the best-fit number of source counts is very small. Concretely, we take the best-fit values for the number of background events and number of signal events obtained from the MCMC procedure, and apply the FC method with the likelihood functions defined in Eq. 3. In this way, we reject unrealistically strong upper-limits, particularly for the downward fluctuations. Footnote 2: We acknowledge the use of the package from [https://github.com/usnistgov/FCpy/tree/main](https://github.com/usnistgov/FCpy/tree/main). 
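To make the per-window fit concrete, the sketch below is our own illustration and not the analysis code of the paper: it replaces the full gtdrm energy-dispersion matrix \(D_{eff}\) by a simple Gaussian of width sigma_E, and the bin definitions and parameter names are assumptions. In this form the log-likelihood of Eq. (3) can be handed to an optimizer or an emcee sampler, and the test statistic discussed below is \(2[\ln\mathcal{L}_{\rm best}-\ln\mathcal{L}(n_{sig}=0)]\).

```python
import numpy as np
from scipy.stats import poisson

def expected_counts(E, n_bkg, index, n_sig, E_line, sigma_E):
    """Model counts per bin: power-law background plus a Gaussian-smeared line.

    Both templates are normalised over the window, so n_bkg and n_sig are the
    total background and signal counts in the window."""
    bkg = E ** (-index)
    line = np.exp(-0.5 * ((E - E_line) / sigma_E) ** 2)
    return n_bkg * bkg / bkg.sum() + n_sig * line / line.sum()

def log_likelihood(params, E, N_obs, E_line, sigma_E):
    """Poisson log-likelihood of Eq. (3) for a single sliding window."""
    n_bkg, index, n_sig = params
    if n_bkg <= 0 or n_sig < 0:
        return -np.inf
    mu = expected_counts(E, n_bkg, index, n_sig, E_line, sigma_E)
    return poisson.logpmf(N_obs, mu).sum()

# Toy usage: a window of bins around a putative 100 GeV line.
# The expected-limit bands discussed later are obtained by repeating the fit
# on mock Poisson realisations of a pure power-law with spectral index -2.
E = np.geomspace(90.0, 112.0, 12)                      # bin centres in GeV (illustrative)
N_obs = np.random.poisson(200.0 * (E / 100.0) ** -2.0)
print(log_likelihood((N_obs.sum(), 2.0, 0.0), E, N_obs, E_line=100.0, sigma_E=3.0))
```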
For the _double-line analysis_, we repeat the same procedure as the single-line analysis, but add a correlated second line signal that accounts for DM annihilation through the Higgs into a second \(Z\gamma\) line (\(\chi+\chi\to H\to Z+\gamma\)). The relative amplitude of the \(\gamma\gamma\) and \(Z\gamma\) signals is correlated by the branching ratio of the Higgs boson to each channel. Moreover, the energy of the line signal for \(Z\gamma\) production is connected to the energy of the \(\gamma\gamma\) signal as described by Eq. 1, and we set \(E_{\gamma\gamma}=m_{\chi}\) for the annihilation process that we are considering. To account for the fact that the energy window in our double line analysis must accommodate both the \(\gamma\gamma\) and \(\mathrm{Z}\gamma\) line energies, the lower edge of the energy window considered in this analysis is set to be the minimum value between \(E_{Z\gamma}\) and \(E_{\gamma\gamma}-3\sigma_{E}\), to cover the double-line feature. This results in a larger energy window for the double-line analysis from \(50\) to \(\sim 80\) GeV, above which the lower limit of the sliding energy window coincides with the one used in the single-line analysis (i.e. the lower limit is always the \(3\sigma_{E}\) above \(80\) GeV). ## III DM bounds from the line search The expected \(\gamma\)-ray flux from the annihilation of DM particle through the process \(\chi+\chi\to\gamma+\gamma\) in a region of the sky with angular size \(\Delta\Omega\) is \[\frac{d\Phi}{dE}=\frac{1}{8\pi}\frac{\langle\sigma v\rangle_{\gamma\gamma}}{m_ {\chi}^{2}}\left(\frac{dN}{dE}\right)_{\gamma\gamma}\times\mathcal{J}^{ \mathcal{R}\mathcal{O}\mathcal{I}}(\Delta\Omega)\quad, \tag{4}\] where \(\mathcal{J}^{\mathcal{R}\mathcal{O}\mathcal{I}}(\Delta\Omega)\) is the astrophysical J-factor that describes the expected annihilation rate given a specific choice of ROI and a DM distribution, \(\left(\frac{dN}{dE}\right)_{\gamma\gamma}=2\times\delta(E-E_{\gamma\gamma})\) is the \(\gamma\)-ray yield per annihilation, \(m_{\chi}=E_{\gamma\gamma}\) is the mass of the WIMP, and \(\langle\sigma v\rangle_{\gamma\gamma}\) is the annihilation rate to the \(\gamma\gamma\) channel and is related to the total annihilation rate via the mediator-dependent branching fraction \(\langle\sigma v\rangle_{\gamma\gamma}=\text{BR}_{\gamma\gamma}\times\langle \sigma v\rangle_{\text{ann}}\), where \(\langle\sigma v\rangle_{\text{ann}}\) is the total DM annihilation rate. For the \(Z\gamma\) process, this same formula holds, but with \(\langle\sigma v\rangle_{Z\gamma}=\text{BR}_{Z\gamma}\times\langle\sigma v \rangle_{\text{ann}}\) and \(\left(\frac{dN}{dE}\right)_{Z\gamma}=\delta(E-E_{Z\gamma})\), where the photon is produced at the energy given by Eq. 1. The main uncertainty in deriving limits on the annihilation rate is the J-factor, \(\mathcal{J}^{\mathcal{R}\mathcal{O}\mathcal{I}}(\Delta\Omega)\), which directly depends on the Milky Way DM distribution. Here, we assume a local DM density of \(0.4\) GeV cm\({}^{-3}\)[44] and a distance from the Solar System to the GC of \(8.5\) kpc. We characterize two DM distributions, the NFW and a contracted-NFW profile with an index \(\gamma=1.3\) (motivated by studies of the GCE [45; 46]), both with a scale radius of \(r_{s}=20\) kpc. ### Significance for lines in the gamma-ray spectrum Figure 2 shows the test-statistic (TS) computed for the single-line and double-line analyses as a function of the DM mass. 
This produces an accurate calculation (assuming Wilks' theorem holds) for the local significance of any line signal (\(\sigma_{local}\sim\frac{n_{line}}{\sqrt{n_{bkg}}}\)), which can be calculated as \[\mathrm{TS}=2\,\ln\frac{\mathcal{L}(n_{sig}=n_{sig,\text{Best}})}{\mathcal{L}(n_{sig}=0)}. \tag{5}\] Although the J-factor constitutes the largest uncertainty on the expected annihilation signal from the GC region, we remind the reader that the TS is independent of the J-factor employed. We find no statistically significant peaks (exceeding a \(3\sigma\) local significance) in any dataset. The most statistically significant peaks hardly exceed \(2\sigma\) and are not repeatedly present in the different ROIs (i.e. a fluctuation not present in all the datasets). We have performed an analogous analysis without fixing the branching ratio between the \(Z\gamma\) and \(\gamma\gamma\) processes to the values predicted by the Higgs portal. Also in this general case no significant excess signals were observed and the local significance is roughly identical to the one obtained in the double-line analysis. Figure 2: Local significance obtained in the \(10\) to \(300\) GeV range. We compare the result of the single-line analysis (solid lines) and double-line analysis (dashed lines) for each ROI: \(3^{\circ}\) (upper), \(16^{\circ}\) (middle) and \(41^{\circ}\) (lower) around the center of the Galaxy. For those signals where there is no preference for a positive number of counts over the background the significance (\(\sigma_{local}\)) is set to 0. ### DM bounds from the single-line and double-line analyses Given the lack of significant excesses in the \(\gamma\)-ray spectrum, we derive the confidence limits for both the single- and double-line analyses. We produced the bands depicting the \(68\%\) and \(95\%\) confidence intervals of the observed limits by generating mock data following a power-law distribution with spectral index of \(-2\) and Poissonian noise. We repeat the analysis for \(750\) iterations (we find that the bands remain stable above \(\sim 400\) iterations), similar to the approach of Refs. [24; 37]. Figure 3 shows the derived limits for ROI41, which are slightly stronger than (but similar to) the limits for the other ROIs. We show results for the other ROIs in Figure 7 in Appendix B. We also include the observed limits for the double-line analysis as a dashed line, finding that these constraints become stronger at higher energies when the \(Z\gamma\) cross-section becomes larger than the \(\gamma\gamma\) cross section. The ratio \(\text{BR}_{Z\gamma}/\text{BR}_{\gamma\gamma}\) is derived from [47] and is discussed in the supplemental material. In Figure 1, we compare our single-line limits with those obtained by the Fermi-LAT collaboration (using \(5.8\) yr of data) [24] and those obtained by Foster et al. [48] (using \(\sim\)14 years of Fermi-LAT data). The main differences between our analysis and Ref. [48] include: (1) their analysis employs
a constant energy-window size (of \(\Delta E/E\sim 0.64\)), while our analysis utilizes a variable window size based on the local Fermi-LAT energy resolution, (2) their analysis employs a single ROI focused on the inner \(30^{\circ}\) around the GC and utilizes a different Galactic plane cut, (3) they utilize SOURCE class photon events with energy reconstructions spanning EDISP 1-3 (the top 75% of well-reconstructed energy events) while we use only CLEAN class photon events from EDISP3 (the \(25\%\) of reconstructed events with the best energy resolution), and (4) they do not subtract regions surrounding bright \(\gamma\)-ray point sources, while we eliminate these background-dominated regions from our analysis in all ROIs except for ROI3. Despite these differences, our single-line results are in good agreement, as seen in Fig. 1. The DAMPE collaboration recently published the results of their single-line analysis using \(5\) yr of data collected by the DAMPE instrument [49], obtaining similar bounds to those found in Ref. [24] (see also Ref. [50]). ## IV Implication for Higgs-mediated annihilation Higgs-mediated annihilation is unique from the perspective that collider-level precision can be used in a DM framework. Since SM processes govern the branching ratios for the annihilation final states, and the DM coupling to the Higgs is fixed by the freeze-out condition, the only unknown parameter is the DM mass. However, there are several model choices that affect the relationship between the annihilation cross-section and the expected event rates in direct detection and collider experiments [51, 35]. Here we mention two well-motivated models: * A singlet scalar model \(S\), with mass \(m_{S}\), which after the electroweak symmetry breaking has the following relevant coupling to the Higgs boson, \[-\mathcal{L}\supset\frac{1}{2}m_{S}^{2}S^{2}+\frac{\lambda_{p}v_{H}}{2}hS^{2}\,,\] (6) where \(\lambda_{p}\) is a dimensionless coupling, and \(v_{H}\) is the Higgs field vacuum expectation value [26, 27, 28]. In this case the spin-independent direct detection cross section is not suppressed, and only \(\lambda_{p}<10^{-3}\) values are compatible with the current limits [52, 32]. The GCE signal can thus only be explained in a very narrow mass range around the Higgs resonance [33]. * A Majorana fermion model \(\chi\), with mass \(m_{\chi}\) and the following low-energy couplings \[-\mathcal{L}\supset\frac{1}{2}m_{\chi}\bar{\chi}\chi+i\frac{y_{p}}{2}h\bar{\chi}\gamma_{5}\chi\] (7) where \(y_{p}\) is a dimensionless coupling parameter. In this case the direct detection cross section is suppressed for real values of \(y_{p}\) [35, 25], which means that we would not expect a signal in direct detection searches. Therefore, this model is not constrained by direct detection experiments, and is only visible in collider and indirect detection searches. We note that both scenarios are subject to invisible Higgs decay constraints, which limit BR\((h\to\text{inv.})<0.11\) [53] and are difficult to avoid. The total annihilation cross section in both scenarios is given by \[\sigma_{\text{ann.}}(s)=\frac{f(m_{\text{DM}},\lambda_{\text{DM}})}{\left(s-m_{h}^{2}\right)^{2}+\Gamma_{h}^{2}m_{h}^{2}} \tag{8}\] where \(s\) is the total s-channel four-momentum, \(m_{\text{DM}}\) and \(\lambda_{\text{DM}}\) are the DM mass and coupling strength to the Higgs, and \(\Gamma_{h}\) is the total Higgs boson decay width. As discussed in Ref.
[54], the thermal averaging of the cross-section at a resonance needs to be performed without the non-relativistic expansion of the annihilation cross section. The fact that the thermal average of the annihilation cross section in the early universe is significantly different from the average at late times leads to a strong late-time mass dependence of the annihilation cross section around the resonance, as discussed in detail in Ref. [32]. Figure 4 shows the Higgs portal parameter space with the predicted total annihilation rate as a function of the DM mass. It is intriguing that \(\gamma\)-ray line searches have the strongest sensitivity around the Higgs resonance, a region in parameter space that is typically challenging to test. We make use of the fact that, given the Fermi-LAT energy resolution, the line signatures can be well distinguished from the continuum emission, such as final state radiation, up to \(\gamma\)-ray energies of \(E_{\gamma\gamma}\sim 300\) GeV, as discussed in Refs. [31, 33]. Figure 3: DM bounds for the ROI41 region assuming an NFW Galactic DM profile, including the \(68\%\) and \(95\%\) confidence intervals obtained from the single-line analysis. We show the single-line and double-line limits as a solid and a dashed line, respectively, and compare to the limits from Refs. [24]. ## V Implications for the Galactic center excess Observations of \(\gamma\)-ray emission from the Milky Way galactic center have long revealed a \(\gamma\)-ray excess that has been named the GCE [56; 57; 58; 59; 60; 61]. While the origin of the GCE is disputed, the two most compelling explanations involve DM annihilation or the combined emission from a population of millisecond pulsars (MSPs) [62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75]. Within the context of DM models, a DM candidate that annihilates predominantly via \(\chi\chi\to b\bar{b}\) with a mass \(m_{\chi}\) between \(40\) GeV and \(70\) GeV is highly consistent with the data [59; 60]. Since this tree-level final state involves charged particles, there is unavoidably a loop process leading to monoenergetic \(\gamma\gamma\) and, as dictated by electroweak symmetry, narrow \(Z\gamma\) photon lines. However, the simple \(b-\)quark loop produces a branching ratio of the order of BR\({}_{\gamma\gamma}\sim\alpha_{\rm EM}^{2}/(4\pi)\sim 10^{-6}\), which falls far below current experimental sensitivities. On the other hand, a dominant branching fraction into \(b-\)quark states in this mass range is hard to explain unless the interaction is related to the quark Yukawa couplings. Thus, we are naturally led to the Higgs-boson mediated scenario. Notably, a Higgs-motivated \(b\bar{b}-\)annihilation rate that fits the GCE unambiguously predicts bright \(\gamma\gamma\) and \(Z\gamma\) signals. Figure 5 shows the \(95\%\) confidence interval for the GCE signal predictions for the annihilation rates \(\langle\sigma v\rangle_{\gamma\gamma}\) and \(\langle\sigma v\rangle_{Z\gamma}\), as a red and magenta ellipse, respectively. Those regions are derived from the best-fit region of Ref. [60]. We compare the predicted rates with our best limits from the double-line analysis of 15 years of Fermi-LAT data, and find that our results are beginning to be in tension with minimal Higgs portal mediated scenarios as an explanation for the GCE. By minimal we mean models based on the interactions in Eqs.
6, and 7, in which the relic density is determined by the s-channel Higgs annihilation (shown as black dashed line), and given the searches for invisible Higgs decays apply. Note that other approaches find best-fit parameter regions [76], that are only consistent with the relic density predictions within the excluded parameter range. ## VI Discussion and Conclusion In this _letter_, we have reanalyzed 15 years of Fermi-LAT data and studied spectral signatures stemming from DM annihilation to monoenergetic lines. We find no evidence for any statistically significant excesses and set strong limits on DM annihilation to mono-energetic photons. Our results can be applied to a broad range of DM scenarios, constraining DM masses up to (or beyond) the electroweak scale in many well-motivated DM models. If non-perturbative effects, such as bound-state formation are taken into account [77; 78; 79], the reach extends to even higher DM masses. Additionally, we performed the first double-line analysis of the full Fermi-LAT data-set, using 15 years of data. We placed strong limits on thermal DM that is coupled to the Standard Model through the Higgs portal, in a largely model independent way. Our results are in moderate tension (but do not entirely rule out), Higgs Portal models of the GCE with dark matter masses that sit near the m\({}_{H}\)/2 resonance. This parameter space is of interest due to the fact that it is the only portion of the Higgs Portal parameter space that is consistent with the GCE and constraints from the branching ratios to invisible particles. Moreover, we note that our constraints are roughly independent of the dark matter density profile near the galactic center, as the cross-sections for both the GCE continuum and the line search shift in the same way. We emphasise, that the double-line technique can be applied to other datasets, and is particularly promising at energies near the Higgs resonance. Furthermore, it has an enhanced discovery potential for \(\gamma\)-ray signals that have a limited sensitivity due to low photon counts, as it makes use of additional information from photons in correlated energy bins. Figure 4: The Higgs portal total annihilation rate as a function of the DM mass. We superimpose our constraint from the double-line analysis on the total annihilation rate (red solid). Additionally we show the dwarf spheroidal limits from the Fermi-LAT collaboration [55] (green solid), as well as the constraints from invisible Higgs decay searches [53] (gray dashed). The rate factor predicted by the relic density constraint is shown as a function of DM mass (black dashed). Figure 5: The predictions for the \(\gamma\gamma\) (dark red) and \(Z\gamma\) (magenta) annihilation rates in the Higgs portal model, given the GCE \(95\%\) confidence interval found in Ref. [60]. We superimpose our double-line search limits from a gNFW profile in ROI41 (red solid), and the \(\langle\sigma v\rangle_{\gamma\gamma}\) values predicted by the freeze-out (black dashed). Note that in contrast to constraints from dwarf galaxies this search for \(\gamma-\)lines in the GC does not have an intrinsic J-factor uncertainty, when the comparison to the continuum excess is made. ## Acknowledgements We would like to thank Ben Safdi, Josh Foster, Rebecca Leane, and Linda Xu for helpful comments and discussions. PD and TL are supported in part by the European Research Council under grant 742104 and the Swedish National Space Agency under contract 117/19. 
JS was also supported by the European Research Council under grant 742104 during the initial stage of the project. TL is also supported by the Swedish Research Council under contracts 2019-05135 and 2022-04283. This project used computing resources from the Swedish National Infrastructure for Computing (SNIC) under project Nos. 2021/3-42, 2021/6-326, 2021-1-24 and 2022/3-27 partially funded by the Swedish Research Council through grant no. 2018-05973. ## Appendix A Fermi counts spectra and signal fit In this appendix we show examples of the derived count spectra, and the template functions used in the single- and double-line analyses in the first figure panel. We furthermore show the limits obtained from the single-line analysis for all ROIs studied in this work in the second figure panel, and for the double-line analysis in the third figure panel. Finally, we report the ratio \(\text{BR}_{2\gamma}/\text{BR}_{\gamma\gamma}\) as function of the DM mass, as derived from [47] (consistent with analytic calculations from [80]), which motivates the application of the double-line analysis for \(\mathcal{O}\)(TeV) gamma-ray data Figure 6 shows the count spectrum for the ROI16 region at two different energies: at around \(106\) GeV (upper row), where fluctuations at the level of \(1-2\sigma\) are observed for all ROIs (see Fig. 2) and at \(155\) GeV, where the signals from the photons produced in the \(\gamma\gamma\) and \(Z\gamma\) decay start to merge into the same energy bin (leading to a significantly stronger constraint compared to the one from the single-line analysis above this energy). Here, we include the fitted count spectra assuming only background (null fit) and assuming background+signal (signal fit), which allows us to see a comparison of the signals searched in the single-line (right panels) and double-line (left panels) analysis. Figure 6: Count spectra for two different energy windows comparing the double-line (left column) and single-line (right column) best-fit signals. In the upper panels, the energy window is centered at \(\text{E}_{\gamma\gamma}\sim 106\) GeV (\(\text{E}_{Z\gamma}\sim 87\) GeV) and in the lower panels at \(\text{E}_{\gamma\gamma}\sim 155\) GeV (\(\text{E}_{Z\gamma}\sim 142\) GeV). At energies greater than \(\text{E}_{\gamma\gamma}\sim 180\) GeV both signals expected in the double-line analysis merge. ## Appendix B Single-line limits Figure 7 shows the limits obtained from the single-line analysis compared to those from Refs. [48; 49; 24]. The ROI16 region constitutes the region that leads to better bounds. Thus, in the upper panels, we show the limits derived assuming an NFW profile, on the left, and a contracted-NFW profile (with index \(\gamma=1.3\)), on the right. The contracted-NFW profile has been found in several papers to be the DM profile most compatible with the GCE. In this case, the limits from other works have been rescaled accordingly. Then, in the lower panels we show the limits obtained for the ROI3 (left) and ROI41 (right) regions. As we see, the ROI3 region offers the most conservative limits, which is due to the smaller ROI, which produces many fewer counts and larger Poissonian uncertainties, while the ROI41 region leads to similar but slightly higher bounds than in the ROI16 region. Larger ROIs can have large systematic uncertainties associated to the analysis and a lower expected signal-to-noise ratio for a DM signal. 
Figure 7: DM bounds and confidence bands for different ROIs obtained from the single-line analysis, compared to those from Refs. [48; 49; 24]. The upper panels show the limits for the ROI16 region, assuming an NFW and a contracted-NFW (with index \(\gamma=1.3\)) DM profile, for the upper-left and upper-right panels, respectively. The limits from other works have been rescaled in the case where we show the limits assuming a c-NFW profile. The lower panels show the limits for the ROI3 (left) and ROI41 (right) regions. ## Appendix C Double-line limits Figure 8 shows the DM bounds and confidence bands obtained from the double-line analysis for the ROI16 (left panel) and ROI41 (right panel), compared to the limits derived from the single-line analyses, drawn assuming a c-NFW profile (with index \(\gamma=1.3\)). This show that the double-line analysis would constitute a more sensitive way to look for line-like gamma-ray signals at very high energies. The reason is that at high energies the photons from the \(\gamma\gamma\) and \(Z\gamma\) decays are produced at roughly the same energy and the branching ratio for the \(Z\gamma\) decay becomes much higher than for the \(\gamma\gamma\) process, leading to a more constraining limit of \(\left\langle\sigma v\right\rangle_{\gamma\gamma}\). This means that a search for lines at the energies where the \(Z\gamma\) decay dominates lead to a much stronger constraint due to the sum of the signals coming from the \(\gamma\gamma\) and \(Z\gamma\) decays and the branching ratio for the \(Z\gamma\) process. Figure 9 displays the signal strength ratio between the \(Z\gamma\) and \(\gamma\gamma\) mono-energetic photon signals for Higgs mediated annihilation. At low energies the massive Z-boson emission suppresses the signal, in the intermediate mass range the ratio is determined by the coupling ratio between the weak and electromagnetic couplings \(\alpha_{\rm EW}\sim 1/29\) and \(\alpha_{\rm EM}\sim 1/137\), as well as the photon multiplicity. Finally, at larger masses the loop factor of the \(\gamma\gamma\) signal is suppressed due to the destructive interference of virtual particles, which enhances the relative \(Z\gamma\) signal strength. Figure 8: DM bounds and confidence bands for different ROIs obtained from the double-line analysis, compared to the limits derived from the single-line analyses, for the ROI16 (left panel) and ROI41 (right panel), assuming a c-NFW profile (with index \(\gamma=1.3\)). Figure 9: The signal-strength ratio of the \(Z\gamma\) and \(\gamma\gamma\) mono-energetic photon signals for Higgs mediated annihilation [47].
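As a closing usage note appended here (our addition, not part of the paper): integrating Eq. (4) over energy for the monochromatic \(\gamma\gamma\) spectrum gives \(\Phi_{\gamma\gamma}=\langle\sigma v\rangle_{\gamma\gamma}\,\mathcal{J}^{ROI}/(4\pi m_{\chi}^{2})\), so a line-flux upper limit in a given ROI translates directly into a cross-section limit. The numbers below are placeholders; the J-factor must be computed for the chosen ROI and density profile and is not a value taken from the paper.

```python
import math

def sigmav_gamma_gamma(flux_ul, m_chi, j_factor):
    """Upper limit on <sigma v>_gg from a line-flux upper limit, via Eq. (4).

    flux_ul  : photon-flux upper limit in the ROI [photons cm^-2 s^-1]
    m_chi    : DM mass, equal to the gamma-gamma line energy [GeV]
    j_factor : J-factor of the ROI [GeV^2 cm^-5]
    returns  : <sigma v>_gg [cm^3 s^-1]
    """
    return 4.0 * math.pi * m_chi ** 2 * flux_ul / j_factor

# Placeholder inputs only (not values from the paper):
print(sigmav_gamma_gamma(flux_ul=1e-10, m_chi=100.0, j_factor=1e22))
```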
2309.05828
Exploring Geometric Deep Learning For Precipitation Nowcasting
Precipitation nowcasting (up to a few hours) remains a challenge due to the highly complex local interactions that need to be captured accurately. Convolutional Neural Networks rely on convolutional kernels convolving with grid data and the extracted features are trapped by limited receptive field, typically expressed in excessively smooth output compared to ground truth. Thus they lack the capacity to model complex spatial relationships among the grids. Geometric deep learning aims to generalize neural network models to non-Euclidean domains. Such models are more flexible in defining nodes and edges and can effectively capture dynamic spatial relationship among geographical grids. Motivated by this, we explore a geometric deep learning-based temporal Graph Convolutional Network (GCN) for precipitation nowcasting. The adjacency matrix that simulates the interactions among grid cells is learned automatically by minimizing the L1 loss between prediction and ground truth pixel value during the training procedure. Then, the spatial relationship is refined by GCN layers while the temporal information is extracted by 1D convolution with various kernel lengths. The neighboring information is fed as auxiliary input layers to improve the final result. We test the model on sequences of radar reflectivity maps over the Trento/Italy area. The results show that GCNs improves the effectiveness of modeling the local details of the cloud profile as well as the prediction accuracy by achieving decreased error measures.
Shan Zhao, Sudipan Saha, Zhitong Xiong, Niklas Boers, Xiao Xiang Zhu
2023-09-11T21:14:55Z
http://arxiv.org/abs/2309.05828v1
# Exploring Geometric Deep Learning For Precipitation Nowcasting ###### Abstract Precipitation nowcasting (up to a few hours) remains a challenge due to the highly complex local interactions that need to be captured accurately. Convolutional Neural Networks rely on convolutional kernels convolving with grid data and the extracted features are trapped by limited receptive field, typically expressed in excessively smooth output compared to ground truth. Thus they lack the capacity to model complex spatial relationships among the grids. Geometric deep learning aims to generalize neural network models to non-Euclidean domains. Such models are more flexible in defining nodes and edges and can effectively capture dynamic spatial relationship among geographical grids. Motivated by this, we explore a geometric deep learning-based temporal Graph Convolutional Network (GCN) for precipitation nowcasting. The adjacency matrix that simulates the interactions among grid cells is learned automatically by minimizing the L1 loss between prediction and ground truth pixel value during the training procedure. Then, the spatial relationship is refined by GCN layers while the temporal information is extracted by 1D convolution with various kernel lengths. The neighboring information is fed as auxiliary input layers to improve the final result. We test the model on sequences of radar reflectivity maps over the Trento/Italy area. The results show that GCNs improves the effectiveness of modeling the local details of the cloud profile as well as the prediction accuracy by achieving decreased error measures. Shan Zhao \({}^{1}\), Sudipan Saha \({}^{1,4}\), Zhitong Xiong \({}^{1}\), Niklas Boers\({}^{2,3}\), Xiao Xiang Zhu \({}^{1}\) Data Science in Earth Observation, Technical University of Munich (TUM), Ottobrunn, Germany 1 Earth System Modelling, TUM, Ottobrunn, Germany 2 Potsdam Institute for Climate Impact Research, Potsdam, Germany 3 Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, India 4 ## 1 Introduction Earth observation and remote sensing are crucial technologies for monitoring our living environments [1]. Typically based on accurate and high-resolution radar data, precipitation nowcasting aims to estimate the rainfall intensity in the near future (usually up to 2 hours) [2], which is important for water resources management, agriculture, and disaster management. Precise precipitation prediction has remained challenging due to the multi-scale nature of precipitation, with relevant processes with scales ranging from millimeters (droplet formation and cloud microphysics) to thousands of kilometers (large-scale circulation patterns). Numerical Weather Prediction (NWP) integrates discretized versions of the underlying equations of fluid mechanics as initial value problem [3]; although the accuracy has improved greatly in recent decades [4], there remain considerable uncertainties [5] due to the finite grid resolution, errors in parameterizing unresolved processes, and the chaotic dynamics of the atmosphere. Alternatively, data-driven methods provide promising alternatives to process-based models for precipitation nowcasting. Deep learning (DL) models have been shown great potential in capturing the temporal characteristics of rainfall [2, 6]. In particular, deep generative models have recently been shown to outperform existing NWP and other DL (e.g. CNN) benchmarks in nowcasting precipitation [7]. 
Moreover, the DL and NWP combined methods drive the black-box DL to be more physics-aware and show an improved sensitivity to heavy rains [8, 9, 10]. Radar intensity maps, collected by local radar stations with high spatial and temporal resolution, are commonly used as the input of precipitation nowcasting models. For data-driven nowcasting based on such data, Ayzel et al. [6] apply a U-shape encoder-decoder segmentation network to generate the output. ConvLSTM and Trajectory GRU (TrajGRU) [2] can learn the location-invariant/variant structures for recurrent connections. Precipitation nowcasting depends on complex spatial interactions that are not fully taken into account by the above-mentioned models. Traditional CNNs are only suitable for gridded data and assume translation-invariance. This assumption does not hold for climate data because the cloud pattern usually displays transformations (e.g., rotation) [2]. Geometric deep learning can overcome such limits by dealing with graph-structured data [11]. Graphs are widely used for geographic applications. For example, climate networks represent observations on each grid cell as nodes and compute the similarity between nodes as edges [12, 13, 14, 15]. This has led to improved predictability and process understanding of extreme rainfall events [16, 17]. Keisler [18] uses GNNs to generate multivariate global weather forecasts. Cachay et al. [19] adopted a graph learning layer to automatically learn the adjacency matrix to predict El Niño events. Since precipitation patterns exhibit high spatial and temporal variability, we explore geometric DL for precipitation nowcasting. To be more specific, we expect that the relevant dynamical spatial relationships can be more effectively captured by the nodes and edges of a graph convolutional network (GCN). The main contributions of our work include: * We explore geometric deep networks for the task of precipitation nowcasting. Specifically, we apply a multivariate time series GCN to handle the complex relationships between different geographical locations on the radar data [20]. * To further enhance the communication between neighboring pixels, we employ feature layers from the geographical proximity area as augmented inputs to improve the final results. * We conduct a comprehensive evaluation to assess both pixelwise accuracy and spatial granularity of the prediction. ## 2 Methods We formulate the precipitation nowcasting task as a single-step video prediction problem. Given \(S\) past observations \(X_{i}\) (\(i=t-S+1,t-S+2,\ldots,t\)), \(X_{i}\in R^{C\times W\times H}\), where \(W,H\) are the spatial width and height and \(C\) is the number of input channels, our task is to predict the most likely frame at a future time step, i.e., \(X_{t+T}\). \(t\) is the current time step, and \(T\) is the horizon, i.e., how far ahead the model predicts the future. Here, we organize each pixel as a node in the graph and randomly initialize its representation. The node similarity is then learned using a small network by minimizing the final prediction loss. The temporal changes are captured by 1D convolutions along the temporal dimension, and the adjacency matrix is shared among the frames in the same sequence. Last, the GCN is adopted to refine the graph structure and outputs the final pixel-wise prediction. ### Spatial relationship design For a given frame, to capture the spatial relationship, we treat each grid cell as a node in the graph.
Let us assume that the number of nodes in a frame is \(N_{node}\). Since Wu et al. [21] validate that geographical vicinity is not adequate in representing closeness in the feature space, we adopt the Graph Learning Layer in [21] to let the model learn the adjacency matrix in an automatic and flexible manner. First, the node embedding is randomly initialized. Then, the linear transformation \(\theta\) and nonlinear transformation \(f_{a}\) are applied to the initial embedding. The adjacency matrix is learned by a parameterized function \(g_{\theta}\) which considers the difference between the updated node embeddings. \[\mathbf{A}=g_{\theta}\big{(}f_{a1}\left(\theta_{1}\,\mathbf{E}_{1}\right),f_{a2}\left(\theta_{2}\,\mathbf{E}_{2}\right)\big{)}, \tag{1}\] where \(\mathbf{E}_{1}\) and \(\mathbf{E}_{2}\) are node embeddings. Since in the real world most graphs are sparse and the full matrix contains noisy information, we post-prune the adjacency matrix by only preserving the top \(k\) neighbors of each node. ### Node representation update Afterwards, the graph convolutional layer is applied to refine the graph structure and to update the node embedding. The output from this step is an adjacency matrix of size \(N_{node}\times N_{node}\) and updated node representations. The same graph structure (\(\mathbf{A}\)) is shared among different input time steps for fast convergence. ### Temporal information extraction We apply a 1D convolution along the temporal dimension to extract the temporal correlation. Smaller kernels tend to capture short-term signal patterns while larger kernels are better at modeling long-term signal patterns. In our method, the inception module [21] is composed of several kernels of various sizes. The temporal features extracted by the differently sized kernels are concatenated with each other to cover all types of movement at different speeds. For one temporal convolutional module, two inception modules [21] run in parallel and are activated by tangent hyperbolic activation and sigmoid activation, respectively, to control the selection and flow of the information. The L1 loss between the prediction and the ground truth observation is computed and minimized. Figure 1: Single-step precipitation nowcasting given input sequence length \(S\) and horizon \(T\). The feature dimension is augmented. ## 3 Experimental Validation ### Dataset We validate the performance of the model on TAASRAD19 [20]. TAASRAD19 contains radar reflectivity data collected by a single-polarization Doppler C-Band radar covering a \(250\times 250\)\(km^{2}\) area of Trentino-Alto Adige/Südtirol (N 46\({}^{\circ}\)29\({}^{\prime}\)18\({}^{\prime\prime}\), E 11\({}^{\circ}\)12\({}^{\prime}\)38\({}^{\prime\prime}\)). The temporal resolution is 5 minutes and the spatial resolution is 500 meters. To reduce the computation cost, we downsample the image every 5 pixels. To prevent the loss of information caused by downsampling and to promote neighboring information extraction, we smooth the raw inputs (without downsampling) using 3\(\times\)3, 5\(\times\)5, 9\(\times\)9, and 25\(\times\)25 mean filters and attach them to the input frame. All inputs are normalized to (0,1) as a pre-processing step to make them suitable for the deep learning model. The output is the radar reflectivity, which can be converted to rainfall intensity values (mm/h) by the Z-R relationship \[dBZ=10\text{log}a+10b\text{log}R, \tag{2}\] where \(R\) is the rain-rate level, \(dBZ\) is the logarithmic radar intensity value, and \(a=58.53\), \(b=1.56\), as provided in [20].
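The following sketch is our own illustration of the pre-processing just described (it is not the authors' code): multi-scale mean filtering as augmented channels, (0,1) normalization, and the conversion between reflectivity in dBZ and rain rate via Eq. (2) with \(a=58.53\), \(b=1.56\), interpreting log as log10 as is standard for dBZ. The maximum reflectivity used for normalization and the frame size are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

A, B = 58.53, 1.56  # Z-R coefficients reported for TAASRAD19

def dbz_to_rain_rate(dbz):
    """Invert Eq. (2): dBZ = 10*log10(a) + 10*b*log10(R)  ->  R in mm/h."""
    return 10.0 ** ((dbz - 10.0 * np.log10(A)) / (10.0 * B))

def rain_rate_to_dbz(rain_rate):
    """Eq. (2) in the forward direction."""
    return 10.0 * np.log10(A) + 10.0 * B * np.log10(rain_rate)

def augment_with_mean_filters(frame, sizes=(3, 5, 9, 25)):
    """Stack the raw frame with mean-filtered copies as extra channels (C, W, H)."""
    channels = [frame] + [uniform_filter(frame, size=s) for s in sizes]
    return np.stack(channels, axis=0)

def normalize01(x, x_min=0.0, x_max=55.0):
    """Scale reflectivity to (0, 1); the maximum dBZ value here is an assumption."""
    return np.clip((x - x_min) / (x_max - x_min), 0.0, 1.0)

# Toy usage on a random reflectivity frame (frame size is illustrative only)
frame = np.random.uniform(0.0, 55.0, size=(480, 480))
x = normalize01(augment_with_mean_filters(frame))
print(x.shape)                      # (5, 480, 480): raw + four smoothed channels
print(dbz_to_rain_rate(35.0))       # ~13 mm/h for a 35 dBZ echo
```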
### Evaluation We compute the mean absolute error (MAE), root mean squared error (RMSE), and correlation coefficient (CORR) to evaluate the proximity between the prediction and the ground truth pixel value. To test the ability of the models to predict rainfall events, we further compute the Critical Success Index (CSI) and the Heidke Skill Score (HSS) [22]. However, all of the above metrics only measure pixelwise accuracy. Deep learning tends to fool them by producing oversmoothed results [7]. To assess the spatial granularity, the radially-averaged power spectral density (PSD) at different percentiles of all values is reported. All values below 1E-3 are masked during the assessment. We choose the following single-step prediction baseline models for comparison. * Mean of input states. The output is simply the average of the input frames. * ConvLSTM. ConvLSTM is developed from LSTM using convolutional operations as gates. We use a three-layer encoding-forecasting structure with the number of filters for RNNs set to 64, 192, 192, and kernel sizes equal to 4 \(\times\) 4, 3 \(\times\) 3 and 2 \(\times\) 2. ### Results The network is optimized with the Adam optimizer [23] with an initial learning rate of 1e-4 and weight decay of 1e-5. We set the batch size to 32. The number of training epochs is 15 for GCN and 20 for ConvLSTM. We set the number of training epochs by inspecting performance on the validation set. The initial node embedding has a dimension of 40 and the nonlinearity is \(tanh\). The depth of the graph convolutional layer is 2. The top \(k=20\) neighbors of each node are preserved. We split the data from the year 2019 into a ratio of 6/2/2 for the training/validation/test sets. We evaluate the model performance by comparing it with other temporal prediction methods. The results are shown in Table 1. We present the accuracy at T=10, as this horizon is the most relevant for nowcasting periods of 1-2 hours. As the horizon increases, the performance of all models declines. GCN outperforms the benchmarks. The graph structure is constantly updated during the learning phase and it has the potential to store temporal information. The mean of the input states performs the worst of the three methods. Figure 2 displays the visual comparison between the ground truth and the output of the deep learning models. Though ConvLSTM wins against GCN by a small margin on correlation, it returns overly blurred results due to the large kernel sizes and reproduces fewer details of the cloud shape. In Figure 3, the CSI and HSS of the geometric DL are better than those of ConvLSTM, indicating overall better prediction of rainfall events. The PSD decreases almost linearly with increasing wavenumber in Figure 4, i.e., the picture is dominated by coarse features. The geometric DL is slightly better at capturing finer spatial patterns compared with ConvLSTM. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Method** & **MAE** & **RMSE** & **CORR** \\ \hline Average & 56.8000 & 3.7880 & 0.5231 \\ \hline ConvLSTM [24] & 73.3423 & 3.7261 & **0.6831** \\ \hline Proposed GCN & **55.2068** & **3.2355** & 0.6784 \\ \hline \end{tabular} \end{table} Table 1: Error scores. MAE, RMSE, and correlation coefficient are computed at h=10. Figure 3: Metrics to evaluate predictability of rainfall events. CSI and HSS averaged over all test samples are reported. Figure 2: Results at horizon=10. From top to bottom are the ground truth image, the prediction by ConvLSTM, and the prediction by the proposed geometric deep learning-based GCN.
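To make the event-based scores concrete, here is a small sketch (our addition; the rain/no-rain threshold of 0.5 mm/h is an assumed choice, not taken from the paper) of how CSI and HSS can be computed from binarized prediction and observation maps:

```python
import numpy as np

def contingency(pred, obs, threshold=0.5):
    """Binarize rain maps at a threshold (mm/h) and count hits/misses/false alarms."""
    p, o = pred >= threshold, obs >= threshold
    tp = np.sum(p & o)    # hits
    fn = np.sum(~p & o)   # misses
    fp = np.sum(p & ~o)   # false alarms
    tn = np.sum(~p & ~o)  # correct negatives
    return tp, fn, fp, tn

def csi(tp, fn, fp, tn):
    """Critical Success Index: TP / (TP + FN + FP)."""
    return tp / max(tp + fn + fp, 1)

def hss(tp, fn, fp, tn):
    """Heidke Skill Score: 2(TP*TN - FN*FP) / ((TP+FN)(FN+TN) + (TP+FP)(FP+TN))."""
    denom = (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn)
    return 2.0 * (tp * tn - fn * fp) / max(denom, 1)

# Toy usage with random rain fields
rng = np.random.default_rng(0)
pred, obs = rng.gamma(1.0, 1.0, (64, 64)), rng.gamma(1.0, 1.0, (64, 64))
print(csi(*contingency(pred, obs)), hss(*contingency(pred, obs)))
```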
Further, we test the effect of augmenting the feature channels of the GCN. The data dimension is shown in Figure 1. The results with and without the augmented layers are compared in Table 2. To save time, only 10 sequences are trained for 1 epoch. With the mean-filtered layers as augmented feature dimensions, the error is reduced significantly. However, the training time increases, which needs to be balanced against the model performance.

## 4 Conclusion

In this paper, we investigated the potential of geometric DL in the context of precipitation nowcasting on TAASRAD19 radar echo data. Our experiments show that the geometric DL-based GCN outperforms several baselines and yields improved prediction accuracy, including better capturing of details. In particular, the PSD results confirm that the geometric method is promising for preserving detailed spatial features. Moreover, we validate the merits of using augmented feature layers when dealing with a large number of input pixels. We postulate that the performance will vary with the cloud/precipitation type. If the divergence between different cloud types is large, domain adaptation can be adopted to improve the generalization ability of the model in handling complex weather conditions. In the future, we plan to incorporate other climate variables, such as pressure, wind, and temperature, into the model to further improve the prediction accuracy for longer horizons.
2301.13868
PADL: Language-Directed Physics-Based Character Control
Developing systems that can synthesize natural and life-like motions for simulated characters has long been a focus for computer animation. But in order for these systems to be useful for downstream applications, they need not only produce high-quality motions, but must also provide an accessible and versatile interface through which users can direct a character's behaviors. Natural language provides a simple-to-use and expressive medium for specifying a user's intent. Recent breakthroughs in natural language processing (NLP) have demonstrated effective use of language-based interfaces for applications such as image generation and program synthesis. In this work, we present PADL, which leverages recent innovations in NLP in order to take steps towards developing language-directed controllers for physics-based character animation. PADL allows users to issue natural language commands for specifying both high-level tasks and low-level skills that a character should perform. We present an adversarial imitation learning approach for training policies to map high-level language commands to low-level controls that enable a character to perform the desired task and skill specified by a user's commands. Furthermore, we propose a multi-task aggregation method that leverages a language-based multiple-choice question-answering approach to determine high-level task objectives from language commands. We show that our framework can be applied to effectively direct a simulated humanoid character to perform a diverse array of complex motor skills.
Jordan Juravsky, Yunrong Guo, Sanja Fidler, Xue Bin Peng
2023-01-31T18:59:22Z
http://arxiv.org/abs/2301.13868v1
# PADL: Language-Directed Physics-Based Character Control ###### Abstract. Developing systems that can synthesize natural and life-like motions for simulated characters has long been a focus for computer animation. But in order for these systems to be useful for downstream applications, they need not only produce high-quality motions, but must also provide an accessible and versatile interface through which users can direct a character's behaviors. Natural language provides a simple-to-use and expressive medium for specifying a user's intent. Recent breakthroughs in natural language processing (NLP) have demonstrated effective use of language-based interfaces for applications such as image generation and program synthesis. In this work, we present PADL, which leverages recent innovations in NLP in order to take steps towards developing language-directed controllers for physics-based character animation. PADL allows users to issue natural language commands for specifying both high-level tasks and low-level skills that a character should perform. We present an adversarial imitation learning approach for training policies to map high-level language commands to low-level controls that enable a character to perform the desired task and skill specified by a user's commands. Furthermore, we propose a multi-task aggregation method that leverages a language-based multiple-choice question-answering approach to determine high-level task objectives from language commands. We show that our framework can be applied to effectively direct a simulated humanoid character to perform a diverse array of complex motor skills. character animation, language commands, reinforcement learning, adversarial imitation learning
An ideal animation system should provide an accessible interface that allows users to easily specify desired behaviors for a character, while also being sufficiently versatile to enable control over a rich corpus of skills. Natural language provides a promising medium that is both accessible and versatile. The recent development of large and expressive language models has provided powerful tools for integrating natural language interfaces for a wide range of downstream applications (Brown et al., 2020; Devlin et al., 2018; Radford et al., 2021), such as generating functional code and realistic images from natural language descriptions (Chen et al., 2021; Ramesh et al., 2022; Tan et al., 2018). In this work, we aim to leverage these techniques from NLP to take steps towards developing a language-directed system for physics-based character animation. The central contribution of this work is a system for language-directed physics-based character animation, which enables users to direct the behaviors of a physically simulated character using natural language commands. Given a dataset of motion clips and captions, which describe the behaviors depicted in each clip, our system trains control policies to map from high-level language commands to low-level motor commands that enable a character to reproduce the corresponding skills. We present an adversarial imitation learning approach that allows a policy to reproduce a diverse array of skills, while also learning to ground each skill in language commands. Our policies can also be trained to perform additional auxiliary tasks. We present a language-based multi-task aggregation model, which selects between a collection of task-specific policies according to a given command, thereby allowing users to easily direct a character to perform various high-level tasks via natural language. We present one of the first systems that can effectively leverage language commands to direct a full-body physically simulated character to perform a diverse array of complex motor skills. The code for this work is available at [https://github.com/nv-talbs/PADL](https://github.com/nv-talbs/PADL). ## 2.
Related Work Synthesizing natural and intelligent behaviors for simulated characters has been a core subject of interest in computer animation, with a large body of work focused on building kinematic and physics-based control models that can generate life-like motions (Clegg et al., 2018; da Silva et al., 2008; Hodgins et al., 1995; Holden et al., 2016; Lee et al., 2010; Liu and Hodgins, 2018; Tan et al., 2014; Wang et al., 2009, 2012). While a great deal of emphasis has been placed on motion quality, considerably less attention has been devoted on the _directability_ of the resulting models at run-time. Directability is often incorporated into these models via control abstractions that allow users to direct a character's behaviors through high-level commands. These abstractions tend to introduce a trade-off between accessibility and versatility. Simple control abstractions, such as joystic commands or target waypoints, (Agrawal and van de Panne, 2016; Coros et al., 2009; Holden et al., 2017; Lee et al., 2021, 2021; Ling et al., 2020; Peng et al., 2018, 2022, 2021; Starke et al., 2019; Treuille et al., 2007; Zhang et al., 2020), provide an accessible interface that can be easily adopted by users. But these abstractions can also limit the versatility of the behaviors that can be actively controlled by a user. Alternatively, general motion tracking models can provide a versatile interface, which allows for fine-grain control over a character's movements through target motion trajectories (Bergamin et al., 2019; Park et al., 2019; Pollard et al., 2002; Wang et al., 2020; Won et al., 2020; Yamane et al., 2010). These target trajectories specify desired poses for the character to reach at every timesteps, which in principle can direct the character to perform any feasible motion. However, this versatility often comes at the cost of accessibility, since authoring target motion trajectories can be as tedious and labour intensive as manual keyframe animation. Motion capture can be a more expeditious approach for generating target trajectories for motion-tracking models (Peng et al., 2018; Wang et al., 2020; Yu et al., 2021; Yuan et al., 2021), but tends to require specialized equipment and may limit the reproducible behaviors to those that can be physically performed by the user. In this work, we aim to leverage natural language to develop an accessible and versatile control interface for physics-based character animation. _Natural Language Processing:_ Language models trained on increasingly large datasets have been shown to develop powerful representations for text data (Devlin et al., 2018; Liu et al., 2019; Raffel et al., 2019), which can be used for a wide range of downstream applications. One such example is text-guided synthesis, where a user's prompt, expressed in natural language, can be used to direct models to produce different types of content. Large autoregressive models are able to generate coherent text completions given a user's starter prompt (Brown et al., 2020). These models lead to the popularization of "prompt engineering", where the aim is to construct optimal prompt templates that elicit the desired behaviors from a language model. Such prompt-based systems, often combined with filtering or other post-processing techniques, have been successfully used to solve grade-school math problems and competitive programming challenges (Cobbe et al., 2021; Li et al., 2022). Text-guided synthesis can also be applied across different modalities. 
Here, the language model does not directly generate the desired content, instead it provides a semantically meaningful encoding for a user's language prompt, which can then be used by a separately trained decoder to generate content in a different modality. Nichol et al. (2021) and Ramesh et al. (2022) successfully used this approach to generate photo-realistic images from natural language, leveraging the text encoder from CLIP (Radford et al., 2021). In this work, we aim to leverage powerful language models to develop language-directed controllers for physics-based character animation. _Language-Directed Animation:_ Synthesizing motion from language is one of the core challenges of audio-driven facial animation, where the goal is to generate facial motions for a given utterance. These models typically take advantage of the temporal correspondence between units of speech (phonemes) and facial articulations (visemes) in order to synthesize plausible facial animations for a particular utterance (Brand, 1999; Deena and Galata, 2009; Hong et al., 2002; Karras et al., 2017; Pelachaud et al., 1996). A similar temporal correspondence can also be leveraged to generate full-body gestures from speech (Ahuja and Morency, 2019; Alexanderson et al., 2020; Levine et al., 2009). While these techniques can be highly effective for generating realistic motions from speech, they are not directly applicable in more general settings where there is no clear temporal correspondence between language and motion. For example, a high-level command such as "knock over the red block" implicitly encodes a sequence of skills that a character should perform. Sequence-to-sequence models have been proposed to map high-level language descriptions to motion trajectories (Lin et al., 2018; Plappert et al., 2017). Ahuja and Morency (2019) and Tevet et al. (2022) proposed autoencoder frameworks that learns a joint embedding of language and motion, which can be used to generate full-body motions from language descriptions. While these techniques have demonstrated promising results, they have been primarily focused on developing kinematic motion models. In this work, we aim to develop a language-directed model for physics-based character animation, which maps high-level language commands to low-level controls that enable a character to perform the desired behaviors. ## 3. Background Our characters are trained using a goal-conditioned reinforcement learning framework, where an agent interacts with an environment according to a control policy \(\pi\) in order to fulfill a given goal \(\mathbf{g}\in\mathcal{G}\), drawn from a goal distribution \(\mathbf{g}\sim p(\mathbf{g})\). At each time step \(t\), the agent observes the state of the environment \(\mathbf{s}_{t}\in\mathcal{S}\), and responds by applying an action \(\mathbf{a}_{t}\in\mathcal{M}\), sampled from the policy \(\mathbf{a}_{t}\sim\pi(\mathbf{a}_{t}|\mathbf{s}_{t},\mathbf{g})\). After applying the action \(\mathbf{a}_{t}\), the environment transitions to a new state \(\mathbf{s}_{t+1}\), and the agent receives a scalar reward \(r_{t}=r(\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t+1},\mathbf{g})\) that reflects the desirability of the state transition for the given goal \(\mathbf{g}\). 
The agent's objective is to learn policy \(\pi\) that maximizes its expected discounted return \(J(\pi)\), \[J(\pi)=\mathbb{E}_{p(\mathbf{g})}\mathbb{E}_{p(\tau|\pi,\mathbf{g})}\left[ \sum_{t=0}^{T-1}\nu^{t}r_{t}\right], \tag{1}\] where \(p(\tau|\pi,\mathbf{g})=p(\mathbf{s}_{0})\prod_{t=0}^{T-1}p(\mathbf{s}_{t+1}| \mathbf{s}_{t},\mathbf{a}_{t})\pi(\mathbf{a}_{t}|\mathbf{s}_{t},\mathbf{g})\) denotes the likelihood of a trajectory \(\tau=(\mathbf{s}_{0},\mathbf{a}_{0},\mathbf{s}_{1},...,\mathbf{s}_{T})\) under a policy \(\pi\) given a goal \(\mathbf{g},p(\mathbf{s}_{0})\) is the initial state distribution, and \(p(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{s}_{a})\) represents the transition dynamics of the environment. \(T\) is the time horizon of a trajectory, and \(\gamma\in[0,1]\) is a discount factor. ## 4. Overview In this paper we introduce Physics-based Animation Directed with Language (PADL; pronounced "paddle"), a system for developing language-directed control models for physics-based character animation. Our framework allows users to control the motion of a character by specifying a _task_ to complete, as well as a specific _skill_ to use while completing that task. Tasks represent high-level objectives that the agent must accomplish, such as navigating to a target location or interacting with a specific object. In addition to specifying _what_ task an agent must accomplish, it is important for users to be able to control _how_ the task is accomplished. For example, given the task of navigating to a target location, an agent can walk, run, or jump to the target. In our system, the desired task and skill for the character are specified separately via natural language in the form of a task command and a skill command. Our framework consists of three stages, and a schematic overview of the system is available in Figure 2. First, in the _Skill Embedding_ stage, a reference motion dataset \(\mathcal{M}=\{(\mathbf{m}^{i},c^{i})\}\), containing motion clips \(\mathbf{m}^{i}\) annotated with natural language captions \(c^{i}\), is used to learn a shared embedding space \(\mathcal{Z}\) of motions and text. Each motion clip \(m^{i}=\{\hat{\mathbf{a}}_{t}^{i}\}\) is represented by a sequence of poses \(\hat{\mathbf{q}}_{t}^{i}\). A motion encoder \(z_{m}^{i}=\mathrm{Enc}_{m}(\mathbf{m}^{i})\) and language encoder \(z_{l}^{i}=\mathrm{Enc}_{l}(c^{i})\) are trained to map each motion and caption pair to similar embeddings \(z_{m}^{i}\approx z_{l}^{i}\). Next, in the _Policy Training_ stage, this embedding is used to train a collection of reinforcement learning policies, where each policy \(\pi^{i}(\mathbf{a}_{t}|\mathbf{s}_{t},\mathbf{g},\mathbf{z})\) is trained to perform a particular task using various skills \(\mathbf{z}\in\mathcal{Z}\) from the embedding. Once trained, the policy can then be directed to execute a particular skill by conditioning \(\pi\) on the embedding of a given language command \(z_{l}=\mathrm{Enc}_{l}(c)\). Finally, in the _Multi-Task Aggregation_ stage, the different policies are integrated into a multi-task controller that can be directed using language commands to perform a specific task using a desired skill. ## 5. Skill Embedding In the Skill Embedding stage, our objective is to construct an embedding space that aligns motions with their corresponding natural language descriptions. To do this, we follow a similar procedure as MotionCLIP (Tevet et al., 2022), where a transformer autoencoder is Figure 2. The PADL framework consists of three stages. 
1) In the Skill Embedding stage, a dataset of motion clips and corresponding text captions are used to learn a joint embedding of motions and captions. 2) In the Policy Training stage, the learned skill embedding is used to train a collection of policies to perform various tasks, while imitating behaviors in the dataset. 3) Finally, in the Multi-Task Aggregation stage, policies trained for different tasks are combined into a multi-task controller that can be directed to perform different tasks and skills via language commands. trained to encode motion sequences into a latent representation that "aligns" with the language embedding from a pre-trained CLIP text encoder (Radford et al., 2021). Given a motion clip \(\hat{\mathbf{m}}=(\hat{\mathbf{q}}_{1},...,\hat{\mathbf{q}}_{n})\) and its caption \(c\), a motion encoder \(\mathbf{z}=\operatorname{Enc}_{m}(\hat{\mathbf{m}})\) maps the motion to an embedding \(\mathbf{z}\). The embedding is normalized to lie on a unit sphere \(||\mathbf{z}||=1\). Following Tevet et al. (2022), \(\operatorname{Enc}_{m}\left(\mathbf{m}\right)\) is modeled by a bidirectional transformer (Devlin et al., 2018). A motion decoder is jointly trained with the encoder to produce a reconstruction sequence \(\mathbf{m}=(\mathbf{q}_{1},...,\mathbf{q}_{n})\) to recover \(\hat{\mathbf{m}}\) from \(\mathbf{z}\). The decoder is also modelled as a birectional transformer \(\mathbf{m}=\operatorname{Dec}(\mathbf{z},\mathbf{U})\), which decodes all frames of in parallel using a learned constant query sequence \(\mathbf{U}=(\mathbf{u}_{1},...,\mathbf{u}_{n})\), similar to the final layer of Carion et al. (2020). The autoencoder is trained with the loss: \[\mathcal{L}_{\text{auto}}=\mathcal{L}_{\text{recon}}+0.1\mathcal{L}_{\text{ align}}. \tag{2}\] The reconstruction loss \(\mathcal{L}_{\text{recon}}\) measures the error between the reconstructed sequence and original motion: \[\mathcal{L}_{\text{recon}}=\frac{1}{n}\sum_{t=1}^{n}||\hat{\mathbf{q}}_{t}- \operatorname{Dec}\left(\operatorname{Enc}_{m}\left(\hat{\mathbf{m}}\right),\mathbf{U}\right)||_{2}^{2}. \tag{3}\] The alignment loss \(\mathcal{L}_{\text{align}}\) measures the cosine distance between a motion embedding and the language embedding: \[\mathcal{L}_{\text{align}}=1-d_{\text{cos}}\left(\operatorname{Enc}_{m} \left(\hat{\mathbf{m}}\right),\operatorname{Enc}_{l}(c)\right). \tag{4}\] The language encoder \(\operatorname{Enc}_{l}\left(\mathbf{m}\right)\) is modeled using a pre-trained CLIP text encoder with an added head of two fully-connected layers, where only this output head is fine-tuned according to Eq. 4. To help avoid overfitting, for every minibatch of motion sequences sampled from the dataset we also extract a random subsequence from each motion and add these slices to the batch that the model is trained on. These subsequences only contribute to the reconstruction loss. ## 6. Policy Training Once we have a joint embedding of motions and captions, we will next use the embedding to train control policies that enable a physically simulated character to perform various high-level tasks while using skills specified by language commands. At each timestep \(t\), the policy \(\pi(\mathbf{a}|\mathbf{s}_{t},\mathbf{g},\mathbf{z})\) receives as input the state of the character \(\mathbf{s}_{t}\), a task-specific goal \(\mathbf{g}\), and a skill latent \(\mathbf{z}\). The goal \(\mathbf{g}\) specifies high-level task objectives that the character should achieve, such as moving to a target location or facing a desired direction. 
The skill latent \(\mathbf{z}\) specifies the skill that the character should use to achieve the desired goal, such as walking vs running to a target location. The latents are generated by encoding motion clips \(\mathbf{z}=\operatorname{Enc}_{m}(\mathbf{m})\) sampled from the dataset \(\mathcal{M}\). In order to train a policy to perform a given task using a desired skill, we utilize a reward function consisting of two components: \[r_{t}=r_{t}^{\text{skill}}+\lambda^{\text{task}}r_{t}^{\text{task}}, \tag{5}\] where \(r_{t}^{\text{skill}}\) is a skill-reward, and \(r_{t}^{\text{task}}\) is a task-reward with coefficient \(\lambda^{\text{task}}\). ### Skill Objective To train the policy to perform the skill specified by a particular \(\mathbf{z}_{t}\), we enforce that the policy's distribution of state transitions \((\mathbf{s},\mathbf{s}^{\prime})\) matches that of the corresponding motion clip \(\mathbf{m}^{\hat{t}}\). To accomplish this, we train an adversarial discriminator \(D(\mathbf{s},\mathbf{s}^{\prime},\mathbf{z})\) on the joint distribution of state transitions and skill encodings (Ho and Ermon, 2016; Merel et al., 2017; Peng et al., 2021). The discriminator is trained to predict if a given state transition \((\mathbf{s},\mathbf{s}^{\prime})\) is from the motion clip corresponding to \(\mathbf{z}\), or if the transition is from the simulated character or from a different motion clip in the dataset. The discriminator is trained by minimizing the following loss: \[\mathcal{L}_{D}=\mathbb{E}_{p_{\mathcal{M}}(\mathbf{m})} \left[-\mathbb{E}_{p_{\mathbf{m}}(\mathbf{s},\mathbf{s}^{\prime}) }\left[\log(D(\mathbf{s},\mathbf{s}^{\prime},\mathbf{z}))\right]\right. \tag{7}\] \[-w_{D}\left.\mathbb{E}_{p_{\mathcal{M}}(\mathbf{s},\mathbf{s}^{ \prime}|\mathbf{z})}\left[\log(1-D(\mathbf{s},\mathbf{s}^{\prime},\mathbf{ z}))\right]\right.\] (8) \[-(1-w_{D})\left.\mathbb{E}_{p_{\mathcal{M}}(\mathbf{s},\mathbf{s}^ {\prime})}\left[\log(1-D(\mathbf{s},\mathbf{s}^{\prime},\mathbf{z}))\right]\right. \tag{6}\] (9) \(p_{\mathcal{M}}(\mathbf{m})\) represents the likelihood of sampling a motion clip \(\mathbf{m}\) from a dataset \(\mathcal{M}\), and \(\mathbf{z}=\operatorname{Enc}_{m}(\mathbf{m})\) is the encoding of the motion clip. \(p_{\mathbf{m}}(\mathbf{s},\mathbf{s}^{\prime})\) denotes the likelihood of observing a state transition from a given motion clip, and \(p_{\pi}(\mathbf{s},\mathbf{s}^{\prime}|\mathbf{z})\) is the likelihood of observing a state transition from the policy \(\pi\) when conditioned on \(\mathbf{z}\). \(p_{\mathcal{M}}(\mathbf{m},\mathbf{s}^{\prime})\) represents the likelihood of observing a state transition by sampling random transitions from other motion clips in the dataset, excluding \(\mathbf{m}\), and \(w_{D}\) is a manually specified coefficient. The final term in the loss is a gradient penalty with coefficient \(w_{\text{gp}}\)(Peng et al., 2021), which improves stability of the adversarial training process. The skill-reward is then given by: \[r_{t}^{\text{skill}}=-\log\left(1-D(\mathbf{s}_{t},\mathbf{s}_{t+1},\mathbf{ z})\right). \tag{10}\] To direct the policy with a skill command \(c_{\text{skill}}\) after it has been trained, the model is provided with the encoding \(\mathbf{z}=\operatorname{Enc}_{l}(c_{\text{skill}})\). By conditioning the discriminator on both state transitions and latents, our method explicitly encourages the policy to imitate every motion clip in the dataset, which can greatly reduce mode collapse. 
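To make the objective above concrete, here is a minimal PyTorch-style sketch of the conditional discriminator loss in Eqs. 6-9 (without the gradient penalty term) and the skill reward of Eq. 10; the function signatures, batch handling, default weight, and clamping constant are our own simplifications and not the authors' implementation.

```python
import torch

def discriminator_loss(disc, real_s, real_s_next, policy_s, policy_s_next,
                       other_s, other_s_next, z, w_d=0.5, eps=1e-6):
    """Conditional discriminator loss (Eqs. 6-9), excluding the gradient penalty.

    `disc(s, s_next, z)` is assumed to return probabilities in (0, 1).
    `real_*` are transitions from the motion clip encoded by `z`,
    `policy_*` are transitions produced by the policy conditioned on `z`,
    and `other_*` are transitions sampled from other clips in the dataset.
    """
    d_real = disc(real_s, real_s_next, z).clamp(eps, 1.0 - eps)
    d_policy = disc(policy_s, policy_s_next, z).clamp(eps, 1.0 - eps)
    d_other = disc(other_s, other_s_next, z).clamp(eps, 1.0 - eps)

    return (-torch.log(d_real).mean()
            - w_d * torch.log(1.0 - d_policy).mean()
            - (1.0 - w_d) * torch.log(1.0 - d_other).mean())

def skill_reward(disc, s, s_next, z, eps=1e-6):
    """Skill reward r^skill = -log(1 - D(s, s', z)) from Eq. 10."""
    d = disc(s, s_next, z).clamp(eps, 1.0 - eps)
    return -torch.log(1.0 - d)
```

In the full objective, a gradient penalty term with coefficient \(w_{\text{gp}}\) would be added to this loss, as described above.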
We elaborate on this benefit and compare our approach to related adversarial RL frameworks in Appendix D. ## 7. Multi-Task Aggregation Each policy from the Policy Training stage is capable of performing a variety of skills, but each is only able to perform a single high-level task involving a single target object. We show that these individual policies can be aggregated into a more flexible composite policy, which allows users to direct the character to perform a variety of different tasks in an environment containing multiple objects. However, in our experiments, we found that attempting to use the procedure in Section 6 to train a single multi-task policy to perform all tasks leads to poor performance. Effectively training multi-task policies remains a challenging and open problem in RL, and prior systems have often taken a divide-and-conquer approach for multi-task RL (Ghosh et al., 2018; Ruder, 2017; Rusu et al., 2015). To create a more flexible multi-task, multi-object controller, we aggregate a collection of single-task policies together. At each timestep, the user's current task command is used to generate prompts that are fed to a multiple-choice question-answering (QA) model. The QA model identifies which task and environment object are being referenced by the user. The single-task controller for the identified task is then set as the active policy controlling the character, and the state of the identified object is passed to the selected policy. An overview of this procedure is provided with pseudocode in Algorithm 1 in the Appendix. Note that since the character is being driven by a single policy from Section 6 at every timestep, the aggregated controller can only follow one high-level task involving a single object at a time. However, with this controller the user can dynamically control which task and object are focused on using natural language. ### Multiple Choice Question Answering An overview of the language-based selection model is shown in Figure 3. The multiple-choice QA model is constructed using a pre-trained BERT model fine-tuned on the SWAG dataset (Zellers et al., 2018). Each multiple-choice question is formulated as an initial prompt sentence (Sentence A) alongside \(n\) candidate follow-up sentences (Sentence B) (Devlin et al., 2018). The model then outputs scores for \(n\) distinct sequences, where sequence \(i\) is the concatenation of the prompt sentence with the \(i\)-th candidate sentence. The object corresponding to the candidate sentence with the highest score is selected as the target object for the policy. A similar process is used to identify the task from the user's command. For each task command provided by the user, the model is provided with two separate multiple-choice questions to identify the relevant task and object, respectively. The first question identifies the task, where each multiple choice option corresponds to a trained policy. The inputs to the QA model follow a story-like format in order to mimic the elements of the SWAG dataset that the model was fine-tuned on. For example, if the task command is _"knock over the blue tower"_, the candidate sequence for the strike policy is: * "Bob wants to _knock over the blue tower_. This should be easy for him since he possesses the ability to _knock over a specified object_." Similarly, the candidate sequence for the location policy is given by: * "Bob wants to _knock over the blue tower_. This should be easy for him since he possesses the ability to _navigate to a specified destination_." 
The multiple-choice QA model will then predict which sequence of sentences are most likely. Similarly, in the multiple-choice question to extract the target object, each object is given a multiple choice option describing the object's appearance. The candidate sequence for the green block is given by: * "Bob wants to _knock over the blue tower_. He starts by turning his attention to _the green object_ nearby." ## 8. Experimental Setup We evaluate the effectiveness of our framework by training language-directed control policies for a 3D simulated humanoid character. The character is equipped with a sword and shield, similar to the one used by Peng et al. (2022), with 37 degrees-of-freedom, and similar state and action representations. The dataset contains a total of 131 individual clips, for a total of approximately 9 minutes of motion data. Each clip is manually labeled with 1-4 captions that describe the behavior of the character within a particular clip, for a total of 265 captions in the entire dataset. Fig. 4 illustrates examples of motion clips in the dataset along with their respective captions. ### Tasks In addition to training policies to imitate skills from the dataset, each policy is also trained to perform an additional high-level task. Here, we provide an overview of the various tasks, and more detailed descriptions are available in Appendix B. 1. **Facing:** First, we have a simple facing task, where the objective is for the character to turn and face a target direction \(\mathbf{d}^{*}\), encoded as a 2D vector on the horizontal plane. The goal input \(\mathbf{g}_{t}=\tilde{\mathbf{d}}_{t}^{*}\) for the policy records the goal direction in the character's local coordinate frame. 2. **Location:** Next, we have a target location task, where the objective is for the character to navigate to a target location \(\mathbf{x}^{*}\). The goal \(\mathbf{g}_{t}=\tilde{\mathbf{x}}_{t}^{*}\) records the target location in the character's local coordinate frame \(\tilde{\mathbf{x}}_{t}^{*}\). 3. **Strike:** Finally, we have a strike task, where the objective is for the character to knock over a target object. The goal \(\mathbf{g}_{t}=(\tilde{\mathbf{x}}_{t}^{*},\tilde{\mathbf{x}}_{t}^{*},\tilde{ \mathbf{q}}_{t}^{*},\tilde{\mathbf{q}}_{t}^{*})\) records the target object's position \(\tilde{\mathbf{x}}_{t}^{*}\), rotation \(\tilde{\mathbf{q}}_{t}^{*}\), linear velocity \(\tilde{\mathbf{x}}_{t}^{*}\), and angular velocity \(\tilde{\mathbf{q}}_{t}^{*}\). All features are expressed in the character's local frame. ### Training All physics simulations are performed using Isaac Gym, a massively parallel GPU-based physics simulator (Makovivchuk et al., 2021). The simulation is performed at a frequency of 120Hz, while the policies operate at a frequency of 30Hz. 4096 environments are simulated in parallel on a single A100 GPU. A 128D latent space is used for the skill embedding. The policy, value function, and discriminator are modeled using separate multi-layer perceptrons Figure 3. Overview of the language-based selection model used to select a target object based on the user’s task command. The task command is used to generate a collection of candidate sentences, each corresponding to a particular object in the environment. A multiple-choice QA model is then used to predict the most likely candidate sentence, based on the task command. The model’s prediction is used to identify the target object the user referenced. with ReLU units and hidden layers containing \([1024,1024,512]\) units. 
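As a point of reference, a minimal sketch of an MLP with this hidden-layer layout is given below; the input and output dimensions are illustrative placeholders, since the actual sizes depend on the state, goal, and skill-latent representations used for each network.

```python
import torch.nn as nn

def make_mlp(in_dim, out_dim, hidden=(1024, 1024, 512)):
    """MLP with ReLU units and hidden layers of 1024, 1024, and 512 units."""
    layers, prev = [], in_dim
    for width in hidden:
        layers += [nn.Linear(prev, width), nn.ReLU()]
        prev = width
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers)

# Example with placeholder sizes: a policy head mapping a concatenated
# (state, goal, skill-latent) vector to the parameters of an action distribution.
policy_net = make_mlp(in_dim=512, out_dim=64)
```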
Each policy is trained using proximal policy optimization with about 7 billion samples [Schulman et al., 2017], corresponding to approximately 7 years of simulated time, which requires about 2.5 days of real-world time. Selecting a weight \(\lambda^{\text{task}}\) for the task reward that effectively balances the task and skill reward can be challenging, and may require task-specific tuning. We therefore apply an adaptive method to dynamically adjust \(\lambda^{\text{task}}\) based on a target task-reward value [Mentzer et al., 2021]. More details are available in Appendix B.4. ## 9. Results We first train policies without auxiliary tasks to evaluate the model's ability to reproduce skills from a motion dataset. Examples of the policy's behaviors when given various skill commands are available in Fig. 4. The policy is able to follow a variety of language commands, ranging from locomotion skills, such as walking and running, to more athletic behaviors, such as sword swings and shield bashes. Since the language encoder is built on a large CLIP model [Radford et al., 2021], it exhibits some robustness to new commands, which were not in the dataset. For example, the model correctly performs a casual walking motion when prompted with: _"take a leisurely stroll"_, even though no captions in the dataset contained _"leisurely"_ or phrased walking as _"taking a walk"_. However, due to the relatively small amount of captions used to train the encoder, the model can still produce incorrect behaviors for some new commands. The character successfully performs a right slash when given the prompt: _"right slash"_. However, _"right slash with sword"_ leads the character to perform a left slash. In addition to learning skills from a motion dataset, our policies can also be trained to perform additional high-level tasks, as outlined in Section 8.1. Examples of the tasks are available in Figure 4. Separate policies are trained for each task, which can then be integrated into a single multi-task controller that activates the appropriate policy given a task command. We demonstrate the effectiveness of the multi-task controller in an environment containing multiple objects that the character can interact with. The user can issue a task command for specifying the target object and the desired task that the character should perform. Our multiple-choice question-answering framework is able to consistently identify the correct task and target object from a user's commands. For example, given the command: _"knock over the blue block"_. the selection model correctly identifies the policy for the Strike task, and selects the blue block as the target. The selection model can also parse more unusual commands, such as "mosely on down to the maroon saloon", which correctly identifies the Location task and selects the red block. Despite the generalization capabilities of large language models, some commands can still lead to incorrect behaviors. More examples of task commands and the resulting behaviors from the model are available in Appendix C. ### Dataset Coverage To determine the impact of learning a skill embedding that aligns motions and text, we evaluate our model's ability to reproduce Figure 4. (a) - (c): Reference motion clips (left side) and their corresponding captions, along with motions produced by a simulated character when directed to perform the reference skills through language commands (right side). More reference motions and policy trajectories are shown in Fig. 7 in the Appendix. 
(d) – (e): Trained policies completing tasks with different skills. various motions in the dataset when given the respective commands. We conduct this evaluation using a thresholded coverage metric. Given a sequence of states specified by a motion clip \(\hat{\mathbf{m}}=(\hat{\mathbf{s}}_{0},\hat{\mathbf{s}}_{1},...,\hat{\mathbf{s}}_{n})\), a policy trajectory \(\tau=(\mathbf{s}_{0},\mathbf{s}_{1},...,\mathbf{s}_{k})\) for a skill encoding \(\mathbf{z}=\text{Enc}_{l}(c)\) (where \(c\) is a caption for \(\hat{\mathbf{m}}\)), and a threshold parameter \(\epsilon>0\), we define the coverage to be: \[\text{coverage}(\tau,\hat{\mathbf{m}},c,\epsilon)=\frac{1}{n}\sum_{i=0}^{n}\mathbb{1}\left(\min_{j\in\{0,...,k\}}||\hat{\mathbf{s}}_{i}-\mathbf{s}_{j}||_{2}\leq\epsilon\right) \tag{11}\] where \(\mathbb{1}(\cdot)\) denotes the indicator function. This metric determines the fraction of the states in a motion clip that are sufficiently close to a state in the policy's trajectory. In our experiments we collect 300 timesteps (10 seconds) per trajectory. Instead of selecting a fixed threshold \(\epsilon\), we apply Equation 11 with different values of \(\epsilon\) between \([0,3]\) to produce a coverage curve. Figure 5 compares the performance of the PADL model with baseline models that directly use the CLIP encoding of a caption as input to the policy. Coverage statistics are averaged across all the captions for each motion clip in the dataset, and then averaged across all motion clips. The raw CLIP encoding is 512D, while our learned skill embedding is 128D. We include an additional baseline model, which uses PCA to reduce the dimensionality of the CLIP encoding to 128D. Our learned embedding is able to better reproduce behaviors in the dataset. Directly using the CLIP encoding as input to the policy tends to result in lower quality motions, and has a higher tendency of performing incorrect behaviors when directed with language commands. ### Skill Interpolation In addition to enabling language control, the learned skill embedding also leads to semantically meaningful interpolations between different skills. Given two skill commands \(c_{1}\) and \(c_{2}\), we encode each caption into the corresponding latents \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\) using the language encoder. We then interpolate between the two latents using spherical interpolation, and condition the policy on the interpolated latent to produce a trajectory. For example, given two commands: _"walk forward"_ and _"sprint forward while swinging arms"_, interpolating between the two latents leads to locomotion behaviors that travel at different speeds. Figure 6 records the average velocity of the character when the policy is conditioned on different interpolated latents. Similarly, interpolating between _"walk forward"_ and _"crouching walk forward"_ leads to gaits with different walking heights. However, not all pairs of commands lead to intuitive intermediate behaviors. ## 10. Conclusions In this work we presented PADL, a framework for learning language-directed controllers for physics-based character animation. Language is used to specify both high-level tasks that a character should perform and low-level skills that the character should use to accomplish the tasks. While our models are able to imitate a diverse array of skills from motion data, the models remain limited in the variety of high-level tasks that they can perform.
We are interested in exploring more scalable approaches to modelling character interactions with the environment, replacing the finite _a priori_ collection of tasks with a more general strategy that allows the user to specify arbitrary environment interactions with natural language. We are additionally interested in scaling PADL to much larger labelled motion capture datasets (Punnakkal et al., 2021), which may lead to agents and language encoders that can model a greater diversity of skills while being more robust to paraphrasing and capable of generalizing to new commands. In particular, we expect the language encoder from the Skill Embedding stage to improve significantly with more text data. We are excited for further advances in language-guided physics-based character animation and hope that our work contributes towards the development of powerful, high-quality animation tools with broadly accessible, versatile, and easy-to-use interfaces. Figure 5. Comparing dataset coverage when different skill encodings are used during the Policy Training stage. “Learned Skill Embeddings” use the 128D embedding from the learned motion encoder detailed in Section 5. We compare against baselines where policies are trained directly using the 512D CLIP text encodings of the dataset captions and where these encodings are reduced to 128D using PCA. Figure 6. Interpolating skills in the latent space leads to semantically meaningful intermediate behaviors, such as traveling with different walking heights and speeds. ## Acknowledgments We would like to thank Reallusion1 for providing motion capture reference data for this project. Additionally, we would like to thank the anonymous reviews for their feedback, and Steve Masseroni and Margaret Albrecht for their help in producing the supplementary video. Footnote 1: [https://actororcore.reallusion.com/](https://actororcore.reallusion.com/)
2309.07781
A Deductive Verification Infrastructure for Probabilistic Programs (Extended Version)
This paper presents a quantitative program verification infrastructure for discrete probabilistic programs. Our infrastructure can be viewed as the probabilistic analogue of Boogie: its central components are an intermediate verification language (IVL) together with a real-valued logic. Our IVL provides a programming-language-style for expressing verification conditions whose validity implies the correctness of a program under investigation. As our focus is on verifying quantitative properties such as bounds on expected outcomes, expected run-times, or termination probabilities, off-the-shelf IVLs based on Boolean first-order logic do not suffice. Instead, a paradigm shift from the standard Boolean to a real-valued domain is required. Our IVL features quantitative generalizations of standard verification constructs such as assume- and assert-statements. Verification conditions are generated by a weakest-precondition-style semantics, based on our real-valued logic. We show that our verification infrastructure supports natural encodings of numerous verification techniques from the literature. With our SMT-based implementation, we automatically verify a variety of benchmarks. To the best of our knowledge, this establishes the first deductive verification infrastructure for expectation-based reasoning about probabilistic programs.
Philipp Schröer, Kevin Batz, Benjamin Lucien Kaminski, Joost-Pieter Katoen, Christoph Matheja
2023-09-14T15:12:39Z
http://arxiv.org/abs/2309.07781v2
# A Deductive Verification Infrastructure for Probabilistic Programs (Extended Version)+ ###### Abstract. This paper presents a quantitative program verification infrastructure for discrete probabilistic programs. Our infrastructure can be viewed as the probabilistic analogue of Boogie: its central components are an intermediate verification language (IVL) together with a real-valued logic. Our IVL provides a programming-language-style for expressing verification conditions whose validity implies the correctness of a program under investigation. As our focus is on verifying quantitative properties such as bounds on expected outcomes, expected run-times, or termination probabilities, off-the-shelf IVLs based on Boolean first-order logic do not suffice. Instead, a paradigm shift from the standard Boolean to a _real-valued_ domain is required. Our IVL features quantitative generalizations of standard verification constructs such as assume- and assert-statements. Verification conditions are generated by a weakest-precondition-style semantics, based on our real-valued logic. We show that our verification infrastructure supports natural encodings of numerous verification techniques from the literature. With our SMT-based implementation, we automatically verify a variety of benchmarks. To the best of our knowledge, this establishes the first deductive verification infrastructure for expectation-based reasoning about probabilistic programs. deductive verification, quantitative verification, probabilistic programs, weakest preexpectations, real-valued logics, automated reasoning + Footnote †: This is the extended version of the publication at OOPSLA 2023 ([https://doi.org/10.1145/3622870](https://doi.org/10.1145/3622870)).
## 1. Introduction and Overview

Probabilistic programs differ from ordinary programs by the ability to base decisions on samples from probability distributions. They are found in randomized algorithms, communication protocols, models of physical and biological processes, and - more recently - statistical models used in machine learning and artificial intelligence (cf. [Barthe et al. 2020; Gordon et al. 2014]). Typical questions in the design and analysis of probabilistic programs are concerned with _quantifying_ aspects of their _expected_ - or average - _behavior_, e.g. the _expected runtime_ of a randomized algorithm, the _expected number of retransmissions_ in a protocol, or the _probability_ that a particle reaches its destination. Writing correct probabilistic programs is notoriously hard. They may contain subtle bugs occurring with low probability or undesirably favor certain results in the long run. In fact, reasoning about the expected behavior of probabilistic programs is known to be strictly harder than for ordinary programs [Kaminski et al. 2019]. There exists a plethora of research on verification techniques for probabilistic programs, ranging from program logics (cf. [Kaminski et al. 2018; McIver and Morgan 2005]) to highly specialized proof rules [Hark et al. 2019; McIver et al. 2018], often with little (if any) automation. These techniques are based on different branches of mathematics - e.g. domain theory or martingale analysis - and their relationships are non-trivial (cf. Takisaka et al. [2021]). This poses major challenges for comparing - let alone _combining_ - such different approaches.

In this paper, we build a _verification infrastructure_ for reasoning about the expected behavior of (discrete) probabilistic programs; Figure 1 gives an overview. Modern program verifiers for non-probabilistic programs often have a front-end that translates a given program and its specification into an intermediate language, such as Boogie [Leino 2008], Why3 [Filliatre and Paskevich 2013], or Viper [Muller et al. 2016b]. Such intermediate languages enable the encoding of complex verification techniques, while allowing for the separate development of efficient back-ends, e.g. verification condition generators. In this very spirit, we introduce a novel _quantitative intermediate verification language_ that enables researchers to (i) prototype and automate new verification techniques, (ii) combine proof rules, and (iii) benefit from back-end improvements.

Figure 1. Architecture of our verification infrastructure.
Before we dive into details, we discuss five examples of probabilistic programs from the literature that have been verified with five different techniques - all of them have been encoded in our language and verified with our tool.

**Example 1.1** (Rabin's Mutual Exclusion Protocol [Kushilevitz and Rabin 1992]).: This protocol controls processes competing for access to a critical section. To determine which process gets access, every process will repeatedly toss a fair coin until it sees heads; the process that needed the largest number of tosses is then granted access. Figure 2 shows a probabilistic program modeling Rabin's protocol: \(i\) is the number of remaining processes competing for access. While more than 1 competitor remains, each competitor tosses one coin (inner loop). If the coin shows heads (i.e. if \(\mathsf{flip}(0.5)\) samples a 1), that competitor is removed from the pool of remaining competitors (by subtracting \(d=1\) from \(i\)). One can verify with the _weakest liberal preexpectation calculus_ by McIver and Morgan [2005] that _the probability to select exactly one process (plus the probability of nontermination) is at least \(\nicefrac{{2}}{{3}}\)_ if there are initially at least \(2\) processes.

**Example 1.2** (The Coupon Collector [Wikipedia 2023a]).: Figure 3 models the coupon collector problem - a well-known problem in probability theory: Suppose any box of cereals contains one of \(N\) different coupons. What is the average number of boxes one needs to buy to collect at least one of all \(N\) different coupons, assuming that each coupon type occurs with the same probability? Our formulation is taken from [Kaminski et al. 2018]; the authors develop an _expected runtime calculus_ and use _invariant-based arguments_ to show that the _expected number of loop iterations_, which coincides with the average number of boxes one needs to buy, _is bounded from above by \(N\cdot H_{N}\)_, where \(H_{N}\) is the \(N\)-th harmonic number.

**Example 1.3** (Lossy List Traversal [1]).: Figure 4 depicts a recursive function implementing a lossy list traversal; it flips a fair coin (using the probabilistic choice \(\{\dots\}\,[0.5]\,\{\dots\}\)) and, depending on the outcome, either calls itself with the list's tail or diverges, i.e. enters an infinite loop. Using the _weakest preexpectation calculus_ [15, 16], one can prove that this program terminates with probability _at most_ \(0.5^{\mathit{len}(l)}\). Analyzing the lossy list traversal is intuitive - for every non-empty list, there is exactly one execution that does not diverge; its probability is \(0.5^{\mathit{len}(l)}\). What is noteworthy, however, is that even for such a simple program, we need to reason about an exponential function. This is common when verifying probabilistic programs: proving non-trivial bounds often requires non-linear arithmetic.

**Example 1.4** (Fair Random Walk [23]).: Figure 5 depicts a variant of a one-dimensional random walk of a particle with position \(x\) - a well-studied model in physics. Analyzing the program's termination behavior is hard because the probability \(q\) of moving to the left or right changes in every loop iteration depending on the previous position \(x\). McIver et al. (2018) propose a proof rule based on _quasi-variants_ that allows proving that _this program terminates almost-surely_, i.e. with probability one. Fair random walks, i.e.
if \(q=\nicefrac{{1}}{{2}}\), are well-known to terminate almost-surely but still have infinite expected runtime.

**Example 1.5** (Lower Bounds on Expected Values [1]).: Figure 6 shows another loop whose control flow depends on the outcome of coin flips. Hark et al. (2019) studied this example to demonstrate that induction-based proof rules for _lower bounds_,1 which are sound for classical verification, may become unsound when reasoning about probabilistic programs. The authors used martingale analysis and the optional stopping theorem to develop a sound proof rule capable of proving that, whenever \(x\neq 0\) initially holds, then the expected value of \(y\) after the program's termination is _at least \(1+y\)_.

Footnote 1: Specifically: _lower bound on partial correctness_ plus _proof of termination_ gives _lower bound on total correctness_.

Figure 3: The Coupon Collector's Problem. Figure 4: Lossy list traversal. Figure 5: Variant of a random walk. Figure 6: Counterexample from [1].

_Challenges._ We summarize the challenges of developing an infrastructure for automated verification of probabilistic programs unveiled by the examples in Figures 2 to 6: First, there are many different verification techniques for probabilistic programs that are based on different concepts, e.g. quantitative invariants, quasi-variants, different notions of martingales, or stopping times of stochastic processes. Developing a language that is sufficiently expressive to encode these techniques while keeping it amenable to automation is a major challenge. Second, verification of probabilistic programs involves _reasoning about both lower and upper bounds_ on expected values. This is different from classical program verification, which can be understood as proving that a given precondition implies a program's weakest precondition, i.e. \(\mathsf{pre}\Rightarrow\mathsf{wp}\llbracket C\rrbracket(\mathsf{post})\). In other words, \(\mathsf{pre}\) is a _lower bound_ (in the Boolean lattice) on \(\mathsf{wp}\llbracket C\rrbracket(\mathsf{post})\). Proving _upper bounds_, i.e. \(\mathsf{wp}\llbracket C\rrbracket(\mathsf{post})\Rightarrow\mathsf{pre}\), has received scarce attention.2

Footnote 2: Notable exceptions are Cousot's necessary preconditions [11] and recent works on (partial) incorrectness logic [12, 13].

Third, in Figures 3 to 5, we noticed that verification of probabilistic programs often involves reasoning about _unbounded_ random variables and non-linear arithmetic involving exponentials, harmonic numbers, limits, and possibly infinite sums.

_Our approach._ We address the first challenge by developing a quantitative IVL and a real-valued logic tailored to verification of probabilistic programs. The IVL features quantitative generalizations of standard verification constructs such as assume- and assert-statements. Our quantitative constructs are inspired by Gödel logics [1, 10]. In particular, they have _dual co-constructs_ for verifying upper- instead of lower bounds, thereby addressing the second challenge. These dual constructs are not only interesting for quantitative reasoning, but indeed also for Boolean reasoning à la \(\mathsf{wp}\llbracket C\rrbracket(\mathsf{post})\Rightarrow\mathsf{pre}\). To address the third challenge, we rely on modern SMT solvers' abilities to deal with custom theories, standard techniques for limiting the number of user-defined function applications, and custom optimizations.
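As a quick, informal sanity check of the bound in Example 1.2 (this simulation is our own illustration and is not part of the verification infrastructure, which establishes the bound deductively), the following Python snippet estimates the expected number of boxes by Monte Carlo simulation and compares it against \(N\cdot H_{N}\):

```python
import random

def coupon_collector_boxes(n: int, rng: random.Random) -> int:
    """Buy boxes until all n coupon types have been seen; return the number of boxes bought."""
    seen, boxes = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))  # each coupon type occurs with the same probability
        boxes += 1
    return boxes

def harmonic(n: int) -> float:
    return sum(1.0 / k for k in range(1, n + 1))

if __name__ == "__main__":
    rng = random.Random(0)
    runs = 20_000
    for n in (5, 10, 20):
        avg = sum(coupon_collector_boxes(n, rng) for _ in range(runs)) / runs
        print(f"N={n}: estimated E[boxes] = {avg:.2f}, bound N*H_N = {n * harmonic(n):.2f}")
```

The estimate stays close to (and below, up to sampling noise) the bound \(N\cdot H_{N}\), which is in fact tight for this problem.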
Figure 7 shows a program written in our quantitative IVL; it encodes the verification of Example 1.3. We use a _coprocedure_ to prove that the quantitative precondition \(\mathit{exp}(0.5,\,\mathit{len}(l))=0.5^{\mathit{len}(l)}\) is an _upper_ bound on the procedure's termination probability3 given by the quantitative postcondition 1. We establish the above bound for the procedure body while assuming that it holds for recursive calls (cf. [10]). Our dual quantitative assert- and assume-statements encode the call in the usual way: we assert the procedure's pre and assume its post.

Footnote 3: Technically, \(\mathit{exp}(0.5,\,\mathit{len}(l))\) upper-bounds the expected value of the random variable \(1\) after the procedure's termination.

_Contributions._ The main contributions of our work are:

1. A _novel intermediate verification language_ (\(\rightarrow\) Section 3) for automating probabilistic program verification techniques featuring _quantitative generalizations of standard verification constructs_, e.g. assert and assume, and a _formalization_ of its semantics based on a real-valued logic (\(\rightarrow\) Section 2) with constructs inspired by Gödel logics.
2. _Encodings of verification techniques and proof rules_ with different theoretical underpinnings (e.g. domain theory, martingales, and the optional stopping theorem) taken from the probabilistic program verification literature into our intermediate language (\(\rightarrow\) Section 4).
3. An SMT-backed _verification infrastructure_ that enables researchers to prototype and automate verification techniques for probabilistic programs by encoding to our intermediate language, an _experimental evaluation_ of its feasibility, and a prototypical _frontend_ for verifying programs written in the probabilistic guarded command language (\(\rightarrow\) Section 5).

## 2. HeyLo: A Quantitative Assertion Language

When analyzing quantitative program properties such as runtimes, failure probabilities, or space usage, it is often more direct, more intuitive, and more practical to reason directly about _values_ like the runtime \(n^{2}\), the probability \(\nicefrac{{1}}{{2^{x}}}\), or a list's length, instead of _predicates_ like \(rt=n^{2}\), \(prob\leq\nicefrac{{1}}{{2^{x}}}\), or \(\operatorname{length}(ls)>0\) (cf. (Kaminski et al., 2018; Ngo et al., 2018)). This section introduces HeyLo - a real-valued logic for quantitative verification of probabilistic programs, which aims to take the role that predicate logic has for classical verification. By giving syntax to real-valued functions, HeyLo serves as (1) a language for specifying quantitative properties - in particular those that McIver and Morgan (2005) (and many other authors) call _expectations_4 -, and (2) a foundation for automation by reducing many verification problems to a decision problem for HeyLo, e.g. validity or entailment checking. To ensure that HeyLo is expressive enough for (1), we design it reminiscently of the language by Batz et al. (2021), which is relatively complete for the verification of probabilistic programs.

Footnote 4: For historical reasons, the term _expectations_ refers to random variables on a program's state space.

To ensure that HeyLo is suitable for (2), HeyLo is _first-order_, so as to simplify automation. Moreover, verification problems can often be stated as inequalities between two functions.
To ensure that such inequalities can, in principle, be encoded into a _single_ decision problem for HeyLo, we introduce _quantitative (co)implications_ - which provide a syntax for comparing HeyLo formulae - and prove an analogue to the classical deduction theorem for predicate logic (Kleene, 1952). Supporting comparisons between expectations via (co)implications is essential for encoding proof rules for probabilistic programs. The (co)implications are inspired by intuitionistic Gödel logics (Baaz, 1996; Preining, 2010) and form Heyting algebras (cf. Theorem 2.1), hence the name HeyLo.

Figure 7. Encoding of the lossy list traversal (see Figure 4) in our intermediate language.

### Program States and Expectations

Let \(\mathsf{Vars}=\{x,y,\ldots\}\) be a countably infinite set of typed variables. We write \(x\colon\tau\) to indicate that \(x\) is of type \(\tau\), i.e. \(\tau\) is the set of values \(x\) can take. We assume the built-in types \(\mathbb{B}=\{\mathsf{true},\mathsf{false}\}\), \(\mathbb{N}\), \(\mathbb{Z}\), \(\mathbb{Q}\), \(\mathbb{Q}_{\geq 0}\), \(\mathbb{R}\), \(\mathbb{R}_{\geq 0}\), and \(\mathbb{R}_{\geq 0}^{\infty}=\mathbb{R}_{\geq 0}\cup\{\infty\}\); our verification infrastructure also supports user-defined mathematical types (cf. Section 5.1). We collect all types in Types and all values in \(\mathsf{Vals}=\bigcup_{\tau\in\mathsf{Types}}\tau\). A _(program) state_ \(\sigma\) maps every variable \(x\colon\tau\) to a value in \(\tau\). The set of states is thus

\[\mathsf{States}\quad=\quad\{\,\sigma\colon\mathsf{Vars}\to\mathsf{Vals}\quad|\quad\text{for all }x\in\mathsf{Vars}\colon\quad x\colon\tau\quad\text{implies}\quad\sigma(x)\in\tau\,\}\enspace.\]

_Expectations_ are the quantitative analogue to logical predicates: they map program states to \(\mathbb{R}_{\geq 0}^{\infty}\) instead of truth values. The complete lattice \((\mathbb{E},\preceq)\) of expectations is given by

\[\mathbb{E}\ =\ \left\{X\,\middle|\,X\colon\mathsf{States}\to\mathbb{R}_{\geq 0}^{\infty}\right\}\qquad\text{with}\qquad X\ \preceq\ Y\quad\text{iff}\quad\text{for all }\sigma\in\mathsf{States}\colon\ X(\sigma)\ \leq\ Y(\sigma)\enspace.\]

### Syntax of HeyLo

We start with the construction of HeyLo's atoms. The set \(\mathcal{T}\) of _terms_ is given by the grammar

\[t\quad\coloneqq\quad c\mid x\mid f(t,\ldots,t)\enspace,\]

where \(c\) is a _constant_ in \(\mathbb{Q}\cup\mathbb{B}\), \(x\) is a _variable_ in \(\mathsf{Vars}\), and \(f\) is either one of the _built-in function_ symbols \(+,\cdot,-,\dot{-},<,=,\wedge,\neg\) (\(\dot{-}\) is subtraction truncated at \(0\)) or a typed _user-defined function_ symbol \(f\colon\tau_{1}\times\ldots\times\tau_{n}\to\tau\) for some \(n\geq 0\) and types \(\tau_{1},\ldots,\tau_{n},\tau\) (cf. Section 5.1). Function symbols include, for example, the length of lists \(\mathsf{len}\colon\mathsf{Lists}\to\mathbb{N}\) and the exponential function \(\mathit{exp}\colon\mathbb{R}\times\mathbb{Z}\to\mathbb{R}\) mapping \((r,n)\) to \(r^{n}\). We write \(t\colon\tau\) to indicate that term \(t\) is of type \(\tau\). Typing and subtyping of terms is standard. In particular, if \(t\colon\tau_{1}\) and \(\tau_{1}\subseteq\tau_{2}\), then \(t\colon\tau_{2}\). We only consider well-typed terms. We denote terms of type \(\mathbb{Q}_{\geq 0}\) (resp. \(\mathbb{B}\)) by \(a\) (resp. \(b\)) and call them _arithmetic expressions_ (resp. _Boolean expressions_).
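Purely as an illustration (ours, not the paper's implementation), states and expectations can be mimicked in Python; the pointwise order \(\preceq\) can then only be spot-checked on a finite sample of states, whereas the definition above quantifies over all states:

```python
from typing import Callable, Dict, Iterable

State = Dict[str, float]                 # a program state: variable name -> value
Expectation = Callable[[State], float]   # maps states to [0, infinity]; math.inf is the top element

def leq(x: Expectation, y: Expectation, states: Iterable[State]) -> bool:
    """Pointwise order x ⪯ y, checked only on a finite sample of states."""
    return all(x(s) <= y(s) for s in states)

half_n   = lambda s: 0.5 * s["n"]        # the expectation  n/2
n_plus_1 = lambda s: s["n"] + 1.0        # the expectation  n + 1

sample = [{"n": float(k)} for k in range(100)]
print(leq(half_n, n_plus_1, sample))     # True: n/2 ⪯ n+1 holds on (and beyond) this sample
```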
The set of HeyLo _formulae_ is given by the following grammar:

\[\varphi\quad\coloneqq\quad a \;\mid\; ?(b) \;\mid\; \varphi+\varphi \;\mid\; \varphi\cdot\varphi \;\mid\; \varphi\sqcap\varphi \;\mid\; \varphi\sqcup\varphi \;\mid\; \ell\,x\colon\tau.\ \varphi \;\mid\; \mathcal{G}\,x\colon\tau.\ \varphi \;\mid\; \varphi\to\varphi \;\mid\; \varphi\leftarrow\varphi \;\mid\; {\sim}\varphi \;\mid\; \neg\varphi\enspace,\]

where \(a\) is an arithmetic expression, \(b\) is a Boolean expression, \(?(\cdot)\) embeds Boolean expressions, \(\ell\) and \(\mathcal{G}\) are the infimum and supremum quantifiers, \(\to\) and \(\leftarrow\) are the quantitative (co)implications, and \({\sim}\) and \(\neg\) are quantitative negations. The semantics \(\llbracket\varphi\rrbracket\in\mathbb{E}\) of a HeyLo formula \(\varphi\) is an expectation; Figure 8 summarizes all constructs. For \(\varphi,\psi\in\text{HeyLo}\), we define

\[\underbrace{\varphi\sqsubseteq\psi}_{\text{read: }\varphi\text{ lower-bounds }\psi}\quad\text{iff}\quad\underbrace{\llbracket\varphi\rrbracket\ \leq\ \llbracket\psi\rrbracket}_{\text{pointwise inequality}}\qquad\text{and}\qquad\underbrace{\varphi\sqsupseteq\psi}_{\text{read: }\varphi\text{ upper-bounds }\psi}\quad\text{iff}\quad\llbracket\varphi\rrbracket\ \geq\ \llbracket\psi\rrbracket\,.\]

These notions are central since we will encode verification problems as inequalities between HeyLo formulae. In contrast to classical IVLs, HeyLo contains constructs both for reasoning about lower bounds and for reasoning about upper bounds. We briefly go over each construct in Figure 8.

_Arithmetic and Boolean Expressions._ These expressions form the atoms of HeyLo. Consider, e.g. the arithmetic expressions \(x+1\) for some numeric variable \(x\) and \(2\cdot\mathsf{len}(y)\) for a variable \(y\colon\text{Lists}\). On state \(\sigma\), \(x+1\) evaluates to \(\sigma(x)+1\), and \(2\cdot\mathsf{len}(y)\) evaluates to \(2\) times the length of list \(\sigma(y)\). Boolean expressions \(b\) are embedded in HeyLo using the _embedding operator_ \(?(\cdot)\): On state \(\sigma\), \(?(b)\) evaluates to \(\infty\) (think: true, since \(\infty\) is the top element in the lattice of expectations) if \(\sigma\) satisfies \(b\), and to \(0\) otherwise. For instance, \(?(x+1=2\cdot\mathsf{len}(y))\) evaluates to \(\infty\) if \(\sigma(x)+1\) is equal to two times the length of the list \(\sigma(y)\), and to \(0\) otherwise.

_Addition, Multiplication, Minimum, and Maximum._ HeyLo formulae can be composed by standard binary arithmetic operations for sums (\(+\)), products (\(\cdot\)), minimum (\(\sqcap\)), and maximum (\(\sqcup\)). Each of these operations is understood pointwise (with the assumption that \(\infty\cdot 0=0\)). For instance, \(\llbracket\mathsf{len}(y_{1})\sqcap\mathsf{len}(y_{2})\rrbracket(\sigma)\) is the minimum of the lengths of the lists \(\sigma(y_{1})\) and \(\sigma(y_{2})\).

_Quantifiers._ The _infimum quantifier_ \(\ell\) and the _supremum quantifier_ \(\mathcal{G}\) from [1] are the quantitative analogues of the universal \(\forall\) and the existential \(\exists\) quantifier from predicate logic. Intuitively, the \(\ell\) quantifier minimizes a quantity, just like the \(\forall\) quantifier minimizes a predicate's truth value. Dually, the \(\mathcal{G}\) quantifier maximizes a quantity just like \(\exists\) maximizes a predicate's truth value. The quantitative quantifiers embed \(\forall\) and \(\exists\) in HeyLo, i.e.
for \(b\colon\mathbb{B}\) and \(\sigma\in\text{States}\),

\[\llbracket\ell\,x\colon\tau.\,?(b)\rrbracket(\sigma)=\begin{cases}\infty,&\text{if }\sigma\models\forall x\colon\tau.\,b\\ 0,&\text{otherwise}\end{cases}\qquad\text{and}\qquad\llbracket\mathcal{G}\,x\colon\tau.\,?(b)\rrbracket(\sigma)=\begin{cases}\infty,&\text{if }\sigma\models\exists x\colon\tau.\,b\\ 0,&\text{otherwise}\end{cases}\]

Here, \(\models\) denotes the standard satisfaction relation of first-order logic. The above construction extends canonically to nested quantifiers, e.g. \(\exists x\colon\tau.\,\forall y\colon\tau^{\prime}.\,b\) corresponds to \(\mathcal{G}\,x\colon\tau.\,\ell\,y\colon\tau^{\prime}.\,?(b)\). For a quantitative example, consider the formula \(\varphi=\mathcal{G}\,x\colon\mathbb{Q}_{\geq 0}.\ ?(x\cdot x<2)\sqcap x\). On state \(\sigma\), the subformula \(?(x\cdot x<2)\sqcap x\) evaluates to \(\sigma(x)\) if \(\sigma(x)\cdot\sigma(x)<2\), and to \(0\) otherwise. Consequently,

\[\llbracket\varphi\rrbracket(\sigma)\ =\ \sup\,\{\,r\in\mathbb{Q}_{\geq 0}\ \mid\ r\cdot r<2\,\}\ =\ \sqrt{2}\;.\]

Notice that \(\llbracket\varphi\rrbracket(\sigma)\) is _irrational_ even though all constituents of \(\varphi\) are rational-valued. It has been shown in [1] that -- similar to our above construction of \(\sqrt{2}\) -- the quantitative quantifiers combined with arithmetic and (embedded) Boolean expressions over \(\mathbb{Q}_{\geq 0}\) enable the construction of _all_ expected values emerging from discrete probabilistic programs.

_(Co)implication._ \(\to\) and \(\leftarrow\) generalize Boolean implication and converse nonimplication.5 For state \(\sigma\), the _implication_ \(\varphi\to\psi\) evaluates to \(\infty\) if \(\llbracket\varphi\rrbracket(\sigma)\leq\llbracket\psi\rrbracket(\sigma)\), and to \(\llbracket\psi\rrbracket(\sigma)\) otherwise. Dually, the _coimplication_ \(\varphi\leftarrow\psi\) evaluates to \(0\) if \(\llbracket\varphi\rrbracket(\sigma)\geq\llbracket\psi\rrbracket(\sigma)\), and to \(\llbracket\psi\rrbracket(\sigma)\) otherwise.

Footnote 5: The converse nonimplication of propositions \(P\) and \(Q\) is defined as \(\neg(P\gets Q)\) and is to be read as: \(Q\) does _not imply_ \(P\).

To gain some intuition, we first note that the top element \(\infty\) of our quantitative domain \(\mathbb{R}_{\geq 0}^{\infty}\) can be viewed as "entirely true" (i.e. as true as it can possibly get) and \(0\) can be viewed as "entirely false" (i.e. as false as it can possibly get). The implication \(\varphi\to\psi\) makes \(\psi\) _more true_ by _lowering the threshold above which \(\psi\) is considered entirely true_ - and thus \(\infty\) - to \(\varphi\). In other words: Anything that is at least as true as \(\varphi\) is considered entirely true. Anything less true than \(\varphi\) remains as true as \(\psi\). Figure 9 illustrates this for the formula \(5\to x\). As another example, \(x^{2}\to x\) evaluates to \(\infty\) for states \(\sigma\) with \(\sigma(x)\in[0,1]\); otherwise, \(x\) is below the threshold \(x^{2}\) at which \(x\) is considered entirely true and thus the implication evaluates to \(x\). The intuition underlying the coimplication is dual: \(\varphi\leftarrow\psi\) makes \(\psi\) _less true_ by _raising the threshold below which \(\psi\) is considered entirely false_ - and thus \(0\) - to \(\varphi\).
In other words: Anything that is not more true than \(\varphi\) is considered entirely false. Anything that is more true than \(\varphi\) remains as true as \(\psi\). Figure 10 illustrates this for the formula \(5\leftarrow x\).

Chained implications can also be understood in terms of lowering thresholds: \(\varphi\to(\psi\to\rho)\) lowers the threshold at which \(\rho\) is considered entirely true to \(\varphi\) _and_ \(\psi\), whichever is lower. Formally, \(\varphi\to(\psi\to\rho)\) is equivalent to \((\varphi\sqcap\psi)\to\rho\). More generally, (co)implications are the adjoints of the minimum \(\sqcap\) and maximum \(\sqcup\):

Theorem 2.1 (Adjointness Properties).: _For all HeyLo formulae \(\varphi\), \(\psi\), and \(\rho\), we have_

\[\varphi\sqcap\psi\ \sqsubseteq\ \rho\quad\text{iff}\quad\varphi\ \sqsubseteq\ \psi\to\rho\qquad\text{and}\qquad\psi\sqcup\rho\ \sqsupseteq\ \varphi\quad\text{iff}\quad\rho\ \sqsupseteq\ \psi\leftarrow\varphi\;.\]

Both \(\to\) and \(\leftarrow\) are backward compatible to Boolean implication and converse nonimplication:

\[\llbracket?(b_{1})\to?(b_{2})\rrbracket(\sigma)=\begin{cases}\infty,&\text{if }\sigma\models b_{1}\to b_{2}\\ 0,&\text{otherwise}\end{cases}\qquad\llbracket?(b_{1})\leftarrow?(b_{2})\rrbracket(\sigma)=\begin{cases}\infty,&\text{if }\sigma\models\neg(b_{1}\gets b_{2})\\ 0,&\text{otherwise}\end{cases}\]

We will primarily use (co)implications to (1) incorporate the capability of _comparing_ expectations syntactically in HeyLo and to (2) express _assumptions_. Application (1) is justified by the following quantitative version of the well-known deduction theorem6 from first-order logic [Kleene 1952]:

Footnote 6: We mean the deduction theorem that relates semantical entailment \(\models\) with the material conditional \(\to\). Another theorem also known as _deduction theorem_ relates syntactical entailment (i.e. provability) \(\vdash\) with the material conditional \(\to\).
Theorem 2.2 (HeyLo Deduction Theorem).: _For all HeyLo formulae \(\varphi\) and \(\psi\), we have_

\[\varphi\ \sqsubseteq\ \psi\quad\text{iff}\quad\varphi\to\psi\ \text{ is valid}\qquad\text{and}\qquad\varphi\ \sqsupseteq\ \psi\quad\text{iff}\quad\varphi\leftarrow\psi\ \text{ is covalid}\;,\]

_where a HeyLo formula is_ valid _if it is equivalent to \(\infty\), and_ covalid _if it is equivalent to \(0\)._
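For intuition, the (co)implication semantics and both directions of the deduction theorem can be replayed in Python on a finite sample of states (this is our own illustration, not the paper's tooling; the theorem itself quantifies over all states):

```python
import math
from typing import Callable, Dict, Iterable

State = Dict[str, float]
Exp = Callable[[State], float]   # expectations: State -> [0, inf]

def implies(phi: Exp, psi: Exp) -> Exp:
    # (phi -> psi)(s) = inf if phi(s) <= psi(s), else psi(s)
    return lambda s: math.inf if phi(s) <= psi(s) else psi(s)

def coimplies(phi: Exp, psi: Exp) -> Exp:
    # (phi <- psi)(s) = 0 if phi(s) >= psi(s), else psi(s)
    return lambda s: 0.0 if phi(s) >= psi(s) else psi(s)

def valid(phi: Exp, states: Iterable[State]) -> bool:     # equivalent to inf (on the sample)
    return all(phi(s) == math.inf for s in states)

def covalid(phi: Exp, states: Iterable[State]) -> bool:   # equivalent to 0 (on the sample)
    return all(phi(s) == 0.0 for s in states)

states = [{"x": float(k)} for k in range(50)]
x  = lambda s: s["x"]
xx = lambda s: s["x"] * s["x"]

# On states where x ranges over the naturals, x ⪯ x·x holds, so:
print(valid(implies(x, xx), states))       # True: x -> x*x is inf on every sampled state
print(covalid(coimplies(xx, x), states))   # True: x*x <- x is 0 on every sampled state
# Adding a state with 0 < x < 1 (e.g. x = 0.5) makes both checks fail, matching the fact
# that x ⪯ x·x does not hold over all rational-valued states.
```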
Validation and covalidation turn a quantitative expectation into a qualitative statement. Formally, we define the _(pointwise) validation_ \(\triangle(\varphi)\) and _(pointwise) covalidation_ \(\triangledown(\varphi)\) by7

Footnote 7: In Gödel logics, these are also called _projection modalities_ (Baaz, 1996).

\[\llbracket\triangle(\varphi)\rrbracket\ =\ \llbracket{\sim}{\sim}\varphi\rrbracket\ =\ \lambda\sigma.\begin{cases}\infty,&\text{if }\llbracket\varphi\rrbracket(\sigma)=\infty\\ 0,&\text{otherwise}\end{cases}\qquad\text{and}\qquad\llbracket\triangledown(\varphi)\rrbracket\ =\ \llbracket\neg\neg\varphi\rrbracket\ =\ \lambda\sigma.\begin{cases}0,&\text{if }\llbracket\varphi\rrbracket(\sigma)=0\\ \infty,&\text{otherwise}\end{cases}\;.\]

In words, the validation \(\triangle(\varphi)\) is (pointwise) entirely true whenever \(\varphi\) is entirely true, and entirely false otherwise. Dually, \(\triangledown(\varphi)\) is entirely false whenever \(\varphi\) is entirely false, and entirely true otherwise. Thus, both validations and covalidations "boolify" HeyLo formulae. The difference is that validations pull intermediate truth values down to entire falsehood whereas covalidations lift intermediate truth values up to entire truth.

Turning expectations into qualitative statements has an important application, which often arises when encoding verification problems: Suppose we are given two formulae \(\varphi,\psi\) with free variables \(y_{1},\ldots,y_{n}\). Moreover, our goal is to construct a HeyLo formula \(\rho\) that evaluates to \(x\) of type \(\mathbb{Q}_{\geq 0}\) if \(\varphi\sqsubseteq\psi\), and to \(0\) otherwise.
For that, we first construct the formula \(\ell\,y_{1},\ldots,y_{n}.\ \triangle(\varphi\to\psi)\). Due to the infimum quantifier over all free variables, this formula is _equivalent_ to \(\infty\) if \(\varphi\sqsubseteq\psi\), and _equivalent_ to \(0\) otherwise. Hence, we construct \(\rho\) as

\[\rho\quad=\quad\big(\,\ell\,y_{1}\colon\tau_{1},\ldots,y_{n}\colon\tau_{n}.\ \triangle(\varphi\to\psi)\,\big)\ \sqcap\ x\;,\]

which evaluates to \(x\) if \(\varphi\sqsubseteq\psi\), and to \(0\) otherwise. Moreover, we obtain a dual construction using \(\leftarrow\) and the supremum quantifier:

\[\big(\,\mathcal{G}\,y_{1}\colon\tau_{1},\ldots,y_{n}\colon\tau_{n}.\ \triangledown(\varphi\leftarrow\psi)\,\big)\ \sqcup\ x\;,\]

which evaluates to \(x\) if \(\varphi\sqsupseteq\psi\), and to \(\infty\) otherwise.

## 3. HeyVL: A Quantitative Intermediate Verification Language

Many verification problems for probabilistic programs reduce naturally to checking inequalities between HeyLo formulae.8 Consider, for instance, the program

Footnote 8: Or equivalently by Theorem 2.2: Checking (co)validity, i.e. whether a HeyLo formula is equivalent to \(\infty\) (resp. \(0\)).

\[y\coloneqq\nicefrac{{1}}{{2}}\cdot\langle x\rangle+\nicefrac{{1}}{{2}}\cdot\langle x+1\rangle\;,\]

which sets \(y\) either to \(x\) or to \(x+1\), depending on the outcome of a fair coin flip. Suppose we want to verify that \(x+\frac{1}{2}\) is a _lower_ bound on the expected value of \(y\) after executing the above program. According to McIver and Morgan (2005), verifying this bound amounts to proving the inequality

\[x+\tfrac{1}{2}\ \sqsubseteq\ \mathsf{wp}\llbracket y\coloneqq\nicefrac{{1}}{{2}}\cdot\langle x\rangle+\nicefrac{{1}}{{2}}\cdot\langle x+1\rangle\rrbracket(y)\;,\qquad(\text{ex})\]

where the weakest preexpectation \(\operatorname{wp}\llbracket C\rrbracket(f)\) is a function (which we can represent as a HeyLo formula) that maps every initial state \(\sigma\) to the expected value of \(f\) after executing the program \(C\) on input \(\sigma\). Our goal is to simplify writing, composing, and reasoning _modularly_ about such expected values and similar quantities. To this end, we propose HeyVL, a novel intermediate verification language for modeling quantitative verification problems.

HeyVL _programs_ are organized as a _collection of procedures_. Each procedure \(P\) is equipped with a body \(S\) and a specification. The body \(S\) is a HeyVL _statement_ and can for now be thought of as a more or less ordinary probabilistic program.9 The specification of a procedure comprises a _pre_ \(\varphi\) and a _post_ \(\psi\), both HeyLo formulae. Intuitively, a procedure \(P\) _verifies_ if its body \(S\) adheres to \(P\)'s specification, meaning essentially that the inequality \(\varphi\sqsubseteq\mathsf{wp}\llbracket S\rrbracket(\psi)\) holds, i.e. the expected value of \(\psi\) after executing \(S\) is lower-bounded by \(\varphi\). This inequality will be called the _verification condition_ of \(P\). An entire HeyVL program _verifies_ if all of its procedures verify.

How do we describe the verification problem (ex) in HeyVL? As shown in Figure 11, we write a single procedure \(P\) with body \(y\coloneqq\nicefrac{{1}}{{2}}\cdot\langle x\rangle+\nicefrac{{1}}{{2}}\cdot\langle x+1\rangle\), pre \(x+\frac{1}{2}\), and post \(y\).

Figure 11. A HeyVL procedure whose verification condition is equation (ex).
This gives rise to the verification condition \(x+\frac{1}{2}\sqsubseteq\mathsf{wp}\llbracket y\coloneqq\nicefrac{{1}}{{2}}\cdot\langle x\rangle+\nicefrac{{1}}{{2}}\cdot\langle x+1\rangle\rrbracket(y)\), which is precisely the inequality (ex) we aim to verify. The HeyVL program (i.e. the single procedure \(P\)) verifies if and only if we have positively answered the verification problem (ex). To encode more complex verification problems or proof rules, one may need to write more than one HeyVL procedure. For example, in Section 4.1, we will encode a proof rule for conditional expected values that requires establishing a lower _and_ a different upper bound. The latter can be described using a second HeyVL procedure, see Section 3.1. Furthermore, it is natural to break down large programs and/or complex proof rules into smaller (possibly mutually recursive) procedures, which can be verified modularly based on the truth of their verification conditions.

### HeyVL Procedures

A HeyVL procedure consists of a name, a list of (typed) input and output variables, a body, and a quantitative specification. Syntactically, a HeyVL procedure is of the form

\[\begin{array}{ll}\mathsf{proc}\ P\ (\overline{in\colon\tau})\ \text{->}\ (\overline{out\colon\tau})&\text{// procedure name $P$ with read-only inputs $\overline{in}$ and outputs $\overline{out}$}\\ \mathsf{pre}\ \varphi&\text{// pre: HeyLo formula over inputs}\\ \mathsf{post}\ \psi&\text{// post: HeyLo formula over inputs or outputs}\\ \{\ S\ \}&\text{// procedure body}\end{array}\]

where \(P\) is the procedure's name, \(\overline{in}\) and \(\overline{out}\) are (possibly empty and pairwise distinct) lists of typed program variables called the _inputs_ and _outputs_ of \(P\). The specification is given by a _pre_ \(\varphi\), which is a HeyLo formula over variables in \(\overline{in}\), and a _post_ \(\psi\), which is also a HeyLo formula but ranging over variables in \(\overline{in}\) or \(\overline{out}\). The _procedure body_ \(S\) is a HeyVL statement, whose syntax and semantics will be formalized in Sections 3.2 and 3.3.

As mentioned above, the procedure \(P\) gives rise to a verification condition, namely \(\varphi\sqsubseteq\mathsf{wp}\llbracket S\rrbracket(\psi)\). However, this is only accurate if \(S\) is an ordinary probabilistic program. As our statements \(S\) may also contain non-executable10 verification-specific assume and assert commands, the _verification condition generated by \(P\)_ is actually

Footnote 10: But expected value changing.

\[\varphi\ \sqsubseteq\ \mathsf{vp}\llbracket S\rrbracket(\psi)\;,\]

where vp is the _verification preexpectation transformer_ that extends the aforementioned weakest preexpectation wp by semantics for the verification-specific statements, see Section 3.3. For procedure calls, we approximate the weakest preexpectation based on the callee's specification to enable modular verification, see Section 3.5. Readers familiar with classical Boolean deductive verification may think of the verification condition as a _quantitative Hoare triple_ \(\langle\varphi\rangle\ S\ \langle\psi\rangle\), where \(\sqsubseteq\) takes the quantitative role of the Boolean \(\Longrightarrow\), i.e.
we have

\[\langle\varphi\rangle\ S\ \langle\psi\rangle\ \text{ is valid}\qquad\text{iff}\qquad\varphi\ \sqsubseteq\ \mathsf{vp}\llbracket S\rrbracket(\psi)\;.\]

Indeed, if \(\varphi\) and \(\psi\) are ordinary Boolean predicates and \(S\) is a non-recursive non-probabilistic program, then \(\langle\varphi\rangle\ S\ \langle\psi\rangle\) is a standard Hoare triple: whenever state \(\sigma\) satisfies precondition \(\varphi\), then procedure body \(S\) must successfully terminate on \(\sigma\) in a state satisfying postcondition \(\psi\). Phrased differently: for every initial state \(\sigma\), the truth value \(\varphi(\sigma)\) lower-bounds the _anticipated_ truth value (evaluated in \(\sigma\)) of postcondition \(\psi\) after termination of \(S\) on \(\sigma\). For arbitrary HeyLo formulae \(\varphi,\psi\) and probabilistic procedure bodies \(S\), the second view generalizes to quantitative reasoning à la McIver and Morgan (2005): The quantitative triple \(\langle\varphi\rangle\ S\ \langle\psi\rangle\) is valid iff the pre \(\varphi\) lower-bounds the _expected value_ (evaluated in initial states) of the post \(\psi\) after termination of \(S\). In Section 3.5, we will describe how _calling_ a (verified) procedure \(P\) can be thought of as "invoking" the validity of the quantitative Hoare triple that is given by \(P\)'s specification. Notice that the above inequality is our definition of validity of a quantitative Hoare triple and we do not provide an operational definition of validity. This is due to a lack of an intuitive operational semantics for quantitative assume and assert statements (cf. also Section 7).

_Examples._ Besides Figure 11, Figures 12 and 13 further illustrate how HeyVL procedures specify quantitative program properties; we omit concrete procedure bodies \(S\) to focus on the specification. The procedure in Figure 12 specifies that the expected value of output \(r\) must be at least \(3.5\cdot n\) - a property satisfied by any statement \(S\) that rolls \(n\) fair dice. The procedure in Figure 13 specifies that the expected value of output \(ok\) being true after termination of \(S\), i.e. the probability that the returned value \(ok\) will be true, is at least \(\nicefrac{{2}}{{3}}\) whenever input \(i\) is greater than one - a key property of Rabin's randomized mutual exclusion algorithm (Kushilevitz and Rabin 1992) from Figure 2 and discussed in the introduction. Since we aim to reason about probabilities, we ensure that the post is one-bounded by considering \(1\sqcap?(ok)\) instead of \(?(ok)\).

_Coprocedures - Duals to Procedures._ Proving _upper_ bounds is often relevant for quantitative verification, e.g. when analyzing expected runtimes of randomized algorithms (cf. (Kaminski et al. 2018)). HeyVL also supports _coprocedures_ which give rise to the dual verification condition \(\varphi\sqsupseteq\mathsf{vp}\llbracket S\rrbracket(\psi)\).11 The syntax of coprocedures is analogous to HeyVL procedures; the only difference is the keyword coproc instead of proc. For example, a coprocedure which was defined as in Figure 12 (except for replacing proc by coproc) would specify that the expected value of output \(r\) must be _at most \(3.5\cdot n\)_. We demonstrate in Section 4 that intricate verification techniques for probabilistic programs may require lower _and_ upper bound reasoning, i.e. HeyVL programs that are collections of both procedures and coprocedures.
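To see what discharging such a verification condition amounts to, here is the hand calculation of condition (ex) for the procedure in Figure 11, spot-checked in Python (our own illustration; the actual tool discharges such conditions symbolically via SMT, cf. Section 5):

```python
# vp of the random assignment  y :≈ 1/2 <x> + 1/2 <x+1>  applied to the post 'y'
# is the weighted sum  1/2 * y[y -> x] + 1/2 * y[y -> x+1]  =  1/2*x + 1/2*(x+1)  =  x + 1/2.
def vp_body_applied_to_post_y(x: float) -> float:
    return 0.5 * x + 0.5 * (x + 1)

def pre(x: float) -> float:
    return x + 0.5

# The procedure verifies iff pre ⪯ vp(body)(post); here both sides are even equal.
assert all(pre(x) <= vp_body_applied_to_post_y(x) for x in range(100))
print("pre ⪯ vp(body)(post) holds on the sampled states")
```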
_HeyVL Programs._ To summarize, a HeyVL _program_ is a list of procedures and coprocedures that each give rise to a verification condition, i.e. a HeyLo inequality. We say that a HeyVL program _verifies_ iff all verification conditions of its (co)procedures hold.

_Design Decisions._ Since HeyVL is an intermediate language, we favor simplicity over convenience. In particular, we require procedure inputs to be read-only, i.e. evaluate to the same values in initial and final states. Moreover, HeyVL has no loops and no global variables. All variables that can possibly be modified by a procedure call are given by its outputs. All of the above restrictions can be lifted by high-level languages that encode to HeyVL.

### Syntax of HeyVL Statements

HeyVL statements, which appear in procedure bodies, provide a programming-language style to express and approximate _expected values_ arising in the verification of probabilistic programs, including expected outcomes of program variables, reachability probabilities such as the probability of termination, and expected rewards. HeyVL statements consist of (a) _standard constructs_ such as assignments, sampling from discrete probability distributions, sequencing, and nondeterministic branching, and (b) _verification-specific constructs_ for modeling rewards such as runtime, quantitative assertions and assumptions, and for forgetting values of program variables in the current state. The syntax of HeyVL statements \(S\) is given by the grammar

\[S\;\coloneqq\;\mathsf{var}\ x\colon\tau\mathrel{{:}{\approx}}\mu\;\mid\;S;S\;\mid\;\mathsf{if}\ (\sqcap)\ \{S\}\ \mathsf{else}\ \{S\}\;\mid\;\mathsf{if}\ (\sqcup)\ \{S\}\ \mathsf{else}\ \{S\}\;\mid\;\mathsf{reward}\ a\;\mid\;z_{1},\ldots,z_{m}\coloneqq P(t_{1},\ldots,t_{n})\]
\[\phantom{S\;\coloneqq\;}\mid\;\mathsf{assert}\ \varphi\;\mid\;\mathsf{coassert}\ \varphi\;\mid\;\mathsf{assume}\ \varphi\;\mid\;\mathsf{coassume}\ \varphi\;\mid\;\mathsf{havoc}\ x\;\mid\;\mathsf{cohavoc}\ x\;\mid\;\mathsf{validate}\;\mid\;\mathsf{covalidate}\;,\]

where \(\mu=p_{1}\cdot\langle t_{1}\rangle+\ldots+p_{n}\cdot\langle t_{n}\rangle\) is a discrete probability distribution over terms, \(a\) is an arithmetic expression, and \(\varphi\) is a HeyLo formula. The random assignment, sequencing, and the nondeterministic choices \(\mathsf{if}\ (\sqcap)\)/\(\mathsf{if}\ (\sqcup)\) behave as in ordinary probabilistic programs; reward \(a\) adds the value of \(a\) to the quantity under consideration; assert \(\varphi\) and assume \(\varphi\) generalize their classical counterparts to the quantitative setting (see Section 3.3). coassert \(\psi\)
and coassume \(\psi\) are novel statements that enable reasoning about upper bounds; there is yet no analogue in classical verification infrastructures. havoc \(x\) and cohavoc \(x\) forget the current value of \(x\) by branching nondeterministically over all possible values of \(x\) either in a minimizing (havoc \(x\)) or maximizing (cohavoc \(x\)) manner. Finally, validate and covalidate turn _quantitative_ expectations into _qualitative_ expressions, much in the flavor of validation and covalidation described earlier (see Section 2.4).

_Declarations and Types._ We assume that all local variables (those that are neither inputs nor outputs) are initialized by an assignment before they are used; those assignments also declare the variables' types. If we assign to an already initialized variable, we often write \(x\mathrel{{:}{\approx}}\mu\) instead of var \(x\colon\tau\mathrel{{:}{\approx}}\mu\). Moreover, if \(\mu\) is a _Dirac_ distribution, i.e. if \(p_{1}=1\), we often write \(x\coloneqq t_{1}\) instead of \(x\mathrel{{:}{\approx}}\mu\). Finally, we assume that all programs and associated HeyLo formulae are well-typed.

### Semantics of HeyVL Statements

Inspired by weakest preexpectations (Kaminski, 2019; McIver and Morgan, 2005), we give semantics to HeyVL statements as a backward-moving continuation-passing style HeyLo transformer

\[\mathsf{vp}\llbracket S\rrbracket\colon\mathsf{HeyLo}\to\mathsf{HeyLo}\]

by induction on \(S\) in Figure 14. (Co)procedure calls are treated separately in Section 3.5. We call \(\mathsf{vp}\llbracket S\rrbracket(\varphi)\) the _verification preexpectation_ of \(S\) with respect to post \(\varphi\). Intuitively, \(\llbracket\mathsf{vp}\llbracket S\rrbracket(\varphi)\rrbracket(\sigma)\) is the expected value of \(\varphi\) w.r.t. the distribution of final states obtained from "executing"13 \(S\) on \(\sigma\). The post \(\varphi\) is either given by the surrounding procedure declaration or can be thought of as the verification preexpectation described by the _remaining_ HeyVL statement: for \(S=S_{1};S_{2}\), we first obtain the intermediate verification preexpectation \(\mathsf{vp}\llbracket S_{2}\rrbracket(\varphi)\) -- the expected value of what remains after executing \(S_{1}\) -- and pass this into \(\mathsf{vp}\llbracket S_{1}\rrbracket\).

Footnote 13: Some verification-specific statements are not really _executable_ but serve the purpose of manipulating expected values.

_Random Assignments._ The expected value of \(\varphi\) after executing var \(x\colon\tau\mathrel{{:}{\approx}}\mu\) is the weighted sum \(p_{1}\cdot\varphi[x\mapsto t_{1}]+\ldots+p_{n}\cdot\varphi[x\mapsto t_{n}]\), where each \(p_{i}\) is the probability that \(x\) is assigned \(t_{i}\).

_Rewards._ Suppose that the post \(\varphi\) captures the expected reward collected in an execution that follows _after_ executing reward \(a\). Then the entire expected reward is given by \(\varphi+a\).

Figure 14. Semantics of HeyVL statements. Here \(\mu=p_{1}\cdot\langle t_{1}\rangle+\ldots+p_{n}\cdot\langle t_{n}\rangle\) and \(\varphi[x\mapsto t_{i}]\) is the formula obtained from substituting every occurrence of \(x\) in \(\varphi\) by \(t_{i}\) in a capture-avoiding manner. For procedure calls, see Section 3.5.

_Nondeterministic Choices._ \(\mathsf{vp}\llbracket\mathsf{if}\ (\cdot)\ \{S_{1}\}\ \mathsf{else}\ \{S_{2}\}\rrbracket(\varphi)\) is the pointwise minimum (\(\cdot=\sqcap\)) or maximum (\(\cdot=\sqcup\)) of the expected values obtained from \(S_{1}\) and \(S_{2}\), respectively.

_(Co)assertions._ In _classical_ intermediate verification languages, the statement \(\mathsf{assert}\ A\) for some predicate \(A\) models a proof obligation: All states reaching \(\mathsf{assert}\ A\) on some execution must satisfy \(A\). In terms of classical weakest preconditions, \(\mathsf{assert}\ A\) transforms a postcondition \(B\) to

\[\mathsf{wp}\llbracket\mathsf{assert}\ A\rrbracket(B)\ =\ A\wedge B\;.\]

In words, \(\mathsf{assert}\ A\) _caps_ the truth of postcondition \(B\) at \(A\): all lower bounds on the above weakest precondition (in terms of the Boolean lattice (\(\mathsf{States}\to\mathbb{B}\), \(\Rightarrow\))) must not exceed \(A\). This perspective generalizes well to our quantitative assertions: Given a HeyLo formula \(\psi\), the statement \(\mathsf{assert}\ \psi\) _caps_ the post at \(\psi\). Thus, analogously to classical assertions, all _lower_ bounds on the verification preexpectation \(\mathsf{vp}\llbracket\mathsf{assert}\ \psi\rrbracket(\varphi)\) (in terms of \(\sqsubseteq\)) must not exceed \(\psi\).
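As a reading aid (our own sketch, not the paper's implementation), the vp clauses discussed so far translate almost literally into Python, with expectations represented as functions from states to \([0,\infty]\):

```python
import math
from typing import Callable, Dict, List, Tuple

State = Dict[str, float]
Exp = Callable[[State], float]   # expectations: State -> [0, inf]

def vp_random_assign(x: str, mu: List[Tuple[float, Callable[[State], float]]], post: Exp) -> Exp:
    """vp[[ var x :≈ p1<t1> + ... + pn<tn> ]](post) = p1*post[x->t1] + ... + pn*post[x->tn]."""
    return lambda s: sum(p * post({**s, x: t(s)}) for p, t in mu)

def vp_reward(a: Callable[[State], float], post: Exp) -> Exp:
    """vp[[ reward a ]](post) = post + a."""
    return lambda s: post(s) + a(s)

def vp_demonic(pre1: Exp, pre2: Exp) -> Exp:
    """vp[[ if (⊓) {S1} else {S2} ]](post) is the pointwise minimum of the two branches."""
    return lambda s: min(pre1(s), pre2(s))

def vp_assert(psi: Exp, post: Exp) -> Exp:
    """vp[[ assert ψ ]](post) = ψ ⊓ post: the assertion caps the post at ψ."""
    return lambda s: min(psi(s), post(s))

# Example: vp[[ if (⊓) { reward 1 } else { reward 2 } ]](0) is the constant 1 (demonic minimum).
zero = lambda s: 0.0
pre = vp_demonic(vp_reward(lambda s: 1.0, zero), vp_reward(lambda s: 2.0, zero))
print(pre({}))                                        # 1.0
# Example: assert 5 caps a post that evaluates to 7 at the value 5.
print(vp_assert(lambda s: 5.0, lambda s: 7.0)({}))    # 5.0
```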
Coassertions are dual to assertions: \(\mathsf{coassert}\ \psi\)_raises_ the post \(\varphi\) to at least \(\psi\). Hence, all _upper_ bounds on \(\mathsf{vp}\llbracket\mathsf{coassert}\ \psi\rrbracket(\varphi)\) must not _sub_ceed \(\psi\). _(Co)assumptions_. In the classical setting, the statement \(\mathsf{assume}\ A\) for some predicate \(A\)_weakens_ the verification condition: verification succeeds vacuously for all states not satisfying \(A\). In terms of classical weakest preconditions, \(\mathsf{assume}\ A\) transforms a postcondition \(B\) to \[\mathsf{wp}\llbracket\mathsf{assume}\ A\rrbracket(B)\ =\ A\to B\] i.e. \(\mathsf{assume}\ A\)_lowers_ the threshold at which the post \(B\) is considered true (the top element of the Boolean lattice) to \(A\). Indeed, if we identify \(\mathsf{true}=1\) and \(\mathsf{false}=0\), then \[\llbracket\mathsf{wp}\llbracket\mathsf{assume}\ A\rrbracket(B)\rrbracket( \sigma)\ =\ \begin{cases}1,&\text{if }\llbracket A\rrbracket(\sigma)\leq \llbracket B\rrbracket(\sigma)\\ \llbracket B\rrbracket(\sigma),&\text{otherwise}\.\end{cases}\] The above perspective on classical assumptions generalizes to our quantitative assumptions. Given a \(\mathsf{HeyLo}\) formula \(\psi\), \(\mathsf{assume}\ \psi\) lowers the threshold above which the post \(\varphi\) is considered entirely true (i.e. \(\infty\) - the top element of the lattice of expectations) to \(\psi\). Formally, \[\llbracket\mathsf{vp}\llbracket\mathsf{assume}\ \psi\rrbracket(\varphi) \rrbracket(\sigma)\ =\ \begin{cases}\infty,&\text{if }\llbracket\psi\rrbracket(\sigma)\leq \llbracket\varphi\rrbracket(\sigma)\\ \llbracket\varphi\rrbracket(\sigma),&\text{otherwise}\.\end{cases}\] Reconsider Figure 9 on page 9, which illustrates \(\mathsf{vp}\llbracket\mathsf{assume}\ 5\rrbracket(x)\): \(\mathsf{assume}\ 5\) lowers the threshold at which the post \(x\) is considered entirely true to \(5\), i.e. whenever the post-expectation \(x\) evaluates at least to \(5\), then \(\mathsf{vp}\llbracket\mathsf{assume}\ 5\rrbracket(x)\) evaluates to \(\infty\). Notice furthermore that our quantitative \(\mathsf{assume}\) is backward compatible to the classical one in the sense that \(\mathsf{vp}\llbracket\mathsf{assume}\?(b)\rrbracket(\varphi)\) evaluates to \(\varphi\) for every state satisfying \(b\), and to \(\infty\) otherwise. Coassumptions are dual to assumptions. \(\mathsf{coassume}\ \psi\) raises the threshold at which the post \(\varphi\) is considered entirely false (i.e. \(0\) - the bottom element of the lattice of expectations) to \(\psi\). Reconsider Figure 10 on page 9 illustrating \(\mathsf{vp}\llbracket\mathsf{coassume}\ 5\rrbracket(x)\): \(\mathsf{coassume}\ 5\) raises the threshold below which the post \(x\) is considered entirely false to \(5\), i.e. if the post \(x\) evaluates at most to \(5\), then \(\mathsf{vp}\llbracket\mathsf{coassume}\ 5\rrbracket(x)\) evaluates to \(0\). Example 3.1 (Modeling Conditionals): We did not include \(\mathsf{if}\ (b)\ \{S_{1}\}\ \mathsf{else}\ \{S_{2}\}\) for conditional branching in \(\mathsf{HeyVL}\)'s grammar. 
We can encode it as follows (and will use it from now on):

\[\mathsf{if}\ (\sqcap)\ \{\mathsf{assume}\ ?(b)\ ;\ S_{1}\}\ \mathsf{else}\ \{\mathsf{assume}\ ?(\neg b)\ ;\ S_{2}\}\]

The vp semantics of this statement is analogous to the formula described in Example 2.3 and complies with our above description of assumptions: Depending on the satisfaction of \(b\) by the current state \(\sigma\), the vp of \(S\) either evaluates to the vp of \(S_{1}\) or \(S_{2}\), respectively.

_(Co)havocs._ In the classical setting, \(\mathsf{havoc}\ x\) forgets the current value of \(x\) by universally quantifying over all possible initial values of \(x\). In terms of classical weakest preconditions, we have

\[\mathsf{wp}\llbracket\mathsf{havoc}\ x\rrbracket(B)\ =\ \forall x\colon\tau.\ B\;,\]

i.e. \(\mathsf{havoc}\ x\) _minimizes_ the post \(B\) under all possible values for \(x\), thus requiring \(B\) to hold for all \(x\). This perspective generalizes to our quantitative setting: In terms of vp, \(\mathsf{havoc}\ x\) forgets the current value of \(x\) by minimizing the post-expectation under all possible values of \(x\). Dually, \(\mathsf{cohavoc}\ x\) forgets the value of \(x\) but this time _maximizes_ the post-expectation under all possible values for \(x\).

_(Co)validations._ These statements convert quantitative statements into qualitative ones by casting expectations into the \(\{0,\infty\}\)-valued realm, thus eradicating intermediate truth values strictly between \(0\) and \(\infty\). Their classical analogues would be effectless, as the Boolean setting features no intermediate truth values. We briefly explained in Section 2.4 how such a conversion to a qualitative statement works in HeyLo. An example will be discussed in Section 4.2.

### Properties of HeyVL Statements

We study two properties of HeyVL. First, our vp semantics is _monotonic_ -- a crucial property for encoding proof rules (cf. Section 3.5).

Theorem 3.2 (Monotonicity of vp).: _For all HeyVL statements \(S\) and HeyLo formulae \(\varphi,\varphi^{\prime}\),_

\[\varphi\ \sqsubseteq\ \varphi^{\prime}\quad\text{implies}\quad\mathsf{vp}\llbracket S\rrbracket(\varphi)\ \sqsubseteq\ \mathsf{vp}\llbracket S\rrbracket(\varphi^{\prime})\;.\]

Furthermore, HeyVL conservatively extends an existing IVL for non-probabilistic programs due to Muller (2019) in the following sense:

Theorem 3.3 (Conservativity of HeyVL).: _Let \(C\) be a program in the programming language of Muller (2019) and let \(B\) be a postcondition. Moreover, let \(\overline{C}\) be obtained by replacing every \(\mathsf{assert}\ A\) and every \(\mathsf{assume}\ A\) occurring in \(C\) by \(\mathsf{assert}\ ?(A)\) and \(\mathsf{assume}\ ?(A)\), respectively (cf. Boolean embeddings, Section 2.3). Then_

\[?\big(\mathsf{wp}\llbracket C\rrbracket(B)\big)\quad\equiv\quad\mathsf{vp}\llbracket\overline{C}\rrbracket\big(?(B)\big)\;.\]

### Procedure Calls

We conclude this section with a treatment of (co)procedure calls. Consider a callee _procedure_ \(P\) as shown in Figure 15.
Intuitively, the effect of a call \(z_{1},\ldots,z_{m}\coloneqq P(t_{1},\ldots,t_{n})\) corresponds to (1) initializing \(P\)'s formal input parameters \(x_{1},\ldots,x_{n}\) with the arguments \(t_{1},\ldots,t_{n}\), (2) inlining \(P\)'s body \(S\), and (3) assigning to \(z_{1},\ldots,z_{m}\) the values of outputs \(y_{1},\ldots,y_{m}\). The semantics of \(z_{1},\ldots,z_{m}\coloneqq P(t_{1},\ldots,t_{n})\) can thus be thought of as the statement14

Footnote 14: For the sake of simplicity, we ignore potential scoping issues arising if \(S\) uses variables that are declared in the calling context; these issues can be resolved by a straightforward yet tedious variable renaming.

\[\underbrace{x_{1}\coloneqq t_{1};\ \ldots;\ x_{n}\coloneqq t_{n}}_{\mathit{init}}\ ;\qquad S\ ;\qquad\underbrace{z_{1}\coloneqq y_{1};\ \ldots;\ z_{m}\coloneqq y_{m}}_{\mathit{return}}\;.\]

Figure 15. A procedure \(P\). We encode calls \(z_{1},\ldots,z_{m}\coloneqq P(t_{1},\ldots,t_{n})\) for arbitrary probabilistic statements \(S\).

Inlining is, however, not modular. Following the usual recipe (cf. the lossy-list encoding in Figure 7), a first idea for a modular encoding is to replace the call by asserting the callee's pre and assuming its post, after havocing the result variables. Figure 16 shows that this naive encoding is unsound: both _foo_ and _bar_ verify, yet inlining the body of _foo_ in _bar_ does not.

Figure 16. Unsound encoding of a procedure call \(\mathit{foo}(x)\) in _bar_. Both procedures verify but inlining the body of _foo_ in _bar_ does not as it produces the (wrong) inequality \(x\sqsubseteq x\sqcap(0.5\cdot\infty)\).

Taking a closer look, recall from above that assume \(2\cdot x\) is used to encode a monotonicity check,17 which is an inherently _qualitative_ property. However, verifying _bar_ involves proving \(x\sqsubseteq x\sqcap(2\cdot x\to x)\), where the quantitative implication \(2\cdot x\to x\) evaluates to \(x\) for \(x>0\); the expectation \(x\) does not reflect the inherently qualitative nature of the monotonicity check. To fix this issue, we add a validate statement that turns _quantitative_ results into _qualitative_ ones: it reduces any value less than \(\infty\), which indicates a failed monotonicity check, to \(0\). An encoding underapproximating the inlining of \(\mathit{foo}(x)\) - and thus correctly failing verification of _bar_ - is assert \(x\); validate; assume \(2\cdot x\). Similarly to Section 2.4, verifying _bar_ for the fixed encoding involves proving \(x\sqsubseteq x\sqcap\triangle(2\cdot x\to x)\), which does not hold for \(x>0\).

Footnote 17: More precisely: a check whether monotonicity of vp can be applied, namely whether \(\psi\sqsubseteq\varphi\) holds where \(\psi\) is the callee's _specified_ post and \(\varphi\) is the _actual_ post at the call-site.

More generally, a sound construction of \(S_{\mathit{encoding}}\) (wrt. underapproximating procedure body \(S\)) is

\[S_{\mathit{encoding}}\colon\qquad\mathsf{assert}\ \rho;\ \mathsf{havoc}\ z_{1};\ \ldots;\ \mathsf{havoc}\ z_{m};\ \mathsf{validate};\ \mathsf{assume}\ \psi\;,\]

where \(\rho\) and \(\psi\) denote \(P\)'s pre and post, instantiated with the call's arguments and result variables. Formally, we obtain an underapproximating HeyVL encoding of procedure calls of the form \(z_{1},\ldots,z_{m}\coloneqq P(t_{1},\ldots,t_{n})\) for arbitrary probabilistic procedures as in Figure 15:

Theorem 3.4.: _Let \(S\) be the body of the procedure \(P\) in Figure 15. Then, for every HeyLo formula \(\varphi\),_

\[\mathsf{vp}\llbracket S_{\mathit{encoding}}\rrbracket(\varphi)\sqsubseteq\mathsf{vp}\llbracket S\rrbracket(\varphi)\quad\text{and}\quad\mathsf{vp}\llbracket\mathit{init};S_{\mathit{encoding}};\mathit{return}\rrbracket(\varphi)\sqsubseteq\mathsf{vp}\llbracket\mathit{init};S;\mathit{return}\rrbracket(\varphi)\;.\]

A proof is found in Appendix B. A HeyVL encoding that _over_approximates calls of _coprocedures_ is analogous - it suffices to use the dual co-statements in \(S_{\mathit{encoding}}\). The presented under- and overapproximations are useful when encoding proof rules in HeyVL. Whether they are meaningful does, however, depend on the verification technique at hand that should be encoded.
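The foo/bar example above can be replayed numerically. The following Python check (our own illustration, using the two inequalities quoted in the text) shows that under the naive encoding the right-hand side equals \(x\), so _bar_ wrongly verifies for every \(x>0\), while with validate the right-hand side collapses to \(0\) and verification correctly fails:

```python
import math

INF = math.inf

def implies(a: float, b: float) -> float:        # quantitative implication, pointwise
    return INF if a <= b else b

def validate(a: float) -> float:                 # △: ∞ stays ∞, everything else drops to 0
    return INF if a == INF else 0.0

for x in [0.5, 1.0, 3.0, 10.0]:
    naive = min(x, implies(2 * x, x))            # x ⊓ (2x → x)
    fixed = min(x, validate(implies(2 * x, x)))  # x ⊓ △(2x → x)
    print(f"x={x}: naive RHS={naive} (x ⪯ RHS: {x <= naive}), "
          f"fixed RHS={fixed} (x ⪯ RHS: {x <= fixed})")
```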
## 4. Encoding Case Studies

To evaluate the expressiveness of our verification language, we encoded various existing calculi and proof rules targeting verification problems for probabilistic programs in \(\mathsf{HeyVL}\). We will first focus on programs without while loops (Section 4.1) and then consider loops (Section 4.2). The practicality of our automated verification infrastructure will be evaluated separately in Section 5. A summary of all encodings is given at the end of this section. Further details are found in Appendix C.

### Reasoning about While-Loop-Free pGCL Dialects

Pioneered by Kozen (1983), expectation-based techniques have been successfully applied to analyze various probabilistic program properties. McIver and Morgan (2005) incorporated nondeterminism and introduced the probabilistic Guarded Command Language (pGCL), which is convenient for modelling probabilistic systems. Figure 16. Unsound encoding of a procedure call \(\mathit{foo}(x)\) in _bar_. Both procedures verify but inlining the body of _foo_ in _bar_ does not as it produces the (wrong) inequality \(x\sqsubseteq x\sqcap(0.5\cdot\infty)\). The syntax of while-loop-free pGCL programs \(C\) is18 Footnote 18: pGCL usually supports only one type, e.g. integers, rationals, or reals. We are more liberal and admit arbitrary terms \(t\) but assume a sufficiently strong type inference system and consider only well-typed programs. \[C\coloneqq\texttt{skip}\mid\texttt{diverge}\mid x:=t\mid C_{1};C_{2}\mid\texttt{if}\ (b)\;\{C_{1}\}\;\texttt{else}\;\{C_{2}\}\mid\{C_{1}\}\;[p]\;\{C_{2}\}\mid\{C_{1}\}\;\sqcap\;\{C_{2}\}\;,\] where skip has no effect, diverge never terminates, \(x:=t\) assigns the value of term \(t\) to \(x\), \(C_{1};C_{2}\) executes \(C_{2}\) after \(C_{1}\), if \((b)\;\{C_{1}\}\;\texttt{else}\;\{C_{2}\}\) executes \(C_{1}\) if Boolean expression \(b\) holds and \(C_{2}\) otherwise, \(\{C_{1}\}\;[p]\;\{C_{2}\}\) executes \(C_{1}\) with probability \(p\in[0,1]\) and \(C_{2}\) with probability \((1-p)\), and \(\{C_{1}\}\;\sqcap\;\{C_{2}\}\) nondeterministically executes either \(C_{1}\) or \(C_{2}\). We now outline encodings of several reasoning techniques targeting pGCL and extensions thereof. We will only consider expectations that can be expressed as HeyLo formulae. To improve readability, we identify every HeyLo formula \(\varphi\) with its expectation \(\llbracket\varphi\rrbracket\in\mathbb{E}\).

_Weakest Preexpectations (wp)._ The _weakest preexpectation calculus_ of McIver and Morgan (2005) maps every pGCL command \(C\) and postexpectation \(\varphi\) to the _minimal_ (to resolve nondeterminism) _expected value \(\mathit{wp}(C,\varphi)\) of \(\varphi\) after termination of \(C\)_ - the same intuition underlying HeyVL's vp transformer. Figure 17 shows a sound and complete HeyVL encoding \(\mathit{enc}_{\mathrm{wp}}\!\left\lfloor{C}\right\rfloor\) of the weakest preexpectation calculus, i.e. \(\texttt{vp}\llbracket\mathit{enc}_{\mathrm{wp}}\!\left\lfloor{C}\right\rfloor\rrbracket(\varphi)=\mathit{wp}(C,\varphi)\). Most pGCL commands have HeyVL equivalents; conditionals are encoded as in Example 3.1. diverge is encoded as assert \(0\) as it never terminates, i.e. \(\mathit{wp}(\mathtt{diverge},\varphi)=0\). The program in Figure 17 then verifies iff \(\psi\) lower bounds \(\mathit{wp}(C,\varphi)\), i.e. \(\psi\sqsubseteq\mathit{wp}(C,\varphi)\). To reason about _upper_ bounds, it suffices to use a coprocedure instead.
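A minimal implementation sketch of this transformer for loop-free pGCL helps to fix intuitions; it follows the equations implied above (skip keeps the post, diverge yields \(0\), assignment substitutes, sequencing composes, conditionals select by the guard, \([p]\)-choice takes the convex combination, and the nondeterministic choice takes the minimum). The program representation and state type below are illustrative, not the ones used by Caesar.

```python
# Weakest preexpectations for loop-free pGCL (a sketch; illustrative AST).
from dataclasses import dataclass
from typing import Callable, Dict, Union

State = Dict[str, int]
Exp = Callable[[State], float]          # expectation: State -> non-negative real

@dataclass
class Skip: pass
@dataclass
class Diverge: pass
@dataclass
class Assign: x: str; t: Callable[[State], int]
@dataclass
class Seq: c1: "Cmd"; c2: "Cmd"
@dataclass
class If: b: Callable[[State], bool]; c1: "Cmd"; c2: "Cmd"
@dataclass
class PChoice: c1: "Cmd"; p: float; c2: "Cmd"
@dataclass
class DChoice: c1: "Cmd"; c2: "Cmd"     # nondeterministic (demonic) choice

Cmd = Union[Skip, Diverge, Assign, Seq, If, PChoice, DChoice]

def wp(c: Cmd, post: Exp) -> Exp:
    if isinstance(c, Skip):     return post
    if isinstance(c, Diverge):  return lambda s: 0.0
    if isinstance(c, Assign):   return lambda s: post({**s, c.x: c.t(s)})
    if isinstance(c, Seq):      return wp(c.c1, wp(c.c2, post))
    if isinstance(c, If):
        return lambda s: wp(c.c1, post)(s) if c.b(s) else wp(c.c2, post)(s)
    if isinstance(c, PChoice):
        return lambda s: c.p * wp(c.c1, post)(s) + (1 - c.p) * wp(c.c2, post)(s)
    if isinstance(c, DChoice):  # minimal expected value resolves nondeterminism
        return lambda s: min(wp(c.c1, post)(s), wp(c.c2, post)(s))
    raise TypeError(c)

# wp({x := 1} [0.5] {x := 0}, x) = 0.5
coin = PChoice(Assign("x", lambda s: 1), 0.5, Assign("x", lambda s: 0))
print(wp(coin, lambda s: float(s["x"]))({}))    # -> 0.5
```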
Weakest Liberal Preexpectations (wlp)McIver and Morgan (2005) also proposed a _liberal_ weakest preexpectation calculus, a partial correctness variant of weakest preexpectations. More precisely, if \(\varphi\sqsubseteq 1\), then the weakest liberal preexpectation \(\mathit{wlp}(C,\varphi)\) is the expected value of \(\varphi\) after termination of \(C\)_plus_ the probability of non-termination of \(C\) (on a given initial state). We denote by \(\mathit{enc}_{\mathrm{wlp}}\!\left\lfloor{C}\right\rfloor\) the HeyVL encoding of the weakest liberal preexpectation calculus; it is defined analogously to Figure 17 except for diverge. Since diverge never terminates, the probability of non-termination is one, i.e. \(\mathit{wlp}(\mathtt{diverge},\ldots)=1\). The updated encoding of diverge is \[\mathit{enc}_{\mathrm{wlp}}\!\left\lfloor{\mathtt{diverge}}\right\rfloor =\quad\text{assert 1; assume 0},\] Figure 17: Encoding of weakest preexpectation for pGCL, where \(\mathit{tmp}\) is a fresh variable. where assert 1 ensures one-boundedness and assume 0 lowers the threshold at which the post is considered entirely true to 0. Put together, we have \(\mathtt{vp}[\mathit{enc}_{\mathtt{wlp}}[\mathit{diverge}]](\varphi)=1\sqcap \infty=1=\mathit{wlp}(\mathtt{diverge},\varphi)\). _Conditional Preexpectations (cwp). Conditioning_ on observed events (in the sense of conditional probabilities) is a key feature of modern probabilistic programming languages [10]. Intuitively, the statement \(\mathtt{observe}\mathit{b}\) discards an execution whenever Boolean expression \(b\) does not hold. Moreover, it re-normalizes such that the accumulated probability of all executions violating no observation equals one. Olmedo et al. [2018] showed that reasoning about \(\mathtt{observe}\mathit{b}\) requires a combination of \(\mathit{wp}\) and \(\mathit{wlp}\) reasoning. They extended both calculi such that violating an observation is interpreted as a failure resulting in pre-expectation zero; we can encode it with an assertion: \[\mathtt{w}(l)p(\mathtt{observe}\mathit{b},\varphi)\ =\?(b)\sqcap\varphi\ =\ \mathtt{vp}[\mathtt{assert}\?(b)](\varphi).\] For every pGCL program \(C\) with observe statements, initial state \(\sigma\) and expectation \(\varphi\), the _conditional_ expected value \(\mathit{cwp}(C,\varphi)(\sigma)\) of \(\varphi\) after termination of \(C\) is then given by the expected value \(\mathit{wp}(C,\varphi)(\sigma)\) normalized by the probability \(\mathit{wlp}(C,1)(\sigma)\) of violating no observation: \[\mathit{cwp}(C,\varphi)(\sigma)\quad=\quad\frac{\mathit{wp}(C,\varphi)( \sigma)}{\mathit{wlp}(C,1)(\sigma)}\qquad(\text{undefined if }\mathit{wlp}(C,1)(\sigma)\ =\ 0)\] We can re-use our existing \(\mathtt{HeyVL}\) encodings to reason about conditional expected values. Notice that proving bounds on \(\mathit{cwp}\) requires establishing both lower and upper bounds. For example, the pGCL program \(C_{\mathit{die}}\) in Figure 19 assigns to \(r\) the result of a six-sided die roll, which is simulated using three fair coin flips and an observation. To show that the expected value of \(r\) is at most 3.5 - the expected value of a six-sided die roll - we prove the upper bound \(\mathit{wp}(C_{\mathit{die}},r)\sqsubseteq 2.625\) and the lower bound \(0.75\sqsubseteq wlp(C_{\mathit{die}},1)\). Then, \(\mathit{cwp}(C_{\mathit{die}},r)\sqsubseteq\frac{2.625}{0.75}=3.5\). Figure 20 shows the \(\mathtt{HeyVL}\) encoding of \(C_{\mathit{die}}\) (cleaned up for readability). 
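The numbers in this example can be reproduced by brute-force enumeration. The program of Figure 19 is not reproduced here, so the sketch below assumes the standard construction in which three fair coin flips encode a value \(r\in\{1,\ldots,6,7,8\}\) and the observation discards the two outcomes above \(6\); this assumption, and the enumeration itself, are purely illustrative.

```python
# Exact check of wp(C_die, r), wlp(C_die, 1) and their quotient, assuming the
# three-coin-flip construction with "observe r <= 6" described in the lead-in.
from itertools import product

outcomes = []                                    # (probability, r, observation holds)
for bits in product([0, 1], repeat=3):
    r = 1 + 4 * bits[0] + 2 * bits[1] + bits[2]  # r in 1..8, each with probability 1/8
    outcomes.append((1 / 8, r, r <= 6))

wp_r  = sum(p * r for p, r, ok in outcomes if ok)   # violated observations contribute 0
wlp_1 = sum(p for p, _, ok in outcomes if ok)       # probability of violating no observation
print(wp_r, wlp_1, wp_r / wlp_1)                    # 2.625 0.75 3.5
```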
As shown in Figure 21, the proof obligations \(\mathit{wp}(C_{\mathit{die}},r)\sqsubseteq 2.625\) and \(0.75\sqsubseteq wlp(C_{\mathit{die}},1)\) are then encoded using a coprocedure for the upper bound and a procedure for the lower bound, respectively. There exist alternative interpretations of conditioning. For instance, Nori et al. [2014] use \(\mathit{wp}(C,1)(\sigma)\) in the denominator in the above fraction. A benefit of \(\mathtt{HeyVL}\) is that such alternative interpretations can be realized by a straightforward adaptation of our encoding. ### Reasoning about Expected Values of Loops We encoded various proof rules for loops while \((b)\)\(\{C\}\) in HeyVL. As an example, we consider the Park induction rule (Kaminski, 2019; Park, 1969) for lower bounds on weakest liberal preexpectations: for all \(\varphi,I\sqsubseteq 1\), \[\underbrace{I\sqsubseteq\ (?(b)\to wlp(C,I))\sqcap(?(-b)\to\varphi)}_{I\text{ is an inductive invariant}}\quad\text{ implies}\quad\underbrace{I\sqsubseteq\ wlp(\text{while }(b)\ \{C\},\varphi)}_{I\text{ underapproximates the loop's }wlp}.\] The rule can be viewed as a quantitative version of the loop rule from Hoare (1969) logic, where \(I\) is an _inductive invariant_ underapproximating the expected value of any loop iteration. Figure 22 depicts an encoding \(enc_{\text{wlp}}\lfloor\text{while }(b)\ \{C\}\rfloor\) that underapproximates \(wlp(\text{while }(b)\ \{C\},\varphi)\), i.e. \[\text{vp}\llbracket enc_{\text{wlp}}\lfloor\text{while }(b)\ \{C\}\rfloor\rrbracket(\varphi) = \begin{cases}I,&\text{if }I\sqsubseteq(?(b)\to wlp(C,I))\sqcap(?(-b)\to\varphi)\\ 0,&\text{otherwise}\end{cases}\sqsubseteq wlp(\ldots,\varphi).\] Before we go into details, we remark for readers familiar with classical deductive verification that our encoding is almost identical to standard loop encodings (cf. (Muller, 2019)). Apart from the quantitative interpretation of statements, the only exception is the validate in line 3. It is instructive to go over the encoding in Figure 22 step by step for a given initial state \(\sigma\). The following expanded version of the above equation's right-hand side serves as a roadmap: \[I(\sigma)\sqcap\inf_{\sigma^{\prime}\in\text{States}}\begin{cases}\infty,&\text {if }I(\sigma^{\prime})\ \leq\ (?(b)(\sigma^{\prime})\to wlp(C,I)(\sigma^{\prime}))\ \sqcap(?(-b)(\sigma^{\prime})\to\varphi(\sigma^{\prime}))\\ 0,&\text{otherwise},\end{cases}\] Reading the HeyVL code in Figure 22 top-down then corresponds to reading the equation from left to right as indicated by the colors. We first assert that our underapproximation of the loop's \(wlp\) is at most \(\bar{I}\). The remaining code will ensure that said underapproximation is exactly \(\bar{I}\) whenever \(\bar{I}\) is an inductive loop invariant; it will be \(0\) otherwise. Proving that \(\bar{I}\) is an inductive loop invariant requires checking an inequality \(\sqsubseteq\), where \(\psi\sqsubseteq\rho\) holds iff \(\psi(\sigma^{\prime})\leq\rho(\sigma^{\prime})\) for all states \(\sigma^{\prime}\). We havoc the values of all program variables such that the invariant check encoded afterward is performed for every evaluation of the program variables, i.e. for every state \(\sigma^{\prime}\).19 Moreover, havoc picks the minimal_ result of all those invariant checks. The statement "\(\cap\)" is an inductive loop invariant" is inherently qualitative. 
We thus validate that the invariant check encoded next is a qualitative statement that can only have two results: \(\infty\) if \(\cap\) is an inductive invariant and \(0\) if it is not. To check if \(\cap\) is an inductive invariant for a fixed state \(\sigma^{\prime}\), we need to prove an inequality, namely that \(I(\sigma^{\prime})\) lower bounds \(\mathit{wlp}(C,I)(\sigma^{\prime})\) if loop guard \(b\) holds and \(\varphi(\sigma^{\prime})\) if \(b\) does not hold. We first use assume\(I\) to lower the threshold for the expected value of the remaining code to be considered \(\infty\) to \(I(\sigma^{\prime})\). Hence, we obtain \(\infty\) if the invariant check succeeds for \(\sigma^{\prime}\). The conditional choice is the invariant check's right-hand side. If state \(\sigma^{\prime}\) satisfies \(b\), we use our existing \(\mathit{wlp}\) encoding to compute \(\mathit{wlp}(C,I)(\sigma^{\prime})\), where assert\(I\); assume\(?(\mathsf{false})\) ensures that \(\mathit{wlp}\) is computed with respect to postexpectation \(I\). If state \(\sigma^{\prime}\) satisfies \(\neg b\), we do nothing and just take the postexpectation \(\varphi\). _Upper bounds._ Consider an iterative version of the lossy list traversal from Figure 4 on page 4: \[\mathsf{while}\;(\mathit{len}(I)>0)\;\{\;\{\;\{\;\{\;\{\;\mathit{l}\;\coloneqq \;\mathit{pop}(I)\;\}\;\{\;\{\;\mathit{lo}o}(I)\;\}\;\}\;\}\;\}\] The Park induction rule can also be used to _over_approximate weakest preexpectations. The encoding is dual, i.e. it suffices to use the _co_-versions of the involved statements. For example, Figure 23 encodes the above loop with \(\mathit{exp}(0.5,\mathit{len}(I))\) as inductive invariant overapproximating the loop's termination probability. The list type and the exponential function \(\mathit{exp}(0.5,\mathit{len}(I))\) are represented in \(\mathsf{HeyLo}\) by custom domain declarations (cf. Section 5.1). _Recursion._ We can encode verification of \(\mathsf{wlp}\)-lower bounds for recursive procedure calls of pGCL programs as discussed in Section 3.5 and justified by Olmedo et al. (2016) and Matheja (2020) - it is another application of Park induction. For \(\mathsf{wp}\)-upper bounds, the encoding is dual. Hence, Figure 7 on page 4 encodes that the termination probability of the program in Figure 4 is at most \(0.5^{\mathit{len}(I)}\). ### Overview of Encodings Table 1 summarizes all verification techniques - program logics and proof rules - that have been encoded in \(\mathsf{HeyVL}\). While a detailed discussion is beyond the scope of this paper, we briefly go over Table 1. The main takeaway is that \(\mathsf{HeyVL}\) enables the encoding - and thus automation - of advanced verification methods based on diverse theoretical foundations and targeting different verification problems. The practicality of our encodings will be evaluated in Section 5. _Expected Values._ We encoded McIver and Morgan (2005)'s weakest (liberal) preexpectation calculus for analyzing expected values of probabilistic programs (cf. Section 4.1). To analyze _conditional_ expected values, we combined the two calculi as suggested by Olmedo et al. (2018). For loops, we encoded three proof rules based on domain theory: First, _Park Induction_ generalizes the standard loop rule from Hoare logic (Hoare, 1969) to a quantitative setting; it can be applied to lower bound weakest liberal preexpectations and upper bound weakest preexpectations (cf. Section 4.2). However, it is unsound for the converse directions. 
Second, _\(\omega\)-Invariants_ are sound and complete for proving lower and upper bounds. However, they are arguably more complex because users must provide a family of invariants and compute limits. We modeled families of invariants as \(\mathsf{HeyLo}\) formulas with additional free variables and used \(\mathsf{havoc}\)\(x\) and \(\mathsf{cohavoc}\)\(x\) to represent limits. Third, we encoded a quantitative version of _\(k\)-induction_ (for proving upper bounds) - an established verification technique (cf. (Sheeran et al., 2000)). The encodings are based on latticed \(k\)-induction (Batz et al., 2021), a generalization of \(k\)-induction to arbitrary complete lattices. After encoding \(k\)-induction for upper bounds on \(\mathsf{wp}\), we benefited from the duality of \(\mathsf{HeyVL}\) statements: we obtained a dual encoding for lower bounds on \(\mathsf{wlp}\) that has, to our knowledge, not been implemented before. Furthermore, we encoded an advanced proof rule for lower bounds on expected values by Hark et al. (2019). In contrast to the above rules, this rule is based on stochastic processes, particularly the Optional Stopping Theorem. Using our encoding, we automated the main examples in (Hark et al., 2019). _Expected Runtimes_. To analyze the performance of randomized algorithms, we encoded the expected runtime calculus by Kaminski et al. (2016, 2018) and its recent extension to amortized analysis (Batz et al., 2023). Although reasoning about expected runtimes of loops involves some subtleties, we could adapt our HeVVL encodings for expected values by inserting reward statements. We encoded and automated examples from (Kaminski et al., 2016, 2018) and (Ngo et al., 2018). _Almost-Sure Termination (AST)_. McIver et al. (2018) proposed a proof rule for almost-sure termination - does a probabilistic program terminate with probability one? The rule is based on a parametric martingale that must satisfy four conditions, which we encoded in separate HeVVL (co)procedures. We automated the verification of their examples, including the one in Figure 5. _Positive Almost-Sure Termination (PAST)_. PAST is a stronger notion than almost-sure termination, which requires a program's expected runtime to be finite. We can apply our HeVVL encodings for upper bounding expected runtimes to prove PAST. Moreover, we encoded a dedicated proof rule for PAST by Chakarov and Sankaranarayanan (2013) based on martingales and concentration bounds. ## 5. Implementation We first describe user-defined types and functions by means of _domain declarations_ in Section 5.1. We then describe our tool Caesar alongside with empirical results validating the feasibility of our deductive verification infrastructure for the automated verification of probabilistic programs. ### Domain Declarations Recall from Section 2 that we assume all type- and function symbols to be interpreted. In practice, we support custom first-order theories via _domain declarations_ as is standard in classical deductive \begin{table} \begin{tabular}{l l l l} \hline \hline **Problem** & **Verification Technique** & **Source** & **Encoding** \\ \hline \multirow{3}{*}{LPROB} & wlp + Park induction & McIver and Morgan (2005) & Section 4.2 \\ & wlp + latticed \(k\)-induction & (new?) & Appendix C.1 \\ & wlp + \(\omega\)-invariants & Kaminski (2019) & Appendix C.3 \\ & wp + Park induction & McIver and Morgan (2005) & Appendix C.2 \\ & wp + latticed \(k\)-induction & Batz et al. 
(2021a) & Appendix C.2 \\ & wp + \(\omega\)-invariants & Kaminski (2019) & Appendix C.4 \\ & wp + Optional Stopping Theorem & Hark et al. (2019) & Appendix C.5 \\ & conditional wp & Olmedo et al. (2018) & Section 4.1 \\ & ert calculus + UEXP rules & Kaminski et al. (2016) & Appendix C.6 \\ & ert calculus + \(\omega\)-invariants & Kaminski et al. (2016) & Appendix C.6 \\ & parametric super-martingale rule & McIver et al. (2018) & Appendix C.7 \\ & program analysis with martingales & Chakarov and Sankaranarayanan (2013) & Appendix C.8 \\ \hline \hline \end{tabular} \end{table} Table 1. Verification techniques encoded in HeVVL sorted by verification problem: lower- and upper bounds on probability of events (LPROB and UPROB), upper- and lower bounds on expected values (UEXP and LEXP), conditional expected values (CEXP), almost-sure termination (AST), positive almost-sure termination (PAST), upper bounds on expected runtimes (UERT), and lower bounds on expected runtimes (LERT). verification infrastructures (Muller et al. 2016b). A domain declaration introduces a new type symbol alongside with a set of typed function symbols and first-order formulae (called _axioms_) characterizing feasible interpretations of the type- and function symbols. Consider the harmonic numbers -- often required for, e.g., expected runtime analysis -- as an example. The \(n\)-th harmonic number is given by \(H_{n}=\sum_{k=1}^{n}\frac{1}{k}\). To enable reasoning about verification problems involving the harmonic numbers, we introduce the following domain declaration: \[\mathsf{domain}\;HarmonicNums\;\{ \mathsf{func}\;H(n\colon\mathbb{N})\colon\mathbb{R}_{\geq 0}\] \[\mathsf{axiom}\;h_{0}\;H(0)=0\] \[\mathsf{axiom}\;h_{n}\;\forall n\colon\mathbb{N}.\;H(n+1)=H(n)+ \nicefrac{{1}}{{n+1}}\qquad\}\] \(HarmonicNums\) introduces a new function symbol \(H\colon\mathbb{N}\to\mathbb{R}_{\geq 0}\) and two axioms \(h_{0}\) and \(h_{n}\) characterizing feasible interpretations of \(H\) recursively. Other non-linear functions such as exponential functions (e.g., \(\mathit{exp}(0.5,n)\) from Section 4.2) as well as algebraic data types can be defined in a similar way (see, e.g., (Muller et al. 2016a)). In our implementation, validity of verification conditions -- inequalities between \(\mathsf{HeyLo}\) formulae -- is defined _modulo_ validity of all user-provided axioms. ### The Verifier Caesar We have implemented \(\mathsf{HeyVL}\) in our tool Caesar20 which consists of approximately 10k lines of Rust code. Caesar takes as input a \(\mathsf{HeyVL}\) program \(C\) and a set of domain declarations (cf. Section 5.1). It then generates all verification conditions described by \(C\), i.e, inequalities between \(\mathsf{HeyLo}\) formulae of the form \(\varphi\sqsubseteq\mathsf{vp}\llbracket\!\!\!\!\llbracket 5\rrbracket(\psi)\) or \(\varphi\sqsupseteq\mathsf{vp}\llbracket\!\!\!\!\llbracket 5\rrbracket(\psi)\), and translates these verification conditions to a Satisfiability Modulo Theories (SMT) query. Our SMT back end is z3 (de Moura and Bjorner 2008). Since the translation to SMT can involve undecidable theories, Caesar might return _unknown_. Otherwise, Caesar either returns _verified_ or _not verified_. In the latter case, z3 often reports a counterexample state witnessing the violation of one of the verification conditions, which helps, e.g., debugging loop invariants. Footnote 20: All tools and benchmarks are available as open-source software at [https://github.com/moves-rwth/caesar](https://github.com/moves-rwth/caesar). 
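To picture how such verification conditions are discharged modulo the user-provided axioms, consider the following small z3py sketch (not Caesar's actual encoding). It asserts the \(HarmonicNums\) axioms, unfolding the recursive axiom \(h_{n}\) only for the few indices needed -- mirroring the finite unfolding discussed in the evaluation below -- and checks a sample inequality by searching for a counterexample; the checked property is purely illustrative.

```python
# Checking a sample inequality modulo the HarmonicNums axioms with z3
# (an illustrative sketch, not Caesar's SMT translation).
from z3 import Function, IntSort, RealSort, Solver, Q, unsat

H = Function("H", IntSort(), RealSort())
s = Solver()
s.add(H(0) == 0)                      # axiom h_0
for n in range(3):                    # ground unfoldings of axiom h_n for n = 0, 1, 2
    s.add(H(n + 1) == H(n) + Q(1, n + 1))

# Validity of "H(3) >= 11/6" is checked by refuting its negation.
s.add(H(3) < Q(11, 6))
print(s.check() == unsat)             # True: no counterexample exists
```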
Moreover, we have implemented a _prototypical front-end_ that translates (numeric) pGCL programs and their specifications to \(\mathsf{HeyVL}\), and invokes Caesar for automated verification. Currently, it supports all techniques from Table 1 targeting loops. _SMT Encodings and Optimizations._ We translate validity of inequalities between \(\mathsf{HeyLo}\) to SMT following the semantics of formulae from Figure 8. To encode the sort \(\mathbb{R}_{\geq 0}^{\infty}\), we evaluated to two options, which are both supported by our implementation. The first option represents every number of sort \(\mathbb{R}_{\geq 0}^{\infty}\) as a pair \((r,\mathit{isInfy})\), where \(r\) is a real number and \(\mathit{isInfy}\) is a Boolean flag that is true if and only if the represented number is equal to \(\infty\). We add constraints \(r\geq 0\) to ensure that \(r\) is non-negative. All operations on \(\mathbb{R}_{\geq 0}^{\infty}\) are then defined over such pairs. For example, the addition \((r_{1},\mathit{isInfy}_{1})+(r_{2},\mathit{isInfy}_{2})\) is defined as \((r_{1}+r_{2},\mathit{isInfy}_{1}\vee\mathit{isInfy}_{2})\). For multiplication, we ensure that \(0\cdot\infty=\infty\) - a common assumption in probability theory. The second option leverages Z3-specific data type declarations to specify values that are either infinite or non-negative reals. We observed that the first option performs better overall and thus use it by default. The \(\ell\) - and \(\mathcal{G}\) quantifiers are translated using the textbook definition of infima and suprema over \(\mathbb{R}_{\geq 0}^{\infty}\), but are eliminated whenever possible using that for \(A\subseteq\mathbb{R}_{\geq 0}^{\infty}\) and \(r\in\mathbb{R}_{\geq 0}^{\infty}\), we have \[\sup A\leq r\quad\text{iff}\quad\forall a\in A\colon a\leq r\qquad\quad\text{ and }\text{dually}\qquad\quad r\leq\inf A\quad\text{iff}\quad\forall a\in A\colon r\leq a\;.\] Finally, we simplify sub-formulae by, e.g., rewriting \(?(b)\sqcap\psi\) to \(0\) if \(b\) is unsatisfiable. _Benchmarks._ To validate whether our implementation is capable of verifying interesting quantitative properties of probabilistic programs, we have considered various verification problems taken from the literature. These benchmarks involve unbounded probabilistic loops or recursion and include quantitative correctness properties of communication protocols (D'Argenio et al., 1997; Helmink et al., 1993) and randomised algorithms (Hurd et al., 2005; Kushilevitz and Rabin, 1992; Lumbroso, 2013), bounds on expected runtimes of stochastic processes (Kaminski et al., 2020, 2018; Ngo et al., 2018), proofs of _positive_ almost-sure termination (Chakarov and Sankaranarayanan, 2013) and proofs of almost-sure termination for the case studies provided in (McIver et al., 2018). For each of these benchmarks, we apply the HeyVL encodings provided in Section 4 and Appendix C, and cover all verification techniques from Table 1. Table 2 summarizes the results of our benchmarks. For each benchmark, it provides the benchmark name, the verification problem, the encoded techniques (cf. Table 1), the lines of HeyVL code (without comments), notable features, and running time. For the running time, we also provide the shares of pruning. i.e. simplification of sub-formulae, and the final SAT check. Table 1 together with the column "Problem" provides pointers to each benchmark's source and encoding. For latticed \(k\)-induction, we indicate the value of \(k\) that was used for the encoding. 
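Returning to the first representation of \(\mathbb{R}_{\geq 0}^{\infty}\) described above, the following z3py sketch spells out the pair encoding \((r,\mathit{isInfty})\) together with the stated addition; the ordering used below is one natural choice and is an assumption of this sketch, not a quote of the implementation. The queried inequality \(x\sqsubseteq x+y\) is only a sanity check.

```python
# Pair encoding of values in R_{>=0}^{oo} as (non-negative real, isInfty flag),
# an illustrative z3py sketch of the first option described above.
from z3 import Reals, Bools, And, Or, Not, Solver, unsat

r1, r2 = Reals("r1 r2")
i1, i2 = Bools("i1 i2")

def add(a, b):                    # (r1, i1) + (r2, i2) = (r1 + r2, i1 or i2)
    (ra, ia), (rb, ib) = a, b
    return (ra + rb, Or(ia, ib))

def leq(a, b):                    # a <= b: b is oo, or a is finite and ra <= rb
    (ra, ia), (rb, ib) = a, b
    return Or(ib, And(Not(ia), ra <= rb))

x, y = (r1, i1), (r2, i2)
s = Solver()
s.add(r1 >= 0, r2 >= 0)           # constraints ensuring non-negativity
s.add(Not(leq(x, add(x, y))))     # search for a counterexample to x <= x + y
print(s.check() == unsat)         # True: the inequality holds for all such pairs
```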
Benchmarks that use exponential functions (e.g. rabin, zeroconf) or harmonic numbers (e.g. ast) are marked with F1. Benchmarks that use multiple possibly mixed (co)procedures are marked with F2. One example encodes verification of nested loops (feature F3). The size of our benchmarks ranges from 19-224 lines of HeyVL code. 85% of our benchmarks (those shaded in gray) have been verified with our front-end; the remaining encodings are handcrafted. All benchmark files are available as part of our artifact. _Evaluation._ On average, Caesar needs 0.2 seconds to verify a HeyVL program, with a maximum of 2.3 seconds. Most benchmarks verify within less than a second. The brp3 benchmark times out because of the large nested branching resulting from the exponential size of the \(k\)-induction encoding with \(k=23\). We conclude that Caesar is capable of verifying interesting quantitative verification problems of probabilistic programs taken from the literature. Moreover, we conclude that modern SMT solvers are a suitable back-end besides the fact that our benchmarks often require reasoning about highly non-linear functions. This is due to the fact that it often suffices to (un)fold recursive definitions of, e.g., the harmonic numbers, finitely many times. Finally, our benchmarks demonstrate that our verification infrastructure provides a unifying interface for _encoding and solving_ various kinds of probabilistic verification problems in an automated manner. ## 6. Related Work We focus on automated verification techniques for probabilistic programs and deductive verification infrastructures for non-probabilistic programs; encoded proof rules have been discussed in Section 4. _Probabilistic Program Verification._ Expectation-based probabilistic program verification has been pioneered by Kozen (1983, 1985) and McIver & Morgan (McIver and Morgan, 2005). Hurd et al. (2005) formalised the w(l)p calculus in _Isabelle/HOL_(Nipkow et al., 2002). They focus on the calculus' meta theory and provide a verification-condition generator for proving partial correctness. Holzl (2016) implemented the meta theory of Kaminski et al. (2016)'s ert calculus in _Isabelle/HOL_ and verified bounds on expected runtimes of randomised algorithms. We focus on unifying verification techniques in a single infrastructure. _Easycrypt_(Barthe et al., 2013, 2011) is a theorem prover for verifying cryptographic protocols, featuring libraries for data structures and algebraic reasoning. _Ellora_(Barthe et al., 2018) is an assertion-based program logic for probabilistic programs implemented in _Easycrypt_, taking benefit from _Easycrypt_'s features. Their specifications are predicates over (sub)distributions instead of expectations. While _Ellora_ employs _specialised_ proof rules for loops and does not support non-determinism or recursion, thus being more restrictive than HeyVL in this regard, _Ellora_ embeds, e.g., logics for reasoning about probabilistic independence. As stated in (Barthe et al., 2018), an in-depth comparison of assertion- and expectation-based approaches is difficult. Pardo et al. 
(2022) propose a \begin{table} \begin{tabular}{l|l|l|r|r|r|r|r} Name & Problem & Verification Technique & LOC & Features & Total (s) & Pruning & SAT \\ \hline \hline rabin & LPROB & wlp + Park induction & 43 & F1, F3 & 0.33 & 3\% & 96\% \\ unif\_gen1 & LPROB & wlp + Latticed \(k\)-induction (\(k=2\)) & 61 & & 0.02 & 52\% & 35\% \\ unif\_gen2 & LPROB & wlp + Latticed \(k\)-induction (\(k=3\)) & 82 & & 0.05 & 68\% & 25\% \\ unif\_gen3 & LPROB & wlp + Latticed \(k\)-induction (\(k=3\)) & 82 & & 0.05 & 71\% & 22\% \\ unif\_gen4 & LPROB & wlp + Latticed \(k\)-induction (\(k=5\)) & 124 & & 0.86 & 90\% & 7\% \\ rabin1 & LPROB & wlp + Park induction & 36 & & 0.01 & 45\% & 40\% \\ rabin2 & LPROB & wlp + Latticed \(k\)-induction (\(k=5\)) & 116 & & 0.08 & 27\% & 67\% \\ chain & UEXP & wp + Park induction & 28 & F1 & 0.03 & 24\% & 66\% \\ ohfive & UEXP & wp + Park induction & 34 & F1, F3 & 0.02 & 33\% & 56\% \\ \hline brp1 & UEXP & wp + Latticed \(k\)-induction (\(k=5\)) & 72 & & 0.03 & 45\% & 42\% \\ brp2 & UEXP & wp + Latticed \(k\)-induction (\(k=11\)) & 138 & & 0.46 & 70\% & 16\% \\ brp3 & UEXP & wp + Latticed \(k\)-induction (\(k=23\)) & 270 & & TO & & \\ geo1 & UEXP & wp + Latticed \(k\)-induction (\(k=2\)) & 32 & & 0.02 & 44\% & 41\% \\ geo (recursive) & UEXP & wp + Park induction & 19 & & 0.02 & 43\% & 42\% \\ rabin1 & UEXP & wp + Park induction & 36 & & 0.02 & 44\% & 73\% \\ rabin2 & UEXP & wp + Latticed \(k\)-induction (\(k=5\)) & 116 & & 0.12 & 22\% & 46\% \\ unif\_gen1 & UEXP & wp + Latticed \(k\)-induction (\(k=2\)) & 61 & & 0.03 & 44\% & 46\% \\ unif\_gen2 & UEXP & wp + Latticed \(k\)-induction (\(k=3\)) & 82 & & 0.11 & 41\% & 53\% \\ unif\_gen3 & UEXP & wp + Latticed \(k\)-induction (\(k=3\)) & 82 & & 0.10 & 41\% & 53\% \\ unif\_gen4 & UEXP & wp + Latticed \(k\)-induction (\(k=5\)) & 124 & & 2.26 & 47\% & 49\% \\ zeroconf & UEXP & wp + Park induction & 43 & F1, F2 & 0.03 & 36\% & 49\% \\ ost & UEXP & wp + Optional Stopping Theorem & 93 & F2 & 0.07 & 33\% & 51\% \\ die & CEXP & conditional wp & 22 & F2 & 0.02 & 17\% & 63\% \\ 2drwalk & UERT & ert + Park induction & 224 & & 0.02 & 41\% & 44\% \\ bayesian\_network & UERT & ert + Park induction & 107 & & 0.02 & 45\% & 40\% \\ C4b\_t303 & UERT & ert + Latticed \(k\)-induction (\(k=3\)) & 73 & & 0.03 & 29\% & 58\% \\ condand & UERT & ert + Park induction & 24 & & 0.02 & 42\% & 42\% \\ fcall & UERT & ert + Park induction & 26 & & 0.02 & 52\% & 44\% \\ hyper & UERT & ert + Park induction & 31 & & 0.02 & 41\% & 44\% \\ linear01 & UERT & ert + Park induction & 23 & & 0.02 & 42\% & 43\% \\ prdwalk & UERT & ert + Park induction & 62 & & 0.02 & 56\% & 31\% \\ prspeed & UERT & ert + Park induction & 45 & & 0.02 & 41\% & 45\% \\ rdspeed & UERT & ert + Park induction & 48 & & 0.02 & 38\% & 47\% \\ rdwalk & UERT & ert + Park induction & 24 & & 0.02 & 42\% & 43\% \\ sprdwalk & UERT & ert + Park induction & 26 & & 0.02 & 42\% & 43\% \\ omega & LERT & ert + \(\omega\)-invariants & 33 & F2 & 0.02 & 42\% & 47\% \\ ast1 & AST & parametric super\_martingale rule & 67 & F2 & 0.06 & 33\% & 49\% \\ ast2 & AST & parametric super-martingale rule & 79 & F2 & 0.05 & 38\% & 50\% \\ ast3 & AST & parametric super-martingale rule & 65 & F1, F2 & 1.94 & 1\% & 99\% \\ ast4 & AST & parametric super-martingale rule & 55 & F2 & 0.05 & 33\% & 52\% \\ past & PAST & program analysis with martingales & 26 & F2 & 0.04 & 40\% & 46\% \\ \end{tabular} \end{table} Table 2. Benchmarks. 
Rows shaded in gray indicate HeyVL examples automatically generated from pGCL code with annotations using our frontend. Timeout (TO) was set to 10 seconds. Verification techniques correspond to those presented in Table 1. Lines of HeyVL code (LOC) are counted without comments. Features: user-defined uninterpreted functions (F1), multiple (co)procedures (F2), nested loops (F3). propositional dynamic logic for pGCL featuring reasoning about convergence of estimators. Their logic is not automated yet. _Fully automatic_ analyses of probabilistic programs are limited to specific properties, e.g. bounding expected runtimes or proving (positive) almost-sure termination (Abate et al., 2021; Avanzini et al., 2020; Batz et al., 2023; Chatterjee et al., 2016, 2017; Fioriti and Hermanns, 2015; Fu and Chatterjee, 2019; Leutgeb et al., 2022; Meyer et al., 2021; Moosbrugger et al., 2021; Ngo et al., 2018). We might also benefit from invariant synthesis approaches (Agrawal et al., 2018; Amrollahi et al., 2022; Bao et al., 2022; Barthe et al., 2016; Bartocci et al., 2020; Batz et al., 2023; Brakarov and Sankaranarayanan, 2013; Chen et al., 2015; Feng et al., 2017; Katoen et al., 2010; Susag et al., 2022). _Deductive Verification Infrastructures_. Boogie (Leino, 2008) and Why3 (Fillatre and Paskevich, 2013) are prominent examples of IVLs for non-probabilistic programs that lie at the foundation of various modern verifiers, such as Dafny(Leino, 2010) and Frama-C(Kirchner et al., 2015). Neither of these IVLs targets reasoning about expectations or upper bounds (aka necessary preconditions (Cousot et al., 2011)). For example, Boogie's statements are specific to verifying lower bounds on Boolean predicates. Evaluating whether our implementation could benefit from encoding HeyLo formulae into Why3 is interesting future work. ## 7. Conclusion and Future Work We have presented a verification infrastructure for probabilistic programs based on a novel quantitative intermediate verification language that aids researchers with prototyping and automating their proof rules. As future work, we plan to automate more rules and explore the relationship between our language, particularly its dual operators, and (partial) incorrectness logic (O'Hearn, 2020; Zhang and Kaminski, 2022). A further promising direction is to generalize our infrastructure for the verification of probabilistic pointer programs (Batz et al., 2022, 2019) and weighted programs (Batz et al., 2022). Furthermore, establishing a formal "ground truth" for our intermediate language HeyVL in terms of an operational semantics that assigns precise meaning to quantitative Hoare triples, which we admittedly introduced ad-hoc, is important future work. However, defining an operational semantics that yields a _pleasant forward-reading_ intuition for all statements in our intermediate language HeyVL appears non-trivial. In particular, we are unaware of a semantics for (co)assume statements that is independent of the semantics of the remaining program. We believe that stochastic games might be an adequate formalism but the details have not been worked out yet. ## Data-Availability Statement The tool Caesar, our prototypical front-end for pGCL programs, as well as our benchmarks that we submitted for the artifact evaluation are available (Schroer et al., 2023). We also develop our tools as open-source software at [https://github.com/moves-rwth/caesar](https://github.com/moves-rwth/caesar). 
## Acknowledgments This work was partially supported by the Digital Research Centre Denmark (DIREC), the ERC Advanced Research Grant FRAPPANT (grant no. 787914), and the 2022 WhatsApp Privacy Aware Program Analysis Research Award.
2309.09974
Emergence of chaotic cluster synchronization in heterogeneous networks
Many real-world complex systems rely on cluster synchronization to function properly. A cluster of nodes exhibits synchronous behavior while others behave erratically. Predicting the emergence of these clusters and understanding the mechanism behind their structure and variation in response to parameter change is a daunting task in networks that lack symmetry. We unravel the mechanism for the emergence of cluster synchronization in heterogeneous random networks. We develop a heterogeneous mean field approximation together with a self-consistent theory to determine the onset and stability of the cluster. Our analysis shows that cluster synchronization occurs in a wide variety of heterogeneous networks, node dynamics, and coupling functions. The results could lead to a new understanding of the dynamical behavior of networks ranging from neural to social.
Rodrigo M. Corder, Zheng Bian, Tiago Pereira, Antonio Montalban
2023-09-18T17:54:43Z
http://arxiv.org/abs/2309.09974v1
# Emergence of chaotic cluster synchronization in heterogeneous networks ###### Abstract Many real-world complex systems rely on cluster synchronization to function properly. A cluster of nodes exhibits synchronous behavior while others behave erratically. Predicting the emergence of these clusters and understanding the mechanism behind their structure and variation in response to parameter change is a daunting task in networks that lack symmetry. We unravel the mechanism for the emergence of cluster synchronization in heterogeneous random networks. We develop a heterogeneous mean field approximation together with a self-consistent theory to determine the onset and stability of the cluster. Our analysis shows that cluster synchronization occurs in a wide variety of heterogeneous networks, node dynamics, and coupling functions. The results could lead to a new understanding of the dynamical behavior of networks ranging from neural to social. pacs: 05.45.Xt, 89.75.Hc, 05.45.Ac **Synchronization is an important phenomenon in networks impacting communications, biology, chemistry, and physics. Extensive studies have addressed the onset of global synchronization and its relation to the interaction structure of the network and the node dynamics. Recent work reveals that cluster synchronization, where network interactions drive certain units to behave in unison while others exhibit erratic patterns, promotes health and coherence in real-world scenarios. While symmetry-induced cluster synchronization is characterized, its onset for networks that lack symmetry remains elusive. Our work unveils the phenomenon of chaotic units achieving sustained and stable cluster synchronization within heterogeneous networks. The initiation by hubs, followed by their desynchronization, leads to a stable cluster of moderately connected nodes. As coupling strengthens, nodes join or depart the cluster according to their connectivity degree. We introduce a novel heterogeneous mean-field approach and a self-consistent theory predicting cluster membership, stability, and formation mechanisms.** Synchronization in complex networks is key for the proper functioning of various real-world complex systems ranging from communication [1], via biology [2] to chemistry [3; 4]. Nodes of the network adapt their dynamical behavior because of the network interaction to move in unison. While _global_ synchronization, where all units of the system behave in unison, has been deeply studied [5; 6; 7; 8; 9], this behavior is often related to pathologies such as Parkinson [10] and epilepsy [11]. In fact, most real-world systems rely on _cluster_ synchronization for their functioning. In this case, some units exhibit synchronous behavior while others behave erratically. Examples include multi-robot systems carrying out parallel tasks [12] or neural systems where cluster synchronization is associated with the healthy state of the individual [13]. When such cluster synchronization results from a graph symmetry, recent progress allows one to determine the onset, membership, and stability of the clusters of synchronized nodes [14; 15; 16]. However, synchronized clusters are prominent in networks such as neuron networks with connectivity structures that lack symmetries [17; 18]. Indeed, certain phase models on random heterogeneous networks exhibit a degree-dependent cluster formation where hubs serve as an engine for cluster formation [19]. 
As the coupling increases, other nodes join the synchronized cluster, leading to a giant synchronized cluster approaching global synchrony [20; 21]. Interestingly, for general models in heterogeneous networks, global synchronization is unstable [22]. That is, for large coupling strengths, hubs lose their synchrony, and other nodes can display synchronization by forming their own cluster. All this can happen while the network behaves erratically, far from any global synchronization. Surprisingly, such cluster formation in random networks remains undisclosed. Here, we uncover how chaotic units coupled on heterogeneous networks display sustained and stable cluster synchronization. While in synchronous motion, the cluster remains enslaved by the chaotic dynamics of the mean field. As the coupling strength increases, nodes can join and leave the cluster depending on their degree. We develop a heterogeneous mean field approach and a self-consistent theory capable of predicting which nodes belong to the cluster and its stability and shed light on the cluster formation mechanisms. _Dynamics in a heterogeneous random network._ Consider an undirected network \(G\) on \(N\) nodes defined by the adjacency matrix \(A=\{A_{pq}\}\), where \(A_{pq}=1\) if nodes \(p\) and \(q\) are connected and \(A_{pq}=0\) otherwise. Each node \(p\) supports isolated dynamics \(f(z)=2z\mod 1\). The state \(z_{p}^{t}\) of node \(p\) at time \(t\) evolves by the discrete-time Kuramoto model: \[z_{p}^{t+1}=f(z_{p}^{t})+\frac{\alpha}{C}\sum_{q=1}^{N}A_{pq}\sin[2\pi\left(z_ {q}^{t}-z_{p}^{t}\right)]\mod 1, \tag{1}\] where \(\alpha\) is the coupling strength, and the network mean degree \(C\) normalizes the total interaction. Throughout the text, we focus on this particular case of the isolated dynamics \(f\) that is chaotic and stochastically stable [23]. This means that under a small coupling, no cluster can be formed. In Appendix A, we discuss the general isolated dynamics \(f\). _Heterogeneous random networks._ We construct the network \(G\) from Chung-Lu [24] random graph model \(G(w)\) with expected degree sequence \(w=(w_{1},\cdots,w_{N})\). For \(p>q\), each \(A_{pq}\) is a Bernoulli variable mutually independent with success probability \(w_{p}w_{q}/\sum_{k=1}^{N}w_{k}\). It can be shown that when \(N\) is large, the actual degree sequence of \(G\) is concentrated around \(w\). Concretely in our numerical experiments, we prescribe \(w\) with \(N=5\times 10^{5}\) to follow an inverse Gamma distribution \(\mathrm{Inv}\Gamma(2,C)\). In our realization, the mean degree is \(C=300\), and the maximal degree of the hub is \(11977\). The tail of the distribution is a power law with exponent \(3\). _Order parameter._ To study the cluster formation, we introduce the ensemble order parameter taking into account the different roles played by each node in the network \[re^{i2\pi\theta}:=\frac{1}{\mathrm{Vol}(G)}\sum_{p=1}^{N}\sum_{q=1}^{N}A_{pq}e^ {i2\pi z_{q}}, \tag{2}\] where \(\mathrm{Vol}(S):=\sum_{q\in S}w_{q}\) denotes the volume of a subgraph in \(G\); in particular, \(\mathrm{Vol}(G)=CN\). Since the network is heterogeneous, the ensemble order parameter is suitable as it weighs the contribution of the nodes according to their degrees. When \(r\) equals \(1\), the whole network is perfectly synchronized. In heterogeneous networks, global synchronization is unstable [22]. Thus, cluster synchronization is the only possible, stable collective dynamics that provide a nonzero value for the amplitude \(r\) of the order parameter. 
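A scaled-down version of this setup can be simulated directly; the sketch below draws a Chung-Lu network from \(\mathrm{Inv}\Gamma(2,C)\) expected degrees, iterates Eq. (1) with the doubling map, and evaluates the ensemble order parameter of Eq. (2). The values of \(N\), \(C\), \(\alpha\) and the number of iterates are reduced purely for illustration; the experiments reported here use \(N=5\times 10^{5}\) and \(C=300\).

```python
# Small-scale simulation sketch of Eqs. (1)-(2) on a Chung-Lu network with
# InvGamma(2, C) expected degrees (parameters reduced for illustration).
import numpy as np

rng = np.random.default_rng(0)
N, C, alpha, T = 1000, 20.0, 1.0, 500

w = C / rng.gamma(shape=2.0, scale=1.0, size=N)       # w ~ InvGamma(2, C), mean C
P = np.minimum(np.outer(w, w) / w.sum(), 1.0)         # Chung-Lu edge probabilities
A = np.triu(rng.random((N, N)) < P, k=1)
A = (A | A.T).astype(float)                           # undirected adjacency matrix

z = rng.random(N)                                     # uniform initial conditions
for _ in range(T):
    coupling = (A * np.sin(2 * np.pi * (z[None, :] - z[:, None]))).sum(axis=1)
    z = (2 * z + (alpha / C) * coupling) % 1.0        # Eq. (1) with f(z) = 2z mod 1

field = (A @ np.exp(2j * np.pi * z)).sum() / (C * N)  # Eq. (2): r * exp(i 2 pi theta)
print("order parameter amplitude r =", abs(field))
```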
_Cluster formation._ Starting from uniformly distributed initial conditions, we probe couplings \(\alpha\) between \(0\) and \(1.2\) by iterating the network dynamics until reaching the stationary configuration of cluster synchrony, where the ensemble amplitude \(r\) becomes stationary while the phase \(\theta\) evolves in time. Fig. 1 presents three snapshots of the stationary configuration of cluster synchrony at \(\alpha=0.1,0.5\) and \(1.0\). We plot in the horizontal axis the relative connectivity layer \(w_{p}/C\), that is, gathering all nodes \(p\) with the same relative degree \(w_{p}/C\), and in the vertical axis the relative state \(z_{p}-\theta\). For \(\alpha=0.1\), we observe a nearly uniform distribution. At \(\alpha=0.5\), a group of nodes behaves in unison, synchronizing with \(\theta\). The bright colors from the heat map indicate the concentration of points. This behavior persists for large values of \(\alpha\) such as \(\alpha=1\). As the \(\alpha\) increases, high degree nodes desynchronize, and other nodes with smaller degrees join the cluster. In Fig. 1, the bright colors shift towards the lower connectivity layers as \(\alpha\) increases from \(0.5\) to \(1\). Other layers tend to follow the cluster but not too sharply, as can be observed in Fig. 1 by the spreading of the states in particular layers above the cluster. For a more detailed discussion of the other layers, see Appendix B. _Cluster synchronization driven by mean field phase \(\theta\)._ For coupling strengths \(\alpha\) that admit cluster synchrony, Fig. 1 suggests that the cluster dynamics synchronize to \(\theta\), which, interestingly, has an erratic behavior as shown in Fig. 2A. To see the time-evolution towards this synchronization, we restrict the analysis to this cluster of nodes and denote it by \(S_{\theta}\). For fixed \(\alpha=1\), starting from a uniformly distributed initial network state, we plot the histogram of the states for the nodes in the synchrony cluster \(S_{\theta}\). Fig. 2B shows a vertical histogram with increasing concentration near zero, indicated by thickness and bright color. Nodes in the cluster spontaneously come into synchrony towards \(\theta\). _A heterogeneous mean-field approach._ To analyze these findings we develop a theoretical approach capable of explaining the cluster formation and the enslavement of the cluster dynamics to a chaotic motion. Informed by the stationary cluster synchrony configuration, we use the ansatz that _there is a sustained cluster \(S_{\theta}\) synchronizing to the global field phase \(\theta\) at coupling \(\alpha\), while the other nodes spread uniformly._ It remains to determine which nodes belong to \(S_{\theta}\), establish its stability, and analyze the dynamics of the mean field phase \(\theta\). Already from the ansatz, we claim that the order parameter amplitude \(r\) is stationary. Indeed, since \(re^{i2\pi\theta}=(\sum_{q=1}^{N}d_{q}e^{i2\pi z_{q}})/\mathrm{Vol}(G)\), where \(d_{q}\) is the actual degree of node \(q\), and nodes that do not belong to \(S_{\theta}\) provide a negligible contribution to the ensemble order parameter we obtain \(r=(\sum_{q\in S_{\theta}}d_{q})/\mathrm{Vol}(G)\). Here, we used that nodes in \(S_{\theta}\) satisfy \(z_{p}=\theta\). By concentration properties of the network, the actual degrees \(d_{q}\) are asymptotically almost surely approximated by the ensemble average \(\mathbb{E}d_{q}=w_{q}\), therefore \[r=\frac{\mathrm{Vol}(S_{\theta})}{\mathrm{Vol}(G)}. 
\tag{3}\] To determine which nodes belong to \(S_{\theta}\), our first step is to write the network equations in terms of the ensemble order parameter. We define the local field at each node \(p\) to be \[r_{p}e^{i2\pi\theta_{p}}:=\sum_{q=1}^{N}A_{pq}e^{i2\pi z_{q}}.\] Figure 1: **Emergence of spontaneous cluster synchrony**. We show three snapshots of the relative states \((z_{p}-\theta)\) at coupling strengths \(\alpha=0.1\), \(0.5\), and \(1.0\) for a fixed network realization. The vertical axis represents the relative position on the circle \(z_{p}-\theta\) with respect to global field phase \(\theta\), and the horizontal axis the relative connectivity layer \(w_{p}/C\). The bright colors emphasize synchrony to \(\theta\). For weak coupling strength \(\alpha=0.1\), the network dynamics do not admit cluster synchrony. As the coupling strength increases, cluster synchrony emerges at \(\alpha=0.5\); furthermore, at \(\alpha=1.0\), the cluster transforms as new nodes join while others leave. Multiplying both sides by \(e^{-i2\pi z_{p}}\) and comparing the imaginary parts, we can write the network dynamics as \[z_{p}^{t+1}=f(z_{p}^{t})+\frac{\alpha}{C}r_{p}^{t}\sin[2\pi(\theta_{p}^{t}-z_{p}^ {t})]. \tag{4}\] The cluster synchrony ansatz implies that \(\theta_{p}=\theta\), and \(r_{p}=\sum_{q\in S_{\theta}}A_{pq}\) equals the number of neighbors of node \(p\) that belong to the cluster \(S_{\theta}\). Again, by the network concentration properties, this number can be approximated by its ensemble average \(\sum_{q\in S_{\theta}}\mathrm{E}A_{pq}=\sum_{q\in S_{\theta}}\frac{w_{p}w_{q }}{\mathrm{Vol}(G)}=w_{p}\frac{\mathrm{Vol}(S_{\theta})}{\mathrm{Vol}(G)}\), and we obtain the heterogeneous mean field approximation: \[r_{p}=w_{p}r.\] Notice that such approximation relies solely on the cluster synchrony ansatz and the concentration properties of our random network. Plugging it into Eq. (4) yields \[z_{p}^{t+1}=f(z_{p}^{t})+\frac{\alpha}{C}w_{p}r^{t}\sin[2\pi(\theta^{t}-z_{p}^ {t})]. \tag{5}\] To obtain the \(\theta\)-dynamics, consider any cluster node \(p\in S_{\theta}\) so that \(z_{p}=\theta\) and hence the interaction term in Eq. (5) vanishes, resulting in that \(z_{p}\) and hence \(\theta\) evolve by the isolated map. This explains the chaotic behavior of the mean field phase \(\theta\). Next, we determine the stability of \(S_{\theta}\) and estimate its size via a self-consistent theory. _Stability of the synchronous motion of the cluster \(S_{\theta}\)_. We established the stationarity of ensemble amplitude \(r\). For now, let us assume its value \(r>0\) is known and determine the cluster \(S_{\theta}\) in terms of \(r\) by studying the displacement \(s_{q}^{t}=z_{q}^{t}-\theta^{t}\). Because of the particular form of \(f\) and the heterogeneous mean field approximation we obtain \[s_{q}^{t+1}=f(s_{q}^{t})-\frac{\alpha}{C}w_{q}r\sin(2\pi s_{q}).\] The above map can be parametrized as \(f_{\beta}(s)=f(s)-\beta\sin(2\pi s)\) and inserting the appropriate value of \(\beta\) we recover the map for \(s_{q}\). Observe that \(s=0\) is always a fixed point of \(f_{\beta}\) for any parameter \(\beta\). To check its attractivity, we compute the linearization of \(f_{\beta}\) and obtain \(f_{\beta}^{\prime}(0)=2-\beta 2\pi\). Thus, the fixed point \(s=0\) attracts when \(\beta\in(1/2\pi,3/2\pi)\). 
We therefore obtain the condition for the synchrony cluster \(\frac{\alpha}{C}w_{q}r\in(1/2\pi,3/2\pi)\), that is, \(z_{q}\) synchronizes to \(\theta\) whenever \[w_{q}\in\left(\frac{C}{\alpha r2\pi},\frac{3C}{\alpha r2\pi}\right).\] This determines \(S_{\theta}\) in terms of the amplitude of the ensemble order parameter \(r\). For a bifurcation analysis of \(f_{\beta}\) and its relation to the finer cluster structures, see Appendix B. Figure 2: **Time-evolution of the spontaneous emergence of cluster synchrony**. In panel A at \(\alpha=1\), the time series of the mean field phase \(\theta\) reveals a chaotic dynamics. In panel B, for the same \(\alpha\), the histograms show the time-evolution of the relative states \((z_{p}^{t}-\theta^{t})\) of nodes in the cluster \(S_{\theta}\) across time \(t\). Concentration near zero indicates that nodes in the cluster spontaneously synchronize with \(\theta\). _Self-consistent equation for \(r\)_. Now we turn to the ensemble order parameter amplitude \(r\), which is key to predicting the nodes that belong to the cluster \(S_{\theta}\). We aim to determine the value of \(r\) as a function of coupling strength \(\alpha\), and will be particularly interested in its bifurcation from zero to positive values. Using Eq. (3), we can write \(r=\frac{1}{C}\int_{S_{\theta}}w\delta(w)\mathrm{d}w\), where \(\delta(w)\) is the probability density function for the degree distribution of the network \(G\). We notice that \(S_{\theta}\) depends on \(r\), so the value of \(r\) must satisfy a self-consistent relation. Consider \[r=R_{\alpha}(r):=\frac{1}{C}\int_{C/\alpha r2\pi}^{3C/\alpha r2\pi}w\cdot\delta(w)\mathrm{d}w. \tag{6}\] In fact, \(R_{\alpha}(r)\) defined above is a first approximation of \(r\) that accounts only for the contribution of the cluster \(S_{\theta}\). Further approximations can be constructed by considering nodes locking phase with \(S_{\theta}\) and layers that are not uniformly distributed. For a more detailed derivation and discussion of the self-consistent equation, see Appendix C. We will determine \(r\) by finding a fixed point of the map \(R_{\alpha}\). In our case, the degrees follow an inverse gamma distribution \(\mathrm{Inv}\Gamma(2,C)\), and evaluating the integral we obtain \[R_{\alpha}(r)=e^{-2\pi\alpha r}[e^{(4/3)\pi\alpha r}-1].\] Note \(r_{0}=0\) is always a fixed point of \(R_{\alpha}(r)\) for any \(\alpha\) and \(R_{\alpha}^{\prime}(0)=\frac{4\pi\alpha}{3}\). Through a bifurcation analysis for \(R_{\alpha}\) (see Appendix C for details), we identify three parameter regimes. (i) For \(\alpha\in(0,\frac{3}{4\pi})\), \(r_{0}=0\) is attractive and there is no cluster synchrony. (ii) At \(\alpha=\frac{3}{4\pi}\), the fixed point \(r_{0}=0\) loses stability and gives rise to a new attractive fixed point \(r>0\). Cluster synchrony emerges among the layers of degree between \((C/2\pi\alpha r,3C/2\pi\alpha r)\). (iii) As \(\alpha\) increases beyond a threshold \(\alpha_{*}\approx 2.1\), \(r\) loses stability and bifurcates into an attractive period-2 orbit and further through a period-doubling cascade. To pinpoint the emergence of cluster synchrony by the bifurcation into \(r>0\), in Fig. 3 the solid line is the theoretically predicted \(r\) found as the attractive fixed point of \(R_{\alpha}\). The actual formula used is slightly different from Eq. (6) to compensate for the discrepancy due to the simplifying assumption that nodes outside the cluster are distributed uniformly; for more details, see Appendix C.
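The construction of the solid line can be reproduced with a few lines of code: iterate \(R_{\alpha}\) from a positive seed until it settles on its attractive fixed point, and compare against the predicted onset \(\alpha=3/(4\pi)\). The sketch below uses the closed form of \(R_{\alpha}\) derived above; the sampled coupling strengths are illustrative.

```python
# Fixed points of R_alpha(r) = exp(-2*pi*alpha*r) * (exp((4/3)*pi*alpha*r) - 1):
# the iteration settles at r ~ 0 below the onset alpha = 3/(4*pi) and at a
# positive value above it (a sketch with illustrative alpha values).
import numpy as np

def R(alpha, r):
    return np.exp(-2 * np.pi * alpha * r) * (np.exp((4 / 3) * np.pi * alpha * r) - 1)

def fixed_point(alpha, r0=0.5, iters=2000):
    r = r0
    for _ in range(iters):
        r = R(alpha, r)
    return r

print("predicted onset 3/(4*pi) =", 3 / (4 * np.pi))
for alpha in [0.1, 0.2, 0.3, 0.5, 1.0]:
    print(f"alpha = {alpha:4.2f}  ->  r* = {fixed_point(alpha):.4f}")
```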
The dots are the empirically calculated \(r\) obtained by simulating the large heterogeneous network dynamics on \(G\). We probe each \(\alpha\) by forcing the network's initial condition into synchrony; for more details, see Appendix D. We discard the first 2000 iterates as transient and collect the network states for the next 2000 iterates to compute the empirical \(r\) according to Eq. (2) at each iterate and finally output the average value. Stationarity is confirmed by small standard deviations \(<0.025\) around the average. As mentioned earlier, we presented the case of the Bernoulli map \(f\) as isolated dynamics and the Kuramoto coupling for the sake of simplicity. Both the heterogeneous mean field and the self-consistent approach generalize to further isolated dynamics, coupling functions, and degree distributions. We provide these details in the Appendix A. Our theory relies on the stationarity of ensemble amplitude \(r\), which can break down for large coupling. The analysis of such cluster synchrony with non-stationary \(r\) requires the development of a nonautonomous driving in the heterogeneous mean field approximation. If the network is small, finite-size effects in the heterogeneous mean field approximation can produce noise-induced phenomena. For instance, in a homogeneous network, where Ott-Antonson ansatz applies, it was found that finite-size effects can induce synchronization [25] or delay synchronization [26]. _In conclusion_, we have observed the spontaneous emergence of cluster synchronization towards an enslaving chaotic motion in a general class of systems, where the network is heterogeneous, the isolated maps chaotic, and the coupling function diffusive. In contrast to previous studies on cluster synchronization where an increasing number of nodes join the cluster for strong coupling, in our case of chaotic cluster synchronization, as the coupling increases, new nodes can join the cluster while certain nodes leave. We developed a heterogeneous mean-field approximation of the network effect on each connectivity layer and a self-consistent theory for the ensemble mean-field amplitude \(r\). Our theory explains the emergence of cluster synchrony at the bifurcation of \(r\) from zero into positive values. The prediction from our analysis is in excellent agreement with the empirically simulated \(r\) from network dynamics. Our results could lead to a deeper understanding of collective dynamics in real-world networks with a heterogeneous topology that lacks symmetry. We thank Edmilson Roque, Jeroen Lamb, Narcicegi Kiran, Thomas Peron, Serhiy Yanchuk for enlightening discussions. ZB and TP acknowledge support by FAPESP grants 2018/26107-0 and 2013/07375-0, Serrapilheira Institute (Grant No.Serra-1709-16124) and Newton Advanced Fellow of the Royal Society NAF\(\backslash\)R1\(\backslash\)180236). TP thanks the Humboldt Foundation via the Bessel Fellowship. ## Appendix A General chaotic cluster synchronization in heterogeneous networks We generalize the network dynamics into \[z_{p}^{t+1}=f(z_{p}^{t})+\frac{\alpha}{C}\sum_{q=1}^{N}A_{pq}\phi(z_{q}^{t}-z_{ p}^{t}), \tag{1}\] where the node dynamics \(f\) is chaotic and preserves the Lebesgue measure on [0,1). We can also analyze maps that preserve other measures but in this case the order parameter needs to be adapted; otherwise, we would obtain nonzero values of the order parameters even when the maps are uncoupled. 
The network \(G\) has a heterogeneous degree distribution, and the coupling is diffusive in the sense that \(\phi(0)=0\) and \(\mathrm{D}\phi(0)\neq 0\). In this case, it is known that for large networks [22] global synchrony is unstable for any \(\alpha\neq 0\), as long as there is a nontrivial heterogeneity in the network \(G\). We turn to cluster synchrony and use the ansatz that _there is a sustained cluster \(S_{\theta}\) synchronizing to the global field angle \(\theta\) at coupling \(\alpha\), while the other nodes spread uniformly._ ### Heterogeneous Mean Field To compute the ensemble order parameter amplitude \(r\), we split it into contributions from different connectivity layers: \[r=\frac{1}{CN}\sum_{\mathrm{layers}\ w}w\cdot\sum_{\mathrm{nodes}\ q:\ w_{q}=w}e^{i2\pi(z_{q}-\theta)}.\] In the thermodynamic limit, (i) there are infinitely many layers \(w\) following the degree distribution pdf \(\delta(w)\), (ii) each layer \(w\) has infinitely many nodes that distribute themselves Figure 3: **Emergence of cluster synchrony predicted by a self-consistent theory.** The heterogeneous mean-field approach and self-consistent theory lead to the definition of \(R_{\alpha}(r)\) in Eq. (6). Its bifurcation into a nonzero attractive fixed point \(r>0\) pinpoints the emergence of cluster synchrony. The dots are empirically calculated values of the order parameter. The solid line shows the theoretic prediction of \(r\) as the attractive fixed point of the self-consistent map \(R_{\alpha}\), predicting the values of \(r\) and the group of nodes in the cluster. on the circle according to some measure \(\mu_{w}\). Then, we have \[r= \frac{1}{C}\sum_{\text{layers }w}w\cdot\frac{1}{N}\sum_{\text{nodes }q:\ w_{q}=w}e^{i2\pi(z_{q}-\theta)}\] \[\approx \frac{1}{C}\int_{0}^{\infty}w\delta(w)\cdot\int_{[0,1)}\mathrm{d }\mu_{w}(z)e^{i2\pi(z-\theta)}.\] Generally the measure \(\mu_{w}\) can be wild, the corresponding layer may contribute a nontrivial value \[\rho_{w}e^{i2\pi\psi_{w}}:=\int_{[0,1)}\mathrm{d}\mu_{w}(z)e^{i2\pi z},\] and these contributions may aggregate to a complicated \(\theta\)-dependent expression \[r=\frac{1}{C}\int_{0}^{\infty}w\delta(w)\rho_{w}e^{i2\pi(\psi_{w}-\theta)}\] However, under the sustained cluster synchrony ansatz, the layers \(w\) that lie outside the cluster \(S_{\theta}\) spread uniformly so that \(\mu_{w}=\mathrm{Leb}\) for most layers and hence \[\int_{[0,1)}\mathrm{d}\mathrm{Leb}(z)e^{i2\pi(z-\theta)}=0,\quad\text{for any }\theta.\] The heuristics of this reasoning is that the hub layer contains only a few nodes and does not contribute much to the order parameter. On the other hand, the cluster is formed around nodes in the connectivity layers near \(C\). The layers with connectivity degree less than \(C\) have \(w_{q}/C\ll 1\) and thus behave almost independently of the network. In fact, since the isolated dynamics is stochastically stable, they will distribute almost like Lebesgue. In other words, the non-cluster layers do not contribute to the ensemble order parameter amplitude \(r\). 
On the other hand, the layers \(w\) that lie inside the cluster \(S_{\theta}\) synchronize to \(\theta\) so that \(\mu_{w}=\delta_{\theta}\) and hence \[r \approx \frac{1}{C}\int_{\text{layers }w\text{ in }S_{\theta}}w\delta(w)\int_{[0,1)} \mathrm{d}\delta_{\theta}(z)e^{i2\pi(z-\theta)} \tag{22}\] \[= \frac{1}{C}\int_{\text{layers }w\text{ in }S_{\theta}}w\delta(w)\] (23) \[= \frac{\mathrm{Vol}(S_{\theta})}{\mathrm{Vol}(G)} \tag{24}\] Recall for node \(p\) the local field is defined to be \[r_{p}e^{i2\pi\theta_{p}} := \sum_{q=1}^{N}A_{pq}e^{i2\pi z_{q}} \tag{25}\] \[\approx \sum_{q\in S_{\theta}}A_{pq}e^{i2\pi z_{q}}\] (26) \[= N(p,S_{\theta})e^{i2\pi\theta}, \tag{27}\] where \(N(p,S_{\theta})\) is the number of neighbors of node \(p\) that belong to cluster \(S_{\theta}\). By concentration of the random graph, we may approximate this number by the ensemble average \[N(p,S_{\theta})\approx \mathbb{E}[N(p,S_{\theta})]=\sum_{q\in S_{\theta}}\mathbb{E}A_{pq}\] \[= \sum_{q\in S_{\theta}}\frac{w_{p}w_{q}}{CN}=w_{p}\frac{\mathrm{ Vol}(S_{\theta})}{CN}\] \[= w_{p}r.\] We have deduced \[r_{p}e^{i2\pi\theta_{p}}=w_{p}re^{i2\pi\theta}.\] Similarly we have \[\sum_{q\in S_{\theta}}A_{pq}e^{i2\pi kz_{q}}=w_{p}re^{i2\pi k\theta},\quad p \in\{1,\cdots,N\},\ k\in\mathbb{Z}.\] With Fourier expansion \(\phi(x)=\sum_{k\in\mathbb{Z}}a_{k}e^{i2\pi kx}\), we compute \[z_{p}^{t+1}= f(z_{p}^{t})+\frac{\alpha}{C}\sum_{q=1}^{N}A_{pq}\phi(z_{q}^{t}-z _{p}^{t})\] \[= f(z_{p}^{t})+\frac{\alpha}{C}\sum_{k\in\mathbb{Z}}a_{k}e^{-i2 \pi kz_{p}^{t}}\sum_{q=1}^{N}A_{pq}e^{i2\pi kz_{q}^{t}}\] \[= f(z_{p}^{t})+\frac{\alpha}{C}\sum_{k\in\mathbb{Z}}a_{k}e^{-i2 \pi kz_{p}^{t}}\sum_{q\in S_{\theta}}A_{pq}e^{i2\pi k\theta^{t}}\] \[= f(z_{p}^{t})+\frac{\alpha}{C}\sum_{k\in\mathbb{Z}}a_{k}e^{-i2 \pi kz_{p}^{t}}w_{p}re^{i2\pi k\theta^{t}}\] \[= f(z_{p}^{t})+\frac{\alpha}{C}w_{p}r\phi(\theta^{t}-z_{p}^{t}).\] ### Master Stability Function To determine stability of the cluster \(S_{\theta}\), we analyze the dynamics of small perturbations about the synchronous motion \(s=z-\theta\) with respect to the global field angle. Consider \[s_{p}^{t+1}=z_{p}^{t+1}-\theta^{t+1}=f(z_{p}^{t})+\frac{\alpha}{C}w_{p}r\phi( \theta^{t}-z_{p}^{t})-f(\theta^{t}).\] Generally we cannot find a synchrony map like \(f_{\beta}\) which evolves \(s^{t+1}=f_{\beta}(s^{t})\) as in the linear case \(f(x)=2x\mod 1\). We linearize at \(s_{p}=0\) to obtain \[s_{p}^{t+1}=(\mathrm{D}f(\theta^{t})-\beta_{p}\mathrm{D}\phi(0))s_{p},\quad \beta_{p}:=\frac{\alpha}{C}w_{p}r,\] The stability of \(s_{p}=0\) translates to the stability of the cluster. To determine the stability of the trivial solution, we consider the so-called Master Stability Function (\(\mathrm{MSF}\)) mapping effective coupling strength \(\beta\) to the largest Lyapunov exponent of the multiplier cocycle \(\mathrm{D}f(\theta^{t})-\beta_{p}\mathrm{D}\phi(0)\). We have thus identified the synchrony condition for node \(p\) to be \[p\in S_{\theta}\iff\mathrm{MSF}\left(\frac{\alpha}{C}w_{p}r\right)<0.\] ### Generalized self-consistent equation Similar to the linear case \(f(x)=2x\mod 1\), the self-consistent equation for \(r\) reads \[R_{\alpha}(r)=\frac{1}{C}\int_{\{w:\mathrm{MSF}(\alpha wr/C)<0\}}w\delta(w) \mathrm{d}w,\] where \(\delta(w)\) is the degree distribution density function. 
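The synchrony criterion above can also be checked numerically when the Lyapunov exponent of the multiplier cocycle is not available in closed form. The sketch below is illustrative only; the choice of map, of \(\mathrm{D}\phi(0)\), and the tiny noise term (added merely to avoid the finite-precision collapse of piecewise-linear orbits) are our assumptions.

```python
import numpy as np

# Hedged numerical sketch: estimate MSF(beta) as the Lyapunov exponent of the scalar
# cocycle Df(theta^t) - beta * Dphi(0) along an orbit theta^{t+1} = f(theta^t).
# Node p is then predicted to synchronize when MSF(alpha * w_p * r / C) < 0.

def msf(beta, f, Df, Dphi0, n_iter=50_000, seed=1):
    rng = np.random.default_rng(seed)
    theta, acc = rng.random(), 0.0
    for _ in range(n_iter):
        acc += np.log(abs(Df(theta) - beta * Dphi0))
        # a tiny kick avoids the finite-precision collapse of piecewise-linear orbits
        theta = (f(theta) + 1e-12 * rng.random()) % 1.0
    return acc / n_iter

# For the doubling map f(x) = 2x mod 1 (Df = 2) with Dphi(0) = 2*pi, the estimate
# reduces to log|2 - 2*pi*beta|, negative exactly on (1/(2*pi), 3/(2*pi)) as in Appendix B.
doubling, Ddoubling = (lambda x: (2.0 * x) % 1.0), (lambda x: 2.0)
assert msf(0.25, doubling, Ddoubling, Dphi0=2.0 * np.pi) < 0.0   # 0.25 lies in the window
```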
### Further examples: tent map As an illustration, consider the tent map as node dynamics \[f(x):=\begin{cases}2x,&x\in[0,1/2];\\ 2(1-x),&x\in[1/2,1],\end{cases}\] and \(\phi(x):=\sin(x)\) coupled on a random graph on \(N=5\times 10^{4}\) nodes with inverse gamma \(\mathrm{Inv}\Gamma(2,C)\) degree distribution. This gives \[\mathrm{D}f(\theta^{t})-\beta\mathrm{D}\phi(0)= 2(\mathbf{1}_{(0,1/2)}-\mathbf{1}_{(1/2,1)})(\theta^{t})-\beta,\] where \(\mathbf{1}_{I}\) is the indicator function of the interval \(I\). It can be observed that the interval \((\beta^{-},\beta^{+})=(1/2\pi,3/2\pi)\) is the stability region for effective coupling strengths. We identify the synchronous layers \[w=\frac{\beta C}{\alpha r}\in\left(\frac{\beta^{-}C}{\alpha r},\frac{\beta^{+ }C}{\alpha r}\right).\] Thus, we obtain the self-consistent equation for cluster synchronization in network system with tent map as node dynamics: \[R_{\alpha}(r)= \frac{1}{C}\int_{\beta^{-}C/\alpha r}^{\beta^{+}C/\alpha r}w\cdot C ^{2}w^{-3}e^{-C/w}\mathrm{d}w\] \[= e^{-\alpha r/\beta^{+}}-e^{-\alpha r/\beta^{-}}.\] ## Appendix B Synchrony map Recall that the synchrony map \[f_{\beta}(s)=2s-\beta\sin(2\pi s)\mod 1\] governs the evolution of the relative state \(s=z-\theta\) with respect to the global mean field angle \(\theta\), where the effective coupling strength \(\beta=\frac{\alpha}{C}wr\) depends on the network coupling strength \(\alpha\), mean degree \(C\), degree \(w\) of the node and the stationary global field amplitude \(r\) at \(\alpha\). By differentiating \(f_{\beta}\) with respect to \(s\), we have \[\left.\frac{\mathrm{d}f_{\beta}}{\mathrm{d}s}\right|_{s=0}=2-\beta 2\pi\cos(2\pi s)|_{s=0}=2-\beta 2\pi.\] This gives the lower bound \(\beta_{1}=1/2\pi\) and upper bound \(\beta_{2}=3/2\pi\) for the stability region, that is, the fixed point \(s=0\) attracts when the effective coupling strength is tuned to be \(\beta\in(\beta_{1},\beta_{2})\), and the attraction is strongest with derivative zero when the effective coupling strength is \(\beta_{0}=1/\pi\). Through a more detailed bifurcation analysis, four parameter regimes can be observed: 1. for small \(\beta\in[0,\beta_{1}]\), the fixed point \(s=0\) is not attractive. The corresponding layers \(w=\frac{\beta C}{\alpha r}\in[0,\frac{\beta_{1}C}{\alpha r}]\) are not enslaved to the motion of global field angle \(\theta\); 2. for \(\beta\in(\beta_{1},\beta_{2})\), the fixed point \(s=0\) attracts exponentially. Provided that the stationary \(r>0\), the corresponding layers \(w=\frac{\beta C}{\alpha r}\in(\frac{\beta_{1}C}{\alpha r},\frac{\beta_{2}C}{ \alpha r})\) synchronize by virtue of enslavement towards the global field angle \(\theta\); 3. for larger \(\beta\in[\beta_{2},\beta_{3}]\) where \(\beta_{3}\approx 0.6\), the fixed point \(s=0\) loses stability and gives rise to an attractive period-2 orbit around it. The corresponding layers \(w=\frac{\beta C}{\alpha r}\in(\frac{\beta_{2}C}{\alpha r},\frac{\beta_{3}C}{ \alpha r})\) are enslaved towards a neighborhood of the global field angle \(\theta\), jumping around it in a period-2 fashion. This explains the funnel shape of layers above the synchronized cluster in Fig. 1. 4. for large \(\beta>\beta_{3}\), the period-2 orbit undergoes a cascade of period doubling bifurcations to enter a chaotic regime with windows of stability. 
In a finite network, the corresponding highly connected layers \(w=\frac{\beta C}{\alpha r}>\frac{\beta_{3}C}{\alpha r}\) feel little effect of the synchrony map due either to the chaotic regime or to finite-size effect. It is important to have a _positive_ and _stationary_ global field amplitude \(r\) in order to pass from the bifurcation analysis of \(f_{\beta}\) to the corresponding layers in the network dynamics defined with \(r\) in the denominator. Indeed, \(r=0\) suggests that the network coupling strength \(\alpha\) in question does not support cluster synchrony. And when the global field amplitude is non-stationary, see the Appendix C. For network coupling strengths \(\alpha\) that admit cluster synchrony, that is, when stationary amplitude \(r>0\), the node dynamics are enslaved to the doubling motion \(\theta\mapsto f(\theta)\) of the global mean-field angle. More precisely, consider the skew-product for the layer with degree \(w\) \[F(\theta,z)=(f(\theta),f(\theta)+f_{\beta}(z-\theta)),\quad\beta:=\frac{\alpha} {C}wr.\] A moderate layer \(w\in(\beta_{1}C/\alpha r,\beta_{2}C/\alpha r)\) enjoys effective coupling strength \(\beta\in(\beta_{1},\beta_{2})\), for which the synchrony map \(f_{\beta}\) shrinks the relative distance \((z-\theta)\) to 0 exponentially fast. At a uniformly distributed initial network state, the initial global mean-field magnitude \(r^{0}\ll 1\) is small and hence the synchrony maps first brings highly connected nodes with degree \(w=C/2\pi\alpha r^{0}\gg 1\) into coherence towards \(\theta\). As \(r^{t}\) increases in the meantime, the synchrony map loses control over these highly connected layers and moves to enslave lower layers. At stationarity \(r\), the synchrony map fully captures the synchronized cluster and sustains the cluster synchrony configuration. Fig. 2 shows the time-evolution of the cluster being captured and thereby entering sustained synchrony. ## Appendix C Self-consistent theory Recall the synchrony cluster \[S_{\theta}=\{i:w_{p}\in(C\beta^{-}/\alpha r,C\beta^{+}/\alpha r)\},\] with \((\beta^{-},\beta^{+})=(1/2\pi,3/2\pi)\) which leads to the definition of the self-consistent equation \[R_{\alpha}(r)= \int_{C\beta^{-}/\alpha r}^{C\beta^{+}/\alpha r}Cw^{-2}e^{-C/w} \mathrm{d}w,\quad\delta(w)=C^{2}w^{-3}e^{-C/w}\] \[= e^{-\alpha r/\beta^{+}}-e^{-\alpha r/\beta^{-}}.\] 1. At low connectivity level, the nodes are close to being uniformly distributed. Their contribution to the mean-field amplitude \(r\) is negligible. 2. Near but below the cluster layers, the mean-field skews the layer distributions towards zero, thus contributing to the mean-field amplitude \(r\). 3. The cluster with effective coupling strength \([\beta^{-},\beta^{+}]\) synchronize. These are the layers considered in Eq. (6). 4. Near but above the cluster layers, the synchrony map undergoes a period doubling cascade, causing these layers also to contribute to the mean-field amplitude \(r\). Our hand tuning accounts for them by including effective coupling strengths as high as \(\beta^{+}+0.049\). 5. At high connectivity levels, there are few such massive hubs and their contribution to \(r\) is also negligible. To generate the solid curves in Fig. 3, we use successive approximation: for each \(\alpha\) probed in \([0,1.2]\) spaced \(0.001\) apart, we initialize at \(r^{0}=0.1\), iterate \(1000\) times by \(R_{\alpha}\) and output the last value as \(r\). 
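For reference, the successive-approximation loop just described amounts to a few lines of code. The sketch below uses the closed-form \(R_{\alpha}\) for the inverse-gamma degree distribution quoted above and omits the hand-tuned widening of the stability window, so it approximates the construction rather than reproducing Fig. 3 exactly.

```python
import numpy as np

# Illustrative implementation of the successive-approximation procedure described above.

BETA_MINUS, BETA_PLUS = 1.0 / (2.0 * np.pi), 3.0 / (2.0 * np.pi)

def R(alpha, r):
    return np.exp(-alpha * r / BETA_PLUS) - np.exp(-alpha * r / BETA_MINUS)

def stationary_r(alpha, r0=0.1, n_iter=1000):
    r = r0
    for _ in range(n_iter):
        r = R(alpha, r)            # iterate the self-consistent map
    return r

alphas = np.arange(0.0, 1.2001, 0.001)            # couplings probed, spaced 0.001 apart
r_curve = np.array([stationary_r(a) for a in alphas])
```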
In the derivation of \(R_{\alpha}\), the only occasion using the inverse gamma degree distribution \(\delta(w)\) is at the second step. For other heterogeneous degree distributions, such as for Barabasi-Albert network, the same derivation can be performed, with \(\delta(w)\) replaced accordingly. In a homogeneous network, such as a small-world in the sense of Watts-Strogatz, this calculation is not so meaningful, as nodes have the same connectivity. We expect full synchrony in this case. ## Appendix D Forcing a large network into synchrony Consider a large network, in our case, \(N=5\times 10^{5}\). Even at a coupling strength \(\alpha\) that admits cluster synchrony, i.e., \(r>0\), it still may take a prohibitively long time for the network to evolve spontaneously from a uniformly distributed initial state into cluster synchrony. To deal with this issue in our numerical experiments, we prepare the initial network condition at a certain synchrony level \(r^{0}\in(0,r)\) by pointing a suitable cluster of nodes all toward phase 0. Such a prepared initial low level of synchrony serves to spark the cluster synchrony, which, once in motion, is allowed to run freely.
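One plausible way to implement this warm start (an illustrative reading of the procedure, not the authors' code) is to seed the most highly connected nodes at phase 0 until the seeded fraction of the total degree reaches the desired level \(r^{0}\), consistent with \(r\approx\mathrm{Vol}(S_{\theta})/\mathrm{Vol}(G)\). The hub-first seeding rule is our own assumption.

```python
import numpy as np

# Illustrative warm start: scatter all phases uniformly, then point highly connected
# nodes at phase 0 until the seeded degree fraction reaches the target level r0.

def warm_start(w, r0, seed=0):
    # w: array of node degrees; returns an initial phase vector on [0, 1)
    rng = np.random.default_rng(seed)
    z = rng.random(len(w))                 # uniform background phases
    target, seeded = r0 * w.sum(), 0.0
    for p in np.argsort(w)[::-1]:          # seeding hubs first is an assumption
        if seeded >= target:
            break
        z[p], seeded = 0.0, seeded + w[p]
    return z
```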
2309.04288
Computation of Nash Equilibria of Attack and Defense Games on Networks
We consider the computation of a Nash equilibrium in attack and defense games on networks (Bloch et al. [1]). We prove that a Nash equilibrium of the game can be computed in polynomial time with respect to the number of nodes in the network. We propose an algorithm that runs in O(n^4) time with respect to the number of nodes of the network, n.
Stanisław Kaźmierowski, Marcin Dziubiński
2023-09-08T12:22:00Z
http://arxiv.org/abs/2309.04288v1
# Computation of Nash Equilibria of Attack and Defense Games on Networks ###### Abstract We consider the computation of a Nash equilibrium in attack and defense games on networks (Bloch et al. [1]). We prove that a Nash equilibrium of the game can be computed in polynomial time with respect to the number of nodes in the network. We propose an algorithm that runs in \(O(n^{4})\) time with respect to the number of nodes of the network, \(n\). Keywords: Games on networks, network interdiction, Nash equilibrium ## 1 Introduction International drug trafficking [4, 9], disrupting the movement of enemy troops [8, 6], and terrorist attacks [10] involve strategic actors interacting over a network of bilateral connections. A class of scenarios of this type consists of a network of defenders (e.g. countries connected by common borders) and an attacker attempting to move an undesirable object (e.g. a bomb or a package of drugs) through a network to a targeted node (e.g. a targeted country). Each defender is interested in his own security, but investment in protection spills over to subsequent defenders on the potential routes of attack. In a recent paper, Bloch, Chatterjee, and Dutta [1] introduce a game theoretic model that captures such scenarios. In the model, an attacker (node 0) and \(n\) defenders are all connected in a fixed network. The attacker chooses a target node and an attack path in the network from her location (node 0) to the location of a targeted node. In the event of a successful attack, the attacker gains the value assigned to the target node. If successfully attacked, the targeted defender loses his value, while every other node on the path remains intact. To prevent potential losses, every defender can invest in costly protection to increase the probability of stopping a potential attack. An attack can be stopped by every defender on the attack path. Bloch et al. [1] establish the existence of mixed strategy Nash equilibria (NE) in the model and obtain a partial characterization of the NE as well as a full characterization for networks that form a line. They prove that the set of nodes attacked with positive probability in NE is unique and that, under a certain redefinition, the model has a unique NE. They provide a set of non-linear equations describing the strategies in a NE when the set of nodes attacked with positive probability is given. Whether this set can be computed efficiently and, consequently, whether a NE of the model can be computed efficiently, was left an open question. _Our Contribution._ We provide an algorithm for calculating a Nash equilibrium (NE) of the model proposed in [1]. We prove that the algorithm runs in polynomial time with respect to the number of players. In more detail, we use the idea of reducing the network by removing nodes that are not attacked under any NE, while maintaining all of the possible paths of attack. We identify a subset of defending nodes, called _linkers_, which are never attacked in any NE. After removing the linkers from the network, every node can be reached by the attacker along a path of increasing values. Using this observation, and building on the idea for computing NE for linear networks, where the nodes are connected in order of ascending values, presented in [1], we obtain a polynomial time algorithm that finds a NE of the model for any connected network. ## 2 The Model We consider a game, introduced in [1], between an _attacker_ (player \(0\)) and \(n\) _defenders_ (target nodes). 
We will use \([n]=\{1,2,\ldots,n\}\) to denote the set of defenders. The attacker and the defenders are connected in a network modeled by an undirected graph, \(G=\langle V,E\rangle\), where \(V=[n]\cup\{0\}\) is the set of nodes and \(E\subseteq\binom{V}{2}\) is the set of edges (\(\binom{V}{2}\) denotes the set of all 2-element subsets of \(V\)). We assume that graph \(G\) is connected, meaning that every target is reachable from the attacker by some simple path \(p\) (i.e. a path where every node appears at most once).1 Given a graph \(G\), we will use \(E(G)\) to denote the set of edges in \(G\) and \(V(G)\) to denote the set of nodes in \(G\). Footnote 1: Throughout the paper when using the term path we will mean a simple path. The attacker attacks a selected defender in the network by reaching him through a path starting at \(0\) and ending at the defender. Each defender \(j\in[n]\) has a value \(b_{j}>0\) describing his strategic importance to the attacker. If defender \(j\) is successfully attacked, the attacker receives a payoff of \(b_{j}\). If attacked successfully, the defender \(j\) obtains a negative payoff of \(-d_{j}\). Each defender, anticipating a possible attack, can invest in protection (interception probability). Intercepting the attack means stopping the attacker, regardless of whether the defender is the target or simply lies on an attack path. The investment of defender \(j\) increases the probability \(x_{j}\in[0,1]\) of intercepting an attack and comes at a cost, \(c_{j}(x_{j})\). The cost of protection is an increasing, differentiable, and strictly convex function of the protection level and the cost of no protection is \(0\), i.e. \(c_{j}(0)=0\). We make the following assumption about the cost functions: **Assumption 1**: _For every cost function \(c_{j}(x_{j})\) of defender \(j\in[n]\), we assume that \(c^{\prime}_{j}(1)\geq d_{j}\) and \(c^{\prime}_{j}(0)=0\)._ Assumption 1 implies that the only scenario in which the best response of the defender \(j\) might be the "perfect defense" (i.e. \(x_{j}=1\)) is when he is the only attacked node. In [1] it is assumed that the cost function \(c_{j}(x_{j})=x_{j}^{2}/2\) for the final presented results, but many important results are proven for a wider class of cost functions satisfying Assumption 1. Defenders choose their interception probabilities independently and simultaneously with the attacker choosing a target \(j\in[n]\) and an attack path \(p\) from \(0\) to \(j\). Let \(P(G)\) denote the set of all paths in graph originating at the attacker node, \(0\), and \(t(p)\) denote the terminal node of path \(p\). Given any path \(p\in P(G)\) and \(j\in p\), the set of predecessors of \(j\) in \(p\) is \(Pred(p,j)=\{k\in p:\ k\) lies on path \(p\) between \(0\) and \(j\) }. Fix a vector of interception investments \((x_{1},\ldots,x_{n})\). For any node \(j\) on path \(p\), we let \(\alpha_{j}(p,(x_{i})_{i\in[n]})\) denote the probability that the attack along \(p\) reaches \(j\) \[\alpha_{j}(p,(x_{i})_{i\in[n]})=\prod_{k\in Pred(p,j)}(1-x_{k}).\] The probability that the attack on target \(j\) along path \(p\) is successful is given by \[\beta_{j}(p,(x_{i})_{i\in[n]})=\alpha_{j}(p,(x_{i})_{i\in[n]})\cdot(1-x_{j}).\] The set of pure strategies of the attacker is defined by the set \(P(G)\) of all paths originating at \(0\). The set of pure strategies of every defender \(j\in[n]\), the level of protection, is the interval \([0,1]\). 
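For concreteness, the interception probabilities \(\alpha_{j}\) and \(\beta_{j}\) defined above are straightforward to evaluate for a given path and investment vector. The short sketch below is illustrative only; a path is represented as the list of defender indices visited after the attacker node \(0\), and the numerical values in the usage comment are hypothetical.

```python
# Minimal helpers mirroring the definitions of alpha_j and beta_j above (illustrative).
# `path` lists the defender indices visited after the attacker node 0;
# `x` maps each defender to his interception probability.

def reach_prob(path, j, x):
    """alpha_j: probability that an attack along `path` reaches defender j."""
    prob = 1.0
    for k in path:
        if k == j:
            return prob                     # product over the predecessors of j only
        prob *= 1.0 - x[k]
    raise ValueError("defender j does not lie on the given path")

def success_prob(path, x):
    """beta_{t(p)}: probability that the attack on the terminal node of `path` succeeds."""
    target = path[-1]
    return reach_prob(path, target, x) * (1.0 - x[target])

# Hypothetical example: path 0 -> 2 -> 5 with x[2] = 0.3 and x[5] = 0.5 gives
# success_prob([2, 5], {2: 0.3, 5: 0.5}) -> 0.7 * 0.5 = 0.35
```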
Pair \((p,(x_{i})_{i\in[n]})\) describes a pure strategy profile, with the payoff of the attacker given by \[U(p,(x_{i})_{i\in[n]})=\beta_{t(p)}(p,(x_{i})_{i\in[n]})b_{t(p)},\] and the payoff of defender \(j\) given by \[V_{j}(p,(x_{i})_{i\in[n]})=\begin{cases}\beta_{j}(p,(x_{i})_{i\in[n]})(-d_{j} )-c_{j}(x_{j}),\,\text{if}\,\,j=t(p),\\ -c_{j}(x_{j}),\,\text{otherwise}.\end{cases} \tag{1}\] We allow the attacker to use mixed strategies, choosing a probability distribution \(\pi\) over all paths in \(P(G)\). Let \(\Delta(P(G))\) denote the set of all probability distributions over \(P(G)\). The expected payoff of the attacker from a mixed strategy profile \((\pi,(x_{i})_{i\in[n]})\in\Delta(P(G))\times[0,1]^{n}\) is given by \[U(\pi,(x_{i})_{i\in[n]})=\sum_{p\in P(G)}\pi(p)\beta_{t(p)}(p,(x_{i})_{i\in[n ]})b_{t(p)}. \tag{2}\] The expected payoff of defender \(j\) is given by \[V_{j}(\pi,(x_{i})_{i\in[n]})=\sum_{\begin{subarray}{c}p\in P(G)\\ t(p)=j\end{subarray}}\pi(p)\alpha_{j}(p,(x_{i})_{i\in[n]})(1-x_{j})(-d_{j})-c_ {j}(x_{j}). \tag{3}\] Following [1] we use the following assumption on defenders' importance. **Assumption 2**: _For any two defenders \(i\) and \(j\), \(b_{i}\neq b_{j}\)._ This assumption means that no two defenders have the same strategic importance to the attacker. Moreover, without the loss of generality, we will assume that the defenders are numbered in increasing order with respect to their strategic importance to the attacker, i.e. \(i<j\implies b_{i}<b_{j}\). **Definition 1** (Attack and defense game on a network).: _Quadruple \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\) defines an attack and defense game on a network with network \(G\), set of players \(V(G)\), defenders' cost functions, \(c_{j}\), attacker's evaluations, \(b_{j}\), and defenders' evaluations \(d_{j}\)._ We are interested in calculating (mixed strategy) Nash equilibria (NE) of attack and defense games on a network defined by Definition 1. A strategy profile \((\pi^{*},(x^{*}_{i})_{i\in[n]})\) is a NE if and only if for every mixed strategy \(\pi\in\Delta(P(G))\) of the attacker, \(U(\pi^{*},(x^{*}_{i})_{i\in[n]})\geq U(\pi,(x^{*}_{i})_{i\in[n]})\), and for every node \(j\in[n]\) and every strategy \(x_{j}\in[0,1]\), \(V_{j}(\pi^{*},(x_{j},(x^{*}_{i})_{i\in[n]\setminus\{j\}}))\leq V_{j}(\pi^{*},(x^ {*}_{i})_{i\in[n]})\). Properties of the Nash Equilibria In this section, we recall important properties of the NE of the attack and defense game on a network. The properties follow from [1] and are crucial for the computational results we obtain. First, Bloch, Chatterjee, and Dutta [1] establish the existence of mixed strategy NE in the game. Theorem 3.1 (Bloch et al. [1]): _The attack and defense game on a network always admits a Nash equilibrium in mixed strategies._ Second, they establish sufficient and necessary conditions for the existence of pure strategy NE. Lemma 1 (Bloch et al. [1]): _The described model yields NE in pure strategies if and only if the value \(b_{n}\) of node \(n\) satisfies_ \[b_{n}(1-c_{n}^{\prime(-1)}(d_{n}))\geq b_{j} \tag{4}\] _for all \(j\) such that there is a path \(p\) from 0 to \(j\) that does not contain \(n\)._ Note, that as \(c_{n}\) is a strictly convex, differentiable function, the inverse function \(c_{n}^{\prime(-1)}\), of its differential, \(c_{n}^{\prime}\), is well-defined. Deciding whether the condition introduced in Lemma 1 is satisfied can be done in time \(O(n)\) by the following straightforward algorithm. 
After removing the node \(n\) from the graph, all the nodes that remain connected to node 0 by a path form a set of nodes that can be reached by the attacker with a path that does not contain \(n\). For this set of nodes, we check whether Inequality (4) is satisfied. If the condition is met then every profile where the attacker chooses a path \(p\) that terminates at node \(n\), defender \(n\) chooses investment of \(c_{n}^{\prime(-1)}(d_{n})\) (value obtained from finding the derivative of payoff function of the \(n\)'th defender and comparing it to 0) and every other defender chooses investment of 0 is a pure strategy NE. From now on we will focus on the parameters of the model that do not yield NE in pure strategies. ### Properties of Mixed Strategies Nash Equilibria Given a game \(\Gamma=(G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), let \[D_{\Gamma}(\pi,(x_{i})_{i\in[n]})\subseteq[n],\] denote the set of all defenders attacked with positive probability under the strategy profile \((\pi,(x_{i})_{i\in[n]})\). The following lemma about the independence of set \(D_{\Gamma}(\pi,(x_{i})_{i\in[n]})\) from a considered strategy profile \((\pi,(x_{i})_{i\in[n]})\), that is a NE of \(\Gamma\), follows from the proof of Theorem 2[1]. Lemma 2 (Bloch et al. [1], Theorem 2): _Given the Assumption 2, for every attack and defense game on network \(\Gamma=(G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), and every two strategy profiles \((\pi,(x_{i})_{i\in[n]})\) and \((\pi^{\prime},(x_{i}^{\prime})_{i\in[n]})\) that are NE of \(\Gamma\),_ \[D_{\Gamma}(\pi,(x_{i})_{i\in[n]})=D_{\Gamma}(\pi^{\prime},(x_{i}^{\prime})_{i \in[n]}).\] By Lemma 2, the set of nodes attacked with positive probability in equilibrium depends only on the game's parameters. Therefore, given a game \(\Gamma=(G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\) we will denote this set by \(D(\Gamma)\). We will call nodes in \(D(\Gamma)\)_non-neutral nodes_. From proof of Theorem 2[1], the set of non-neutral nodes is invariant under the vector of values \((d_{i})_{i\in[n]}\) and the vector of cost functions \((c_{i})_{i\in[n]}\) (as long as they satisfy Assumption 1). This is stated in the following lemma. Lemma 3 (Bloch et al. [1]): _Let \(\Gamma=(G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\). For any cost functions vector \((c_{i}^{\prime})_{i\in[n]}\), satisfying Assumption 1, and values vector \((d_{i}^{\prime})_{i\in[n]}\) it holds_ \[D(\Gamma)=D(\Gamma^{\prime}),\] _where \(\Gamma^{\prime}=(G,(b_{i})_{i\in[n]},(d_{i}^{\prime})_{i\in[n]},(c_{i}^{ \prime})_{i\in[n]})\)._ Following Lemma 3, for the remaining part of the paper, we will denote the set of nodes attacked in every NE of the game by \(D(G,(b_{i})_{i\in[n]})\). The set of nodes \([n]\setminus D(G,(b_{i})_{i\in[n]})\) is never attacked under any NE. We call nodes in \([n]\setminus D(G,(b_{i})_{i\in[n]})\)_neutral nodes_. We have the following observation. **Observation 1**: _Every neutral node \(j\) maximizes his payoff in every NE by choosing a strategy \(x_{j}=0\)._ When the network, \(G\), and the values of the nodes, \((b_{i})_{i\in[n]}\) are clear from the context, we will use \(D\) instead of \(D(G,(b_{i})_{i\in[n]})\) to denote the set of non-neutral nodes and \([n]\setminus D\) to denote the set of neutral nodes. For a non-neutral node, \(j\), let \(P^{j}\) denote the set of all paths from \(0\) to \(j\) chosen by the attacker with positive probability in some NE of the game. 
Formally, path \(p\) from \(0\) to \(j\) in \(G\) belongs to \(P^{j}\) if and only if there exists a strategy profile \((\pi,(x_{i})_{i\in[n]})\) that is a NE of the game, such that \(\pi(p)>0\). Bloch et al. [1] prove that any two paths in \(P^{j}\) can differ only on the set of neutral nodes. Moreover, non-neutral nodes on any two paths in \(P^{j}\) are aligned in the same sequence from the attacker to the target. This is stated by the following lemma. Lemma 4 (Bloch et al. [1]): _For any two paths \(p,p^{\prime}\) in \(P^{j}\),_ \[Pred(p,j)\cap D(G,(b_{i})_{i\in[n]})=Pred(p^{\prime},j)\cap D(G,(b_{i})_{i\in[ n]}).\] _Moreover, if \(k,l\in Pred(p,j)\cap D(G,(b_{i})_{i\in[n]})\) then_ \[k\in Pred(p,l)\iff k\in Pred(p^{\prime},l). \tag{5}\] Following Lemma 4, for every non-neutral node \(j\), we denote the unique sequence of his predecessors from \(D\) on any path in \(P^{j}\) by \(p^{j}\). We call \(p^{j}\)_the equilibrium attack path of \(j\)_. The equilibrium attack paths are not always paths in the original graph, as they can lack some of the neutral nodes that are essential to their connectivity. If a non-neutral node, \(k\in D\), lies on an equilibrium attack path of another node non-neutral, \(j\in D\), his equilibrium attack path, \(p^{k}\), is a subsequence of \(p^{j}\). This is stated by the following lemma. Lemma 5 (Bloch et al. [1]): _Given two non-neutral nodes, \(k\) and \(j\), if \(k\) is an element of \(p^{j}\) then \(p^{k}\) is a subsequence of \(p^{j}\), i.e. that for some \(m\in\{2,3,\ldots,|p^{j}|-1\}\), \(p^{k}\) is a sequence of first \(m\) elements of \(p^{j}\)._ From Lemma 5, it follows that the set of nodes \(\{0\}\cup D\) and the set of equilibrium attack paths \(\{p_{j}\}_{j\in D}\) constitute a tree that is invariant under the vector of cost functions \((c_{i})_{i\in[n]}\) (as long as they satisfy Assumption 1) and the vector of values \((d_{i})_{i\in[n]}\). Therefore, for a given game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), we denote this tree by \(T(G,(b_{i})_{i\in[n]})\) and call it an _equilibrium attack tree_. The concept of the equilibrium attack tree allows for the following redefinition of the game. Definition 2 (Equilibrium attack tree game): An equilibrium attack tree game induced by the attack and defense game on network \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\) is the attack and defense game on network \((T(G,(b_{i})_{i\in[n]}),\)\((b_{i})_{i\in D},\)\((d_{i})_{i\in D},\)\((c_{i})_{i\in D})\). In such a game, every defender is connected to the attacker by exactly one path - his equilibrium attack path. It means that every mixed strategy \(\pi\) of the attacker is described by vector \((q_{i})_{i\in D}\), which determines the probabilities of attack on every node. ### NE of the Equilibrium Attack Tree Game Given an equilibrium attack tree game \((T(G,(b_{i})_{i\in[n]}),\)\((b_{i})_{i\in D},\)\((d_{i})_{i\in D},\)\((c_{i})_{i\in D})\), let \(D_{0}\subseteq D\) denote the set of all the neighbours of \(0\) in tree \(T(G,(b_{i})_{i\in[n]})\). 
By [1], the first-order conditions that have to be fulfilled by any NE of the game are \[x_{j}^{*}=1-\frac{U}{b_{j}},\,\text{if}\,\,j\in D_{0}, \tag{6}\] \[x_{j}^{*}=1-\frac{b_{k(j)}}{b_{j}},\,\text{if}\,\,j\in D\setminus D _{0},\] (7) \[q_{j}^{*}=\frac{c_{j}^{\prime}\left(1-\frac{U}{b_{j}}\right)}{d_ {j}},\,\text{if}\,\,j\in D_{0},\] (8) \[q_{j}^{*}=\frac{b_{k(j)}\cdot c_{j}^{\prime}\left(1-\frac{b_{k(j )}}{b_{j}}\right)}{U\cdot d_{j}},\,\text{if}\,\,j\in D\setminus D_{0},\] (9) \[\sum_{j}q_{j}^{*}=1. \tag{10}\] where \(k(j)\) is the direct predecessor of \(j\) in the equilibrium attack path \(p^{j}\) and \(U\) is the equilibrium utility of the attacker. Equations (6) and (7) are obtained from the equations guaranteeing that the attacker is indifferent among the targets in the support. \[b_{j}(1-x_{j}^{*})=U,\quad\text{ for }j\in D_{0},\] \[b_{j}(1-x_{j}^{*})=b_{k(j)},\,\text{for }j\notin D_{0}.\] Equations (8) and (9) are obtained from maximizing the payoff function of every defender defined in (11). First, we calculate the derivative \[\frac{\partial V_{j}(q,x_{1},\ldots,x_{n})}{\partial x_{j}}=\alpha_{j}x_{j}q_ {j}^{*}d_{j}-c_{j}^{\prime}(x_{j}). \tag{11}\] The function \(V_{j}(x_{j})\) is concave, therefore it is only increasing in an interval where \(\alpha_{j}x_{j}q_{j}^{*}d_{j}\geq c_{j}^{\prime}(x_{j}).\) It follows from Assumption 1, that 0 is in this interval while 1 is not, therefore the derivative is equal to 0 in the maximum, hence \[c_{j}^{\prime}(x_{j}^{*})=\alpha_{j}q_{j}^{*}d_{j}. \tag{12}\] In any NE the attacker is indifferent over the strategies in the support, i.e. \[U=b_{j}\alpha_{j}(1-x_{j}^{*}).\] After transforming this equation, we get \[\alpha_{j}=\frac{U}{b_{j}(1-x_{j}^{*})}.\] This means that the equation (12) states \[c_{j}^{\prime}(x_{j}^{*})=\frac{Uq_{j}^{*}d_{j}}{b_{j}(1-x_{j}^{*})},\] hence \[q_{j}^{*}=\frac{c_{j}^{\prime}(x_{j}^{*})b_{j}(1-x_{j}^{*})}{U\cdot d_{j}}.\] Using equations (6) and (7) we get (8) and (9), respectively. Equation (10) states that the probabilities in any mixed strategy of the attacker sum up to 1. We conclude this subsection by stating the uniqueness of the solution to the introduced set of equations. Theorem 2.1 (Bloch et al. [1]): _Given Assumption 2, the proposed set of first-order conditions (6)-(10) yields exactly one solution_ \[((q^{*})_{i\in D},(x_{i})_{i\in D},U)\in[0,1]^{|D|}\times[0,1]^{|D|}\times[0,1]\] _that is the unique NE of the equilibrium attack tree game._ ### Properties of the Equilibrium Attack Tree In this subsection, we present the properties of the equilibrium attack tree that follows from Theorem 2. Consider a non-neutral node \(j\in D\setminus D_{0}\). In the NE of the equilibrium attack tree game (\(T(G,(b_{i})_{i\in[n]})\), \((b_{i})_{i\in D}\), \((d_{i})_{i\in D}\), \((c_{i})_{i\in D}\)), node \(j\) is attacked through the equilibrium attack path \(p^{j}=(0,p_{1}^{j},p_{2}^{j},\ldots,k(j),j)\), and the probability \(\alpha_{j}\) of attacker successfully reaching the node \(j\) is \[\alpha_{j}=\prod_{i\in\{p_{1}^{j},p_{2}^{j},\ldots,k(j)\}}(1-x_{i}^{*}).\] Using the (6) and (7), we get \[\alpha_{j} = \left(1-\left(1-\frac{U}{b_{p_{1}^{j}}}\right)\right)\prod_{i\in \{p_{2}^{j},\ldots,k(j)\}}\left(1-\left(1-\frac{b_{i-1}}{b_{i}}\right)\right) = \frac{U}{b_{p_{1}^{j}}}\prod_{i\in\{p_{2}^{j},\ldots,k(j)\}} \left(\frac{b_{i-1}}{b_{i}}\right) = \frac{U}{b_{k(j)}}. 
\tag{13}\] The nodes in \(D\), that can be a direct predecessor of a node \(j\) in his equilibrium attack path, are the non-neutral nodes that can be reached in graph \(G\) from \(j\) by any path that does not contain any other node from \(\{0\}\cup D\). Let \(N(j,D,G)\subset D\) denote the set of these nodes. Formally, non-neutral node \(i\) is in \(N(j,D,G)\) if and only if there exists a path \(p\) from \(j\) to \(i\) in \(G\) that does not contain any nodes from \((D\cup\{0\})\setminus\{i,j\}\). Equilibrium attack paths are chosen by the attacker to maximize her payoff. Equation (13) states, that the smaller the value \(b_{k(j)}\), of the direct predecessor of \(k(j)\) of node \(j\in D\setminus D_{0}\) on equilibrium attack path \(p^{j}\), the greater the probability of reaching the node \(j\) by the attacker. We conclude this with the following observation, which states how the attacker chooses the equilibrium attack tree for a given graph \(G\), set of nodes \(D_{0}\) and their evaluations \((b_{i})_{i\in D_{0}}\). **Observation 2** (Bloch et al. [1]): _For any node \(j\in D_{0}\), the attacker maximizes her payoff in the NE of the equilibrium attack tree game by attacking \(j\) directly. For any node \(j^{\prime}\in D\setminus D_{0}\) the attacker maximizes her payoff in the NE of the equilibrium attack tree game by attacking node \(j^{\prime}\) along the equilibrium attack path where the direct predecessor of \(j^{\prime}\) is node \(i\in N(j,D,G)\) with the lowest value \(b_{i}\)._ ## 4 Computation of Mixed Strategy NE The main challenge of computing a NE of a given attack and defense game, \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\) is computing the set of all non-neutral nodes, \(D(G,(b_{i})_{i\in[n]})\). To tackle this problem, we introduce the idea of network reduction by a subset of neutral nodes. We prove that reducing the network by any subset of neutral nodes retains a particular correspondence between the Nash equilibria of the original and the reduced model (in particular, both games yield the same equilibrium attack tree game). Using network reduction, we show that the equilibrium attack tree of the given game can be found in polynomial time when the set of non-neutral nodes is known. Next, we introduce an important subset of neutral nodes called _linkers_. After reducing the network by the set of linkers, the problem of finding the set of non-neutral nodes is easier. We propose an algorithm that allows for finding the set of non-neutral nodes of a given attack and defense game on a network. The algorithm generalizes the idea of finding the set of non-neutral nodes when the considered network is a linear graph with ascending values \(b_{j}\), presented in [1], to finding this set when the considered network is an arbitrary graph. When the set of non-neutral nodes in the game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\) is found, we calculate the NE of the corresponding equilibrium attack tree game. Finally, we show how to reconstruct a NE of a \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\) from the NE of the corresponding equilibrium attack tree game. ### Network Reduction For a given game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), its' set of non-neutral nodes \(D\), and any neutral node \(m\in[n]\setminus D\), let us construct a graph, called \(G\)_reduced by \(m\)_, obtained by removing node \(m\) and adding links between all pairs of neighbours of \(m\) that are not connected by an edge in \(G\). 
We denote a graph \(G\) reduced by \(m\) by \(G\setminus m\). Formally \(V(G\setminus m)=V(G)\setminus\{m\}\) and \(E(G\setminus m)=E(G)\setminus\{\{i,m\}:i\in V(G)\}\cup\{\{i,k\}:i\neq k\wedge\{i, m\}\in E(G)\wedge\{m,k\}\in E(G)\}\). Let \(h_{m}^{G}:P(G)\to P(G\setminus m)\) be a function such that, for a given path \(p\in P(G)\), \[h_{m}^{G}(p)=\begin{cases}p&\text{ if }m\notin p,\\ p\setminus\{m\}&\text{ otherwise.}\end{cases}\] Function \(h_{m}^{G}\) maps paths emerging from \(0\) in graph \(G\) to paths emerging from \(0\) in graph \(G\setminus m\). Function \(h_{m}^{G}\), defined for the set of the pure strategies of the attacker, naturally extends to a function \(H_{m}^{G}:\Delta(P(G))\rightarrow\Delta(P(G\setminus m))\) such that, for every probability distribution \(\pi\in\Delta(P(G))\) over the set of paths in \(G\), \[H_{m}^{G}(\pi)=\sum_{p\in P(G)}\pi(p)\cdot h_{m}^{G}(p).\] **Lemma 6**.: _Let \(\Gamma\) = \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\). Node \(m\in[n]\setminus D\) is a neutral node and strategy profile \((\pi^{*},(x_{i})_{i\in[n]}^{*})\) is a NE of \(\Gamma\) if and only if the strategy profile \((H_{m}^{G}(\pi^{*}),(x_{i})_{i\in[n]\setminus m}^{*})\) is a NE of \(\Gamma\setminus m=(G\setminus m,(c_{i})_{i\in[n]\setminus\{m\}},(b_{i})_{i\in[ n]\setminus\{m\}},(d_{i})_{i\in[n]\setminus\{m\}})\)._ Proof.: Notice that the derivative of defender \(j\in[n]\) payoff function, \(V_{j}\), is given by \[V_{j}^{\prime}(x_{j})=d_{j}\cdot\sum_{\begin{subarray}{c}p\in P(G)\\ t(p)=j\end{subarray}}\pi\left(p\right)\alpha_{j}\left(p,(x)_{i\in[n]}\right)-c _{j}^{\prime}\left(x_{j}\right).\] Assumption 1 on costs functions guarantees that the maximum of \(V_{j}\) is inside the interval \((0,1)\). As function \(V_{j}\) is concave, we can find this maximum by solving \(V_{j}^{\prime}(x_{j})=0\). We get \[x_{j}=\left(c_{j}^{\prime}\right)^{-1}\left(d_{j}\cdot\sum_{ \begin{subarray}{c}p\in P(G)\\ t(p)=j\end{subarray}}\pi\left(p\right)\alpha_{j}\left(p,(x_{i})_{i\in[n]}\right) \right). \tag{14}\] In any NE, any defender \(j\) chooses the defense investment given by Equation (14) to maximize his payoff. For the right to left implication, consider a NE \((\pi^{*},(x_{i}^{*})_{i\in[n]\setminus\{m\}})\) of a game \(\Gamma\setminus m\). We will prove that when node \(m\) is neutral, every strategy profile \((\pi,(x_{m}=0,(x_{i}^{*})_{i\in[n]\setminus\{m\}}))\) that satisfies \(H_{m}^{G}(\pi)=\pi^{*}\) is a NE of the game \(\Gamma\). Notice that \(x_{m}=0\) implies that, for every path \(p\in P(G)\), \[\alpha_{j}(p,(x_{i})_{i\in[n]})=\alpha_{j}(h_{m}^{G}(p),(x_{i})_{i \in[n]\setminus\{m\}}). \tag{15}\] By (15) every defender \(j\in[n]\setminus\{m\}\), \[\sum_{\begin{subarray}{c}p\in P(G\setminus\{m\})\\ t(p)=j\end{subarray}}\pi(p)\alpha_{j}(p,(x_{i})_{i\in[n]\setminus\{m\}})=\sum_ {\begin{subarray}{c}p\in P(G\setminus\{m\})\\ t(p)=j\end{subarray}}\sum_{\begin{subarray}{c}p^{\prime}\in P(G)\\ h_{m}^{G}(p^{\prime})=p\end{subarray}}\pi(p^{\prime})\alpha_{j}(p^{\prime},(x_{i })_{i\in[n]})=\\ \sum_{\begin{subarray}{c}p\in P(G)\\ t(p)=j\end{subarray}}\pi(p)\alpha_{j}(p,(x_{i})_{i\in[n]}). \tag{16}\] As Equation (14) is satisfied for every defender \(j\in[n]\setminus\{m\}\) by the strategy profile \((\pi^{*},(x_{i}^{*})_{i\in[n]\setminus\{m\}})\), notice that every defender \(j\in[n]\) cannot increase his payoff by deviating from \((\pi,(x_{m}=0,(x_{i}^{*})_{i\in[n]\setminus\{m\}}))\) if the strategies of all the other players remain unchanged. 
Therefore, the only player that can benefit from changing her strategy in the strategy profile \((\pi,(x_{m}=0,(x_{i}^{*})_{i\in[n]\setminus\{m\}}))\) is the attacker. Consider any path \(p\in P(G)\) such that \(\pi(p)=0\). Notice, that \[U(p,(x_{m}=0,(x_{i})_{i\in[n]\setminus\{m\}}))=\\ U(h_{m}^{G}(p),(x_{i})_{i\in[n]\setminus\{m\}})\leq U(\pi^{*},(x_{i})_ {i\in[n]\setminus\{m\}})=U(\pi,(x_{m}=0,(x_{i})_{i\in[n]\setminus\{m\}})), \tag{17}\] hence the attacker also cannot increase her payoff by deviating from \((\pi,(x_{m}=0,(x_{i}^{*})_{i\in[n]\setminus\{m\}}))\). The inequality follows from the NE definition and both equalities follow from the Equation (15). The strategy profile \((\pi,(x_{m}=0,(x_{i}^{*})_{i\in[n]\setminus\{m\}}))\) is a NE of the attack and defense game on network \(\Gamma\setminus m\), because none of the players can increase their payoff by deviating from it. The proof of reverse implication is analogous. The reduction of the game extends to any set of neutral nodes by iterative reduction of neutral nodes one by one. First, note that for any game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), the corresponding set \(D\) of non-neutral nodes and any two neutral nodes \(j\), \(k\in D\setminus[n]\) \[H_{k}^{(G\setminus j)}\circ H_{j}^{G}=H_{j}^{(G\setminus k)}\circ H_{k}^{G}. \tag{18}\] The reduction of the game extends to any set \(S\subseteq([n]\setminus D)\) of neutral nodes by iterative reduction of nodes from \(S\) one by one. Equation (18) guarantees that reduction by the set of nodes is invariant to the ordering in which we choose nodes from \(S\). Definition 3: For a given game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), the corresponding set of non-neutral nodes \(D\), any subset of neutral nodes \(S\subseteq([n]\setminus D)\), and any sequence \(s=\{s_{1},s_{2},\ldots,s_{|S|}\}\) of all the nodes in \(S\), the reduction of \(G\) by \(S\) with sequence \(s\) is defined as \[H_{S,s}^{G}=\begin{cases}H_{(S\setminus\{s_{1}\}),(s_{2},\ldots,s_{|S|})}^{G \setminus s_{1}},&\text{ if }|S|>2\text{,}\\ H_{s_{2}}^{G\setminus s_{1}},&\text{ if }|S|=2.\end{cases}\] As the reduction of the network is independent of the order of nodes from \(S\), we denote it with \(H_{S}^{G}\). Reducing the network by a given neutral node \(i\) can be done in \(O(n^{2})\) time and reducing the network by a given set of nodes, \(S\subseteq[n]\), can be done in time \(O(|S|\cdot n^{2})\). ### Linkers We now introduce an important set of nodes called _linkers_. Let us call a node \(i\) a _linker_ if he is not directly connected to the attacker and all of his neighbours' evaluations, \(b_{j}\), are greater than \(b_{i}\), i.e. \(\{0,i\}\notin E(G)\) and \((\{i,j\}\in E\left(G\right)\implies b_{j}>b_{i})\). All linkers are neutral nodes, which we state in the lemma below. Lemma 7: _Every linker is a neutral node._ Proof: Consider a linker \(m\in[n]\) and any path \(p\in P(G)\) such, that \(t(p)=m\), i.e. \(m\) is a terminal node of \(p\). Let \(k(m)\) denote the direct predecessor of node \(m\) on path \(p\). Notice that the probability \(\beta_{m}(m,p)\) of the successful attack on node \(m\) through path \(p\) satisfies \[\beta_{m}(p,(x_{i})_{i\in[n]})=(1-x_{i})\beta_{k(m)}(p\setminus\{m\},(x_{i})_ {i\in[n]})\leq\beta_{k(m)}(p\setminus\{m\},(x_{i})_{i\in[n]}).\] As \(b_{m}<b_{k(m)}\) from the linker definition, for any strategy \(p\), the strategy \(p\setminus\{m\}\) yields a strictly greater payoff to the attacker. 
Therefore, the strategy \(p\) is not in the NE support of the attacker. No path \(p\in P(G)\) with terminal \(t(p)=m\) is in the attacker's support in any NE, hence the considered node \(m\) is a neutral node. Let \(L(G)\subseteq[n]\) denote the set of all linkers in graph \(G\). By Lemma 7, the set \(L(G)\) is a subset of the set of all neutral nodes. Following the game reduction by the set of neutral nodes, we can reduce the graph \(G\) by \(L(G)\) while retaining the correspondence between the NE of the original and the reduced model. We will call the graph \(G\setminus L(G)\) a _proper_ graph. The following example illustrates reducing a graph by its linker. Example 1: In the graph shown in Figure 1, node 1 is a linker. We can reduce graph \(G\) by node 1, connecting all of 1's neighbours that are not directly connected. As a result, we get the proper graph shown in Figure 2, where nodes 2 and 4 are now directly connected and every node is connected to the attacker with at least one ascending path of indices.

Figure 1: Graph with a linker (node 1)

Figure 2: Graph after removing the linker (node 1)

The following observation is a direct consequence of the properties of network reduction. **Observation 3**: _Let \(\Gamma=(G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), graph \(G^{\prime}=G\setminus L(G)\) is the proper graph of \(G\), and \(\Gamma^{\prime}=(G^{\prime},(c_{i})_{i\in[n]\setminus L(G)},(b_{i})_{i\in[n]\setminus L(G)},(d_{i})_{i\in[n]\setminus L(G)})\). Games \(\Gamma\) and \(\Gamma^{\prime}\) yield the same equilibrium attack tree game._ Determining whether a given node \(i\in[n]\) is a linker can be done in time \(O(n)\), hence finding the set \(L(G)\) of all linkers can be done in time \(O(n^{2})\). As every node \(i\) in the proper graph has a neighbour \(j\) of a lower index, the following observation emerges. **Observation 4**: _Every node in a proper graph is connected to the attacker with at least one path of ascending indices._ As a consequence of Observation 4, we have the following lemma, characterizing the set of non-neutral nodes for proper graphs. Lemma 8: _If graph \(G\) is proper, the set of non-neutral nodes \(D\) of the attack and defense game is \(\{k,k+1,\ldots,n\}\) for some \(k\in[n]\)._ Proof: Let us assume that \(k\) is the lowest index of a node attacked with positive probability in NE. We will prove that every node with an index greater than \(k\) is also attacked with positive probability. Let us assume that node \(k+1\) is not attacked in NE. From Observation 4, we know there is at least one ascending path of nodes from \(0\) to \(k+1\). If there is such a path that does not contain \(k\), then \(k+1\) has to be attacked: if he were not attacked, then he would not defend himself, and therefore attacking him would yield a greater payoff to the attacker than attacking node \(k\). If every ascending path from \(0\) to \(k+1\) contains \(k\), then again, \(k+1\) has to be attacked: if he were not attacked, he would not defend himself, and therefore the attacker could reach him with the same probability as node \(k\), but \(k+1\) would yield a greater payoff. The same reasoning applies to every other node with an index greater than \(k\), showing that if \(k\) is the lowest index of a non-neutral node and graph \(G\) is proper, then \(D=\{k,k+1,\ldots,n\}\). 
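The two preprocessing steps described in this subsection, detecting linkers and reducing the graph to its proper graph, can be sketched as follows (an illustrative rendering under our own representation of the network as adjacency sets; it is not code from the paper). Note that two linkers are never adjacent, since each would then need a strictly larger value than the other, so removing them one at a time is order-independent.

```python
# Illustrative sketch of linker detection and reduction to the proper graph.
# The network is stored as a dict of adjacency sets over nodes 0..n, with node 0
# the attacker and b[i] the attacker's value of defender i.

def linkers(adj, b):
    """Nodes not adjacent to the attacker all of whose neighbours have larger values."""
    return {i for i in adj
            if i != 0 and 0 not in adj[i] and all(b[j] > b[i] for j in adj[i])}

def reduce_by(adj, m):
    """The G \\ m operation: remove m and link every pair of its neighbours."""
    nbrs = adj.pop(m)
    for i in nbrs:
        adj[i].discard(m)
        adj[i].update(nbrs - {i})
    return adj

def proper_graph(adj, b):
    adj = {v: set(nb) for v, nb in adj.items()}    # work on a copy
    for m in linkers(adj, b):                      # linkers are pairwise non-adjacent,
        reduce_by(adj, m)                          # so one-at-a-time removal is safe
    return adj
```

Detecting each linker inspects only its neighbourhood, so the single pass over \(L(G)\) is consistent with the \(O(n^{2})\) bound quoted above.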
### Computation of the Equilibrium Attack Tree Consider game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), with a proper graph \(G\), and the corresponding set of non-neutral nodes, \(D\). The equilibrium attack tree \(T(G,(b_{i})_{i\in[n]})\) can be found in polynomial time with the following algorithm.

```
1: Input: proper graph \(G\) and a set \(D\subseteq V(G)\)
2: Output: equilibrium attack tree \(T\)
3: \(G^{\prime}=G\setminus([n]\setminus D)\)
4: \(T\) = empty graph
5: \(\mathrm{V}(T)=\{0\}\cup D\)
6: for \(j\in D\) do
7:   if \((0,j)\in\mathrm{E}(G^{\prime})\) then
8:     \(\mathrm{E}(T)=\mathrm{E}(T)\cup\{(0,j)\}\)
9:   else
10:    find \(N(j,G^{\prime})\)   {the set of neighbours of \(j\) in graph \(G^{\prime}\)}
11:    \(i=\min(N(j,G^{\prime}))\)
12:    \(\mathrm{E}(T)=\mathrm{E}(T)\cup\{\{i,j\}\}\)
13:  end if
14: end for
15: return \(T\)
```

**Algorithm 1** Constructing equilibrium attack tree

From Observation 4, we know that every node in a proper graph has at least one neighbour of a smaller index, hence in every iteration of the _for_ loop, a new edge is added to the graph \(T\). As the resulting graph \(T\) is a connected graph with \(n+1\) vertices and \(n\) edges, it is in fact a tree. From Observation 3 we know that graphs \(G\) and \(G^{\prime}\) yield the same equilibrium attack tree. Observation 2 states that every neighbour of \(0\) in \(G^{\prime}\) is directly connected to \(0\) in \(T\), and every other node is connected in \(T\) to his neighbour in \(G^{\prime}\) of the lowest index, which concludes the correctness of Algorithm 1 when the input set of nodes is the set of non-neutral nodes. The dominant procedure when considering the time complexity of Algorithm 1 is finding the graph \(G^{\prime}\), which can be done in \(O((n-|D|)\cdot n^{2})\). ### Finding the Lowest Node Index in \(D\) In this section, we consider game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), with a proper graph \(G\), and show how to find the corresponding set of non-neutral nodes, \(D\). Let \(k^{*}\) denote the lowest index of a node in \(D\). We formulate the condition which is satisfied by \(k^{*}\) alone and test this condition for all the possible values of \(k\in[n]\), finding \(k^{*}\). Using Algorithm 1, for every \(k\in[n]\) we can find the equilibrium attack tree \(T\) for graph \(G\) assuming that \(D=\{k,k+1,\ldots,n\}\). For every \(k\in[n]\) we define a function \(F_{k}:[0,b_{n}]\rightarrow\mathbb{R}_{\geq 0}\), such that, for a given payoff \(U\) of the attacker, \[F_{k}(U)=\sum_{i}q_{i}^{*}(U,k),\] where \(q_{i}^{*}(U,k)\) is given by Equations (8) and (9), for \(D=\{k,k+1,\ldots,n\}\). \(F_{k}\) has the following properties. 1. It is strictly decreasing in \(U\), as every element of the sum is strictly decreasing in \(U\). 2. \(F_{k^{*}}(U^{*})=1\), where \(U^{*}\) denotes the attacker payoff at the equilibrium and \(k^{*}\) denotes the lowest index of a node attacked with positive probability in NE. The condition on \(k^{*}\) is \[F_{k^{*}}(b_{k^{*}})\leq 1<F_{k^{*}}(b_{k^{*}-1}), \tag{19}\] as it implies \[b_{k^{*}}\geq U^{*}>b_{k^{*}-1}. \tag{20}\] The set of first-order conditions (6)-(9) guarantees that the payoff to the attacker is the same for every pure strategy in the support. The payoff from every pure strategy outside of the support is not greater than \(b_{k^{*}-1}\), hence it is smaller than \(U^{*}\). This means that the attacker cannot increase her payoff by changing her strategy. 
Neither can the defenders, as each one of them already maximizes his payoff. Therefore, the strategy profile \(((q^{*})_{i\in D},(x_{i})_{i\in D})\) defined by Equations (6)-(10) describe the NE of equilibrium attack tree game \((T(G,(b_{i})_{i\in[n]})\), \((b_{i})_{i\in D}\), \((d_{i})_{i\in D}\), \((c_{i})_{i\in D})\) (which we know is unique from Theorem 2), with \(D=\{k^{*},k^{*}+1,\ldots,n\}\). ### Calculating the NE of an Equilibrium Attack Tree Game for a Proper Graph Consider attack and defense game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), where graph \(G\) is proper. To calculate the strategy profile that is the NE of the corresponding equilibrium attack tree game, knowing that \(D=\{k^{*},k^{*}+1,\ldots,n\}\), we need to find the payoff of the attacker \(U^{*}\). This means solving the equation \(F_{k^{*}}(U)=1\) \[\sum_{i\in D_{0}}\frac{c_{i}^{\prime}(1-\frac{U}{b_{i}})}{d_{i}}+\sum_{i\in D \setminus D_{0}}\frac{b_{k(i)}\cdot c_{i}^{\prime}(1-\frac{b_{k(i)}}{b_{i}})} {U\cdot d_{i}}=1. \tag{21}\] In the case of the cost functions \(c_{i}(x_{i})\) being of the form \(c_{i}(x_{i})=x_{i}^{2}/2\), Equation (9) takes the form \[\sum_{i\in D_{0}}\frac{1-\frac{U}{b_{i}}}{d_{i}}+\sum_{i\in D\setminus D_{0}} \frac{b_{k(i)}\cdot 1-\frac{b_{k(i)}}{b_{i}}}{U\cdot d_{i}}=1.\] This equation can be transformed into a quadratic equation after multiplying both sides by \(U\), and it can be solved in linear time with respect to the number of nodes. After establishing the payoff of the attacker, the last thing to do is to calculate the solutions of equations (6)-(9). Each equation can be solved in a constant time, as we only need to calculate the value of the function \(x_{i}^{2}/2\) at a given point. ## 5 Calculating the Strategies in the NE for any Graph Consider \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), where graph \(G\) is any connected graph. We showed how to find the set \(D\) and the NE \(((q_{i}^{*})_{i\in D},(x_{i}^{*})_{i\in D})\) of the attack equilibrium tree game \((T(G,(b_{i})_{i\in[n]})\), \((b_{i})_{i\in D}\), \((d_{i})_{i\in D}\), \((c_{i})_{i\in D})\) by first calculating the proper graph \(G^{\prime}=G\setminus L(G)\), then applying the method of finding the set of non-neutral nodes \(D\) and finally calculating the NE, \(((q_{i}^{*})_{i\in D},(x_{i}^{*})_{i\in D})\}\), of the corresponding equilibrium attack tree game. In this section, we show how to retrieve a strategy profile \((\pi,(x_{i})_{i\in[n]})\) that is a NE of an attack and defense game on network, \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\), from the NE \(((q_{i}^{*})_{i\in D},(x_{i}^{*})_{i\in D})\) of the corresponding equilibrium attack tree game. Let \(R(i,j,G,D)\) denote the set of all paths \(p\) in \(G\) from \(i\in D\) to \(j\in D\), that do not contain any other node from \(D\). To find the set \(R(i,j,G,D)\) we remove nodes in \(D\setminus\{i,j\}\) from \(G\). If \(i\) and \(j\) are in two different components then \(R(i,j,G,D)=\emptyset\). In general, set \(R(i,j,G,D)\) can contain (exponentially) many different paths, and therefore can be difficult to find, however, we can obtain any of these paths in time \(O(n^{2})\). We will denote such a path by \(r_{\{i,j\}}(G,D)\). Considering an equilibrium attack path, \(p^{j}\in P(T(G,(b_{i})_{i\in[n]}))\), we create a path \(p^{j,res}\in P(G)\) by replacing the edge between \(m\in p^{j}\) and his predecessor \(k(m)\in p^{j}\) with a path \(r_{\{k(m),m\}}(G,D)\). 
From the reduction procedure, it follows that \(r_{\{k(m),m\}}(G,D)\) exists for every such pair of nodes and it can be \(\{m,k(m)\}\) if and only if \(\{m,k(m)\}\in E(G)\). Let \(\pi^{*}\in\Delta(P(G))\) be \[\pi^{*}(p)=\begin{cases}q_{j}^{*},\,\text{if}\,\,p=p^{j,res},\\ 0,\,\text{otherwise}.\end{cases}\] **Observation 5**: _Strategy profile \((\pi^{*},((0)_{i\in[n]\setminus D},\,(x_{i}^{*})_{i\in D}))\) describes a NE of \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\)._ This follows from the game reduction by the set of neutral nodes, as by reversing the reduction of \(G\) by set \(D^{\prime}=[n]\setminus D\), we can define the mapping \(H_{D^{\prime}}^{G}\) where \((H_{D^{\prime}}^{G})^{-1}(p^{j})=p^{j,res}\) for every node \(j\in D\), as for every \(j\), and every mapping \(H_{D^{\prime}}^{G}\), \(H_{D^{\prime}}^{G}(p^{j,res})=p^{j}\). The procedure of reconstructing the NE of the original game from \(((q_{i}^{*})_{i\in D},(x_{i}^{*})_{i\in D})\) runs in time \(O(|D|\cdot n^{2})\), as it requires finding exactly \(|D|\leq n\) paths \(r_{\{i,j\}}(G,D)\). ## 6 Computational Complexity A NE of the attack and defense game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\) can be found by the following procedure.

```
Input: attack and defense game \((G,(b_{i})_{i\in[n]},(d_{i})_{i\in[n]},(c_{i})_{i\in[n]})\)
Output: a NE of a given game
1: find the set \(L(G)\) of all linkers in \(G\)
2: calculate the proper graph \(G^{\prime}=G\setminus L(G)\)
3: for \(k\in\{1,\ldots,n\}\) do
4:   calculate \(G_{k}^{\prime}=G^{\prime}\setminus\{1,\ldots,k-1\}\)
5:   construct equilibrium attack tree \(T_{k}\) for graph \(G_{k}^{\prime}\) (Algorithm 1)
6:   construct function \(F_{k}(U)\) using the equations (8) and (9)
7:   if inequality (19) holds then
8:     save \(k^{*}=k\)
9:     save \(T_{k^{*}}=T_{k}\)
10:    break
11:  end if
12: end for
13: calculate \(U^{*}\) solving (21)
14: calculate strategies of the attacker and the defenders solving the equations (6), (7), (8) and (9)
15: reconstruct the NE of the general game
16: return NE of the general game
```

**Algorithm 2** Finding the NE of the general model

The pessimistic time complexities of all the used procedures were established when the given procedure was introduced. The dominant operation of Algorithm 2 is the calculation of the reduced graph (line 4), which can require time \(O(n^{3})\) in every iteration of the main loop. Therefore, the pessimistic run time of Algorithm 2 is \(O(n^{4})\). Algorithm 2 can be easily generalized to compute Nash equilibria of the game for arbitrary cost functions (satisfying Assumption 1) other than \(c_{i}(x)=x^{2}/2\). In the case of such cost functions, the pessimistic time cost of Algorithm 2 is polynomial with respect to the number of players as long as Equation (21) can be solved in polynomial time. ## 7 Conclusions In this paper, we proposed a method for finding a NE of attack and defense games on networks [1]. The proposed algorithm runs in polynomial time with respect to the number of nodes in the network. The idea of reducing the network by the set of linker nodes that results in a proper graph, although simple, allows us to make the important observation that every node in a proper graph can be reached by the attacker by at least one ascending path. This idea can be used for the subclass of attack and interception games on networks, where the defenders make their decisions independently and only the target node is influenced by the attack. 
However, if the defenders could coordinate, it could happen that a linker node, although never attacked, is protected to defend more valuable nodes that can be reached only through this linker. #### 7.0.1 Acknowledgements This work was supported by the Polish National Science Centre through grant 2018/29/B/ST6/00174.
2309.15385
Debye Screening of Non-Abelian Plasmas in Curved Spacetimes
Decades of analytic and computational work have demonstrated that a charge immersed in a hot plasma is screened. For both Abelian and non-Abelian interactions, the characteristic screening length $1/m_D$ is set by the so-called Debye mass $m_D \sim g_s T$, proportional to the plasma temperature $T$ and the dimensionless gauge coupling $g_s$. One of the most interesting naturally occurring examples is the quark-gluon plasma (QGP) that filled the early universe prior to the QCD confinement phase transition at $t_{\rm QCD} \sim 10^{-5}\,{\rm s}$. During this early epoch, regimes of strong spacetime curvature are of significant cosmological interest, such as near primordial black holes (PBHs). However, the typical description of Debye screening only applies within Minkowski spacetime, and is therefore insufficient to describe the dynamics of charged plasmas near PBHs or other primordial features. We construct an effective field theory for soft modes of the gauge field $A_\mu^a$ to give a full description of Debye screening in non-Abelian plasmas within arbitrary curved spacetimes, recovering a temperature-dependent Debye mass that exhibits gravitational redshift. We then apply our results to some scenarios of cosmological interest: an expanding FLRW universe and the vicinity of a PBH immersed in a hot QGP.
Elba Alonso-Monsalve, David I. Kaiser
2023-09-27T03:33:17Z
http://arxiv.org/abs/2309.15385v2
# Debye Screening of Non-Abelian Plasmas in Curved Spacetimes ###### Abstract Decades of analytic and computational work have demonstrated that a charge immersed in a hot plasma is screened. For both Abelian and non-Abelian interactions, the characteristic screening length \(1/m_{D}\) is set by the so-called Debye mass \(m_{D}\sim g_{s}T\), proportional to the plasma temperature \(T\) and the dimensionless gauge coupling \(g_{s}\). One of the most interesting naturally occurring examples is the quark-gluon plasma (QGP) that filled the early universe prior to the QCD confinement phase transition at \(t_{\rm QCD}\sim 10^{-5}\,\)s. During this early epoch, regimes of strong spacetime curvature are of significant cosmological interest, such as near primordial black holes (PBHs). However, the typical description of Debye screening only applies within Minkowski spacetime, and is therefore insufficient to describe the dynamics of charged plasmas near PBHs or other primordial features. We construct an effective field theory for soft modes of the gauge field \(A_{\mu}^{a}\) to give a full description of Debye screening in non-Abelian plasmas within arbitrary curved spacetimes, recovering a temperature-dependent Debye mass that exhibits gravitational redshift. We then apply our results to some scenarios of cosmological interest: an expanding FLRW universe and the vicinity of a PBH immersed in a hot QGP. ## I Introduction The screening of charges in hot plasmas, for both Abelian and non-Abelian interactions, has been a topic of significant interest for decades. Upon resumming hard thermal loops, the leading-order effects yield a characteristic screening length \(\lambda_{D}(T)=1/m_{D}(T)\) set by the Debye mass \(m_{D}(T)\sim g_{s}T\), where \(g_{s}\) is the dimensionless gauge coupling strength and \(T\) is the temperature of the plasma. (For reviews, see Refs. [1; 2; 3; 4].) The Debye mass \(m_{D}(T)\) sets the characteristic scale for dynamics of collective excitations within the plasma. For example, collective oscillations of "soft" modes, with momenta \(k_{\rm soft}\sim m_{D}(T)\ll T\), propagate with frequency set by the plasma frequency \(\omega_{p}\simeq m_{D}(T)\)[5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. Moreover, nontrivial spatial distributions of color charge among the soft gluons can form, with typical length-scale \(1/m_{D}(T)\)[18; 19]. These effects have been studied extensively in Minkowski spacetime, using both analytic techniques and lattice simulations [1; 2; 3; 4]. In this paper we consider Debye screening in hot plasmas within curved spacetimes, with particular interest in cosmological applications. For example, post-inflation reheating typically yields a universe filled with Standard Model particles in thermal equilibrium at a temperature as high as \(T\sim\mathcal{O}(10^{14}\,{\rm GeV})\sim\mathcal{O}(10^{27}\,K)\) by the time \(t_{\rm therm}\sim\mathcal{O}(10^{-35}\,{\rm s})\) after the big bang [20; 21]. Such temperatures are exponentially greater than the QCD confinement scale \(\Lambda_{\rm QCD}\simeq 0.170\,{\rm GeV}\). Hence at such early times, the universe was filled with a plasma of unconfined quarks and gluons, subject to the non-Abelian dynamics of QCD [22; 23]. As the universe expanded, the temperature of the plasma fell adiabatically: \(T(t)a(t)\simeq\) constant, where \(a(t)\) is the scale factor of the Friedmann-Lemaitre-Robertson-Walker (FLRW) line-element. During the radiation-dominated phase, \(a(t)=(t/t_{\rm therm})^{1/2}\). 
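As a rough numerical sketch of this adiabatic cooling (our own order-of-magnitude cross-check, assuming pure radiation domination with \(T\propto 1/a\) and ignoring the evolution of \(g_{*}\); the times and temperatures are the approximate values quoted above):

```python
# Order-of-magnitude check of adiabatic cooling, T * a ~ const, during radiation domination.
# Assumes a(t) = (t / t_therm)^(1/2) and T(t) = T_therm / a(t); changes in g_* are ignored.

t_therm = 1e-35   # s, approximate post-inflation thermalization time
T_therm = 1e14    # GeV, approximate temperature at t_therm
t_qcd   = 1e-5    # s, approximate time of the QCD confinement transition

a_qcd = (t_qcd / t_therm) ** 0.5   # scale factor at t_QCD, normalized to a(t_therm) = 1
T_qcd = T_therm / a_qcd            # adiabatic cooling

print(f"a(t_QCD)/a(t_therm) ~ {a_qcd:.0e}")
print(f"T(t_QCD) ~ {T_qcd:.1f} GeV   (compare Lambda_QCD ~ 0.17 GeV)")
```

The result, \(T(t_{\rm QCD})\sim 0.1\) GeV, lands at the QCD confinement scale, consistent with the timeline described next.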
The QCD confinement transition, after which quarks and gluons remained bound in color-neutral hadronic states, occurred at \(t_{\rm QCD}\simeq 10^{-5}\,\)s, when \(T(t_{\rm QCD})\simeq\Lambda_{\rm QCD}\)[22; 23]. Within the range of times \(t_{\rm therm}\ll t\ll t_{\rm QCD}\), additional phenomena of cosmological interest could have occurred. For example, if a significant population of primordial black holes formed at early times, with masses in the range \(10^{17}\,{\rm g}\leq M(t_{c})\leq 10^{22}\,{\rm g}\), they could account for the entire abundance of dark matter today [24; 25; 26; 27]. Given the dependence of the black holes' masses on the time of collapse \(t_{c}\), the black-hole mass range of interest to account for the present dark-matter abundance corresponds to formation times \(10^{-21}\,{\rm s}\leq t_{c}\leq 10^{-16}\,{\rm s}\). At these early times, the temperature of the plasma within which the primordial black holes formed would have been \(10^{5}\,{\rm GeV}\leq T(t_{c})\leq 10^{7}\,{\rm GeV}\), for which \(T(t_{c})\gg T(t_{\rm QCD})\). With such cosmological applications in mind, we study Debye screening in hot non-Abelian plasmas within arbitrary curved spacetimes, including spacetimes that need not be homogeneous and isotropic (such as in the vicinity of a primordial black hole). In such general spacetimes, the effective temperature of the plasma can develop spatial gradients, \(T\to T(x)\), akin to the familiar Tolman temperature [28; 29; 30]. We identify corrections to the induced current \(j_{\mu}^{a}(x)\) and to the Debye mass \(m_{D}(T(x))\) from spacetime curvature. Much as in Minkowski spacetime, the components \(A_{0}^{a}(x)\) of the gauge field acquire an effective mass in the plasma proportional to \(m_{D}(T(x))\), whereas components \(A_{i}^{a}(x)\) remain massless. (See, e.g., Refs. [31; 32; 33; 34; 35; 36; 37; 38; 39] on the behavior of Abelian plasmas within curved spacetimes.) In Section II, we introduce an effective theory for soft modes within the plasma, generalizing the elegant formalism reviewed in Refs. [1; 2] for application to curved spacetimes. In Section III we evaluate the induced current \(j^{a}_{\mu}(x)\) for the soft modes that arises from interactions with high-energy quarks and gluons in the plasma. Section IV considers corrections to the usual expression for the Debye mass \(m_{D}(T)\) arising from spacetime curvature. In Section V we apply these expressions for \(j^{a}_{\mu}(x)\) and \(m_{D}(T(x))\) to some scenarios of cosmological interest. We restrict attention to \((3+1)\) spacetime dimensions and adopt the metric signature \((-,+,+,+)\). Greek letters \(\mu,\nu=0,1,2,3\) label spacetime indices, while Latin letters \(i,j=1,2,3\) label spatial indices. The color-charge indices associated with the adjoint representation of the gauge group \(\mathrm{SU}(N_{c})\) range over \(a,b,c=1,2,...,N_{c}^{2}-1\) and are raised and lowered with \(\delta_{ab}\). The generators \(T^{a}\) of the Lie algebra for \(\mathrm{SU}(N_{c})\) satisfy \([T^{a},T^{b}]=if^{abc}T^{c}\), where \(f^{abc}\) are the totally antisymmetric structure constants, and the generators are normalized such that \(2\,\mathrm{Tr}(T^{a}T^{b})=\delta^{ab}\). We adopt natural units in which \(c=\hbar=k_{B}=1\), in terms of which the reduced Planck mass is given by \(M_{\mathrm{pl}}\equiv 1/\sqrt{8\pi G}=2.43\times 10^{18}\) GeV. 
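A minimal numerical sketch of these Lie-algebra conventions (our own illustration for the SU(2) case, taking \(T^{a}=\sigma^{a}/2\) with \(\sigma^{a}\) the Pauli matrices and \(f^{abc}=\epsilon^{abc}\); nothing here is specific to the analysis that follows):

```python
import numpy as np

# SU(2) generators T^a = sigma^a / 2, with structure constants f^{abc} = epsilon^{abc}.
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
T = sigma / 2.0

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

# Normalization convention: 2 Tr(T^a T^b) = delta^{ab}
norm = np.array([[2 * np.trace(T[a] @ T[b]).real for b in range(3)] for a in range(3)])
assert np.allclose(norm, np.eye(3))

# Commutation relations: [T^a, T^b] = i f^{abc} T^c
for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * sum(eps[a, b, c] * T[c] for c in range(3))
        assert np.allclose(comm, rhs)

print("SU(2) check passed: 2 Tr(T^a T^b) = delta^{ab} and [T^a, T^b] = i eps^{abc} T^c")
```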
## II Effective theory for soft modes

At early times \(t\ll t_{\mathrm{QCD}}\), when the temperature of the plasma filling the universe satisfied \(T\gg\Lambda_{\mathrm{QCD}}\), the number density of gluon degrees of freedom likely dominated that of quarks: gluons can radiate gluons at tree level, and bosonic statistics allow gluon number densities per mode to scale as \(n_{k}\sim 1/\alpha_{s}\), where \(\alpha_{s}\equiv g_{s}^{2}/(4\pi)\) and \(g_{s}\) is the QCD coupling constant [40]. Given the running of \(\alpha_{s}\) to lower values at higher energies, the number density of gluons could have greatly exceeded the number density of quarks within the plasma at early times [1; 4]. Similar behavior has been observed in the plasmas that form immediately after relativistic heavy-ion collisions [40; 41]. Hence we expect gluons to dominate the dynamics, and consider the effective action for a Yang-Mills model with gauge group \(\mathrm{SU}(N_{c})\) and field strength tensor \[F^{a}_{\mu\nu}=\nabla_{\mu}A^{a}_{\nu}-\nabla_{\nu}A^{a}_{\mu}+g_{s}f^{abc}A^{b}_{\mu}A^{c}_{\nu}, \tag{1}\] where \(\nabla_{\mu}\) is the (spacetime) covariant derivative associated with metric \(g_{\mu\nu}(x)\). Following the approach reviewed in Refs. [1; 2], we construct an effective theory for long-wavelength excitations in the plasma within the high-temperature limit. In particular, we consider the effective action for soft modes \(\tilde{A}^{a}_{\mu}\) with typical momenta \(k_{\mathrm{soft}}\sim g_{s}T\). Thermal and quantum corrections to the behavior of the soft modes are dominated by quanta \(a^{a}_{\mu}\) with momenta \(k_{\mathrm{hard}}\sim T\). We therefore write \(A^{a}_{\mu}=\tilde{A}^{a}_{\mu}+a^{a}_{\mu}\), integrate out the high-momentum modes \(a^{a}_{\mu}\), and drop the tilde on the soft modes \(\tilde{A}^{a}_{\mu}\to A^{a}_{\mu}\). As described in Refs. [1; 2], the soft modes \(A^{a}_{\mu}\) have large occupation numbers per mode and hence behave as effectively classical fields. The behavior of the soft modes therefore yields the mean-field dynamics for the system on length-scales \(\lambda\geq 1/k_{\mathrm{soft}}\sim 1/(g_{s}T)\). We follow the background-field method of Ref. [1], whereby we choose to decompose the action of the gauge symmetry on \(a^{a}_{\mu}\) and \(A^{a}_{\mu}\) in such a way that it leaves the soft modes \(A^{a}_{\mu}\) unchanged. One example is to use a generalized Coulomb-type gauge-fixing term in the full theory, \(\nabla_{i}a^{i}_{a}-g_{s}f^{abc}A^{b}_{i}a^{i}_{c}\) [1]. No additional gauge-fixing terms or ghosts are then required in the effective action for the soft modes. For long length-scales \(\lambda\geq 1/k_{\mathrm{soft}}\gg 1/k_{\mathrm{hard}}\sim 1/T\), the dynamics may be described by the effective action \[S_{\mathrm{eff}}=\int d^{4}x\sqrt{-g}\left[\frac{M_{\mathrm{pl}}^{2}}{2}R-\frac{1}{4}F^{a}_{\mu\nu}F^{\mu\nu}_{a}-j^{a}_{\mu}A^{\mu}_{a}+\mathcal{L}_{\mathrm{fluid}}\right], \tag{2}\] where \(j^{a}_{\mu}(x)\) is the induced current generated by (non-Abelian) self-interactions with the high-momentum modes \(a^{a}_{\mu}(x)\); the induced current \(j^{a}_{\mu}\) also includes contributions from high-momentum quarks. The term \(\mathcal{L}_{\mathrm{fluid}}\) represents the contributions to the evolution of the spacetime curvature from constituents other than the soft modes.
For example, the high-momentum modes in the plasma (coarse-grained over a length-scale \(\lambda\gg 1/k_{\mathrm{hard}}\)) behave as a neutral fluid with a radiation-dominated equation of state, and thereby contribute to the time evolution of the scale factor \(a(t)\) in an FLRW background, while masses \(M\) of compact objects, such as black holes, influence the spacetime curvature in their vicinity. Each of these contributions affects \(g_{\mu\nu}(x)\) and hence impacts the dynamics of the soft modes \(A^{a}_{\mu}(x)\). Varying \(S_{\mathrm{eff}}\) with respect to \(A^{a}_{\mu}\) yields the equations of motion \[D^{\mu}F^{a}_{\mu\nu}=j^{a}_{\nu}, \tag{3}\] where the covariant derivative \(D_{\mu}\) acting on a spacetime tensor that transforms in the adjoint representation of \(\mathrm{SU}(N_{c})\), \(X^{a}_{\nu_{1}\cdots\nu_{n}}\), is defined as \[D_{\mu}X^{a}_{\nu_{1}\cdots\nu_{n}} \equiv\nabla_{\mu}X^{a}_{\nu_{1}\cdots\nu_{n}}+g_{s}f^{abc}A^{b}_ {\mu}X^{c}_{\nu_{1}\cdots\nu_{n}}. \tag{4}\] The left-hand side of Eq. (3) can be expanded as \[\begin{split} D^{\mu}F^{a}_{\mu\nu}&=\nabla^{\mu}F^{ a}_{\mu\nu}+g_{s}f^{abc}A^{\mu}_{b}F^{c}_{\mu\nu}\\ &=D^{\mu}D_{\mu}A^{a}_{\nu}-g_{s}f^{abc}A^{\mu}_{b}\nabla_{\nu}A^{ c}_{\mu}-R_{\mu\nu}A^{\mu}_{a},\end{split} \tag{5}\] where \(R_{\mu\nu}\) is the Ricci tensor for the background spacetime. The equations of motion imply that the induced current \(j^{a}_{\mu}\) must be covariantly conserved with respect to both the spacetime curvature and the \(\mathrm{SU}(N_{c})\) gauge group: \[D^{\mu}j^{a}_{\mu}(x)=0, \tag{6}\] for each color \(a\). The contribution to the energy-momentum tensor from the Yang-Mills field is given, as usual, by \[T^{(A)}_{\mu\nu}=F^{a}_{\mu\lambda}F^{a}_{\nu}{}^{\lambda}-\frac{1}{4}g_{\mu\nu }F^{\lambda\sigma}_{a}F^{a}_{\lambda\sigma}, \tag{7}\] while the additional fluid contributes \[T_{\mu\nu}^{\rm fluid}=-\frac{2}{\sqrt{-g}}\frac{\delta\mathcal{L}_{\rm fluid}}{ \delta g^{\mu\nu}}. \tag{8}\] These terms satisfy the usual covariant conservation equations \[\nabla_{\mu}T_{(A)}^{\mu\nu} =j_{\mu}^{a}F_{a}^{\mu\nu}, \tag{9}\] \[\nabla_{\mu}T_{\rm fluid}^{\mu\nu} =0\] in the presence of the induced current \(j_{\mu}^{a}\). ## III Induced current The induced current \(j_{\mu}^{a}(x)\) for the soft modes \(A_{\mu}^{a}(x)\) arises from interactions with high-momentum quarks and gluons in the plasma. Lattice simulations have confirmed the analytic expectation that for temperatures \(T\gg T_{\rm QCD}\), high-energy quarks and gluons attain an equilibrium equation of state akin to that of a gas of non-interacting, massless particles. In particular, the so-called "trace anomaly," \((\rho-3P)/T^{4}\), where \(\rho\) is the energy density and \(P\) the pressure of particles in the plasma, tends rapidly toward zero for \(T\gg T_{\rm QCD}\), and therefore \(\rho\approx 3P\). (See, e.g., Refs. [42; 43; 44; 45].) At high temperatures, the soft quantum fields of the plasma are well approximated by classical fields, due to their large occupation numbers per mode. Furthermore, the hard (high-momentum) modes can be approximated by an ensemble of classical point particles to leading order in the coupling \(g_{s}\), since they are weakly interacting. 
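As a sketch of the free-gas limit behind that approximation (our own illustration by numerical quadrature, not the interacting lattice-QCD trace anomaly): for a single bosonic degree of freedom of mass \(m\), the combination \((\rho-3P)/T^{4}\) falls toward zero as \(T/m\) grows, so the hard modes behave as an effectively massless gas with \(\rho\approx 3P\).

```python
import numpy as np
from scipy.integrate import quad

def rho_and_P(m, T):
    """Energy density and pressure of one bosonic degree of freedom of mass m at temperature T."""
    E = lambda k: np.sqrt(k**2 + m**2)
    n = lambda k: 1.0 / (np.exp(E(k) / T) - 1.0)          # Bose-Einstein occupation
    rho = quad(lambda k: k**2 * E(k) * n(k), 0.0, 50.0 * T)[0] / (2.0 * np.pi**2)
    P   = quad(lambda k: k**4 / E(k) * n(k), 0.0, 50.0 * T)[0] / (6.0 * np.pi**2)
    return rho, P

m = 1.0  # mass in arbitrary units
for T in [0.5, 2.0, 10.0, 50.0]:
    rho, P = rho_and_P(m, T)
    print(f"T/m = {T/m:5.1f}:   (rho - 3P)/T^4 = {(rho - 3*P)/T**4:.4f}")
# The trace combination tends to zero at high temperature: the free massless limit rho = 3P.
```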
Specifically, the effects of the high-momentum particles on the soft modes within the plasma are dominated by processes involving "hard thermal loops": one-loop diagrams with arbitrary numbers of low-momentum external legs (with \(k_{\rm soft}\sim g_{s}T\)) and hard internal momenta (with \(k_{\rm hard}\sim T\)) [46; 47; 48; 10]. These effects can be analyzed within a mean-field approximation of the (truncated) Schwinger-Dyson equations [1; 15; 16; 17; 12; 13], or by simply adopting classical transport equations for high-momentum particles [2; 17; 49; 50; 51]. In this section we adapt the latter approach to arbitrary curved spacetimes. (See also Ref. [52].) The equations of motion for a classical point particle with color charge in a soft gauge-field background are well known [52; 53; 54; 50]: \[\frac{\mathrm{d}x^{\mu}}{\mathrm{d}\lambda} =P^{\mu}, \tag{10}\] \[\frac{\mathrm{d}P_{\mu}}{\mathrm{d}\lambda} =-\frac{1}{2}\frac{\partial g^{\alpha\beta}}{\partial x^{\mu}}P_ {\alpha}P_{\beta}-g_{s}Q^{a}F_{\mu\nu}^{a}P^{\nu},\] \[\frac{\mathrm{d}Q^{a}}{\mathrm{d}\lambda} =-g_{s}f^{abc}A_{\mu}^{b}Q^{c}P^{\mu}.\] Here \(x^{\mu}\) is the position of the particle, \(P^{\mu}=\mathrm{d}x^{\mu}/\mathrm{d}\lambda\) its kinetic 4-momentum, \(\lambda\) an affine parameter, and \(Q^{a}\) its SU(\(N_{c}\)) charge. We take advantage of the freedom to linearly rescale the affine parameter \(\lambda\) so that \(\mathrm{d}x^{\mu}/\mathrm{d}\lambda\) has units of energy. (We do not explicitly rescale by the particle mass \(m\) so that our formalism can be applied to massless particles.) Notice that, due to non-Abelian self-interactions, the charges \(Q^{a}\) are dynamical, unlike in electromagnetism. The induced current \(j_{a}^{\mu}(x)\), summed over all hard particles, is given by [1; 2] \[j_{a}^{\mu}(x)=g_{s}\int dQ\,d\omega\,\frac{\mathrm{d}x^{\mu}}{\mathrm{d} \lambda}Q_{a}\delta f(x,p,Q), \tag{11}\] where \(\delta f(x,p,Q)\) represents the deviation from equilibrium of the distribution function for the high-momentum charge-carrying particles. The integration measure includes the momentum volume form \(d\omega\) and the measure for the space of color charges \(dQ\), subject to the physical constraints of on-shell mass condition, positivity of energy, and conservation of the \(N_{c}-1\) group Casimirs. The momentum measure is given below, in Eq. (36). The phase-space structure for the color-charge degrees of freedom is unaffected by the spacetime structure, so we may use the now-standard parameterization developed for previous studies within Minkowski spacetime, as reviewed, e.g., in Ref. [2]; an explicit construction for the cases SU(2) and SU(3) is available in Section III of Ref. [50]. In particular, for the gauge group SU(2) the color-space measure is \[dQ=d^{3}Q\,c_{R}\,\delta(Q^{a}Q^{a}-q_{2}), \tag{12}\] and for the gauge group SU(3) we have \[dQ=d^{8}Q\,c_{R}\,\delta(Q^{a}Q^{a}-q_{2})\delta(d_{abc}Q^{a}Q^{b}Q^{c}-q_{3}). \tag{13}\] For the SU(3) case, the \(d^{abc}\) are the totally symmetric constants given by \(d^{abc}=2\,\mathrm{Tr}(\{T^{a},T^{b}\},T^{c})\). The representation-dependent constant \(c_{R}\) is fixed by the normalization \(\int dQ=1\), while the constants \(q_{2},q_{3}\) are further fixed by the first and second Casimirs, respectively. Thus, the integration over color charges serves to enforce the conservation of the Casimir invariants [2; 50; 49]. 
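A minimal flat-spacetime sketch of the last of Wong's equations in Eq. (10) (our own illustration with an assumed constant background field and fixed kinetic momentum, not a calculation from the text): for SU(2), \(f^{abc}=\epsilon^{abc}\), so the color charge simply precesses about the color-space vector \(A^{b}_{\mu}P^{\mu}\), and the quadratic Casimir \(Q^{a}Q^{a}\) is conserved, which is precisely the constraint that the \(\delta(Q^{a}Q^{a}-q_{2})\) factor in the measure enforces.

```python
import numpy as np

# Color precession dQ^a/dlambda = -g_s eps^{abc} (A^b_mu P^mu) Q^c for SU(2),
# with an assumed constant background A^a_mu and a fixed kinetic momentum P^mu.
g_s = 1.0
A = np.array([[0.3, 0.0, 0.0, 0.0],    # A^a_mu: rows a = 1..3, columns mu = 0..3
              [0.0, 0.1, 0.0, 0.0],
              [0.2, 0.0, 0.0, 0.0]])
P = np.array([1.0, 0.4, 0.0, 0.0])     # P^mu, held fixed for this illustration
AP = A @ P                              # color-space vector (A.P)^a = A^a_mu P^mu

def dQ(Q):
    # eps^{abc} (A.P)^b Q^c is the cross product (AP x Q)^a in color space
    return -g_s * np.cross(AP, Q)

Q = np.array([1.0, 0.0, 0.0])
casimir_initial = Q @ Q
dl = 1e-3
for _ in range(20000):                  # fourth-order Runge-Kutta steps in the affine parameter
    k1 = dQ(Q); k2 = dQ(Q + 0.5*dl*k1); k3 = dQ(Q + 0.5*dl*k2); k4 = dQ(Q + dl*k3)
    Q = Q + (dl/6.0) * (k1 + 2*k2 + 2*k3 + k4)

print("drift in Q^a Q^a:", abs(Q @ Q - casimir_initial))   # remains ~0: Casimir conserved
```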
We consider small departures from equilibrium, and therefore work perturbatively in powers of the coupling \(g_{s}\), so we may write the full distribution function as \[f(x,p,Q)=f^{(0)}(x,p)+g_{s}f^{(1)}(x,p,Q)+\mathcal{O}(g_{s}^{2}). \tag{14}\] The relevant dynamics for the induced current \(j_{a}^{\mu}(x)\) in Eq. (11) are captured to leading order by identifying \(\delta f(x,p,Q)=g_{s}f^{(1)}(x,p,Q)\), as demonstrated by Refs. [50; 49]. The dynamical evolution of \(\delta f(x,p,Q)\) is governed by the collisionless Boltzmann equation for the full distribution function \(f(x,p,Q)\), which in the framework of Hamiltonian mechanics corresponds to the conservation of \(f(x,p,Q)\) along dynamical trajectories (Hamiltonian flow) in phase space. To solve the Boltzmann equation, it will be convenient to have an explicit Hamiltonian formulation of the dynamics of the high-momentum particles. To this end, we first construct an action which yields the correct equations of motion, Eq. (10). This is nontrivial because there is no such action involving the charges \(Q^{a}\) directly as dynamical variables. To bypass this problem, following Refs. [54; 55; 56], we introduce new dynamical variables \(q^{a}\) which, like \(Q^{a}\), transform under the adjoint representation of \(\mathrm{SU}(N_{c})\), but which are anticommuting (Grassmann-valued). These new variables \(q^{a}\) are useful as an intermediate step, in terms of which we may construct the charges \(Q^{a}\) as \[Q^{a}=-\frac{i}{2}f^{abc}q^{b}q^{c}. \tag{15}\] An action for a single high-momentum particle that yields the appropriate equations of motion may then be written in terms of the dynamical variables \(x^{\mu}\) and \(q^{a}\) as \[\begin{split} S_{\mathrm{1p}}=\int d\lambda\bigg{\{}& \frac{1}{2}g_{\mu\nu}\frac{\mathrm{d}x^{\mu}}{\mathrm{d}\lambda} \frac{\mathrm{d}x^{\nu}}{\mathrm{d}\lambda}+\frac{i}{2}q^{a}\frac{\mathrm{d}q^ {a}}{\mathrm{d}\lambda}\\ &-\frac{i}{2}g_{s}f^{abc}A^{a}_{\mu}q^{b}q^{c}\frac{\mathrm{d}x^ {\mu}}{\mathrm{d}\lambda}\bigg{\}}.\end{split} \tag{16}\] The corresponding Hamiltonian is simply \[H=\frac{1}{2}g^{\mu\nu}P_{\mu}P_{\nu}. \tag{17}\] The kinetic momentum \(P^{\mu}\equiv\mathrm{d}x^{\mu}/\mathrm{d}\lambda\) is related to the canonical momentum \(p_{\mu}\), conjugate to \(x^{\mu}\), by \[P_{\mu}=p_{\mu}-g_{s}Q^{a}A^{a}_{\mu}. \tag{18}\] Then Hamilton's equations are Eq. (10), as desired. Dynamical evolution in phase space is generated by the Liouville vector field \(X_{H}\): \[\begin{split} X_{H}\equiv\frac{\mathrm{d}}{\mathrm{d}\lambda}& =\frac{\mathrm{d}x^{\mu}}{\mathrm{d}\lambda}\frac{\partial}{\partial x ^{\mu}}+\frac{\mathrm{d}p_{\mu}}{\mathrm{d}\lambda}\frac{\partial}{\partial p _{\mu}}+\frac{\mathrm{d}Q^{a}}{\mathrm{d}\lambda}\frac{\partial}{\partial Q^{a} }\\ &=P^{\mu}\frac{\partial}{\partial x^{\mu}}\bigg{|}_{P}-\frac{1}{2 }\frac{\partial g^{\alpha\beta}}{\partial x^{\mu}}\bigg{|}_{P}P_{\alpha}P_{ \beta}\frac{\partial}{\partial P_{\mu}}\\ &\qquad-g_{s}Q^{a}P^{\mu}F^{a}_{\mu\nu}\frac{\partial}{\partial P _{\nu}}\\ &\qquad-g_{s}f^{abc}A^{b}_{\mu}Q^{c}P^{\mu}\frac{\partial}{ \partial Q^{a}}\bigg{|}_{P},\end{split} \tag{19}\] where, in the second line, we have written the vector field in noncanonical coordinates \(\{x^{\mu},P_{\mu},Q^{a}\}\), and we have noted explicitly which derivatives are taken keeping \(P_{\mu}\) fixed, as opposed to \(p_{\mu}\). 
The collisionless Boltzmann equation for the distribution function \(f(x,p,Q)\) can be written in terms of the Liouville vector field as \(X_{H}[f]=0\), and is equivalently known as the Liouville equation. Given the expansion in Eq. (14), we require that the Liouville equation be satisfied order by order in \(g_{s}\): \[P^{\mu}\frac{\partial f^{(0)}}{\partial x^{\mu}}\bigg{|}_{P}-\frac{1}{2} \frac{\partial g^{\alpha\beta}}{\partial x^{\mu}}\bigg{|}_{P}P_{\alpha}P_{ \beta}\frac{\partial f^{(0)}}{\partial P_{\mu}}=0 \tag{20}\] and \[\begin{split} P^{\mu}\frac{\partial f^{(1)}}{\partial x^{\mu}} \bigg{|}_{P}-\frac{1}{2}\frac{\partial g^{\alpha\beta}}{\partial x^{\mu}} \bigg{|}_{P}P_{\alpha}P_{\beta}\frac{\partial f^{(1)}}{\partial P_{\mu}}=Q^ {a}P^{\mu}F^{a}_{\mu\nu}\frac{\partial f^{(0)}}{\partial P_{\nu}}.\end{split} \tag{21}\] We assume that all dependence of \(f(x,p,Q)\) on the charges \(Q^{a}\) can only appear at \(\mathcal{O}(g_{s})\) or above, and therefore \(\partial f^{(0)}/\partial Q^{a}=0\). Notice that, as emphasized in Ref. [51], the non-Abelian terms in \(X_{H}\), proportional to \(g_{s}f^{abc}\), make no contribution to the evolution of \(f^{(1)}\), given our perturbative expansion in \(g_{s}\). We are free to choose any distribution function for our fluid, as long as it solves the Boltzmann equation and its physical meaning is compatible with small deviations from thermal equilibrium. For this reason, we aim to find a solution of the collisionless Boltzmann equation \(X_{H}[f]=0\) such that the zeroth order in \(g_{s}\) is of the usual form for a canonical ensemble, \(f^{(0)}\sim\exp[-\beta E]\), for a constant \(\beta\) and a quantity \(E\) that may be interpreted as an energy. The equation \(X_{H}[f]=0\) is typically solved to \(\mathcal{O}(g_{s})\) by employing Green's function techniques [1; 2; 51]. We will instead show that one can further exploit the Hamiltonian formalism to solve efficiently for the distribution function to the same order. The two approaches are equivalent up to \(\mathcal{O}(g_{s})\), because quantum corrections to the classical equations of motion in Eq. (10) only arise at \(\mathcal{O}(g_{s}^{2})\) and above. It is a classic result that Hamiltonian mechanics in the fully covariant phase space \(\{x^{\mu},p_{\mu},Q^{a}\}\) with evolution parameterized by an affine \(\lambda\) and generated by the Hamiltonian in Eq. (17) is equivalent to Hamiltonian mechanics in the reduced phase space \(\{x^{i},p_{i},Q^{a}\}\) with evolution parameterized by coordinate time \(t\equiv x^{0}\) and generated by the reduced Hamiltonian \[\bar{H}\equiv-p_{0}(t,x^{i},p_{i},Q^{a};h). \tag{22}\] This means that both formalisms yield the same equations of motion. This is possible thanks to the redundancy in the choice of affine parameter \(\lambda\), as well as the conservation of \(H(x^{\mu},p_{\mu},Q)\) along phase space trajectories, in the covariant formalism. In Eq. (22) we have solved for \(p_{0}\) in terms of \(\{t,x^{i},p_{i},Q^{a},h\}\) using \(H(x^{\mu},p_{\mu},Q)\equiv h\), with \(h\) a constant. In the covariant formalism, the conservation of \(H(x^{\mu},p_{\mu},Q)\) follows from Eq. (10) and the fact that \(H\) cannot depend explicitly on \(\lambda\). The reduced phase-space formalism does not yield the conservation of \(H(x^{\mu},p_{\mu},Q^{a})\), so we must impose it by hand. For details of the proof, see Sections 44 and 45 in Chapter 9 of Ref. [57]. If the reduced Hamiltonian \(\bar{H}\) from Eq. 
(22) does not depend explicitly on \(t\), then it is also conserved along phase-space trajectories, \(X_{H}[\bar{H}]=0\), so we may exploit \(\bar{H}\) to construct a distribution function. That is, we may use the fact that any scalar function of \(\bar{H}\) will solve the collisionless Boltzmann equation to devise a valid distribution function for our fluid. We therefore introduce a quasi-stationary approximation: we consider only scenarios in which \(\partial_{t}g_{\mu\nu}\) and \(\partial_{t}A^{a}_{\mu}\) remain subdominant. This approximation is appropriate, since we are interested in the behavior of the high-momentum particles, whose dynamics are governed by the time-scale \(1/k_{\rm hard}\sim 1/T\), whereas we expect the soft modes \(A^{a}_{\mu}(x)\) to evolve on time-scales set by \(1/k_{\rm soft}\gg 1/k_{\rm hard}\). Likewise, our effective description can only resolve dynamics up to scales set by \(1/k_{\rm soft}\), which bounds how sharply the background spacetime can evolve within our self-consistent expansion as well. In particular, if we allowed \(\bar{H}\) to have an explicit dependence on \(t\), then \[X_{H}[\bar{H}]=\frac{1}{2}\partial_{t}g^{\mu\nu}P_{\mu}P_{\nu}-g_{s}Q^{a}P^{\mu }\partial_{t}A^{a}_{\mu}, \tag{23}\] and thus a distribution function constructed from \(\bar{H}\) would yield self-consistent dynamics as long as \[|\partial_{t}g^{\mu\nu}|,\frac{|\partial_{t}A^{a}_{\mu}|}{|A^{a}_{\mu}|}\ll k_ {\rm soft}. \tag{24}\] Eq. (24) involves coordinate-dependent quantities. However, any change in \(x^{\mu}\) would be accompanied by changes in the conjugate momenta \(p_{\mu}\), such that Eq. (24) remains meaningful, given that \(k_{\rm soft}\) is a momentum scale. This follows from the invariance of the phase-space volume under coordinate transformations. As we will see in Section V, this quasi-stationary approximation is easily satisfied in many cosmological applications of interest. We choose a distribution function \(f=\exp\bigl{[}-\beta_{T}\bar{H}\bigr{]}\), where \(\beta_{T}\) is a constant [58]. As explained just above, this satisfies the collisionless Boltzmann equation by construction. We will see in the next section that the zeroth-order term \(f^{(0)}\) in the \(g_{s}\) expansion corresponds to a canonical ensemble (for fluid undergoing normal flow with respect to coordinate time \(t\)), so it is a physically reasonable choice because it represents thermal equilibrium. This term is \[f^{(0)}=e^{\beta_{T}P_{0}}. \tag{25}\] Given \(f^{(0)}\), we may evaluate the equilibrium occupation numbers per mode in the usual way, \[n^{(0)}_{\rm B,F}\equiv\frac{\sum n_{i}e^{n_{i}\beta_{T}P_{0}}}{\sum e^{n_{i} \beta_{T}P_{0}}}, \tag{26}\] where the sums run from \(n_{i}=0\) to \(n_{i}=\infty\) for bosons and to \(n_{i}=1\) for fermions. Thus we obtain the Bose-Einstein and Fermi-Dirac distributions at zeroth-order: \[n^{(0)}_{\rm B,F}=\frac{1}{e^{-\beta_{T}P_{0}}\mp 1}. \tag{27}\] Substituting into Eq. (21) yields \[\begin{split} n^{(1)}_{\rm B,F}&=Q^{a}A^{a}_{0} \frac{\partial n^{(0)}_{\rm B,F}}{\partial P_{0}}\\ &=-\beta_{T}\frac{e^{-\beta_{T}P_{0}}}{(e^{-\beta_{T}P_{0}}\mp 1 )^{2}}Q^{a}A^{a}_{0}\end{split} \tag{28}\] for bosons and fermions, respectively. Summing over species and helicities, we have \(\delta f=g_{S}(2n^{(1)}_{\rm B}+4N_{f}n^{(1)}_{\rm F})\) in Eq. (11). To evaluate \(j^{\mu}_{a}(x)\) we first perform the \(Q\) integral with the measure in Eq. 
(13), which yields factors proportional to the index of the representation: \(\int dQ\,Q_{a}Q_{b}=C_{\rm B,F}\,\delta^{a}_{b}\), with \(C_{\rm B}=N_{c}\) for bosons and \(C_{\rm F}=1/2\) for fermions. Then the current becomes \[\begin{split} j^{\mu}_{a}(x)&=g_{s}^{2}\int dQ\,d\omega\left(2n^{(1)}_{\rm B}+4N_{f}n^{(1)}_{\rm F}\right)Q^{a}P^{\mu}\\ &=g_{s}^{2}A^{a}_{0}(x)\int d\omega\,\mathcal{N}(x,P)\,P^{\mu},\end{split} \tag{29}\] where we have defined \[\begin{split}\mathcal{N}(x,P)&\equiv 2N_{c}\frac{\partial n^{(0)}_{\rm B}}{\partial P_{0}}+2N_{f}\frac{\partial n^{(0)}_{\rm F}}{\partial P_{0}}\\ &=-2\beta_{T}\left[N_{c}\frac{e^{-\beta_{T}P_{0}}}{(e^{-\beta_{T}P_{0}}-1)^{2}}+N_{f}\frac{e^{-\beta_{T}P_{0}}}{(e^{-\beta_{T}P_{0}}+1)^{2}}\right].\end{split} \tag{30}\] Eq. (29) includes contributions from two polarization states for each effectively massless particle, and \(2N_{f}\) fermion species (\(N_{f}\) each for quarks and antiquarks). Note that, as in Minkowski spacetime [1], \(j^{\mu}_{a}(x)\) is proportional to \(A^{a}_{0}(x)\).

## IV Debye mass

In Minkowski spacetime, the induced current reduces (in the static limit) to \(j^{a}_{\mu}(x)=m^{2}_{D}A^{a}_{0}\delta^{0}_{\mu}\), effectively giving a constant mass to the soft gluon components \(A^{a}_{0}\), which is responsible for Debye screening [1; 2; 3; 4]. In this section, we evaluate the induced current \(j^{\mu}_{a}(x)\) of Eq. (29) and find that in spacetimes of interest, it is proportional to the square of an effective mass, \(m^{2}_{D}(x)\), which has spatial dependence. For any \((3+1)\)-dimensional, globally hyperbolic spacetime, we can choose coordinates \(x^{\mu}\) to put the metric in the Arnowitt-Deser-Misner (ADM) form (see, e.g., Ref. [59]) \[\begin{split} ds^{2}&=-N^{2}dt^{2}+\gamma_{ij}\left(dx^{i}+\beta^{i}dt\right)\left(dx^{j}+\beta^{j}dt\right)\\ &=-\left(N^{2}-\beta_{i}\beta^{i}\right)dt^{2}+2\beta_{i}\,dtdx^{i}+\gamma_{ij}dx^{i}dx^{j},\end{split} \tag{31}\] where \(x^{0}=t\) is a global time function, \(N(x)\) is the lapse function, and \(\beta^{i}(x)\) the shift vector, whose indices are raised and lowered with \(\gamma_{ij}\): \(\beta_{i}=\gamma_{ij}\,\beta^{j}\). Hypersurfaces of constant time \(t\) are Cauchy surfaces by construction, with induced metric \(\gamma_{\mu\nu}=g_{\mu\nu}+n_{\mu}n_{\nu}\), where \[n_{\mu}=-N\,(dt)_{\mu}=(-N,\mathbf{0})\,. \tag{32}\] We normalize \(t\) such that \(N^{2}\to 1\) on the spatial (possibly asymptotic) boundary. The components of the inverse metric are given by \[g^{00}=-\frac{1}{N^{2}},\,g^{0i}=\frac{\beta^{i}}{N^{2}},\,g^{ij}=\gamma^{ij}-\frac{\beta^{i}\beta^{j}}{N^{2}}. \tag{33}\] In these coordinates, the kinetic momentum takes the form \[P_{\mu}=\left(-Nk+\beta^{i}P_{i},P_{i}\right),\ P^{\mu}=\left(\frac{k}{N},\gamma^{ij}P_{j}-\frac{k}{N}\beta^{i}\right), \tag{34}\] where \[k\equiv\sqrt{\gamma^{\mu\nu}\,P_{\mu}P_{\nu}}=\sqrt{\gamma^{ij}\,P_{i}P_{j}} \tag{35}\] is the magnitude of the momentum projected on constant-\(t\) hypersurfaces. The spacetime metric \(g_{\mu\nu}\) induces a metric in all of phase space, known as the Sasaki metric [60; 61; 62], such that the invariant momentum volume form is \[d\omega=\frac{d^{4}p}{(2\pi)^{3}\sqrt{-g}}. \tag{36}\] We restrict to the mass-shell by integrating over \(2\delta(P^{2})\Theta(P^{0})\).
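Before carrying out that reduction, a short numerical aside (our own cross-check, not part of the derivation): once the color and angular integrations are performed, Eq. (29) involves only two standard one-dimensional thermal integrals, and their values fix the numerical coefficient of the Debye mass obtained below.

```python
import numpy as np
from scipy.integrate import quad

# Dimensionless thermal integrals that appear when Eq. (29) is reduced to one dimension.
I_B = quad(lambda x: x**2 * np.exp(x) / (np.exp(x) - 1.0)**2, 1e-8, 60.0)[0]   # bosons
I_F = quad(lambda x: x**2 * np.exp(x) / (np.exp(x) + 1.0)**2, 0.0, 60.0)[0]    # fermions

print(I_B, np.pi**2 / 3.0)   # ~3.2899 = pi^2 / 3
print(I_F, np.pi**2 / 6.0)   # ~1.6449 = pi^2 / 6

# Weighted as in N(x, P) (2 N_c bosonic and 2 N_f fermionic derivative terms), these integrals
# reproduce the (2 N_c + N_f)/6 coefficient of the Debye mass found below.  Here N_f = 6 is
# our illustrative choice for temperatures far above the top-quark mass.
N_c, N_f = 3, 6
coeff = (2 * N_c * I_B + 2 * N_f * I_F) / (2 * np.pi**2)
print(coeff, (2 * N_c + N_f) / 6.0)    # both ~2.0
```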
To lowest order in \(g_{s}\), the kinetic and canonical momenta coincide, \(P_{\mu}\to p_{\mu}\), so \[\frac{d^{4}p}{(2\pi)^{3}\sqrt{-g}}\to\frac{d^{3}p_{(i)}}{(2\pi)^{3}p^{0}\,N \sqrt{\gamma}}, \tag{37}\] where the last step follows upon noting that \(\sqrt{-g}=N\sqrt{\gamma}\). The current \(j^{\mu}_{a}(x)\) in Eq. (29) may then be written \[\begin{split} j^{\mu}_{a}(x)&=\frac{g_{s}^{2}}{(2 \pi)^{3}}\frac{A_{0}^{a}}{N}\\ &\times\int\frac{d^{3}p_{(i)}}{\sqrt{\gamma}}\mathcal{N}(x,p) \left[\delta^{\mu}_{0}+\delta^{\mu}_{i}\left(\beta^{i}-\frac{N}{k}\gamma^{ij} p_{j}\right)\right].\end{split} \tag{38}\] We next consider a change in momentum coordinates \(\tilde{k}^{i}\equiv(\gamma^{1/2})^{ij}p_{j}\), where \((\gamma^{1/2})^{ij}\) is the square matrix that satisfies \((\gamma^{1/2})_{mi}\gamma^{ij}(\gamma^{1/2})_{j\ell}=\delta_{m\ell}\). Then \(d^{3}p_{(i)}=\sqrt{\gamma}\,d^{3}\tilde{k}^{(i)}\), and \(k^{2}=\tilde{k}^{2}=\delta_{ij}\tilde{k}^{i}\tilde{k}^{j}\), which is even in the components \(\tilde{k}^{i}\). One could perform an additional coordinate transformation as in Ref. [52], \(p^{\prime}_{i}=p_{i}-\beta_{i}p_{0}/g_{00}\), to aid in evaluating the integral in Eq. (38). For the applications of interest to us, we will instead restrict attention to spacetimes in which the shift vector vanishes, \(\beta^{i}=0\). Then the integration in Eq. (38) may be performed exactly. In such cases, we find \[\begin{split} j^{0}_{a}(x)&=\frac{g_{s}^{2}}{(2\pi) ^{3}}\frac{A_{0}^{a}}{N}\int d^{3}\tilde{k}^{(i)}\,\mathcal{N}(x,\tilde{k}), \end{split} \tag{39}\] \[\begin{split} j^{i}_{a}(x)&=-\frac{g_{s}^{2}}{(2\pi) ^{3}}A_{0}^{a}(\gamma^{1/2})^{ij}\delta_{j\ell}\int d^{3}\tilde{k}^{(i)}\, \mathcal{N}(x,\tilde{k})\frac{\tilde{k}^{\ell}}{k}.\end{split} \tag{40}\] The quantity \(\mathcal{N}(x,\tilde{k})\) is even in the components \(\tilde{k}^{i}\), so that \(j^{i}_{a}(x)\) vanishes identically. We further note that the momentum coordinates \(\tilde{k}^{i}\) are Cartesian, so \(j^{0}_{a}(x)\) is equivalent to \[\begin{split} j^{0}_{a}(x)&=\frac{g_{s}^{2}}{2\pi^ {2}}\frac{A_{0}^{a}}{N}\int_{0}^{\infty}d\tilde{k}\,\tilde{k}^{2}\,\mathcal{N }(x,\tilde{k})\\ &=-\frac{m_{D}^{2}(x)}{N^{2}(x)}A_{0}^{a}(x),\end{split} \tag{41}\] with \[m_{D}^{2}(x)\equiv\frac{1}{6}\left(2N_{c}+N_{f}\right)\frac{g_{s}^{2}}{(\beta _{T}N(x))^{2}}. \tag{42}\] Eq. (41) corresponds to \[j^{a}_{\mu}(x)=m_{D}^{2}(x)A_{0}^{a}\,\delta^{0}_{\mu}. \tag{43}\] The induced current in Eq. (43) is equivalent to inserting an effective mass \(m_{D}(x)\) for the \(A_{0}^{a}\) soft gluon components within the effective action of Eq. (2). The expression for \(m_{D}(x)\) in Eq. (42) reduces to the usual expression in Minkowski spacetime upon setting \(N(x)\to 1\) and identifying \(1/\beta_{T}=T\) with the temperature of the plasma [1; 2; 3; 4; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86]. In our case, the Boltzmann factor for the high-momentum particles is \(\exp[-\beta_{T}\bar{H}^{(0)}]=\exp[\beta_{T}N(x)k(x)]\), since we are considering spacetimes in which \(\beta^{i}=0\). This is exactly what one would expect for particles in a fluid that is undergoing normal flow, with four-velocity \(n^{\mu}(x)\) and therefore local energy per particle \(E\equiv-n^{\mu}p_{\mu}=k\), if we identify \(T(x)\equiv 1/(\beta_{T}N(x))\) with the local temperature. We may therefore identify the constant \(\beta_{T}\equiv 1/T_{0}\) and write \[T(x)=\frac{T_{0}}{N(x)}. 
\tag{44}\] Given that we have normalized \(t\) such that \(N(x)\to 1\) on the spatial (possibly asymptotic) boundary, \(T_{0}\) is the temperature associated with time \(t\) on the spatial boundary. We have thus recovered the familiar Tolman temperature gradient in a curved spacetime [28; 29; 30]. The Debye mass then takes the form \[m_{D}^{2}(x)=\frac{1}{6}\left(2N_{c}+N_{f}\right)g_{s}^{2}\,T^{2}(x), \tag{45}\] with the local temperature \(T(x)\) given by Eq. (44). ## V Cosmological applications In this section we consider a few specific examples. We begin with the familiar case of Debye screening of a non-Abelian plasma in Minkowski spacetime, to identify several regimes of interest. Next we generalize to the case of a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetime, which introduces a new scale (compared to the Minkowski case), set by the Hubble radius. In the last subsection, we examine Debye screening in the vicinity of a primordial black hole. ### Debye screening in Minkowski spacetime We first consider the behavior of soft gluon modes \(A_{\mu}^{a}(x)\) within a hot plasma in Minkowski spacetime, to clarify notation and identify several physical regimes of interest. In that case, the lapse function becomes trivial, \(N(x)=1\), and the shift vector vanishes, \(\beta^{i}=0\). In the static limit, \(\partial_{0}A^{a}_{\mu}(x)=0\), the exact equations of motion of Eq. (5) reduce to \[\begin{split}&\delta^{0}_{\nu}\bigg{\{}\partial_{i}F^{a}_{i0}+g_{s} f^{abc}A^{b}_{i}F^{c}_{i0}\bigg{\}}\\ &+\delta^{i}_{\nu}\bigg{\{}\partial_{k}F^{a}_{ki}+g_{s}f^{abc} \left(A^{b}_{0}F^{c}_{i0}+A^{b}_{k}F^{c}_{ki}\right)\bigg{\}}=j^{a}_{\nu}, \end{split} \tag{46}\] where repeated indices are summed and we have adopted Cartesian coordinates for the spatial sections. The induced current of Eq. (43) reduces to \[j^{a}_{\nu}(\mathbf{x})=m_{D}^{2}A^{a}_{0}(\mathbf{x})\delta_{\nu}^{\phantom{ \nu}0}+\mathcal{O}(g_{s}^{3}). \tag{47}\] A generic feature of non-Abelian field theories, which has been well-studied for the case of self-interacting gauge fields in Minkowski spacetime, is the existence of monopole-like solutions among the spatial components \(A^{a}_{i}(\mathbf{x})\). Such solutions can be found in simplest form for SU(2) [67; 68; 69; 1], and have been generalized to monopoles charged under various 3-dimensional subgroups of SU(\(N_{c}\)) [70; 71; 72]. Moreover, it is well-known that self-consistent, static solutions to the non-Abelian equations of motion in Minkowski spacetime generically include a Yukawa-like screened behavior for the component \(A^{a}_{0}(\mathbf{x})\) combined with the Wu-Yang monopole solution for the components \(A^{a}_{i}(\mathbf{x})\)[69; 1]. Such solutions underscore the important physical point that only the components \(A^{a}_{0}(\mathbf{x})\) acquire a nonzero mass within a medium in the state described in Section IV--as indicated by the form of the induced current \(j^{a}_{\mu}(\mathbf{x})\) in Eq. (47)--and hence only those components undergo screening, with amplitude proportional to \(\exp[-m_{D}r]\). Our aim in this section is to examine how well-known field configurations in Minkowski spacetime generalize to various curved spacetimes of interest, in which additional length-scales become relevant. We therefore begin by considering an exact solution to Eq. (46) that consists of a superposition of a screened component \(A^{a}_{0}\) with a Wu-Yang monopole solution for the components \(A^{a}_{i}\). 
For the case of SU(2), the solution takes the form [67; 68; 69; 1] \[A^{a}_{0}(\mathbf{x})=-\frac{\mathcal{Q}^{a}_{0}\,e^{-m_{D}r}}{r},\,\,A^{a}_{ i}(\mathbf{x})=\frac{\epsilon_{aij}\hat{x}^{j}}{g_{s}r}, \tag{48}\] where \(\mathcal{Q}^{a}_{0}=\mathcal{Q}_{0}\,\hat{x}^{a}\), \(Q_{0}\) is a constant, \(\hat{x}^{j}\equiv x^{j}/r\) is a unit vector, and \(\epsilon_{ijk}\) is the usual Levi-Civita symbol. Note that the solution mixes spatial indices and color-space indices, which is straightforward for SU(2), since both \(i,j=1,2,3\) and \(a,b=1,2,3\). (To confirm that \(A^{a}_{\mu}(\mathbf{x})\) in Eq. (48) satisfies Eq. (46), it is helpful to construct the projection operator \(P^{a}_{\phantom{a}b}=\delta^{a}_{\phantom{a}b}-\hat{x}^{a}\hat{x}_{b}\), noting that \(\hat{x}_{a}P^{a}_{\phantom{a}b}=0=\hat{x}^{b}P^{a}_{\phantom{a}b}\) and \(\hat{\partial}_{j}\hat{x}^{i}=r^{-1}P^{i}_{\phantom{a}j}\).) For SU(3), exact monopole solutions have been found for subgroups such as \(\mathrm{U}(1)\times\mathrm{U}(1)\) and \(\mathrm{U}(2)\), for which the components \(A^{a}_{i}\) have comparable asymptotic behavior to the solution in Eq. (48) [70; 71; 72]. The solution for \(A^{a}_{\mu}(\mathbf{x})\) in Eq. (48) carries both chromoelectric charge \(\mathcal{Q}\) and chromomagnetic charge \(\mathcal{P}\). We consider the quasi-local charges defined in terms of the chromoelectric and chromomagnetic fields, \(E^{a}_{i}\equiv F^{a}_{0i}\) and \(B^{a}_{i}\equiv-\frac{1}{2}\epsilon_{ijk}F^{jk}_{a}\), respectively. Specifically, we identify charges via the relations \[E^{a}_{i}E^{a}_{i}=\frac{\mathcal{Q}^{2}(r)}{r^{4}},\,\,B^{a}_{i}B^{a}_{i}= \frac{\mathcal{P}^{2}(r)}{r^{4}}, \tag{49}\] where the spatial and color indices are summed over. Note that the fields \(E^{a}_{i}\) and \(B^{a}_{i}\) transform covariantly in color space under gauge transformations, and therefore the bilinear combinations in Eq. (49) are gauge-invariant. For the solution given in Eq. (48), the charges are \[\mathcal{Q}(r)=\mathcal{Q}_{0}\left(1+m_{D}r\right)e^{-m_{D}r},\,\,\,\mathcal{ P}(r)=\frac{1}{g_{s}}, \tag{50}\] where \(\mathcal{Q}_{0}=\sqrt{\mathcal{Q}^{a}_{0}\mathcal{Q}^{a}_{0}}\) is the chromoelectric charge measured at the origin. Because of screening, the value of \(\mathcal{Q}(r)\) measured at a finite distance from the origin will be reduced compared to \(\mathcal{Q}_{0}\). The chromomagnetic charge, on the other hand, is not screened, and hence \(\mathcal{P}=1/g_{s}\) everywhere. Whereas the magnitude of the chromomagnetic charge \(\mathcal{P}\) is fixed by \(1/g_{s}\), the chromoelectric charge \(\mathcal{Q}_{0}\) at the origin is a free parameter and can be arbitrarily larger than \(1/g_{s}\). Such a scenario would be compatible with a collection of test charges at the origin such that the energy associated with the charges does not backreact on the spacetime itself. In that case, there will exist a self-consistent regime, \(0\leq r\leq r_{\mathrm{abel}}\), within which \(|A^{a}_{0}(\mathbf{x})|\gg|A^{a}_{i}(\mathbf{x})|\), with \(r_{\mathrm{abel}}\) given by \[r_{\mathrm{abel}}\equiv\frac{1}{m_{D}}\mathrm{ln}\left[g_{s}\mathcal{Q}_{0} \right]. \tag{51}\] For \(0\leq r\leq r_{\mathrm{abel}}\), the system is quasi-Abelian, with \(A^{a}_{\mu}(\mathbf{x})\simeq A^{a}_{0}(\mathbf{x})\delta^{\phantom{\nu}0}_{ \mu}\), and hence non-Abelian contributions of the form \(f^{abc}A^{a}_{\mu}A^{b}_{\nu}\) remain subdominant. Within this regime, one may solve the exact equations of motion of Eqs. 
(46)-(47) with the ansatz \(A^{a}_{\mu}(\mathbf{x})=A^{a}_{0}(\mathbf{x})\delta^{0}_{\mu}\). The solution for \(A^{a}_{\mu}(\mathbf{x})\) is then of the form in Eq. (48), but with \(A^{a}_{i}(\mathbf{x})=0\) and \(\mathcal{Q}^{a}_{0}\) an arbitrary, constant vector in color space.

### Debye screening in an FLRW spacetime

We next consider Debye screening within a hot quark-gluon plasma in an expanding FLRW spacetime at early times, prior to the QCD confinement transition (\(t<t_{\mathrm{QCD}}\sim 10^{-5}\,\mathrm{s}\)). The line-element for a spatially flat FLRW universe may be written \[ds^{2}=-dt^{2}+a^{2}(t)\,\left[dr^{2}+r^{2}d\Omega^{2}_{(2)}\right], \tag{52}\] where \(a(t)\) is the scale factor and \(r\) is a comoving radial coordinate. The Hubble parameter is given by \(H(t)\equiv\dot{a}/a\), where overdots denote derivatives with respect to cosmic time \(t\), and the Hubble radius is \(r_{H}=1/H\). The presence of \(r_{H}\) introduces a new scale compared to Minkowski spacetime, so now we must consider \(H\) as well as \(k_{\rm hard}\) and \(k_{\rm soft}\). Note also that in these coordinates the lapse function is \(N(x)=1\), so the temperature has no spatial gradients. The term \(\mathcal{L}_{\rm fluid}\) in the effective action of Eq. (2) includes contributions from the high-momentum quarks and gluons in the QGP, which, as noted in Section III, behave to leading order as a gas of non-interacting particles with a radiation-dominated equation of state. We assume that the (coarse-grained) energy density \(\rho\) and pressure \(P\) associated with the high-momentum particles dominate \(T_{\mu\nu}\), so that the Friedmann equation takes the form \[H^{2}=\frac{1}{3M_{\rm Pl}^{2}}\left(\frac{\pi^{2}}{30}g_{*}T^{4}\right) \tag{53}\] corresponding to a fluid of \(g_{*}\) effectively massless degrees of freedom in equilibrium at temperature \(T\); for the Standard Model at temperatures much greater than the top-quark mass (\(m_{t}=173\,\)GeV), \(g_{*}=106.75\) [22; 23]. During the radiation-dominated phase, \(a(t)\propto t^{1/2}\), so we have \(H(t)=1/(2t)\) and \[T(t)=\left(\frac{90}{\pi^{2}g_{*}}\right)^{1/4}\left(\frac{M_{\rm pl}}{2t}\right)^{1/2}, \tag{54}\] consistent with adiabatic expansion, \(T(t)\,a(t)=\) constant. The equations of motion from Eq. (5) are \[\begin{split} j_{\nu}^{a}&=\frac{1}{a^{2}}\,\delta_{\nu}^{0}\bigg{\{}\partial_{i}F_{i0}^{a}+g_{s}f^{abc}A_{i}^{b}F_{i0}^{c}\bigg{\}}\\ &+\delta_{\nu}^{i}\bigg{\{}HF_{i0}^{a}+\partial_{0}F_{i0}^{a}+\frac{1}{a^{2}}\partial_{k}F_{ki}^{a}\\ &\qquad\qquad+g_{s}f^{abc}\left(A_{0}^{b}F_{i0}^{c}+\frac{1}{a^{2}}A_{k}^{b}F_{ki}^{c}\right)\bigg{\}},\end{split} \tag{55}\] where repeated indices are summed. The Hubble parameter \(H(t)=\dot{a}/a\) sets the time-scale over which we expect cosmological dynamics to be relevant. Notice that the terms that make Eq. (55) differ from Eq. (46) (up to factors of \(a(t)\)) are either proportional to \(H(t)\) or include a time derivative \(\dot{A}_{\mu}^{c}\). We consider an ansatz for \(A_{\mu}^{c}(x)\) in which the only time dependence arises from the scale factor \(a(t)\). By the chain rule, \(\mathrm{d}A_{\mu}^{c}/\mathrm{d}t=\dot{a}\;\partial A_{\mu}^{c}/\partial a=H\;\partial A_{\mu}^{c}/\partial\ln a\,\). Therefore any term with a time derivative \(\dot{A}_{\mu}^{c}\) will be proportional to \(H\).
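For orientation on the scales entering this comparison, a small numerical sketch (our own, using Eq. (54) with \(g_{*}=106.75\), \(H=1/(2t)\), and a seconds-to-GeV\(^{-1}\) conversion that we supply):

```python
import numpy as np

# Radiation-era temperature and Hubble rate at a given cosmic time, from Eq. (54) and H = 1/(2t).
M_pl   = 2.43e18    # GeV, reduced Planck mass
g_star = 106.75     # Standard Model relativistic degrees of freedom
hbar   = 6.58e-25   # GeV * s, conversion between seconds and GeV^-1 (supplied by us)

def T_and_H(t_seconds):
    t = t_seconds / hbar                                              # time in GeV^-1
    T = (90.0 / (np.pi**2 * g_star))**0.25 * np.sqrt(M_pl / (2.0 * t))
    H = 1.0 / (2.0 * t)
    return T, H   # both in GeV

T_c, H_c = T_and_H(1e-21)   # earliest PBH formation time considered below
print(f"T(t_c) ~ {T_c:.1e} GeV,   H(t_c) ~ {H_c:.1e} GeV")
# Roughly 1.5e7 GeV and 3e-4 GeV: the order-of-magnitude values used in the next paragraph.
```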
If \(H\ll k_{\rm soft}^{2}/k_{\rm hard}\sim g_{s}^{2}T\), then all terms proportional to \(H\), which include those with time derivatives \(\dot{A}_{\mu}^{c}\), will be subleading compared to \(j_{\nu}^{c}\sim m_{D}^{2}A_{0}^{c}\sim k_{\rm soft}^{2}k_{\rm hard}\). If the hierarchy \(H\ll g_{s}^{2}T\) is satisfied at early times, it will be even more easily satisfied at later times, since \(H\sim T^{2}/M_{\rm pl}\) from Eq. (54) and \(T/M_{\rm pl}\) decreases as the universe cools down, while \(g_{s}\) gradually runs to larger values. Now we proceed to show that \(H\ll g_{s}^{2}T\) holds at early times of interest. Consider, as an example, dynamics at the time \(t_{c}\sim 10^{-21}\,\)s, the earliest time that a population of primordial black holes (PBHs) could have formed, if the PBHs are to constitute all of dark matter today [24; 25; 26; 27]. At such early times, the Hubble scale was \(H(t_{c})=3.0\times 10^{-4}\,\)GeV and the fluid filling the FLRW spacetime had a temperature \(T(t_{c})=10^{7}\,\)GeV. At such high energy scales, we must take into account the running of the strong coupling \(\alpha_{s}\equiv g_{s}^{2}/(4\pi)\). To two-loop order, the running with energy scale \(\mu\) is given by [4] \[\frac{\mathrm{d}\alpha_{s}(\mu)}{\mathrm{d}\ln\mu}=-2\alpha_{s}\left[b_{0}\frac{\alpha_{s}}{4\pi}+b_{1}\left(\frac{\alpha_{s}}{4\pi}\right)^{2}\right], \tag{56}\] with \[\begin{split} b_{0}&=\frac{11}{3}N_{c}-\frac{2}{3}N_{f},\\ b_{1}&=\frac{32}{3}N_{c}^{2}-\frac{10}{3}N_{c}N_{f}-\left(\frac{N_{c}^{2}-1}{N_{c}}\right)N_{f}.\end{split} \tag{57}\] Upon normalizing \(\alpha_{s}(m_{Z})=0.118\) at the \(Z\) boson mass (\(m_{Z}=91.2\,\)GeV), we find \(\alpha_{s}=0.046\) at the energy scale \(\mu=T(t_{c})=10^{7}\) GeV. This yields a ratio \(g_{s}^{2}T/H\sim 10^{10}\). Hence we may self-consistently neglect \(H\), since \(H\ll g_{s}^{2}T\), when considering the dynamics of the gluon modes on length-scales \(\lambda_{D}\), even at very early times in cosmic history. In other words, the FLRW spacetime is consistent with our quasi-stationary approximation from Eq. (24), since \(|\partial_{t}g^{\mu\nu}|\sim H\ll g_{s}k_{\rm soft}<k_{\rm soft}\). Having shown that we may neglect cosmological dynamics, the scale factor \(a(t)\) will be approximately constant in regimes of interest. We may therefore rescale the spatial coordinates as \(x^{i}\to x^{i}/a\), and thus recover identical equations of motion to those in the Minkowski background, Eq. (46). A solution is the superposition of a screened component \(A_{0}^{a}\) with a Wu-Yang monopole for the components \(A_{i}^{a}\), which in the case of SU(2) takes a similar form to Eq. (48), \[A_{0}^{a}(x)=-\frac{\mathcal{Q}_{0}^{a}\,e^{-m_{D}ar}}{ar},\;\;A_{i}^{a}(x)=\frac{\epsilon_{aij}\hat{x}^{j}}{g_{s}ar}, \tag{58}\] where we have rescaled the spatial coordinates back so that \(r\) is the comoving radial coordinate from Eq. (52). The solution for \(A_{\mu}^{a}(x)\) in Eq. (58) remains consistent with the quasi-stationary approximation of Eq. (24) for \(\lambda_{D}\leq r\ll r_{H}\).

### Debye screening near a primordial black hole

Our final example concerns Debye screening within the hot plasma surrounding a primordial black hole (PBH) that formed early in cosmic history. PBHs form via direct gravitational collapse of primordial overdensities, and their masses are typically proportional to the Hubble mass \(M_{H}\) at the time of formation \(t_{c}\), where \(M_{H}(t_{c})=4\pi M_{\rm pl}^{2}/H(t_{c})\) is the mass contained within a Hubble radius \(H^{-1}(t_{c})\).
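A numerical sketch of the running-coupling estimate above (our own check: a simple Euler integration of Eq. (56) with \(N_{f}=6\) over the whole range and no threshold matching, which is an assumption on our part):

```python
import numpy as np

# Two-loop running of alpha_s, Eqs. (56)-(57), integrated in ln(mu) with small Euler steps.
N_c, N_f = 3, 6
b0 = 11.0 * N_c / 3.0 - 2.0 * N_f / 3.0
b1 = 32.0 * N_c**2 / 3.0 - 10.0 * N_c * N_f / 3.0 - (N_c**2 - 1.0) / N_c * N_f

alpha, mu_start, mu_target = 0.118, 91.2, 1.0e7    # alpha_s(m_Z), run up to T(t_c), in GeV
n_steps = 100_000
dlnmu = np.log(mu_target / mu_start) / n_steps
for _ in range(n_steps):
    alpha += -2.0 * alpha * (b0 * alpha / (4 * np.pi) + b1 * (alpha / (4 * np.pi))**2) * dlnmu

g_s2 = 4.0 * np.pi * alpha
T, H = 1.0e7, 3.0e-4                                # GeV, values quoted above for t_c ~ 1e-21 s
m_D = np.sqrt((2 * N_c + N_f) / 6.0 * g_s2) * T     # Eq. (45) with N(x) = 1

print(f"alpha_s(1e7 GeV) ~ {alpha:.3f}")                       # ~0.046
print(f"m_D ~ {m_D:.1e} GeV,  lambda_D ~ {1.0 / m_D:.1e} GeV^-1")
print(f"g_s^2 T / H ~ {g_s2 * T / H:.1e}   (>> 1: expansion negligible on soft scales)")
```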
Such black holes may therefore form with a huge range of masses, depending on their time of formation [24; 25; 26; 27]. Any PBHs that formed at times \(t<t_{\rm QCD}=10^{-5}\,\)s would undergo collapse amid a hot plasma of unconfined quarks and gluons. On long length-scales at such early times, \(\lambda\gg 1/k_{\rm soft}\), we expect that the plasma would have attained a charge-neutral equilibrium distribution. Yet on shorter length-scales, set by \(\lambda_{D}=1/m_{D}\sim 1/k_{\rm soft}\sim 1/(g_{s}T)\), spatial regions of nonvanishing net color charge can form, within which the charges for most soft gluons align in color space [18; 19]. In such scenarios, the PBHs would form by absorbing one or more net-color regions. Depending on the ratio of the PBH radius (\(\sim GM_{\rm PBH}\)) and the Debye length (\(\sim 1/m_{D}\)) at the time of formation, PBHs therefore could form with net color charge \(\mathcal{Q}_{0}\). Such a scenario is distinct from the examples reviewed in Ref. [73; 74], which concern black hole solutions to the Einstein-Yang-Mills equations in vacuum. In our case, the PBHs are immersed in a hot, active medium. We consider a scenario in which a PBH forms with a small net charge \(\mathcal{Q}_{0}\); the case of larger \(\mathcal{Q}_{0}\) is treated in Ref. [75]. In particular, we consider the regime \[\mathcal{Q}_{0}^{2}\ll GM_{\rm PBH}^{2}. \tag{59}\] Within this regime, backreaction on spacetime from the energy associated with the enclosed charge \(\mathcal{Q}_{0}\) remains subdominant, and we may consider the dynamics of soft gluon modes \(A_{\mu}^{a}(x)\) within a fixed background geometry. (For \(\mathcal{Q}_{0}^{2}\sim GM_{\rm PBH}^{2}\), one must solve the Einstein field equations as well as the equations of motion for \(A_{\mu}^{a}(x)\), which we consider in separate work [76].) We consider a spherically symmetric spacetime with a black hole of mass \(M_{\rm PBH}\) at the origin surrounded by hot plasma with a radiation-dominated equation of state. Our formalism holds in a quasi-static limit, which requires that \(M_{\rm PBH}\) not change appreciably over time-scales set by \(1/k_{\rm soft}\). Two competing effects could change \(M_{\rm PBH}\): evaporation due to Hawking radiation (which would reduce \(M_{\rm PBH}\) over time) and accretion from the surrounding medium (which would increase \(M_{\rm PBH}\) over time). For the regime of interest, we find that both of these effects are negligible over the relevant dynamical time-scale, and hence we may neglect \(\dot{M}_{\rm PBH}\). Consider first evaporation from Hawking radiation. Because we are considering PBHs with modest charge, subject to the inequality of Eq. (59), we may approximate the Hawking temperature based on that for an (uncharged) Schwarzschild black hole of mass \(M_{\rm PBH}\). (Any net enclosed charge would decrease the black hole's surface gravity, and hence its Hawking temperature, compared to the zero-charge case, thus rendering evaporation even less efficient.) The Hawking temperature for a Schwarzschild black hole is given by [59] \[T_{H}=\frac{M_{\rm pl}^{2}}{M_{\rm PBH}}. \tag{60}\] The typical mass for a PBH is set by the mass enclosed within a Hubble volume \(M_{H}(t_{c})\) at the time of collapse \(t_{c}\)[77; 78; 79; 80]: \[M_{\rm PBH}(t_{c})=\gamma M_{H}(t_{c})=4\gamma\sqrt{\frac{90}{g_{*}}}\left( \frac{M_{\rm pl}}{T_{c}}\right)^{2}M_{\rm pl}, \tag{61}\] where \(\gamma\simeq 0.2\) and \(T_{c}\) is the temperature of the plasma at the time \(t_{c}\). 
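A quick numerical sketch of Eqs. (60)-(61) (our own order-of-magnitude check; the gram-GeV conversion factor is an input we supply):

```python
import numpy as np

# PBH mass at formation, Eq. (61), and the corresponding Hawking temperature, Eq. (60).
M_pl     = 2.43e18     # GeV, reduced Planck mass
g_star   = 106.75
gamma    = 0.2
GeV_to_g = 1.78e-24    # grams per GeV (conversion supplied by us)

def pbh_mass_grams(T_c):
    M_GeV = 4.0 * gamma * np.sqrt(90.0 / g_star) * (M_pl / T_c)**2 * M_pl
    return M_GeV * GeV_to_g

def hawking_temperature(M_grams):
    return M_pl**2 / (M_grams / GeV_to_g)   # GeV

for T_c in [1.0e7, 1.0e5]:   # GeV, plasma temperatures at the PBH formation times of interest
    M = pbh_mass_grams(T_c)
    print(f"T_c = {T_c:.0e} GeV -> M_PBH ~ {M:.1e} g,  T_H ~ {hawking_temperature(M):.1e} GeV")
# ~2e17 g with T_H ~ 6e-5 GeV, and ~2e21 g with T_H ~ 6e-9 GeV: in both cases the Hawking
# temperature sits exponentially below the plasma temperature, as discussed below.
```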
The values of \(M_{\rm PBH}\) relevant to account for dark matter lie within the range \(10^{17}\,\)g \(\leq M_{\rm PBH}\leq 10^{22}\,\)g [24; 25; 26; 27]; from Eq. (61), these correspond to plasma temperatures \(10^{5}\,\)GeV \(\leq T_{c}\leq 10^{7}\,\)GeV. Meanwhile, for masses within the dark-matter range, Eq. (60) yields \(10^{-9}\,\)GeV \(\leq T_{H}\leq 10^{-4}\,\)GeV, exponentially lower than the temperature of the surrounding plasma. Such PBHs will therefore be net absorbers (rather than emitters) around the time of their formation. Next we may consider accretion. One might suppose that if the fluid were moving with respect to the black hole, \(M_{\rm PBH}\) would increase by absorbing some of the nearby fluid. Nonetheless, even if the relative speed between the fluid and the black hole approached the speed of light, and the black hole absorbed all the fluid in its path, then \(M_{\rm PBH}\) would only increase by a fraction \(\Delta M/M_{\rm PBH}\sim\mathcal{O}(1)\,\mathcal{R}\,T/M_{\rm pl}\) over an entire Hubble time, where \(\mathcal{R}\) is the ratio of the black hole radius to the Debye screening length and \(T\) the temperature of the plasma. As in the previous section, we are interested in early times at which the plasma temperature could have been as high as \(T\sim 10^{7}\) GeV, in which case \(T/M_{\rm pl}\sim 10^{-11}\). We consider black holes for which the ratio \(\mathcal{R}\) is not exponentially larger than one, so that the PBH forms with some small, residual color charge, and therefore \(\Delta M/M\) remains negligible. A more formal calculation of the Bondi accretion rate for PBHs in this scenario yields the same conclusion [81; 82; 83; 84]. Within the regime indicated in Eq. (59), the spacetime can be described by the McVittie line-element, which reduces to an ordinary Schwarzschild spacetime near the origin and asymptotes to a spatially flat FLRW spacetime at large distances [85; 86; 87]. A convenient parameterization may be written [87] \[\begin{split} ds^{2}&=-f(t,r)\,dt^{2}\\ &\quad+\left[\frac{a(t)\left\{dr+H(t)\,rdt\right\}}{\sqrt{f(t,r)}} -H(t)\,a(t)\,rdt\right]^{2}\\ &\quad+a^{2}(t)\,r^{2}d\Omega_{(2)}^{2},\end{split} \tag{62}\] with \[f(t,r)\equiv 1-\frac{r_{s}}{a(t)r} \tag{63}\] and \(r_{s}\equiv 2GM_{\rm PBH}\) the usual Schwarzschild radius. We found in Section V.2 that in cosmological scenarios of interest, \(H\ll m_{D}\), even at very early times. For the remainder of this section, we will therefore set \(H\sim 0\), for which \(a(t)\sim\text{constant}\) (which we will scale to 1). In that limit, the lapse function reduces to \(N(x)\to\sqrt{f(r)}\), with \(f(r)=1-r_{s}/r\), and the shift vector vanishes, \(\beta^{i}\to 0\). Next we consider an appropriate range for the enclosed charge \(\mathcal{Q}_{0}\), subject to the constraint in Eq. (59). The shortest scales that can be resolved within our EFT are set by \(\lambda_{D}=1/m_{D}\), so we restrict \(r_{s}\geq\lambda_{D}\), which in turn requires \(M_{\text{PBH}}\geq 1/(2Gm_{D})\). Then Eq. (59) corresponds to the upper bound \[\mathcal{Q}_{0}\ll\frac{1}{\sqrt{2\alpha_{s}}}\,\frac{M_{\text{pl}}}{T}\sim 10^ {12} \tag{64}\] around \(t_{c}\sim 10^{-21}\,\text{s}\). Meanwhile, as discussed around Eq. (51), if \(\mathcal{Q}_{0}\gg 1/g_{s}\), then the soft modes \(A_{\mu}^{a}\) will assume a quasi-Abelian form, with \(|A_{0}^{a}|\gg|A_{i}^{a}|\), for \(r<r_{\text{abel}}\). The estimate of \(r_{\text{abel}}\) in Eq. (51) holds in homogeneous spacetimes, for which the temperature has no spatial gradients. 
In the present case, given Eq. (44), \(m_{D}(x)\sim g_{s}T_{0}/\sqrt{f(r)}\). Following the same steps that led to Eq. (51), we find \[\frac{r_{\text{abel}}}{r_{s}}=\frac{1}{2}\left[1+\sqrt{1+\left(\frac{2\tilde{\lambda}_{D}}{r_{s}}\text{ln}[g_{s}\mathcal{Q}_{0}]\right)^{2}}\right], \tag{65}\] where \(\tilde{\lambda}_{D}\) is the Debye length associated with the temperature \(T_{0}\). For \(r_{s}/\tilde{\lambda}_{D}=3.5\) and \(\mathcal{Q}_{0}\sim 10^{4}\), this yields \(r_{\text{abel}}\sim 3\,r_{s}\), while \(\mathcal{Q}_{0}\sim 10^{10}\) yields \(r_{\text{abel}}\sim 7\,r_{s}\). The last departure from the previous examples concerns the effect of the local temperature \(T(x)\) on the coupling \(\alpha_{s}\). Since the local temperature \(T(r)\) increases as \(r\) approaches \(r_{s}\), the local QCD coupling strength runs toward zero as one approaches the black hole event horizon: an example of asymptotic freedom within an inhomogeneous spacetime. To quantify the effect, we use Eq. (56) for the running of \(\alpha_{s}\) with energy scale \(\mu\to T(r)\). To resolve the behavior of \(\alpha_{s}\) near \(r_{s}\), we adopt the "tortoise" radial coordinate, \(r_{*}\equiv r+r_{s}\text{ln}[(r/r_{s})-1]\), for which \(r=r_{s}\) corresponds to \(r_{*}\to-\infty\) [59]. See Fig. 1.

Figure 1: The QCD coupling strength \(\alpha_{s}\equiv g_{s}^{2}/(4\pi)\) runs to lower values near the event horizon of a black hole, as the temperature of the surrounding plasma increases. Here \(r_{*}\) is the "tortoise" radial coordinate, \(r_{*}\equiv r+r_{s}\text{ln}[(r/r_{s})-1]\); \(r_{*}/r_{s}=-36\) corresponds to \(r/r_{s}=1+10^{-16}\). We have set \(T_{0}=10^{7}\,\text{GeV}\) on the asymptotic boundary.

In the vicinity of a primordial black hole, the evolution of the soft modes \(A_{\mu}^{a}(x)\) is therefore characterized by a hierarchy of length-scales: \[\lambda_{D}<r_{s}<r_{\text{abel}}\ll r_{H}. \tag{66}\] For \(r_{s}\leq r\leq r_{\text{abel}}\), the soft modes will evolve as a quasi-static, quasi-Abelian system within a fixed background spacetime. In that regime, \(A_{\mu}^{a}(x)\simeq A_{0}^{a}(r)\delta_{\mu}^{0}+\mathcal{O}(r/r_{\text{abel}})\), and the equations of motion in Eq. (5) reduce to \[\partial_{r}^{2}A_{0}^{a}(r)+\frac{2}{r}\partial_{r}A_{0}^{a}(r)-\frac{\tilde{m}_{D}^{2}}{f^{2}(r)}A_{0}^{a}(r)=0, \tag{67}\] where \(\tilde{m}_{D}=1/\tilde{\lambda}_{D}\) is the Debye mass associated with the temperature \(T_{0}\). Including the running of \(g_{s}\) with \(r\), we may solve Eq. (67) numerically, with a typical example shown in Fig. 2. The component \(A_{0}^{a}(r)\) undergoes strong screening for \(r\gtrsim r_{s}\), since more plasma gathers near the event horizon, yielding a higher density and temperature than at locations \(r\gg r_{s}\). Far from the black hole, \(A_{0}^{a}(r)\) asymptotes to similar behavior as found in Sections V.1 and V.2, with \(A_{0}^{a}\sim\exp[-m_{D}r]/r\). Note that an observer at \(r\gtrsim r_{s}\) would measure an effective charge \(\mathcal{Q}(r)\) much smaller than the charge \(\mathcal{Q}_{0}\) contained within the black hole. Within our coordinate system, the Tolman temperature gradient drives \(T(r)\to\infty\) as \(r\to r_{s}\). The gradient is physical, but the divergence is an artifact of our fixed-background approximation; we have not allowed the spacetime to backreact. To produce Fig. 2, we followed the example of Refs. [88; 89; 90] and evaluated the field with a boundary condition at a "stretched horizon," \(r_{s}+\epsilon\), rather than at \(r_{s}\), with \(\epsilon/r_{s}=10^{-6}\).
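A short numerical sketch of Eq. (65) with the parameter values quoted above (our own check; the value \(\alpha_{s}\simeq 0.046\) at \(T_{0}=10^{7}\,\mathrm{GeV}\) is taken from Section V.2):

```python
import numpy as np

# Quasi-Abelian radius around a color-charged PBH, Eq. (65), for r_s / lambda_D_tilde = 3.5.
alpha_s = 0.046                        # at T_0 ~ 1e7 GeV, from the running in Section V.2
g_s = np.sqrt(4.0 * np.pi * alpha_s)
rs_over_lambda = 3.5                   # r_s / lambda_D_tilde

def r_abel_over_rs(Q0):
    x = (2.0 / rs_over_lambda) * np.log(g_s * Q0)
    return 0.5 * (1.0 + np.sqrt(1.0 + x**2))

for Q0 in [1.0e4, 1.0e10]:
    print(f"Q_0 = {Q0:.0e}:   r_abel / r_s ~ {r_abel_over_rs(Q0):.1f}")
# ~3 and ~7, reproducing the estimates quoted above.
```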
\(r_{s}+\epsilon\), rather than at \(r_{s}\), with \(\epsilon/r_{s}=10^{-6}\). In forthcoming work [76], we address this limitation by solving the coupled Einstein field equations and equation of motion for \(A_{0}^{a}(x)\) within the quasi-Abelian regime. ## VI Discussion We have generalized the description of Debye screening of charges in hot plasmas to curved spacetime backgrounds. For fluids undergoing normal flow in approximately static spacetimes, we found that the characteristic screening length \(\lambda_{D}(x)=1/m_{D}(x)\) is set by a Debye mass \(m_{D}(x)\sim g_{s}T(x)\) given in Eq. (45), where \(g_{s}\) is the dimensionless gauge coupling. This Debye mass is the natural generalization of the Minkowski result, upon allowing the temperature to have spatial gradients due to gravitational redshift, thereby reproducing Tolman's classic result [28; 29; 30]. To characterize Debye screening, we constructed an effective theory for long-wavelength excitations in a hot Yang-Mills plasma. We analyzed the dynamics of high-momentum particles in the plasma (with momentum \(k_{\rm hard}\sim T\)) by considering classical transport equations, and exploited the structure of Hamiltonian mechanics to show that non-Abelian self-interactions induce an effective local mass \(m_{D}(x)\) for the \(A_{0}^{a}\) components of the soft modes (with momentum \(k_{\rm soft}\sim g_{s}T\)). We applied our results to solve for the gauge potential \(A_{\mu}^{a}\) in a few cases of interest. We recovered the well-known Wu-Yang monopole solution in the Minkowski limit and generalized it to FLRW spacetimes, demonstrating the self-consistency of the quasi-static approximation in regimes of interest, for which cosmological time-scales may be neglected compared to \(1/k_{\rm soft}\). Lastly, we analyzed Debye screening in the vicinity of a primordial black hole immersed in a hot quark-gluon plasma. Such black holes can form by absorbing regions of the plasma with nonvanishing net color charge, with characteristic size set by \(\lambda_{D}\); hence the resulting primordial black holes can have a residual, net color charge. Building on the examples involving homogeneous spacetimes, we identified a regime in which the soft modes of the gauge field outside the black hole exhibit quasi-Abelian behavior, \(A_{\mu}^{a}(x)\approx A_{0}^{a}(r)\delta_{\mu}^{0}\). Given the Tolman temperature gradients, the interaction strength \(\alpha_{s}\) runs to smaller values near the event horizon. Incorporating this unusual example of asymptotic freedom, we solved numerically for the soft-mode gauge potential \(A_{0}^{a}(r)\), and found enhanced screening of the charge enclosed within the black hole, due to an increased density of the plasma near the event horizon. Our examples have been restricted so far to fixed background spacetimes. Future work will focus on exploiting our EFT to study realistic cosmological scenarios involving primordial black holes. This will require solving the coupled Einstein-Yang-Mills equations for a black hole in a hot, non-Abelian plasma [76]. This formalism can also be used to consider primordial black holes with substantial QCD color charge, which could have formed at very early times in cosmic history, well before the QCD confinement transition [75]. 
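As a concrete illustration of the numerical step described in Section V.3, the minimal sketch below integrates Eq. (67) for the screened potential between a stretched horizon and an asymptotic boundary. It is an illustration only, not the calculation behind Fig. 2: the Debye mass is held constant (the running of \(g_{s}\) with the local temperature is neglected), and the horizon offset is taken larger than the \(10^{-6}\,r_{s}\) used above to keep the boundary region numerically mild.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Screening equation, Eq. (67):
#   A0'' + (2/r) A0' - (m_D^2 / f(r)^2) A0 = 0,   with f(r) = 1 - r_s/r.
# Units: lengths in units of r_s; constant Debye mass assumed.

r_s = 1.0                   # Schwarzschild radius (sets the length unit)
m_D = 3.5 / r_s             # Debye mass, i.e. r_s / lambda_D = 3.5 as in the text
eps = 3e-2 * r_s            # illustrative stretched-horizon offset (paper: 1e-6 r_s)
r_in, r_out = r_s + eps, 20.0 * r_s

def f(r):
    return 1.0 - r_s / r

def rhs(r, y):
    A0, dA0 = y             # y[0] = A0, y[1] = dA0/dr
    return np.vstack([dA0, (m_D**2 / f(r)**2) * A0 - (2.0 / r) * dA0])

def bc(ya, yb):
    # Normalize the field at the stretched horizon; require decay far away.
    return np.array([ya[0] - 1.0, yb[0]])

r = np.geomspace(r_in, r_out, 600)
y_guess = np.zeros((2, r.size))
y_guess[0] = np.exp(-m_D * (r - r_in))      # decaying initial guess

sol = solve_bvp(rhs, bc, r, y_guess, max_nodes=100000, tol=1e-6)
print(sol.status, sol.message)
# sol.sol(r)[0] gives A0(r): it falls off much faster than exp(-m_D r)/r just
# outside r_s, where f(r) -> 0, i.e. the enhanced screening described above.
```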
## Acknowledgements It is a pleasure to thank Chris Akers, Peter Arnold, Thomas Baumgarte, Jolyon Bloomfield, Daniel Harlow, Scott Hughes, Edmond Iancu, Mikhail Ivanov, Patrick Jefferson, Jamie Karthein, Hong Liu, Cristina Manuel, Jerome Martin, Govert Nijs, Krishna Rajagopal, Phiala Shanahan, Vincent Vennin, and Xiaojun Yao for helpful discussions. Portions of this work were conducted in MIT's Center for Theoretical Physics and supported in part by the U. S. Department of Energy under Contract No. DE-SC0012567. EAM is also supported by a fellowship from the MIT Department of Physics.
2310.00199
DeformUX-Net: Exploring a 3D Foundation Backbone for Medical Image Segmentation with Depthwise Deformable Convolution
The application of 3D ViTs to medical image segmentation has seen remarkable strides, somewhat overshadowing the budding advancements in Convolutional Neural Network (CNN)-based models. Large kernel depthwise convolution has emerged as a promising technique, showcasing capabilities akin to hierarchical transformers and facilitating an expansive effective receptive field (ERF) vital for dense predictions. Despite this, existing core operators, ranging from global-local attention to large kernel convolution, exhibit inherent trade-offs and limitations (e.g., global-local range trade-off, aggregating attentional features). We hypothesize that deformable convolution can be an exploratory alternative to combine all advantages from the previous operators, providing long-range dependency, adaptive spatial aggregation and computational efficiency as a foundation backbone. In this work, we introduce 3D DeformUX-Net, a pioneering volumetric CNN model that adeptly navigates the shortcomings traditionally associated with ViTs and large kernel convolution. Specifically, we revisit volumetric deformable convolution in depth-wise setting to adapt long-range dependency with computational efficiency. Inspired by the concepts of structural re-parameterization for convolution kernel weights, we further generate the deformable tri-planar offsets by adapting a parallel branch (starting from $1\times1\times1$ convolution), providing adaptive spatial aggregation across all channels. Our empirical evaluations reveal that the 3D DeformUX-Net consistently outperforms existing state-of-the-art ViTs and large kernel convolution models across four challenging public datasets, spanning various scales from organs (KiTS: 0.680 to 0.720, MSD Pancreas: 0.676 to 0.717, AMOS: 0.871 to 0.902) to vessels (e.g., MSD hepatic vessels: 0.635 to 0.671) in mean Dice.
Ho Hin Lee, Quan Liu, Qi Yang, Xin Yu, Shunxing Bao, Yuankai Huo, Bennett A. Landman
2023-09-30T00:33:41Z
http://arxiv.org/abs/2310.00199v2
DeformUX-Net: Exploring a 3D Foundation Backbone for Medical Image Segmentation with Depthwise Deformable Convolution ###### Abstract The application of 3D ViTs to medical image segmentation has seen remarkable strides, somewhat overshadowing the budding advancements in Convolutional Neural Network (CNN)-based models. Large kernel depthwise convolution has emerged as a promising technique, showcasing capabilities akin to hierarchical transformers and facilitating an expansive effective receptive field (ERF) vital for dense predictions. Despite this, existing core operators, ranging from global-local attention to large kernel convolution, exhibit inherent trade-offs and limitations (e.g., global-local range trade-off, aggregating attentional features). We hypothesize that deformable convolution can be an exploratory alternative to combine all advantages from the previous operators, providing long-range dependency, adaptive spatial aggregation and computational efficiency as a foundation backbone. In this work, we introduce 3D DeformUX-Net, a pioneering volumetric CNN model that adeptly navigates the shortcomings traditionally associated with ViTs and large kernel convolution. Specifically, we revisit volumetric deformable convolution in depth-wise setting to adapt long-range dependency with computational efficiency. Inspired by the concepts of structural re-parameterization for convolution kernel weights, we further generate the deformable tri-planar offsets by adapting a parallel branch (starting from \(1\times 1\times 1\) convolution), providing adaptive spatial aggregation across all channels. Our empirical evaluations reveal that the 3D DeformUX-Net consistently outperforms existing state-of-the-art ViTs and large kernel convolution models across four challenging public datasets, spanning various scales from organs (KiTS: 0.680 to 0.720, MSD Pancreas: 0.676 to 0.717, AMOS: 0.871 to 0.902) to vessels (e.g., MSD hepatic vessels: 0.635 to 0.671) in mean Dice. The source code with our pre-trained model is available at [https://github.com/MASILab/deform-uxnet](https://github.com/MASILab/deform-uxnet). ## 1 Introduction Recent advancements have seen the integration of Vision Transformers (ViTs) Dosovitskiy et al. (2020) into 3D medical applications, notably in volumetric segmentation benchmarks Wang et al. (2021); Hatamizadeh et al. (2022); Zhou et al. (2021); Xie et al. (2021). What makes ViTs unique is their absence of image-specific inductive biases and their use of multi-head self-attention mechanisms. The profound impact of ViTs has somewhat eclipsed the emerging techniques in traditional Convolutional Neural Network (CNN) architectures. Despite the limelight on 3D ViTs, large kernel depthwise convolution presents itself as an alternative for extracting features with a broad field of view and scalability Lee et al. (2022). Unlike standard convolutions, depthwise convolution operates on each input channel separately, leading to fewer parameters and enhancing the feasibility of using large kernel sizes. In comparing CNNs to ViTs, we observe that a key attribute for generating fine-grained dense predictions is extracting meaningful context with a large effective receptive field (ERF). However, beyond leveraging large ERF, core operators like global-local self-attention mechanisms and large kernel convolution each have their own sets of trade-offs. 
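Before turning to those trade-offs, the parameter argument for large kernel depthwise convolution above can be made concrete with a quick count. The numbers below are purely illustrative; \(C=48\) matches the channel width used later in this paper, and \(k=7\) is one example of a large kernel.

```python
# Weight counts for a single 3D convolution with C input/output channels and a
# k x k x k kernel (bias terms ignored; illustrative numbers only).
C, k = 48, 7
standard = C * C * k ** 3     # every output channel mixes all input channels
depthwise = C * k ** 3        # one spatial filter per channel
print(standard, depthwise)    # 790272 vs 16464
```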
Local self-attention in hierarchical transformers struggles to provide long-range dependencies, while global attention in vanilla ViTs exhibits quadratic computational complexity relative to the image resolution. Concurrently, large kernel convolution computes features using static summations and falls short in providing spatial aggregation akin to the self-attention mechanism. Given these observations, we can summarize these trade-offs specifically for volumetric segmentation in three main directions: **1) global-local range dependency**, **2) adaptive spatial aggregation across kernel elements**, and **3) computation efficiency**. We further pose a question: **"Can we design a core operator that addresses such trade-offs in both ViTs and large kernel convolution for 3D volumetric segmentation?"** Recent advancements, such as those by Ying et al. (2020) and Wang et al. (2023), have enhanced deformable convolution design, offering a computationally scalable approach that marries the benefits of Vision Transformers (ViTs) and CNNs for visual recognition tasks. Drawing inspiration from these advancements, we revisit deformable convolution to explore its capability in: (1) **efficiently adapting long-range dependencies and offering adaptive spatial aggregation with 3D convolution modules**, (2) **achieving state-of-the-art (SOTA) performance across diverse clinical scenarios, including organs, tumors, and vessels**, and (3) **setting a novel direction in the development of foundational convolution backbones for volumetric medical image segmentation.** Diverging from the likes of SwinUNETR Hatamizadeh et al. (2022a) and 3D UX-Net Lee et al. (2022), we introduce a pioneering architecture, termed 3D DeformUX-Net. This model aims to address challenges ranging from convolution to global-local self-attention mechanisms and to bolster the implementation of deformable convolution for 3D segmentation, ensuring robust performance in variable clinical scenarios. Specifically, we leverage volumetric deformable convolution in a depth-wise setting to adapt long-range dependency handling with computational efficiency. Preceding the deformable convolution operations, we incorporate concepts from the structural re-parameterization of large kernel convolution weights Ding et al. (2022) and present a parallel branch design to compute the deformable tri-planar offsets, ensuring adaptive spatial aggregation for deformable convolution across all feature channels. We evaluate 3D DeformUX-Net on supervised volumetric segmentation tasks across various scales using four prominent datasets: 1) the MICCAI 2019 KiTS Challenge dataset (kidney, tumor and cyst) Heller et al. (2019), 2) the MICCAI 2022 AMOS Challenge dataset (multi-organ) Ji et al. (2022), 3) the Medical Segmentation Decathlon (MSD) Pancreas dataset (pancreas, tumor) and 4) the MSD hepatic vessels dataset (hepatic vessel and tumor) Antonelli et al. (2022). 3D DeformUX-Net consistently outperforms current transformer and CNN SOTA approaches across all datasets, regardless of organ scale. Our primary contributions can be summarized as follows: * We introduce the 3D DeformUX-Net, a pioneering approach that addresses the trade-offs inherent in global-local self-attention mechanisms and convolutions for volumetric dense predictions. To the best of our knowledge, this represents the inaugural block design harnessing 3D deformable convolution in a depth-wise setting, rivaling the performance of established transformer and CNN state-of-the-art (SOTA) models.
* We leverage deformable convolution in a depth-wise setting with tri-planar offsets computed in a parallel branch design to adapt long-range dependency and adaptive spatial aggregation with efficiency. To the best of our knowledge, this is the first study to introduce multi-planar offsets into deformable convolution for medical image segmentation. * We use four challenging public datasets to evaluate 3D DeformUX-Net in a direct training scenario with volumetric multi-organ/tissue segmentation across scales. 3D DeformUX-Net achieves consistent improvements over all CNN and transformer SOTA baselines. ## 2 Timeline for Segmentation Network: From CNNs to ViTs In the realm of medical image segmentation, the 2D/3D U-Net was the starting point for demonstrating the feasibility of dense prediction with supervised training Ronneberger et al. (2015); Cicek et al. (2016). A variety of network structure designs have also been proposed (e.g., V-Net Milletari et al. (2016), UNet++ Zhou et al. (2018), H-DenseUNet Li et al. (2018), SegResNet Myronenko (2018)) to adapt to different imaging modalities and organ semantics. Moreover, nnUNet was proposed to provide a complete hierarchical framework design that maximizes the coarse-to-fine capabilities of the 3D UNet Isensee et al. (2021). However, most of these networks only leverage the convolution mechanism with small kernel sizes and are limited to learning locality within a small ERF. Starting from 2021, the introduction of ViTs provided the distinctive advantages of long-range dependency (global attention) and a large ERF for different medical downstream tasks. There has been a substantial shift towards incorporating ViTs for enhanced dense predictions (e.g., TransUNet Chen et al. (2021), LeViT Xu et al. (2021), CoTr Xie et al. (2021), UNETR Hatamizadeh et al. (2022b)). Due to the quadratic complexity of the multi-head self-attention mechanism in ViTs, it is challenging to adapt the ViT as the foundation backbone for medical image segmentation with high-resolution medical images. To tackle the computational complexity of ViTs, hierarchical transformers (e.g., the Swin Transformer Liu et al. (2021)) have made notable contributions by extracting fine-grained features with the concept of a sliding window. Works like SwinUNETR Hatamizadeh et al. (2022a) and nnFormer Zhou et al. (2021) directly employ Swin Transformer blocks within the encoder to improve organ and tumor segmentation in 3D medical images. Some advancements, like Tang et al.'s self-supervised learning strategy for SwinUNETR, highlight the adaptability of these frameworks Tang et al. (2022). Similarly, 2D architectures like Swin-Unet Cao et al. (2021) and SwinBTS Jiang et al. (2022) incorporate the Swin Transformer to learn intricate semantic features from images. Yet, despite their potential, these transformer-based volumetric segmentation frameworks are bogged down by lengthy training times and considerable computational complexity, especially when extracting multi-scale features. In parallel with hierarchical transformers, depthwise convolution offers an efficient alternative for adapting large kernel sizes. Notably, ConvNeXt by Liu et al. offers a glimpse of how one can blend the advantages of ViTs with large kernel depthwise convolution for downstream visual recognition tasks Liu et al. (2022). 3D UX-Net further bridges the gap to adapt large kernel depthwise convolution for volumetric segmentation with high-resolution medical images Lee et al. (2022).
However, such a large kernel design is limited in addressing the clinical scenarios of segmenting multi-scale tissues (e.g., tumors, vessels) Kuang et al. (2023), and additional prior knowledge may be needed to enhance learning convergence for locality in the large kernel Lee et al. (2023). Yet, a gap still persists in the literature regarding the feasibility of, and the optimal approach to, adapting deformable convolution for volumetric segmentation. Given the advantages offered by deformable convolution, there is potential to tackle most of the trade-offs across ViTs and CNNs with a specific design. ## 3 3D DeformUX-Net: Intuition Inspired by Ying et al. (2020) and Wang et al. (2023), we introduce 3D DeformUX-Net, a purely volumetric CNN that revisits the concepts of deformable convolution and preserves the advantages of both ViT and CNN mechanisms. We explore the fine-grained differences between convolution and self-attention from the local to the global scale and investigate the variability of block designs in parallel as our main intuition in Figure 1.

Figure 1: This figure compares our proposed block design with representative 3D medical image segmentation designs. We leverage depth-wise deformable convolution in parallel with a multi-layer perceptron (MLP), which generates the tri-planar offsets to adapt long-range dependency and adaptive spatial aggregation for deformable convolution. Furthermore, the deformable convolution module is followed by an MLP to provide linear scaling, similar to the Swin Transformer module.

With these observations, we further innovate a simple block design to enhance the feasibility of adapting 3D deformable convolution with robustness for volumetric segmentation. First, we investigate the variability across the properties of self-attention and convolution in three respects: * **Global-to-Local Range Dependency**: The concept of long-range dependency can be understood as the model recognizing a larger portion of the image through a large receptive field. In the medical domain, the de-facto effective receptive field of traditional segmentation networks (e.g., 3D U-Net) is relatively small, with convolution kernel sizes of \(3\times 3\times 3\). With the introduction of ViTs, the idea of transforming a \(16\times 16\times 16\) patch into a 1-D vector significantly enhances the ERF for feature computation and is referred to as global attention. While such global attention is limited in demonstrating fine-grained ability for dense prediction, hierarchical transformers (e.g., the Swin Transformer) were further proposed to compute local attention with large sliding window sizes specifically for high-resolution segmentation. Meanwhile, large kernel convolution began to be explored in parallel, showing abilities similar to hierarchical transformers and demonstrating the effectiveness of enlarging the ERF in downstream segmentation tasks. However, the segmentation performance becomes saturated or even degraded when scaling up the kernel sizes. Therefore, the optimal ERF is always variable, depending on the morphology of the semantic target for downstream segmentation. * **Adaptive Spatial Aggregation**: While the weights computed from the self-attention mechanism are dynamically conditioned by the input, a static operator is generated from the convolution mechanism with high inductive biases. Such a characteristic enhances the recognition of locality and neighboring structure and requires fewer training samples compared to ViTs.
However, summarizing the visual content into a static value is limited to providing element-wise importance in a convolution kernel. Therefore, we hypothesize that an additional variant of prior information can be extracted within the visual context and may benefit the element-wise correspondence of the kernel weights. * **Computation Efficiency**: Compared to both ViTs and CNNs, global attention from the traditional vanilla ViT demonstrates the lowest computation efficiency and it is challenging to scale up with respect to the quadratic complexity from the input size. Although the hierarchical transformers further reduce the feature dimensionality with sub-window shifting to compute self-attention, the computation of shifted window self-attention is computational unscalable to achieve via traditional 3D model architectures. Therefore, the depth-wise convolution with large kernel sizes demonstrates to be another efficient alternative for computing features. Figure 2: Overview of the deformable convolution mechanisms. Deformable convolutions introduce an adaptable spatial sampling capability that transcends the rigid bounds of conventional \(3\times 3\times 3\) regions, achieved with the deformable offsets (light green arrows). Such offsets can demonstrate the capability of generalizing various transformation such as scaling and rotation (as shown in x-z, y-z offsets grid). The deformable offsets in our scenario are computed with a multi-layer perceptron. 3D DeformUX-Net: Complete Backbone ### Depthwise Deformable Convolution with Tri-Planar Offsets To accommodate all trade-offs that we explored from both convolution and self-attention mechanisms, the simplest way is to innovate a block design that can bridge the gap, adapting long-range dependency and adaptive spatial aggregation with efficiency. We hypothesize that a variant of convolution, deformable convolution, can provide the feasibility to address all trade-offs. Given a volumetric input \(x\in\mathcal{R}^{C\times H\times W\times D}\) and the current centered voxel \(v_{0}\) in the kernel, the operation of deformable convolution can be formulated as: \[y(v_{0})=\sum_{k=1}^{K}w(v_{k})\cdot x(v_{0}+v_{k}) \tag{1}\] where \(v_{0}\) represents an arbitrary location in the output feature \(y\) and \(v_{k}\) represents the \(k_{th}\) value in the convolution sampling grid \(G=(-1,-1,-1),(-1,-1,0),...,(1,1,0),(1,1,1)\) with \(3\times 3\times 3\) convolution kernel as the foundation basis. \(K=27\) is the size of the sampling grid. \(w\in\mathcal{R}^{C\times C}\) denotes the projection weight of the k-th sampling point in the convolution kernel. To enlarge the receptive field of a \(3\times 3\times 3\) convolution, adapting learnable offsets is the distinctive properties for deformable convolution and enhance the spatial correspondence between neighboring voxels, as shown in Figure 2. Unlike the videos input in Ying et al. (2020), 3D medical images provide high-resolution tri-planar spatial context with substantial variability organs/tissues morphology across patients' conditions. To adapt such variability, we propose to adapt tri-planar learnable offsets \(\Delta v_{k}\in\mathcal{R}^{3K\times H\times W\times D}\), which has a channel size of \(3K\) and each \(K\) channel represent one of axes (i.e., height, width and depth) for 3D spatial deformation. 
Furthermore, we observe that the offset computation mechanism has a block design similar to the parallel branch design used in structural re-parameterization to adapt large kernel convolutions Ding et al. (2022). With this inspiration, instead of using standard deformable convolution for both feature and offset computation, we propose to adapt deformable convolution in a depth-wise setting to simulate the self-attention behavior. Furthermore, we adopt a parallel branch design to re-parameterize the offset computation with either a Multi-Layer Perceptron (MLP) or a small kernel convolution, enhancing the adaptive spatial aggregation with efficiency. We define our proposed deformable convolution mechanism at layer \(l\) as follows: \[\begin{split}&\Delta v_{0}=MLP(y^{l-1}(v_{0}))\\ & y^{l}(v_{0})=\sum_{g=1}^{G}\sum_{k=1}^{K}w_{g}(v_{k})\cdot x_{g}(v_{0}+v_{k}+\Delta v_{0})\end{split} \tag{2}\] where \(G\) denotes the total number of aggregation groups in the depth-wise setting. For the g-th group, \(w_{g}\in\mathcal{R}^{C\times C^{\prime}}\) denotes the group-wise divided projection weight. Such a deformable convolution operator demonstrates the following merits: 1) it tackles the limitation of standard convolution with respect to long-range dependencies and adaptive spatial aggregation; 2) it inherits the inductive bias characteristics of the convolution mechanism with better computational efficiency, using fewer training samples. ### Complete Architecture To benchmark the effectiveness of each operation module fairly, inspired by 3D UX-Net, we follow a step-by-step comparison of the effectiveness of each module and create the optimal design with our proposed deformable operator. Given random patches \(p_{i}\in\mathcal{R}^{H\times W\times D\times C}\) extracted from each 3D medical image \(x\), we follow an architecture similar to the encoder in 3D UX-Net, which first leverages a large kernel convolution layer to compute the partitioned feature map with dimension \(\frac{H}{2}\times\frac{W}{2}\times\frac{D}{2}\) and projects it to a \(C=48\)-dimensional space. We then directly substitute the main operator of large kernel convolution with our proposed Depth-wise Deformable Convolution (DDC) using a kernel size of \(3\times 3\times 3\). We further perform experiments to evaluate the adoption of linear scaling in the standard (MLP) and depth-wise settings (the scaling approach in 3D UX-Net), choosing the MLP as the main scaling operator for DDC based on the performance evaluation. We hypothesize that such a block design can tackle all trade-offs across ViTs and large kernel convolution with robust performance and a large ERF. Here, we define the output of the encoder blocks in layers \(l\) and \(l+1\) as follows: \[\begin{split}\hat{z}^{l}&=\text{DDC}(\text{LN}(z^{l-1}))+z^{l-1}\\ z^{l}&=\text{MLP}(\text{LN}(\hat{z}^{l}))+\hat{z}^{l}\\ \hat{z}^{l+1}&=\text{DDC}(\text{LN}(z^{l}))+z^{l}\\ z^{l+1}&=\text{MLP}(\text{LN}(\hat{z}^{l+1}))+\hat{z}^{l+1}\end{split} \tag{3}\] where \(\hat{z}^{l}\) and \(\hat{z}^{l+1}\) are the outputs from the DDC layers at different depth levels; LN denotes layer normalization. Compared to 3D UX-Net, we substitute the large kernel convolution modules with two DDC layers. More details of the remaining architecture are provided in the supplementary material.
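As a rough illustration of Eqs. (2)-(3), the sketch below builds a depth-wise deformable 3D convolution whose tri-planar offsets come from a parallel \(1\times 1\times 1\) branch, followed by the DDC-MLP encoder block. It is not the authors' released implementation (see the repository linked above); the grid_sample-based trilinear gathering, the GroupNorm stand-in for LN, the zero-initialized offsets, and all hyper-parameters are our own simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseDeformConv3d(nn.Module):
    """Simplified depth-wise 3D deformable convolution in the spirit of Eq. (2).

    A parallel 1x1x1 branch (a per-voxel MLP over channels) predicts 3*K
    offsets, one (z, y, x) shift for each of the K = 27 taps of a 3x3x3
    kernel; features are gathered by trilinear interpolation and combined
    with per-channel (depth-wise) weights. The loop over taps is written
    for clarity, not speed.
    """

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.K = kernel_size ** 3
        self.weight = nn.Parameter(0.02 * torch.randn(channels, self.K))
        self.offset = nn.Conv3d(channels, 3 * self.K, kernel_size=1)
        nn.init.zeros_(self.offset.weight)
        nn.init.zeros_(self.offset.bias)   # starts as a regular 3x3x3 sampling stencil
        r = torch.arange(kernel_size) - kernel_size // 2
        base = torch.stack(torch.meshgrid(r, r, r, indexing="ij"), dim=-1)
        self.register_buffer("base", base.reshape(self.K, 3).float())

    def forward(self, x):
        n, c, d, h, w = x.shape
        off = self.offset(x).view(n, self.K, 3, d, h, w)
        zz, yy, xx = torch.meshgrid(
            torch.arange(d, device=x.device),
            torch.arange(h, device=x.device),
            torch.arange(w, device=x.device), indexing="ij")
        ref = torch.stack([zz, yy, xx], dim=0).float()           # (3, d, h, w)
        size = torch.tensor([d, h, w], device=x.device).float()
        out = torch.zeros_like(x)
        for k in range(self.K):
            pos = ref + self.base[k].view(3, 1, 1, 1) + off[:, k]      # (n, 3, d, h, w)
            grid = 2.0 * pos / (size.view(1, 3, 1, 1, 1) - 1.0) - 1.0  # to [-1, 1]
            grid = grid.permute(0, 2, 3, 4, 1).flip(-1)                # (x, y, z) order
            sampled = F.grid_sample(x, grid, mode="bilinear",
                                    padding_mode="zeros", align_corners=True)
            out = out + sampled * self.weight[:, k].view(1, c, 1, 1, 1)
        return out


class DeformBlock(nn.Module):
    """One encoder stage following Eq. (3): DDC and MLP sub-layers, each with
    normalization and a residual connection (GroupNorm(1, C) stands in for LN)."""

    def __init__(self, channels, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, channels)
        self.ddc = DepthwiseDeformConv3d(channels)
        self.norm2 = nn.GroupNorm(1, channels)
        self.mlp = nn.Sequential(
            nn.Conv3d(channels, mlp_ratio * channels, 1), nn.GELU(),
            nn.Conv3d(mlp_ratio * channels, channels, 1))

    def forward(self, z):
        z = z + self.ddc(self.norm1(z))
        return z + self.mlp(self.norm2(z))


if __name__ == "__main__":
    block = DeformBlock(channels=48)
    vol = torch.randn(1, 48, 16, 16, 16)
    print(block(vol).shape)    # torch.Size([1, 48, 16, 16, 16])
```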
## 5 Experimental Setup **Datasets** We conduct experiments on four public segmentation datasets across organs/tissues of different scales (from large, e.g., liver and stomach, to small, e.g., tumors and vessels), comprising 1) the MICCAI 2019 KiTS Challenge dataset (KiTS) Heller et al. (2019), 2) the MICCAI 2022 AMOS Challenge dataset (AMOS) Ji et al. (2022), 3) the Medical Segmentation Decathlon (MSD) pancreas dataset and 4) the MSD hepatic vessel dataset Antonelli et al. (2022). For the KiTS dataset, we employ 210 contrast-enhanced abdominal computed tomography (CT) scans from the University of Minnesota Medical Center between 2010 and 2018, with three specific tissues well annotated (kidney, tumor, cyst). For the AMOS dataset, we employ 200 multi-contrast abdominal CT scans with sixteen anatomies manually annotated for abdominal multi-organ segmentation. For the MSD datasets, we employ a total of 585 abdominal contrast-enhanced CT scans for both pancreas and tumor (282) segmentation, and hepatic vessels and tumor (303) segmentation. More details of these four public datasets can be found in Appendix A.2. **Implementation Details** We specifically evaluate the direct supervised training scenario with all four datasets for volumetric segmentation. We perform five-fold cross-validation with an 80% (train) / 10% (validation) / 10% (test) split for the MSD and KiTS datasets, while a single fold is performed with an 80% (train) / 10% (validation) / 10% (test) split for the AMOS dataset. The complete preprocessing and training details are available in Appendix A.1. Overall, we evaluate 3D DeformUX-Net by comparing with current volumetric transformer and CNN SOTA approaches for volumetric segmentation in a fully-supervised setting. We leverage the Dice similarity coefficient as an evaluation metric to compare the overlapping regions between predictions and ground-truth labels. Furthermore, we performed ablation studies to investigate the best scenario for adapting the deformable convolution and the variability of substituting different linear layers for feature extraction.

Figure 3: Qualitative representations are showcased across the AMOS, MSD pancreas, and hepatic vessels datasets. Selected areas are magnified to highlight the notable discrepancies in segmentation quality, with red arrows indicating areas of over-/under-segmentation. Overall, DeformUX-Net demonstrates the best segmentation quality compared to the ground-truth.

## 6 Results ### Evaluation on Organ/Vessel & Tumor Segmentation We initiated our study by evaluating the adaptability across different organ/tissue scales through organ/vessel and tumor segmentation tasks. Table 1 provides a quantitative comparison against the leading state-of-the-art (SOTA) transformers and CNNs. In our analysis of the KiTS and MSD pancreas datasets, hierarchical transformers utilizing local attention mechanisms, such as SwinUNETR, outperformed the large kernel convolution mechanism employed by 3D UX-Net on the MSD pancreas dataset, while the large kernel operation demonstrates better performance on KiTS. Given that the large kernel convolution statically summarizes features, the flexibility of local attention seems particularly advantageous when handling the sparsity within tumor regions. Our innovative convolution design subsequently improved performance metrics, pushing the mean organ dice from 0.680 to 0.720 on KiTS and 0.676 to 0.717 on the MSD pancreas dataset.
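The mean Dice numbers quoted here and in the tables below follow the standard overlap definition; a minimal per-class computation is sketched next (an illustration only, not the challenges' official scoring code; integer label volumes with background class 0 are assumed).

```python
import torch


def per_class_dice(pred, target, num_classes, eps=1e-6):
    """Dice for integer label volumes of shape (D, H, W); class 0 = background."""
    scores = []
    for c in range(1, num_classes):
        p, t = pred == c, target == c
        inter = (p & t).sum().float()
        denom = p.sum().float() + t.sum().float()
        scores.append(((2.0 * inter + eps) / (denom + eps)).item())
    return scores


# Example: mean Dice over foreground classes of one predicted volume.
pred = torch.randint(0, 3, (64, 64, 64))
gt = torch.randint(0, 3, (64, 64, 64))
print(sum(per_class_dice(pred, gt, 3)) / 2)
```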
Notably, UNesT-L achieved performance metrics on par with our proposed block in the KiTS dataset, possibly due to its significant model capacity and unique local attention mechanism (experiments shown in supplementary material). Nevertheless, DeformUX-Net surpassed UNesT-B's performance in KiTS, boasting an enhancement of 1.41% in mean dice, and also outperformed UNesT-L on the MSD pancreas dataset, improving the Dice score from 0.701 to 0.717. As for vessel and tumor segmentation, where prior research has highlighted the vulnerabilities of large kernel convolution, 3D UX-Net still show significant strides over SwinUNETR. Our custom-designed block further elevated the performance, registering an increase of 4.84% in mean dice over both 3D UX-Net and UNesT-B. Moreover, qualitative visuals presented in Figure 3 clearly showcase our model's prowess in delineating organ boundaries and reducing the propensity for over-segmentation across adjacent organ regions. \begin{table} \begin{tabular}{l|c|c|c|c c c|c c c|c c c} \hline \hline & & & & & & & & & & & & & & \\ \hline Methods & **\#Pronns** & FLOPs & Kidney & Tumor & Cyst & Mean & Panorra & Tumor & Mean & Hepatic & Tumor & Mean \\ \hline 3D U-Net (Cipka et al. (2016) & 4.81M & 135.9G & 0.918 & 0.657 & 0.361 & 0.645 & 0.711 & 0.584 & 0.648 & 0.569 & 0.609 & 0.589 \\ SegResNet (Miyonnek (2018) & 1.88M & 156.60 & 0.935 & 0.713 & 0.401 & 0.683 & 0.720 & 0.653 & 0.672 & 0.656 & 0.638 \\ RaV-Net (Zhou et al. (2021) & 38.24M & 101.20 & 0.931 & 0.710 & 0.427 & 0.689 & 0.742 & 0.612 & 0.682 & 0.610 & 0.643 & 0.627 \\ \hline ** \begin{tabular}{l} ResNet \\ \end{tabular}** & 31.64M & 116.03 & 0.932 & 0.691 & 0.384 & 0.669 & 0.749 & 0.610 & 0.675 & 0.589 & 0.636 & 0.633 \\ \hline UNIFR Raimaldi et al. (2020) & 92.8M & 82.50 & 0.921 & 0.669 & 0.354 & 0.668 & 0.725 & 0.598 & 0.667 & 0.567 & 0.612 & 0.590 \\ endforme (Zhou et al. (2021) & 149.4M & 213.06 & 0.930 & 0.687 & 0.376 & 0.684 & 0.769 & 0.603 & 0.686 & 0.591 & 0.635 & 0.613 \\ SavitulNetr (Raimaldi et al. (2022a) & 62.34M & 238.16 & 0.939 & 0.702 & 0.440 & 0.680 & 0.785 & 0.632 & 0.708 & 0.682 & 0.647 & 0.653 \\ 3D UX-Net (Zhou et al. (2022) & 53.03M & 69.49 & 0.742 & 0.425 & 0.697 & 0.679 & 0.634 & 0.766 & 0.625 & 0.678 & 0.652 \\ UNesT-B (Zhou et al. (2023) & 57.24M & 248.45 & 0.943 & 0.746 & 0.451 & 0.710 & 0.778 & 0.601 & 0.690 & 0.611 & 0.655 & 0.640 \\ \hline **DeformUX-Net (Ours)** & 55.8M & 635.8G & **0.948** & **0.763** & **0.450** & **0.720** & **0.790** & **0.643** & **0.717** & **0.637** & **0.705** & **0.671** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of SOTA approaches on the three different testing datasets. (*: \(p<0.01\), with Paired Wilcoxon signed-rank test to all baseline networks) ### Evaluation on Multi-Organ Segmentation Apart from segmenting tissues across scales, Table 2 presents a quantitative comparison between the current SOTA transformers and CNNs for volumetric multi-organ segmentation. Employing our novel deformable convolution block design as the encoder backbone, the DeformUX-Net consistently outperforms its peers, showcasing a notable Dice score improvement ranging from 0.871 to 0.908. This enhancement is evident when compared to both SwinUNETR (which uses the Swin Transformer as its encoder backbone) and the 3D UX-Net (that relies on large kernel convolution for its encoder backbone). 
We also evaluate the effectiveness of DeformUX-Net against the most recent hierarchical transformer model, UUsT, known for its unique hierarchical block aggregation to capture local attention features. Remarkably, our deformable block design still outshined UUsesT across all organ evaluations, registering a 1.11% uptick in mean Dice score, while operating at a fifth of UUsesT's model capacity. Further reinforcing our claims, Figure 3 visually underscores the segmentation quality improvements achieved by DeformUX-Net, illustrating its precision in preserving the morphology of organs and tissues in alignment with the ground-truth labels. ### Ablation Analysis After assessing the foundational performance of DeformUX-Net, we delved deeper to understand the contributions of its individual components--both within our novel deformable operation and the overarching architecture. We sought to pinpoint how these components synergize and contribute to the observed enhancements in performance. To this end, we employed the AMOS, KiTS, and MSD pancreas datasets for comprehensive ablation studies targeting specific modules. All ablation studies are conducted with kernel size \(3\times 3\times 3\). \begin{table} \begin{tabular}{l|c c c c c c c c c c c c c c|c} \hline \hline \multicolumn{11}{c}{Train Point Scratch Scenario} \\ \hline Methods & \(\bigotimes\) & Region & R & Kid & L & Kid & Gell & Evo & Liver & Swin & Avote & IVC & Pane & R& LAG & Dvo & Blid & Prox & Avg \\ \hline \hline an-UNet & 0.951 & 0.919 & 0.930 & 0.845 & 0.797 & 0.975 & 0.963 & 0.941 & 0.989 & 0.813 & 0.730 & 0.677 & 0.772 & 0.797 & 0.815 & 0.850 \\ \hline TransETS & 0.930 & 0.921 & 0.909 & 0.798 & 0.722 & 0.966 & 0.951 & 0.900 & 0.820 & 0.702 & 0.641 & 0.556 & 0.684 & 0.709 & 0.730 \\ \hline UNETR & 0.935 & 0.923 & 0.933 & 0.771 & 0.906 & 0.759 & 0.857 & 0.821 & 0.687 & 0.688 & 0.534 & 0.659 & 0.710 & 0.700 & 0.740 \\ \hline SwinUNETR & 0.932 & 0.928 & 0.914 & 0.831 & 0.734 & 0.968 & 0.958 & 0.935 & 0.828 & 0.725 & 0.675 & 0.572 & 0.677 & 0.717 & 0.596 & 0.785 \\ \hline SwinUNETR & 0.936 & 0.957 & 0.949 & 0.891 & 0.820 & 0.978 & 0.880 & 0.939 & 0.894 & 0.818 & 0.808 & 0.740 & 0.839 & 0.819 & 0.871 \\ UUX-Net & 0.946 & 0.956 & 0.951 & 0.903 & 0.831 & 0.980 & 0.910 & 0.950 & 0.913 & 0.830 & 0.805 & 0.756 & 0.846 & 0.897 & 0.863 & 0.890 \\ UUX-Net & 0.966 & 0.961 & 0.956 & 0.903 & 0.840 & 0.980 & 0.914 & 0.947 & 0.912 & 0.888 & 0.803 & 0.758 & 0.846 & 0.895 & 0.854 & 0.891 \\ \hline DeformUX-Net (Ours) & **0.972** & **0.970** & **0.962** & **0.920** & **0.871** & **0.954** & **0.955** & **0.925** & **0.851** & **0.835** & **0.797** & **0.866** & **0.919** & **0.836** & **0.905** \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluations on the AMOS testing split in different scenarios.(*: \(p<0.01\), with Paired Wilcoxon signed-rank test to all baseline networks) \begin{table} \begin{tabular}{l|c|c c} \hline Methods & \#Params (M) & \begin{tabular}{c} AMOS \\ Mean Dice \\ \end{tabular} & \begin{tabular}{c} KiTS \\ Mean Dice \\ \end{tabular} & \begin{tabular}{c} Pancreas \\ \end{tabular} \\ \hline SwinUNETR & 62.2 & 0.871 & 0.680 & 0.708 \\ 3D UX-Net & 53.0 & 0.890 & 0.697 & 0.676 \\ UUsesT-B & 87.2 & 0.894 & 0.710 & 0.690 \\ \hline Use Standard Deformable Conv. & 55.5 & 0.878 & 0.701 & 0.682 \\ Use Depth-wise Deformable Conv. 
& 53.0 & 0.908 & 0.720 & 0.717 \\ \hline Offset Kernel=\(1\times 1\times 1\) & 52.5 & 0.908 & 0.720 & 0.717 \\ Offset Kernel=\(3\times 3\times 3\) & 52.7 & 0.894 & 0.713 & 0.698 \\ \hline x-y offset only Ying et al. (2020) & 54.0 & 0.878 & 0.701 & 0.689 \\ x-z offset only Ying et al. (2020) & 54.0 & 0.893 & 0.694 & 0.681 \\ y-z offset only Ying et al. (2020) & 54.0 & 0.883 & 0.712 & 0.697 \\ Tt-planar offset (Ours) & 55.8 & 0.908 & 0.720 & 0.717 \\ \hline No MLP & 51.1 & 0.879 & 0.659 & 0.661 \\ Use MLP & 55.8 & 0.908 & 0.720 & 0.717 \\ Use Depth-wise Conv. Scaling Lee et al. (2022) & 53.0 & 0.889 & 0.684 & 0.679 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation studies of variable block and architecture designs on AMOS, KiTS, and MSD pancreas datasets **Comparing with Different Convolution Mechanisms:** We examined both standard deformable convolution and depth-wise deformable convolution for feature extraction. To ensure a fair comparison between the experiments, we computed the deformable offset using a kernel size of \(1\times 1\times 1\). Utilizing standard deformable convolution led to a notable decrease in performance across all datasets, while simultaneously increasing the model parameters. Conversely, by employing deformable convolution in a depth-wise manner, we found that independently extracting features across each channel was the most effective approach for subsequent segmentation tasks. **Variation of Kernel Sizes for Offset Computation:** We observed a notable decrease when varying the method of offset computation. Using a MLP for offset calculation led to a significant performance improvement compared to the traditional method, which computes offsets with a kernel size of \(3\times 3\times 3\). Intriguingly, by drawing inspiration from the parallel branch design in structural re-parameterization, we managed to reduce the model parameters while maintaining performance across all datasets. **Variability of Offset Directions:** Our study, drawing inspiration from Ying et al. (2020), delves into how the calculation of offsets in different directions impacts performance. Notably, the results hinge dramatically on the specific offset direction. For instance, in vessel segmentation, offsets determined for the x-z plane considerably outperform those in the other two planes. Conversely, offsets in the x-y plane are found to be more advantageous for multi-organ segmentation tasks. Given these findings, tri-planar offsets emerge as the most effective strategy for volumetric segmentation across various organ and tissue scenarios. **Linear Scaling with MLP:** In addition to the primary deformable convolution operations, we examined the interplay between the convolution mechanism and various linear scaling operators. Adapting an MLP for linear scaling yielded substantial performance improvements across all datasets. In contrast, the gains from the depth-wise scaling, as introduced in 3D UX-Net, were more modest. To further amplify the spatial aggregation among adjacent features, the MLP proved to be an efficient and impactful choice. ## 7 Discussion In this work, we delve into the nuanced intricacies and associated trade-offs between convolution and self-attention mechanisms. Our goal is to introduce a block-wise design that addresses the shortcomings of prevailing SOTA operators, particularly large kernel convolution and global-local self-attention. 
Taking cues from the 3D UX-Net, we integrate our design within a "U-Net"-like architecture using skip connections tailored for volumetric segmentation. Beyond merely broadening the ERF for generic performance boosts, we discern that achieving excellence in dense prediction tasks hinges on two pivotal factors: **1) Establishing an optimal ERF for feature computation, and 2) Recognizing and adapting to the spatial significance among adjacent features**. Simply enlarging convolution kernel sizes doesn't invariably enhance segmentation; performance might plateau or even regress without targeted guidance during optimization phases. Concurrently, the hierarchical transformer revitalizes local self-attention by computing features within specific sliding windows (e.g., \(7\times 7\times 7\)), albeit at the cost of some long-range dependency. Our insights from Tables 1 & 2 underscore that while large kernel convolution excels in multi-organ segmentation, global-local self-attention outperforms particularly in tumor segmentation. Traditional convolutions, being static, don't cater to spatial importance variations between neighboring regions in the way that self-attention does. This aspect of self-attention is especially potent in addressing tumor region sparsity. To address these challenges, we position deformable convolution as a promising alternative, distinguished by its inherent properties. Our adapted deformable mechanism emphasizes long-range dependencies even with smaller kernels, and we've expanded this approach in a depth-wise setting to emulate self-attention behavior. Drawing inspiration from structural re-parameterization's parallel branch design, we employ MLPs to compute deformable offsets. This facilitates discerning the importance and inter-relationships between neighboring voxels, which, when incorporated into feature computation, bolsters volumetric performance. Consequently, our novel design offers marked improvements over traditional designs employing standard deformable convolution. ## 8 Conclusion We introduce DeformUX-Net, the first volumetric network tackling the trade-offs across CNNs to ViTs for medical image segmentation. We re-design the encoder blocks with depth-wise deformable convolution and projections to simulate all distinctive advantages from self-attention mechanism to large kernel convolution operations. Furthermore, we adapt tri-planar offset computation with a parallel branch design in the encoder block to further enhance the long-range dependency and adaptive spatial aggregation with small kernels. DeformUX-Net outperforms current transformer SOTAs with fewer model parameters using four challenging public datasets in direct supervised training scenarios.
2309.15391
A Risk-Ratio-Based Marginal Sensitivity Model for Causal Effects in Observational Studies
In observational studies, the identification of causal estimands depends on the no unmeasured confounding (NUC) assumption. As this assumption is not testable from observed data, sensitivity analysis plays an important role in observational studies to investigate the impact of unmeasured confounding on the causal conclusions. In this paper, we proposed a risk-ratio-based sensitivity analysis framework by introducing a modified marginal sensitivity model for observational studies with binary treatments. We further extended the proposed framework to the multivalued treatment setting. We then showed how the point estimate intervals and the corresponding percentile bootstrap confidence intervals can be constructed efficiently under the proposed framework. Simulation results suggested that the proposed framework of sensitivity analysis performs well in the presence of adequate overlap among the treatment groups. Lastly, we demonstrated our proposed sensitivity analysis framework by estimating the causal effect of maternal education on female fertility in Bangladesh.
Md Abdul Basit, Mahbub A. H. M. Latif, Abdus S Wahed
2023-09-27T04:06:19Z
http://arxiv.org/abs/2309.15391v1
# A Risk-Ratio-Based Marginal Sensitivity Model for Causal Effects in Observational Studies ###### Abstract In observational studies, the identification of causal estimands depends on the no unmeasured confounding (NUC) assumption. As this assumption is not testable from observed data, sensitivity analysis plays an important role in observational studies to investigate the impact of unmeasured confounding on the causal conclusions. In this paper, we proposed a risk-ratio-based sensitivity analysis framework by introducing a modified marginal sensitivity model for observational studies with binary treatments. We further extended the proposed framework to the multi-valued treatment setting. We then showed how the point estimate intervals and the corresponding percentile bootstrap confidence intervals can be constructed efficiently under the proposed framework. Simulation results suggested that the proposed framework of sensitivity analysis performs well in the presence of adequate overlap among the treatment groups. Lastly, we demonstrated our proposed sensitivity analysis framework by estimating the causal effect of maternal education on female fertility in Bangladesh. Causal Inference, Inverse Probability Weighting, Percentile Bootstrap, Sensitivity Analysis. ## 1 Introduction Randomized controlled trials (RCTs) are considered to be the gold standard for estimating the causal effect of a treatment on an outcome of interest. However, researchers are often limited to investigating causal relationships using only observational data when conducting randomized experiments is not feasible. In the absence of randomized treatment assignments, the estimation of causal effects in observational studies is usually based on a set of identifiability assumptions. One of the most important such assumptions is the strong ignorability or the no unmeasured confounding (NUC) assumption, which implies that we can observe all relevant confounders in an observational study. Since the NUC assumption is essentially unverifiable using observed data, it is necessary to perform sensitivity analyses that assess the robustness of the causal conclusions obtained from observational studies in the presence of unmeasured confounding. Sensitivity analysis is recognized as an essential step in the process of causal inference from observational studies. It refers to the approaches that investigate how causal conclusions are impacted in the presence of unmeasured confounding in observational studies. Bind & Rubin (2021), in their guidelines for the statistical reporting of observational studies, listed sensitivity analysis as one of the five major steps that need to be carried out before reporting causal conclusions elicited from observational studies. Many sponsors and regulatory bodies also nowadays require researchers to conduct some form of sensitivity analysis while drawing causal inferences from observational studies (for example, see standard MD-4 of the Patient-Centered Outcome Research Institute (PCORI) methodology standards for handling missing data at [https://tinyurl.com/kcbkvfwv](https://tinyurl.com/kcbkvfwv)). Cornfield et al. (1959) conducted the first formal sensitivity analysis to investigate the impact of unmeasured confounding on the causal relationship between smoking and lung cancer. However, this initial sensitivity analysis framework was applicable to only binary outcomes, and it did not account for the sampling variation. 
Rosenbaum and colleagues overcame these limitations and greatly expanded the theory and methods for sensitivity analysis in a series of pioneering works (Rosenbaum, 1987; Gastwirth et al., 1998; Rosenbaum et al., 2002). Recently, sensitivity analysis has gained a lot of research interest including Bonvini et al. (2022); Dorn et al. (2021); Carnegie et al. (2016); VanderWeele & Ding (2017); Zhao et al. (2019); Kallus et al. (2019); Yadlowsky et al. (2022), among others. Most of these proposed sensitivity analysis frameworks deal with only binary treatments. Zhao et al. (2019) recently proposed a sensitivity analysis framework for smooth estimators of causal effects, such as the inverse probability weighting (IPW) estimators. Their proposed marginal sensitivity model is a natural modification of Rosenbaum's sensitivity model (Rosenbaum, 2002) for matched observational studies, and it quantifies the magnitude of unmeasured confounding by the odds ratio between the conditional probability of being treated given the measured confounders (observed propensity score) and conditional probability of being treated given both the measured and unmeasured confounders (true propensity score). It is well known that risk ratios are more consistent with the general intuition than odds ratios, and hence, are easier to interpret. Therefore, it is desirable to develop sensitivity analysis frameworks that interpret the sensitivity analysis results using risk ratios instead of odds ratios. In this paper, we first modified the odds-ratio-based sensitivity analysis framework of Zhao et al. (2019) to a risk-ratio-based framework. In particular, we proposed a modified marginal sensitivity model that measures the violation of the NUC assumption using the risk ratio between the true propensity score and the observed propensity score (see Section 3.1 for the exact definition). The proposed modified marginal sensitivity model introduces a new implicit sensitivity parameter that restricts the true propensity scores within its valid range for a given sensitivity model. We then extended the proposed modified marginal sensitivity model to the multivalued treatment setting based on the work of Basit et al. (2023). We showed that the estimation of causal effects for binary and multivalued treatments can be performed efficiently under the proposed sensitivity analysis framework. The rest of the paper is structured as follows: Section 2 describes necessary notations and briefly reviews the existing odds-ratio-based marginal sensitivity model; Section 3 introduces the proposed modified marginal sensitivity model for the binary treatment setting; Section 4 extends the proposed sensitivity model to multivalued treatment settings; Section 5 reports results from simulation studies; Section 6 demonstrates sensitivity analysis under the proposed framework using a real data application; Section 7 provides some concluding remarks. ## 2 The Sensitivity Analysis Framework ### The Potential Outcome Framework Consider an observational study with a binary treatment \(A\) (1, if treated, and 0, if control), a vector of measured confounders \(\mathbf{X}\in\mathscr{X}\subset\mathbb{R}^{d}\), and the observed binary outcome Y, where \((A,\mathbf{X},Y)\sim F_{0}\). We observe \((A_{1},\mathbf{X}_{1},Y_{1})\), \((A_{2},\mathbf{X}_{2},Y_{2}),\ldots,(A_{n},\mathbf{X}_{n},Y_{n})\), which denote data points observed from \(n\) i.i.d. units from the true data generating distribution \(F_{0}\). 
Moreover, let \(Y_{i}(0)\) and \(Y_{i}(1)\) denote the potential outcomes corresponding to treatment levels \(0\) and \(1\), respectively, for each unit \(i\in[n]\). Under the stable unit treatment value assumption (SUTVA) (Rubin, 1978), the observed outcomes can be defined in terms of potential outcomes as \(Y_{i}=Y_{i}(A_{i})=A_{i}Y_{i}(1)+(1-A_{i})Y_{i}(0)\). In order to estimate causal effects, we make the following identifiability assumptions: _Assumption 1 (strong ignorability or NUC)._ There is no unmeasured confounding (NUC), i.e., \((Y(0),Y(1))\perp\!\!\!\perp A\,|\,\mathbf{X}\). In other words, the set of observed covariates, \(\mathbf{X}\) includes all common causes of \(A\) and \(Y\). _Assumption 2 (positivity or overlap)._ Each unit has a non-zero probability of receiving the treatment. That is, \(0<\mathbb{P}_{0}(A=1\,|\,X=x)<1,\ \forall\,\mathbf{x}\in\mathbf{X}\). In observational studies with a binary treatment, one of the most commonly used causal estimands of interest is the average treatment effect (ATE), \(\Delta:=\mathbb{E}_{0}[Y(1)]-\mathbb{E}_{0}[Y(0)]\), where \(\mathbb{E}_{0}\) indicates that the expectation is taken over the true data generating distribution \(F_{0}\). If we define the observed propensity score as \(e_{0}(\mathbf{X})=\mathbb{P}_{0}(A=1\,|\,\mathbf{X})\), then under Assumptions 1 and 2, a consistent estimator of ATE \(\Delta\) based on inverse probability weighting (IPW) is defined as \[\hat{\Delta}_{\mathrm{IPW}}=\frac{1}{n}\sum_{i=1}^{n}\left[\frac{A_{i}Y_{i}}{ \hat{e}\left(\mathbf{X}_{i}\right)}-\frac{\left(1-A_{i}\right)Y_{i}}{1-\hat{e} \left(\mathbf{X}_{i}\right)}\right],\] where \(\hat{e}(\mathbf{X})\) is a sample estimate of \(e_{0}(\mathbf{X})\). It is well known that the IPW estimates become unstable when the estimated propensity scores are close to \(0\) or \(1\)(Kang et al., 2007). Therefore, the stabilized IPW (SIPW) estimator of ATE, obtained by multiplying the inverse probability weights by the probability of receiving the actual treatment, is frequently used in practice. ### The Odds-Ratio-Based Marginal Sensitivity Model Zhao et al. (2019) developed a sensitivity analysis framework that can be used for smooth estimators of causal effects, e.g., the inverse probability weighting (IPW) estimators of ATE in observational studies with a binary treatment. Let us consider an unmeasured confounder \(U\) that sums up all unmeasured confounding present in an observational study. Robins (2002) suggested that the variable \(U\) can be considered as any of the potential outcomes while defining the conditional probabilities of receiving treatments, i.e., propensity scores. So, let us denote the unobserved or true propensity score (conditional on both observed and unobserved confounders) by \(e_{a,0}(\mathbf{x},y)=\mathbb{P}_{0}\{A=a\,|\,\mathbf{X}=\mathbf{x},Y(a)=y\}\) for \(a\in\{0,1\}\) and the observed propensity score (conditional on observed confounders only) by \(e_{a,0}(\mathbf{x})=\mathbb{P}_{0}\{A=a\,|\,\mathbf{X}=\mathbf{x}\}\). 
Then, the marginal sensitivity model assumes that \[\frac{1}{\Lambda}\leqslant\mathrm{OR}\left\{e_{a}(\mathbf{x},y),e_{a,\mathbf{\beta}_ {0}}(\mathbf{x})\right\}\leqslant\Lambda,\ \forall\,\mathbf{x}\in\mathscr{X},y\in\mathbb{R},a\in\{0,1\}, \tag{1}\] where \(\mathrm{OR}(p_{1},p_{2})=[p_{1}/(1-p_{1})]/[p_{2}/(1-p_{2})]\) is the odds ratio, \(e_{a,\mathbf{\beta}_{0}}(\mathbf{x})\) is a parametric for \(e_{a,0}(\mathbf{x})\) for \(a\in\{0,1\}\) using observed confounders, and \(\Lambda\geqslant 1\) is a fixed sensitivity parameter that quantifies the magnitude of unmeasured confounding by the odds ratio of the true propensity score \(e_{a}(\mathbf{x},y)\) and the observed propensity score \(e_{\beta_{a,0}}(\mathbf{x})\) for \(a\in\{0,1\}\). The marginal sensitivity model can also be defined nonparametrically by replacing \(e_{a,\boldsymbol{\beta}_{0}}(\boldsymbol{x})\) with a nonparametric model \(e_{a,0}(\boldsymbol{x})\) in the sensitivity model defined in Expression (1). Under the marginal sensitivity models, Zhao et al. (2019) used a generalized mini-max inequality and percentile bootstrap to convert the estimation problem of causal effects under any given sensitivity model to a linear programming problem (LPP), which can be solved very efficiently. ## 3 The proposed risk-ratio-based Framework of Sensitivity Analysis ### The Modified Marginal Sensitivity Model For observational studies with binary treatments, the marginal sensitivity model (1) presented in Section 2 measures the violation of the NUC assumption in terms of the odds ratio between the observed and unobserved propensity score \(e_{0}(\boldsymbol{x})\) and \(e_{0}(\boldsymbol{x},y)\). Using the assumptions and notations defined in Section 2, a natural modification of the marginal sensitivity model (1) that quantifies the magnitude of the violation of the NUC assumption by the risk ratio between \(e_{a,0}(\boldsymbol{x})\) and \(e_{a,0}(\boldsymbol{x},y)\) could be defined as \(e_{a,0}(\boldsymbol{x},y)\in\mathcal{E}_{\beta_{0}}(\Gamma)\), where \[\mathcal{E}_{\beta_{0}}(\Gamma)=\Big{\{}e_{a}(\boldsymbol{x},y):\frac{1}{ \Gamma}\leq\mathrm{RR}\,\big{\{}e_{a,\beta_{0}}(\boldsymbol{x}),e_{a}( \boldsymbol{x},y)\big{\}}\leq\Gamma,\forall\boldsymbol{x}\in\mathscr{X},y\in \mathbb{R},a\in\{0,1\}\Big{\}}, \tag{2}\] where \(\Gamma\geqslant 1\) is the sensitivity parameter and \(\mathrm{RR}\,(p_{1},p_{2})=p_{1}/p_{2}\) is the risk ratio. Next, let us define \[k_{a,\beta_{0}}(\boldsymbol{x})=\log\big{(}e_{a,\beta_{0}}(\boldsymbol{x}) \big{)},\quad k_{a,0}(\boldsymbol{x},y)=\log\big{(}e_{a,0}(\boldsymbol{x},y) \big{)},\quad\text{ and }\quad k_{a}(\boldsymbol{x},y)=\log\big{(}e_{a}( \boldsymbol{x},y)\big{)},\] and let \(l_{a,\beta_{0}}(\boldsymbol{x},y)=k_{a,\beta_{0}}(\boldsymbol{x})-k_{a,0}( \boldsymbol{x},y)\) be the log-scale difference of the observed and the unobserved propensity scores. Similarly, for a postulated sensitivity model \(e_{a}(\boldsymbol{x},y)\), we define \(l_{a}(\boldsymbol{x},y)=k_{a,\beta_{0}}(\boldsymbol{x})-k_{a}(\boldsymbol{x},y)\). However, under the sensitivity model (2), the unobserved propensity score \(e_{a}(\boldsymbol{x},y)\) does not lie within \((0,1)\). Because, note that when \(l_{a}(\boldsymbol{x},y)\to\infty\), then \(\log\big{(}e_{a}(\boldsymbol{x},y)\big{)}\to-\infty\), and hence, \(e_{a}(\boldsymbol{x},y)\to 0\). But, as \(l_{a}(\boldsymbol{x},y)\to-\infty\), then \(\log\big{(}e_{a}(\boldsymbol{x},y)\big{)}\to\infty\), and hence, \(e_{a}(\boldsymbol{x},y)\to\infty\). 
That is, under the sensitivity model (2), the unobserved or true propensity score \(e(\boldsymbol{x},y)\in(0,\infty)\). To circumvent this problem, we introduce an additional sensitivity parameter in the sensitivity model (2) that restricts \(e_{a}(\boldsymbol{x},y)\) to be in its valid range \((0,1)\) as \(l_{a}(x,y)\to-\infty\). **Definition 1** (modified marginal sensitivity model): _Fix a parameter \(\Gamma_{0}\geqslant 1\). Moreover, define \(\Gamma_{1}=\max\big{\{}e_{a,\beta_{0}}(\boldsymbol{x}),\Gamma_{0}^{-1}\big{\}}\). In observational studies with binary treatments, the modified marginal sensitivity model assumes that \(e_{a,0}(\boldsymbol{x},y)\in\mathcal{E}_{\beta_{0}}(\Gamma_{0},\Gamma_{1})\), where_ \[\mathcal{E}_{\beta_{0}}(\Gamma_{0},\Gamma_{1})=\Big{\{}e_{a}( \boldsymbol{x},y):\Gamma_{1}\leq\mathrm{RR}\{e_{a,\beta_{0}}(\boldsymbol{x}),e _{a}(\boldsymbol{x},y)\}\leq\Gamma_{0},\] \[\forall\,\boldsymbol{x}\in\mathscr{X},y\in\mathbb{R},a\in\{0,1\} \Big{\}}, \tag{3}\] _and \(\mathrm{RR}\,(p_{1},p_{2})=p_{1}/p_{2}\) is the risk ratio._ _Remark 1_.: From the Definition 1, we can see that the sensitivity parameter \(\Gamma_{1}\) implicitly depends on the observed propensity score \(e_{a,\beta_{0}}(\boldsymbol{x})\) and the sensitivity parameter \(\Gamma_{0}\). Therefore, we do not need to explicitly specify the value of \(\Gamma_{1}\) while conducting a sensitivity analysis. Consequently, the introduction of the additional sensitivity parameter \(\Gamma_{1}\) in the modified marginal sensitivity model does not necessarily increase the complexity of conducting the sensitivity analysis. The main purpose of the parameter \(\Gamma_{1}\) is to restrict the unobserved propensity score \(e_{a}(\boldsymbol{x},y)\) within its valid range \((0,1)\) under a specific sensitivity model. Now, Under the modified marginal sensitivity model (3), it is easy to observe that \[-\gamma_{1}\leq l_{a}(\boldsymbol{x},y)\leq\gamma_{0},\] where \(\gamma_{0}=\log(\Gamma_{0})\) and \(\gamma_{1}=\log(\Gamma_{1})=\max\big{\{}\log(e_{0}(\boldsymbol{x})),-\gamma_{0} \big{\}}\). Therefore, it can be shown that the sensitivity model (3) is similar to assuming \(l_{a}\in\mathcal{L}_{\beta_{0}}\left(\gamma_{0},\gamma_{1}\right)\), where \[\mathcal{L}_{\beta_{0}}\left(\gamma_{0},\gamma_{1}\right)=\left\{l_{a}: \mathscr{X}\times\mathbb{R}\rightarrow\mathbb{R}\text{ and }\gamma_{1}\leqslant l_{a} \leqslant\gamma_{0},a\in\left\{0,1\right\}\right\}.\] For the remainder of the paper, we consider \(l_{a}(\boldsymbol{x},y)\) as sensitivity models for fixed sensitivity parameters \(\gamma_{0}\geqslant 0\). _Remark 2_.: We can also define the modified marginal sensitivity model (2) nonparametrically by replacing \(e_{a,\beta_{0}}(\boldsymbol{x})\), and consequently, \(k_{a,\beta_{0}}(\boldsymbol{x})\) with corresponding nonparametric model \(e_{a}(\boldsymbol{x})\) and \(k_{a}(\boldsymbol{x})\), respectively. The statistical methods used in the proposed sensitivity analysis framework can be applied regardless of the choice of parametric or non-parametric model for \(e_{a}(\boldsymbol{x})\), as long as the model is smooth enough for a valid bootstrap. 
### Estimation of ATE under Modified Marginal Sensitivity models In order to estimate the ATE under the modified marginal sensitivity model, under a specific sensitivity model \(l_{a}\in\mathcal{L}_{\beta_{0}}(\gamma_{0},\gamma_{1})\) for \(a\in\left\{0,1\right\}\), let us define the shifted propensity scores as \[e_{a}^{(l_{a})}(\boldsymbol{x},y)=\bigg{[}\exp\Big{\{}l_{a}( \boldsymbol{x},y)-k_{a,\beta_{0}}(\boldsymbol{x})\Big{\}}\bigg{]}^{-1},\] and the shifted estimand of ATE as \[\Delta^{(l_{0},l_{1})}= \Bigg{\{}\mathbb{E}_{0}\Bigg{[}\frac{A}{e_{1}^{(l_{1})}( \boldsymbol{X},Y)}\Bigg{]}^{-1}\mathbb{E}_{0}\Bigg{[}\frac{AY}{e_{1}^{(l_{1}) }(\boldsymbol{X},Y)}\Bigg{]}\Bigg{\}}\] \[-\Bigg{\{}\mathbb{E}_{0}\Bigg{[}\frac{1-A}{e_{0}^{(l_{0})}( \boldsymbol{X},Y)}\Bigg{]}^{-1}\mathbb{E}_{0}\Bigg{[}\frac{(1-A)Y}{e_{0}^{(l_ {0})}(\boldsymbol{X},Y)}\Bigg{]}\Bigg{\}}. \tag{4}\] Note that, for any \(l_{a}\in\mathcal{L}_{\beta_{0}}(\gamma_{0},\gamma_{1})\), we have \[e_{a}^{(l_{a})}(\boldsymbol{x},y) =\big{[}\exp\big{(}k_{a,\beta_{0}}(\boldsymbol{x})-k_{a}( \boldsymbol{x},y)-k_{a,\beta_{0}}(\boldsymbol{x})\big{)}\big{]}^{-1}\] \[=\Big{[}\exp\big{(}-\log(e_{a}(\boldsymbol{x},y))\big{)}\big{]}^ {-1}\] \[=\Bigg{[}\frac{1}{e_{a}(\boldsymbol{x},y)}\Bigg{]}^{-1}=e_{a}( \boldsymbol{x},y). \tag{5}\] That is, under any given sensitivity model \(l_{a}\), our defined shifted propensity score is equivalent to the true propensity score \(e_{a}(\boldsymbol{x},y)\). We can estimate these shifted propensity scores with \[\hat{e}_{a}^{(l_{a})}(\boldsymbol{x},y)=\bigg{[}\exp\Big{\{}l_{a}( \boldsymbol{x},y)-\hat{k}_{a}(\boldsymbol{x})\Big{\}}\bigg{]}^{-1},\] where \(\hat{k}_{a}(\boldsymbol{x})=\hat{k}_{a,\hat{\boldsymbol{x}}}(\boldsymbol{x}) =\log\big{(}\hat{e}_{a,\beta}(\boldsymbol{x})\big{)}\) and \(\hat{e}_{a,\beta}(\boldsymbol{x})\) is a parametric estimate of \(e_{a,\beta}(\boldsymbol{x})\) for \(a\in\left\{0,1\right\}\). Consequently, we can define the stabilized IPW (SIPW) estimate of \(\Delta^{(l_{0},l_{1})}\) as \[\hat{\Delta}^{(l_{0},l_{1})}= \Bigg{\{}\Bigg{[}\frac{1}{n}\sum_{i=1}^{n}\frac{A_{i}}{\hat{e}_{1}^ {(l_{1})}\left(\mathbf{X}_{i},Y_{i}\right)}\Bigg{]}^{-1}\cdot\frac{1}{n}\sum_{i=1}^ {n}\frac{A_{i}Y_{i}}{\hat{e}_{1}^{(l_{1})}\left(\mathbf{X}_{i},Y_{i}\right)}\Bigg{\}}\] \[-\Bigg{\{}\Bigg{[}\frac{1}{n}\sum_{i=1}^{n}\frac{1-A_{i}}{\hat{e}_ {0}^{(l_{0})}\left(\mathbf{X}_{i},Y_{i}\right)}\Bigg{]}^{-1}\cdot\frac{1}{n}\sum_{ i=1}^{n}\frac{(1-A_{i})Y_{i}}{\hat{e}_{0}^{(l_{0})}\left(\mathbf{X}_{i},Y_{i} \right)}\Bigg{\}}\] \[= \frac{\sum_{i=1}^{n}A_{i}Y_{i}\,\hat{e}_{1}^{(l_{1})}\left(\mathbf{X }_{i},Y_{i}\right)}{\sum_{i=1}^{n}A_{i}\,\hat{e}_{1}^{(l_{1})}\left(\mathbf{X}_{i },Y_{i}\right)}-\frac{\sum_{i=1}^{n}(1-A_{i})Y_{i}\,\hat{e}_{0}^{(l_{0})} \left(\mathbf{X}_{i},Y_{i}\right)}{\sum_{i=1}^{n}(1-A_{i})\,\hat{e}_{0}^{(l_{0})} \left(\mathbf{X}_{i},Y_{i}\right)}\] \[= \frac{\sum_{i=1}^{n}A_{i}Y_{i}\big{[}\exp\{l_{1}(\mathbf{X}_{i},Y_{i} )-\hat{k}_{1}(\mathbf{X}_{i})\}\big{]}}{\sum_{i=1}^{n}A_{i}\big{[}\exp\{l_{1}(\mathbf{ X}_{i},Y_{i})-\hat{k}_{1}(\mathbf{X}_{i})\}\big{]}}\] \[-\frac{\sum_{i=i}^{n}(1-A_{i})Y_{i}\big{[}\exp\{l_{0}(\mathbf{X}_{i},Y _{i})-\hat{k}_{0}(\mathbf{X}_{i})\}\big{]}}{\sum_{i=1}^{n}(1-A_{i})\big{[}\exp\{l _{0}(\mathbf{X}_{i},Y_{i})-\hat{k}_{0}(\mathbf{X}_{i})\}\big{]}}. \tag{6}\] Zhao et al. 
(2019) showed that estimation problems such as the one given in Equation (6) can be transformed to a linear fractional programming (LFP) problem using Charnes-Cooper transformation (Charnes & Cooper, 1962). Before defining the LFP for Equation (6), let us simplify the notations by assuming that the first \(m\leqslant n\) units are in the treatment group \((A=1)\) and the rest of the observations are in the control group \((A=0)\), and that the outcomes are sorted in decreasing order among the first \(m\) units and the other \(n-m\) units. Finally, we can convert Equation (6) to the following LFP \[\text{min or max} \frac{\sum_{i=1}^{n}Y_{i}\big{[}z_{i}\exp\{-\hat{k}_{1}(\mathbf{X}_{i })\}\big{]}}{\sum_{i=1}^{m}\big{[}z_{i}\exp\{-\hat{k}_{1}(\mathbf{X}_{i})\}\big{]} }-\frac{\sum_{i=m+1}^{n}Y_{i}\big{[}z_{i}\exp\{-\hat{k}_{0}(\mathbf{X}_{i})\}\big{]} }{\sum_{i=m+1}^{n}\big{[}z_{i}\exp\{-\hat{k}_{0}(\mathbf{X}_{i})\}\big{]}}\] (7) subject to \[\Gamma_{1i}\leqslant z_{i}\leqslant\Gamma_{0},\qquad\text{for }1 \leqslant i\leqslant n,\] where \(z_{i}=\exp\big{\{}l_{1}\left(\mathbf{X}_{i},Y_{i}\right)\big{\}}\), for \(1\leqslant i\leqslant m\), and \(z_{i}=\exp\big{\{}l_{0}\left(\mathbf{X}_{i},Y_{i}\right)\big{\}}\), for \(m+1\leqslant i\leqslant n\). The LFP defined in Equation (7) can further be transformed to a linear programming problem (LPP), which can be solved efficiently. The solution of the LPP (7) yields a partially identified point estimate interval of ATE under a specific sensitivity model \(l_{a}\in\mathcal{L}_{\beta_{0}}\left(\gamma_{0},\gamma_{1}\right)\) and \(a\in\{0,1\}\). We can also obtain a \(100(1-\alpha)\%\) asymptotic confidence interval for ATE under a postulated sensitivity model \(l_{a}\) using a percentile bootstrap approach (Zhao et al., 2019). ## 4 Extension to Observational Studies with Multivalued Treatments In this section, we extend our proposed sensitivity analysis framework to the multivalued treatment setting. Suppose we observe i.i.d. \((A_{i},\mathbf{X}_{i},Y_{i})_{i=1}^{n}\) from an observational study with \(J>2\) treatment levels, where \(A_{i}\in\mathcal{A}=\{1,2,\ldots,J\}\) with the corresponding set of potential outcomes \(\mathcal{Y}_{i}=\{Y_{i}(1),Y_{i}(2),\ldots,Y_{i}(J)\}\), and \(\mathbf{X}_{i}\in\mathcal{X}\subset\mathbb{R}^{d}\) is a vector of observed confounders for each subject \(i\in[n]\). Let us also define a treatment indicator \(D_{i}(a)\) (\(1\) if \(A_{i}=a\), \(0\) otherwise) for \(a\in\mathcal{A}\). The identifiability assumptions for observational studies with multivalued treatments are almost equivalent to those in the binary treatment setting. We assume the overlap assumption that implies \(\mathbb{P}_{0}(A=a|\mathbf{X}=\mathbf{x})>0\) for all \(a\in\mathcal{A}\). However, instead of the strong ignorability assumption, we assume weak _ignorability_(Imbens, 2000). _Assumption 3 (weak ignorability)_.: \(D(a)\perp Y(a)\mid\boldsymbol{X},\ \forall\,a\in\mathcal{A}\). Imbens (2000) extended the concept of propensity scores from binary treatments to multivalued treatments introducing the generalized propensity score (GPS), which is defined as \(r_{a,0}(\boldsymbol{x})=\mathbb{P}_{0}(A=a|\boldsymbol{X}=\boldsymbol{x})\) for \(a\in\mathcal{A}\). As in Section 3, let us further denote the unobserved or true propensity score as \(r_{a,0}(\boldsymbol{x},y)=\mathbb{P}_{0}(A=a|\boldsymbol{X}=\boldsymbol{x},Y(a )=y)\). Based on these assumptions and notations, Basit et al. 
(2023) recently proposed a sensitivity analysis framework for the multivalued treatment setting extending the framework of Zhao et al. (2019) for binary treatments. Using similar ideas, we propose the following risk-ratio-based modified marginal sensitive model for observational studies with multivalued treatments. **Definition 2** (modified marginal sensitivity model for multivalued treatments).: _For fixed \(\Gamma_{0}\geqslant 1\) and \(\Gamma_{1}=\max\big{\{}r_{a,\beta_{0}}(\boldsymbol{x}),\Gamma_{0}^{-1}\big{\}}\), the modified marginal sensitivity model for multivalued treatments assumes that_ \[\Gamma_{1}\leqslant\mathrm{RR}\big{\{}r_{a,\beta_{0}}(\boldsymbol{x}),r_{a}( \boldsymbol{x},y)\big{\}}\leqslant\Gamma_{0},\ \forall\,\boldsymbol{x}\in\mathcal{X},a\in\mathcal{A},y\in\mathbb{R}, \tag{8}\] _where \(\Gamma_{0}\) and \(\Gamma_{1}\) are sensitivity parameters and \(\mathrm{RR}\,(p_{1},p_{2})=p_{1}/p_{2}\) is the risk ratio._ Next, as in section 3.1, let us define \[k_{a,\beta_{0}}(\boldsymbol{x})=\log\big{(}r_{a,\beta_{0}}(\boldsymbol{x}) \big{)}\quad\text{and}\quad k_{a}(\boldsymbol{x},y)=\log\big{(}r_{a}( \boldsymbol{x},y)\big{)},\] and let \(l_{a}(\boldsymbol{x},y)=k_{a,\beta_{0}}(\boldsymbol{x})-l_{a}(\boldsymbol{x},y)\) for all \(a\in\mathcal{A}\) be the log-scale difference between the observed and the unobserved GPSs under any specified sensitivity model \(r_{a}(\boldsymbol{x},y)\). Then, it can be shown that the sensitivity model (8) is equivalent to assuming \(l_{a}\in\mathcal{L}_{\beta_{0}}(\gamma_{0},\gamma_{1})\), where \[\mathcal{L}_{\beta_{0}}(\gamma_{0},\gamma_{1})=\big{\{}l_{a}:\mathcal{X} \times\mathbb{R}\to\mathbb{R}\text{ and }\gamma_{1}\leqslant l_{a}\leqslant\gamma_{0},\ a\in\mathcal{A}\big{\}},\] \(\gamma_{0}=\log(\Gamma_{0})\), and \(\gamma_{1}=\log(\Gamma_{1})=\max\big{(}\log(r_{a,\beta_{0}}(\boldsymbol{x})), -\gamma_{0}\big{)}\) are the sensitivity parameters. In order to define estimands of causal effects in the multivalued treatment setting, we use a general class of additive causal estimands proposed by Basit et al. (2023). This class of estimands is based on inverse probability weighting and is defined as \[\tau(\boldsymbol{c})=\sum_{a=1}^{J}c_{a}m(a)=\sum_{a=1}^{J}c_{a}\mathbb{E}_{0} \Bigg{[}\frac{Y\,D(a)}{r_{a}(\boldsymbol{X})}\Bigg{]}, \tag{9}\] where \(\boldsymbol{c}=(c_{1},\ldots,c_{J})^{\prime}\) is a vector of contrasts, \(m(a)=\mathbb{E}_{0}[Y(a)]\) is the average potential outcome for \(a\in\mathcal{A}\). These estimands encompass many commonly used causal estimands of interest for multivalued treatments, such as the pairwise average treatment effects (PATEs). Based on the identifiability assumptions defined for multivalued treatments, we define the shifted estimand of the average potential outcome \(m(a)\) under a specified sensitivity model \(l_{a}\) as \[m^{(h_{a})}(a)=\mathbb{E}_{0}\Bigg{[}\frac{YD(a)}{r_{a}(\boldsymbol{X})} \Bigg{]}=\mathbb{E}_{0}\Bigg{[}\frac{YD(a)}{r_{a}(\boldsymbol{X},Y)}\Bigg{]}= \mathbb{E}_{0}\Bigg{[}\frac{YD(a)}{r^{(h_{a})}(\boldsymbol{X},Y)}\Bigg{]}, \tag{10}\] where \(r^{(l_{a})}(\boldsymbol{x},y)=\big{[}\exp\big{(}l_{a}(\boldsymbol{x},y)-k_{a, \beta_{0}}(\boldsymbol{x})\big{)}\big{]}^{-1}\) is the shifted GPS. Consequently, under the modified marginal sensitivity model (8), the shifted causal estimand becomes \[\tau^{(l_{a})}(\boldsymbol{c})=\sum_{a=1}^{J}c_{a}m^{(l_{a})}(a). 
\tag{11}\] Similar to the estimation of the shifted ATE defined in Equation (4) for binary treatments, we can show that the estimation of the causal estimand can be converted to a linear programming problem (LPP) that allows us to efficiently estimate the partially identified point estimate intervals of \(\tau(\mathbf{c})\) for any \(l_{a}\in\mathcal{L}_{\beta_{0}}(\gamma_{0},\gamma_{1})\). The percentile bootstrap approach of Zhao et al. (2019) is also applicable for the computation of \(100(1-\alpha)\%\) asymptotic confidence intervals for \(\tau(\mathbf{c})\)(Basit et al., 2023). ## 5 Simulation Study We conducted simulated studies to investigate the performance of the proposed sensitivity analysis framework in the multivalued treatment setting. Our data generating mechanism and simulation settings are equivalent to Basit et al. (2023). We simulate three covariates as \(X_{i1}\sim\mathrm{Bernoulli}(0.5)\), \(X_{i2}\sim\mathrm{U}(-1,1)\), and \(X_{i3}\sim\mathrm{N}(0,0.5)\). For each \(i\in[n]\), the covariate vector then becomes \(\mathbf{X}_{i}=(1,X_{i1},X_{i2},X_{i3})^{T}\). We simulated the treatment assignment mechanism using the following multinomial distribution \[\big{(}D_{i}(1),D_{i}(2),D_{i}(3)\big{)}\ \big{|}\ \mathbf{X}_{i}\sim\mathrm{ Multinom}\,\big{(}r_{1}(\mathbf{X}_{i}),r_{2}(\mathbf{X}_{i}),r_{3}(\mathbf{X}_{i})\big{)},\] where \(D_{i}(a)\) is the treatment indicator and \[r_{a}(\mathbf{X}_{i},Y_{i})=r_{a}(\mathbf{X}_{i})=\frac{\exp\big{(}\mathbf{X}_{i}^{T}\beta _{a}\big{)}}{\sum_{a^{\prime}=1}^{3}\exp\big{(}\mathbf{X}_{i}^{T}\beta_{a^{\prime }}\big{)}}\] denotes the complete or unobserved GPSs for \(a\in\{1,2,3\}\) with \(\beta_{1}=(0,0,0,0)^{T}\), \(\beta_{2}=k_{2}\times(0,1,1,1)^{T}\), and \(\beta_{3}=k_{3}\times(0,1,1,-1)^{T}\). In order to assess the influence of the degree of overlap on the point estimates and confidence intervals of our proposed frameworks, we considered two simulation scenarios. In the first scenario, we set \((k_{2},k_{3})=(0.1,-0.1)\) to simulate a scenario with adequate overlap in the covariates, and in the second scenario, we set \((k_{2},k_{3})=(3,3)\) to induce lack of overlap. The potential outcomes are generated from the following multinomial distribution \[\big{(}Y_{i}(1),Y_{i}(2),Y_{i}(3)\big{)}\ \big{|}\ \mathbf{X}_{i}\sim\mathrm{ Multinom}\,\big{(}p_{Y_{1}}(\mathbf{X}_{i}),p_{Y_{2}}(\mathbf{X}_{i}),p_{Y_{3}}(\mathbf{X}_{i}) \big{)},\] where \(Y_{i}(a)\) is the potential outcome for treatment level \(a\) and \[p_{Y_{a}}(\mathbf{X}_{i})=\mathbb{P}(Y(a)=1\big{|}\mathbf{X}_{i})=\frac{\exp\big{(} \mathbf{X}_{i}^{T}\delta_{a}\big{)}}{\sum_{a^{\prime}=1}^{3}\exp\big{(}\mathbf{X}_{i} ^{T}\delta_{a^{\prime}}\big{)}}\] with \(\delta_{1}=(1,1,1,1)^{T}\), \(\delta_{2}=(1,1,-1,1)^{T}\), and \(\delta_{3}=(1,1,1,-1)^{T}\). The observed outcome was then obtained as \(Y_{i}=\sum_{a=1}^{J}D_{i}(a)Y_{i}(a)\) for each subject \(i\in[n]\). We simulated \(1000\) datasets with sample size \(n=750\) for each scenario and estimate the interval of point estimates and confidence intervals for the pairwise ATEs using our proposed sensitivity analysis framework. The pairwise ATEs are denoted by \(\tau_{i,j}\), for \(i,j\in\{1,2,3\}\). We considered six values of the sensitivity parameters \(\gamma_{0}\), namely, \(\gamma_{0}=\{0,0.1,0.2,0.5,1,2\}\). The true partially identified intervals were obtained under each simulation scenario using large scale numerical approximations. 
Since the treatment under consideration is multivalued with three treatment levels, the observed generalized propensity scores (GPSs) are modeled using the multinomial logit regression model. We simulate \(1000\) datasets for each scenario and estimate the interval of point estimates and construct \(90\%\) confidence intervals for the pairwise ATEs under the proposed sensitivity analysis for multivalued treatment settings. We report the percentage average bias (in SD units) in the lower and upper bounds of the point estimate interval, the non-coverage rate of the confidence interval, the median interval of point estimates, and the median confidence intervals calculated from the \(1000\) simulated datasets. The simulation results under the proposed framework of sensitivity analysis are presented in Table 1. When there is adequate overlap (Scenario-I), we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. 
However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. 
The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. 
However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. 
The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. Therefore, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. However, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. The percentile bootstrap confidence intervals also satisfy the nominal \(90\%\) coverage rate in the presence of adequate overlap. Therefore, we observe that the average bias in the SIPW point estimators of the pairwise ATEs lies within \(10\%\) of its standard deviation in almost all cases under under the proposed framework. that the performance of the SIPW point estimators and the percentile bootstrap confidence intervals deteriorates when there is a lack of overlap (Scenario-II). These findings are similar to the ones observed by Zhao et al. (2019) and Basit et al. (2023), and therefore, we recommend to interpret the sensitivity analysis results under the proposed framework with caution when there is a lack of overlap in the covariate distribution among different treatment groups. ## 6 Real Data Application In this section, we applied our proposed sensitivity analysis framework for multivalued treatments to estimate the causal effect of maternal education on the fertility rate in Bangladesh. The dataset used in this analysis was obtained from the latest round of the Bangladesh Multiple Indicator Cluster Survey (MICS) 2019 (BBS & UNICEF, 2019). We considered the number of children ever born to women in the reproductive age (\(15-49\) years) as a measure of fertility, which is the outcome of interest. The treatment variable was the maternal education level that has four levels- "pre-primary or none", "primary", "secondary", and "higher secondary or beyond". The following variables were considered as the observed confounders, which were assumed to be associated with both treatment and outcome: place of residence ("urban" or "rural"), district, religion ("muslim" or "non-muslim"), household's wealth index, women's age at marriage, and education level of the household head (Tomal et al., 2022). 
After discarding missing values from the outcome, treatment, and the observed confounders, we obtained complete data on \(53,481\) women aged \(15-49\) years to conduct the analysis. Using the proposed sensitivity analysis framework, we conduct the sensitivity analysis for the three pairwise ATEs between "pre-primary or none", "primary", and "secondary" vs. "higher secondary or beyond" level of maternal education, denoted by \(\tau_{1,4}\), \(\tau_{2,4}\), and \(\tau_{3,4}\), respectively. We considered \(\Gamma_{0}=\exp(\gamma_{0})=\{1,1.25,1.5,1.75,2.00,2.25,2.50,2.75\}\) to assess the sensitivity of the causal effect estimates to the presence of varying magnitudes of unmeasured confounding. As discussed in Remark 1, we did not need to specify the values of the sensitivity parameter \(\Gamma_{1}\) to conduct the sensitivity analysis. Since the treatment variable is ordinal, we fitted an ordinal logistic regression model, namely the continuation ratio regression model, to estimate the generalized propensity scores (GPSs) using the chosen observed confounders. The estimated GPSs ranged from \(0.01\) to \(0.98\) with a mean of \(0.25\) across the four treatment groups. In light of our findings in the simulation studies in Section 5, we recommend to interpret the results obtained from the SIPW estimators with caution as some of the estimated GPSs are close to zero. For each value of the sensitivity parameter \(\Gamma_{0}\), we estimated the partially identified point estimate intervals and obtained \(100(1-\alpha)\%\) confidence intervals using percentile bootstrap with \(B=1000\). Table 2 and Figure 1 represent the sensitivity analysis results under the risk-ratio-based framework. We can observe that in the case of the pairwise ATE between "pre-primary or none" and "higher secondary or beyond" level of education (\(\tau_{1,4}\)), the confidence intervals do not contain the null value of zero, suggesting a significant non-zero causal effect, for at least \(\Gamma_{0}=2.75\). That is, the estimated pairwise ATE \(\tau_{1,4}\) is significantly different from zero even in the presence of unmeasured confounders that cause the true (unobserved) GPSs to be 2.75 times higher than the observed GPSs. Similarly, the ATE between the "primary" and "higher secondary or beyond" level of education (\(\tau_{2,4}\)) is statistically significant for at least \(\Gamma_{0}=2.25\). However, the ATE between "secondary" and "higher secondary or beyond" education (\(\tau_{3,4}\)) is insignificant for values of \(\Gamma_{0}\) as small as about \(1.5\). Based on the conducted sensitivity analysis under our proposed framework, we can imply that the estimated pairwise ATE between "pre-primary or none" vs "higher secondary or beyond" education (\(\tau_{1,4}\)) and "primary" vs "higher secondary or beyond" education (\(\tau_{2,4}\)) are less sensi tive to the presence of unmeasured confounding in that very strong unmeasured confounders are needed to invalid the causal estimates of \(\tau_{1,4}\) and \(\tau_{2,4}\). This can be perceived as a substantial evidence of significant causal effect of maternal education on the fertility of women in Bangladesh. ## 7 Conclusion Sensitivity analysis is crucial for the accurate interpretation of causal conclusions drawn from observational studies. In this paper, we proposed a risk-ratio-based modified marginal sensitivity model by extending the odds-ratio-based sensitivity model of Zhao et al. (2019). 
We further extended our proposed sensitivity model to observational studies with multivalued treatments. As the sensitivity parameter in our proposed framework quantifies the degree of unmeasured confounding using risk ratios of the observed and true (generalized) propensity scores, it is easier and more intuitive to interpret the sensitivity analysis results under the proposed framework compared to that under odds-ratio-based frameworks. We illustrated that estimation of the in \begin{table} \begin{tabular}{c c c c c} \hline \hline Estimand & \(\Gamma_{0}\) & \(\gamma_{0}\) & \begin{tabular}{c} Point estimate \\ interval \\ \end{tabular} & \begin{tabular}{c} 90\% confidence \\ Interval \\ \end{tabular} \\ \hline \multirow{7}{*}{\(\tau_{1,4}\)} & 1 & 0.00 & \((1.97,1.97)\) & \((1.92,2.02)\) \\ & 1.25 & 0.22 & \((1.59,2.46)\) & \((1.52,2.53)\) \\ & 1.5 & 0.41 & \((1.22,2.83)\) & \((1.14,2.90)\) \\ & 1.75 & 0.56 & \((0.91,3.14)\) & \((0.83,3.21)\) \\ & 2 & 0.69 & \((0.66,3.39)\) & \((0.59,3.46)\) \\ & 2.25 & 0.81 & \((0.46,3.61)\) & \((0.38,3.68)\) \\ & 2.5 & 0.92 & \((0.29,3.82)\) & \((0.22,3.90)\) \\ & 2.75 & 1.01 & \((0.13,4.01)\) & \((0.03,4.09)\) \\ \hline \multirow{7}{*}{\(\tau_{2,4}\)} & 1 & 0.00 & \((1.46,1.46)\) & \((1.43,1.48)\) \\ & 1.25 & 0.22 & \((1.15,1.95)\) & \((1.10,1.99)\) \\ & 1.5 & 0.41 & \((0.82,2.27)\) & \((0.77,2.32)\) \\ & 1.75 & 0.56 & \((0.56,2.57)\) & \((0.50,2.63)\) \\ & 2 & 0.69 & \((0.30,2.83)\) & \((0.24,2.88)\) \\ & 2.25 & 0.81 & \((0.07,3.06)\) & \((0.02,3.11)\) \\ & 2.5 & 0.92 & \((-0.12,3.24)\) & \((-0.17,3.28)\) \\ & 2.75 & 1.01 & \((-0.28,3.39)\) & \((-0.33,3.44)\) \\ \hline \multirow{7}{*}{\(\tau_{3,4}\)} & 1 & 0.00 & \((0.75,0.75)\) & \((0.72,0.80)\) \\ & 1.25 & 0.22 & \((0.45,1.17)\) & \((0.39,1.23)\) \\ & 1.5 & 0.41 & \((0.13,1.49)\) & \((0.08,1.54)\) \\ & 1.75 & 0.56 & \((-0.13,1.75)\) & \((-0.18,1.80)\) \\ & 2 & 0.69 & \((-0.33,1.97)\) & \((-0.38,2.01)\) \\ & 2.25 & 0.81 & \((-0.49,2.13)\) & \((-0.54,2.17)\) \\ & 2.5 & 0.92 & \((-0.62,2.27)\) & \((-0.67,2.31)\) \\ & 2.75 & 1.01 & \((-0.73,2.38)\) & \((-0.78,2.41)\) \\ \hline \hline \end{tabular} \end{table} Table 2: _SIPW point estimate intervals and 90% percentile bootstrap confidence intervals for the pairwise ATEs under the RR framework for different values of the sensitivity parameter \(\Gamma_{0}\). The pairwise ATEs of pre-primary or no education, primary education, and secondary education vs. higher secondary or beyond education are denoted by \(\tau_{1,4}\), \(\tau_{2,4}\), and \(\tau_{3,4}\), respectively_ verse probability weighting (IPW) causal effect estimators under our proposed framework can be computed efficiently using a percentile bootstrap approach. We also conducted simulation studies that suggest that our proposed sensitivity analysis framework performs well when there is an adequate overlap among the treatment groups. However, the performance deteriorates due to the instability of the inverse probability weights when there is a lack of overlap. Finally, we demonstrated sensitivity analysis under the proposed framework using an empirical study, where we have estimated the causal effect of maternal education on the female fertility in Bangladesh. There are a number of further potential research directions. We are currently working on incorporating other smooth causal effect estimators, such as the doubly-robust augmented IPW (AIPW) estimators (Robins et al., 1994) and generalized overlap weighting (GOW) estimators (Li et al., 2019) into our proposed framework. 
Furthermore, we intend to work on calibrating the the sensitivity parameter in our framework to the observed confounders. Zhang & Small (2020) have recently worked on such calibration of the sensitivity model of Gastwirth et al. (1998) in matched observational studies.
2309.16840
Constant Approximation for Individual Preference Stable Clustering
Individual preference (IP) stability, introduced by Ahmadi et al. (ICML 2022), is a natural clustering objective inspired by stability and fairness constraints. A clustering is $\alpha$-IP stable if the average distance of every data point to its own cluster is at most $\alpha$ times the average distance to any other cluster. Unfortunately, determining if a dataset admits a $1$-IP stable clustering is NP-Hard. Moreover, before this work, it was unknown if an $o(n)$-IP stable clustering always \emph{exists}, as the prior state of the art only guaranteed an $O(n)$-IP stable clustering. We close this gap in understanding and show that an $O(1)$-IP stable clustering always exists for general metrics, and we give an efficient algorithm which outputs such a clustering. We also introduce generalizations of IP stability beyond average distance and give efficient, near-optimal algorithms in the cases where we consider the maximum and minimum distances within and between clusters.
Anders Aamand, Justin Y. Chen, Allen Liu, Sandeep Silwal, Pattara Sukprasert, Ali Vakilian, Fred Zhang
2023-09-28T20:42:46Z
http://arxiv.org/abs/2309.16840v1
# Constant Approximation for Individual Preference Stable Clustering ###### Abstract Individual preference (IP) stability, introduced by Ahmadi et al. (ICML 2022), is a natural clustering objective inspired by stability and fairness constraints. A clustering is \(\alpha\)-IP stable if the average distance of every data point to its own cluster is at most \(\alpha\) times the average distance to any other cluster. Unfortunately, determining if a dataset admits a 1-IP stable clustering is NP-Hard. Moreover, before this work, it was unknown if an \(o(n)\)-IP stable clustering always _exists_, as the prior state of the art only guaranteed an \(O(n)\)-IP stable clustering. We close this gap in understanding and show that an \(O(1)\)-IP stable clustering always exists for general metrics, and we give an efficient algorithm which outputs such a clustering. We also introduce generalizations of IP stability beyond average distance and give efficient, near-optimal algorithms in the cases where we consider the maximum and minimum distances within and between clusters. Introduction In applications involving and affecting people, socioeconomic concepts such as game theory, stability, and fairness are important considerations in algorithm design. Within this context, Ahmadi et al. [1] introduced the notion of _individual preference stability (IP stability)_ for clustering. At a high-level, a clustering of an input dataset is called IP stable if, for each individual point, its average distance to any other cluster is larger than the average distance to its own cluster. Intuitively, each individual prefers its own cluster to any other, and so the clustering is stable. There are plenty of applications of clustering in which the utility of each individual in any cluster is determined according to the other individuals who belong to the same cluster. For example, in designing _personalized medicine_, the more similar the individuals in each cluster are, the more effective medical decisions, interventions, and treatments can be made for each group of patients. Stability guarantees can also be used in personalized learning environments or marketing campaigns to ensure that no individual wants to deviate from their assigned cluster. Furthermore, the focus on individual utility in IP stability (a clustering is only stable if every individual is "happy") enforces a sort of individual fairness in clustering. In addition to its natural connections to cluster stability, algorithmic fairness, and Nash equilibria, IP stability is also algorithmically interesting in its own right. While clustering is well-studied with respect to global objective functions (e.g. the objectives of centroid-based clustering such as \(k\)-means or correlation/hierarchical clustering), less is known when the goal is to partition the dataset such that every point in the dataset is individually satisfied with the solution. Thus, IP stability also serves as a natural and motivated clustering framework with a non-global objective. ### Problem Statement and Preliminaries The main objective of our clustering algorithms is to achieve IP stability given a set \(P\) of \(n\) points lying in a metric space \((M,d)\) and \(k\), the number of clusters. **Definition 1.1** (Individual Preference (IP) Stability [1]).: The goal is to find a disjoint \(k\)-clustering \(\mathcal{C}=(C_{1},\cdots,C_{k})\) of \(P\) such that every point, _on average_, is closer to the points of its own cluster than to the points in any other cluster. 
Formally, for all \(v\in P\), let \(C(v)\) denote the cluster that contains \(v\). We say that \(v\in P\) is IP stable with respect to \(\mathcal{C}\) if either \(C(v)=\{v\}\) or for every \(C^{\prime}\in\mathcal{C}\) with \(C^{\prime}\neq C(v)\), \[\frac{1}{|C(v)|-1}\sum_{u\in C(v)}d(v,u)\leq\frac{1}{|C^{\prime}|}\sum_{u\in C ^{\prime}}d(v,u). \tag{1}\] The clustering \(\mathcal{C}\) is 1-IP stable (or simply IP stable) if and only if every \(v\in P\) is stable with respect to \(\mathcal{C}\). Ahmadi et al. [1] showed that an arbitrary dataset may not admit an IP stable clustering. This can be the case even when \(n=4\). Furthermore, they proved that it is NP-hard to decide whether a given a set of points have an IP stable \(k\)-clustering, even for \(k=2\). This naturally motivates the study of the relaxations of IP stability. **Definition 1.2** (Approximate IP Stability).: A \(k\)-clustering \(\mathcal{C}=(C_{1},\cdots,C_{k})\) of \(P\) is \(\alpha\)-approximate IP stable, or simply \(\alpha\)-IP stable, if for every point \(v\in P\), the following holds: either \(C(v)=\{v\}\) or for every \(C^{\prime}\in\mathcal{C}\) and \(C^{\prime}\neq C\), \[\frac{1}{|C(v)|-1}\sum_{u\in C(v)}d(v,u)\leq\frac{\alpha}{|C^{\prime}|}\sum_{u \in C^{\prime}}d(v,u). \tag{2}\] The work of [1] proposed algorithms to outputting IP stable clusterings on the one-dimensional line for any value of \(k\) and on tree metrics for \(k=2\). The first result implies an \(O(n)\)-IP stable clustering for general metrics, by applying a standard \(O(n)\)-distortion embedding to one-dimensional Euclidean space. In addition, they give a bicriteria approximation that discards an \(\varepsilon\)-fraction of the input points and outputs an \(O\left(\frac{\log^{2}n}{\varepsilon}\right)\)-IP stable clustering for the remaining points. Given the prior results, it is natural to ask if the \(O(n)\) factor for IP stable clustering given in [1] can be improved. ### Our Results New Approximations.Improving on the \(O(n)\)-IP stable algorithm in [1], we present a deterministic algorithm which for general metrics obtains an \(O(1)\)-IP stable \(k\)-clustering, for any value of \(k\). Note that given the existence of instances without 1-IP stable clusterings, our approximation factor is optimal up to a constant factor. **Theorem 1.3**.: _(Informal; see Theorem 3.1) Given a set \(P\) of \(n\) points in a metric space \((M,d)\) and a number of desired clusters \(k\leq n\), there exists an algorithm that computes an \(O(1)\)-IP stable \(k\)-clustering of \(P\) in polynomial time._ Our algorithm outputs a clustering with an even stronger guarantee that we call uniform (approximate) IP stability. Specifically, for some global parameter \(r\) and for every point \(v\in P\), the average distance from \(v\) to points in its own cluster is upper bounded by \(O(r)\) and the average distance from \(v\) to points in any other cluster is lower bounded by \(\Omega(r)\). Note that the general condition of \(O(1)\)-IP stability would allow for a different value of \(r\) for each \(v\). We again emphasize that Theorem 1.3 implies that an \(O(1)\)-IP stable clustering always exists, where prior to this work, only the \(O(n)\) bound from [1] was known for general metrics. Additional \(k\)-Center Clustering Guarantee.The clustering outputted by our algorithm satisfies additional desirable properties beyond \(O(1)\)-IP stability. 
In the \(k\)-center problem, we are given \(n\) points in a metric space, and our goal is to pick \(k\) centers as to minimize the maximal distance of any point to the nearest center. The clustering outputted by our algorithm from Theorem 1.3 has the added benefit of being a constant factor approximation to the \(k\)-center problem in the sense that if the optimal \(k\)-center solution has value \(r_{0}\), then the diameter of each cluster outputted by the algorithm is \(O(r_{0})\). In fact, we argue that IP stability is more meaningful when we also seek a solution that optimizes some clustering objective. If we only ask for IP stability, there are instances where it is easy to obtain \(O(1)\)-IP stable clusterings, but where such clusterings do not provide insightful information in a typical clustering application. Indeed, as we will show in Appendix B, randomly \(k\)-coloring the nodes of an unweighted, undirected graph (where the distance between two nodes is the number of edges on the shortest path between them), gives an \(O(1)\)-IP stable clustering when \(k\leq O\left(\frac{\sqrt{n}}{\log n}\right)\). Our result on trees demonstrates the idiosyncrasies of individual objectives thus our work raises further interesting questions about studying standard global clustering objectives under the restriction that the solutions are also (approximately) IP stable. Max and Min-IP Stability.Lastly, we introduce a notion of \(f\)-IP stability, generalizing IP stability. **Definition 1.4** (\(f\)-Ip Stability).: Let \((M,d)\) be a metric space, \(P\) a set of \(n\) points of \(M\), and \(k\) the desired number of partitions. Let \(f:P\times 2^{P}\rightarrow\mathbb{R}^{\geq 0}\) be a function which takes in a point \(v\in P\), a subset \(C\) of \(P\), and outputs a non-negative real number. we say that a \(k\)-clustering \(\mathcal{C}=(C_{1},\cdots,C_{k})\) of \(P\) is \(f\)-IP stable if for every point \(v\in P\), the following holds: either \(C(v)=\{v\}\) or for every \(C^{\prime}\in\mathcal{C}\) and \(C^{\prime}\neq C\), \[f\left(v,C(v)\setminus\{v\}\right)\leq f\left(v,C^{\prime}\right). \tag{3}\] Note that the standard setting of IP stability given in Definition 1.1 corresponds to the case where \(f(v,C)=(1/|C|)\times\sum_{v^{\prime}\in C}d(v,v^{\prime})\). The formulation of \(f\)-IP stability, therefore, extends IP stability beyond average distances and allows for alternative objectives that may be more desirable in certain settings. For instance, in hierarchical clustering, average, minimum, and maximum distance measures are well-studied. In particular, we focus on max-distance and min-distance in the definition of \(f\)-IP stable clustering in addition to average distance (which is just Definition 1.1), where \(f(v,C)=\max_{v^{\prime}\in C}d(v,v^{\prime})\) and \(f(v,C)=\min_{v^{\prime}\in C}d(v,v^{\prime})\). We show that in both the max and min distance formulations, we can solve the corresponding \(f\)-IP stable clustering (nearly) optimally in polynomial time. We provide the following result: **Theorem 1.5** (Informal; see Theorem 4.1 and Theorem 4.2).: _In any metric space, Min-IP stable clustering can be solved optimally and Max-IP stable clustering can be solved approximately within a factor of \(3\), in polynomial time._ We show that the standard greedy algorithm of \(k\)-center, a.k.a, the Gonzalez's algorithm [1], yields a \(3\)-approximate Max-IP stable clustering. 
Moreover, we present a conceptually clean algorithm which is motivated by considering the minimum spanning tree (MST) to output a Min-IP stable clustering. This implies that unlike the average distance formulation of IP stable clustering, a Min-IP stable clustering always exists. Both algorithms work in general metrics. Empirical Evaluations.We experimentally evaluate our \(O(1)\)-IP stable clustering algorithm against \(k\)-means++, which is the empirically best-known algorithm in [1]. We also compare \(k\)-means++ with our optimal algorithm for Min-IP stability. We run experiments on the Adult \begin{table} \begin{tabular}{|l|l|l|l|} \hline Metric & Approximation Factor & Reference & Remark \\ \hline \hline 1D Line metric & 1 & [1] & [1] \\ \hline Weighted tree & 1 & [1] & Only for \(k=2\) \\ \hline General metric & \(O(n)\) & [1] & \\ \hline General metric & \(O(1)\) & **This work** & \\ \hline \end{tabular} \end{table} Table 1: Our results on IP stable \(k\)-clustering of \(n\) points. All algorithms run in polynomial time. data set1 used by [1]. For IP stability, we also use four more datasets from UCI ML repositoriy [1] and a synthetic data set designed to be a hard instance for \(k\)-means++. On the Adult data set, our algorithm performs slightly worse than \(k\)-means++ for IP stability. This is consistent with the empirical results of [1]. On the hard instance2, our algorithm performs better than \(k\)-means++, demonstrating that the algorithm proposed in this paper is more robust than \(k\)-means++. Furthermore for Min-IP stability, we empirically demonstrate that \(k\)-means++ can have an approximation factors which are up to a factor of \(\mathbf{5x}\) worse than our algorithm. We refer to Section 5 and Appendix C for more details. Footnote 1: [https://archive.ics.uci.edu/ml/datasets/adult](https://archive.ics.uci.edu/ml/datasets/adult); see [16]. Footnote 2: The construction of this hard instance is available in the appendix of [1]. ### Technical Overview The main contribution is our \(O(1)\)-approximation algorithm for IP stable clustering for general metrics. We discuss the proof technique used to obtain this result. Our algorithm comprises two steps. We first show that for any radius \(r\), we can find a clustering \(\mathcal{C}=(C_{1},\ldots,C_{t})\) such that (a) each cluster has diameter \(O(r)\), and (b) the average distance from a point in a cluster to the points of any other cluster is \(\Omega(r)\). Conditions (a) and (b) are achieved through a ball carving technique, where we iteratively pick centers \(q_{i}\) of distance \(>6r\) to previous centers such that the radius \(r\) ball \(B(q_{i},r)\) centered at \(q_{i}\) contains a maximal number of points, say \(s_{i}\). For each of these balls, we initialize a cluster \(D_{i}\) containing the \(s_{i}\) points of \(B(q_{i},r)\). We next consider the annulus \(B(q_{i},3r)\setminus B(q_{i},2r)\). If this annulus contains less than \(s_{i}\) points, we include all points from \(B(q_{i},3r)\) in \(D_{i}\). Otherwise, we include _any_\(s_{i}\) points in \(D_{i}\) from the annulus. We assign each unassigned point to the _first_ center picked by our algorithm and is within distance \(O(r)\) to the point. This is a subtle but crucial component of the algorithm as the more natural "assign to the closest center" approach fails to obtain \(O(1)\)-IP stability. One issue remains. With this approach, we have no guarantee on the number of clusters. 
We solve this by merging some of these clusters while still maintaining that the final clusters have radius \(O(r)\). This may not be possible for any choice of \(r\). Thus the second step is to find the right choice of \(r\). We first run the greedy algorithm for \(k\)-center and let \(r_{0}\) be the minimal distance between the chosen centers. We can then run the ball carving algorithm with \(r=cr_{0}\) for a sufficiently small constant \(c<1\). Then if we assign each cluster of \(\mathcal{C}\) to its nearest center among those returned by the greedy algorithm for \(k\)-center, we do indeed maintain the property that all clusters have diameter \(O(r)\), and since \(c\) is a small enough constant, all the clusters will be non-empty. The final number of clusters will therefore be \(k\). As an added benefit of using the greedy algorithm for \(k\)-center as a subroutine, we obtain that the diameter of each cluster is also \(O(r_{0})\), namely the output clustering is a constant factor approximation to \(k\)-center. ### Related Work Fair Clustering. One of the main motivations of IP stable clustering is its interpretation as a notion of individual fairness for clustering [1]. Individual fairness was first introduced by [1] for the classification task, where, at a high level, the authors aim for a classifier that gives "similar predictions" for "similar" data points. Recently, other formulations of individual fairness have been studied for clustering [1, 2, 3]. A related notion of fairness was proposed for centroid-based clustering: given a set of \(n\) points \(P\) and the number of clusters \(k\), for each point, a center must be picked among its \((n/k)\)-th closest neighbors. The optimization variant of it was later studied by [20, 21, 22]. [1] studied a pairwise notion of fairness in which data points represent people who gain some benefit from being clustered together. In a subsequent work, [1] introduced a stochastic variant of this notion. [1] studied the setting in which the output is a distribution over centers and "similar" points are required to have "similar" centers distributions. Stability in Clustering. Designing efficient clustering algorithms under notions of stability is a well-studied problem3. Among the various notions of stability, _average stability_ is the most relevant to our model [1]. In particular, they showed that if there is a ground-truth clustering satisfying the requirement of Equation (1) with an additive gap of \(\gamma>0\), then it is possible to recover the solution in the list model where the list size is exponential in \(1/\gamma\). Similar types of guarantees are shown in the work by [10]. While this line of research mainly focuses on presenting faster algorithms utilizing the strong stability conditions, the focus of IP stable clustering is whether we can recover such stability properties in general instances, either exactly or approximately. Footnote 3: For a comprehensive survey on this topic, refer to [1]. Hedonic Games. Another game-theoretic study of clustering is hedonic games [1, 2, 10]. In a hedonic game, players choose to form coalitions (i.e., clusters) based on their utility. Our work differs from theirs, since we do not model the data points as selfish players. In a related work, [11] proposes another utility measure for hedonic clustering games on graphs. In particular, they define a closeness utility, where the utility of node \(i\) in cluster \(C\) is the ratio between the number of nodes in \(C\) adjacent to \(i\) and the sum of distances from \(i\) to other nodes in \(C\). This measure is incomparable to IP stability. In addition, their work focuses only on clustering in graphs while we consider general metrics. ## 2 Preliminaries and Notations We let \((M,d)\) denote a metric space, where \(d\) is the underlying distance function. We let \(P\) denote a fixed set of points of \(M\). Here \(P\) may contain multiple copies of the same point. For a given point \(x\in P\) and radius \(r\geq 0\), we denote by \(B(x,r)=\{y\in P\mid d(x,y)\leq r\}\) the ball of radius \(r\) centered at \(x\). For two subsets \(X,Y\subseteq P\), we denote by \(d(X,Y)=\inf_{x\in X,y\in Y}d(x,y)\). Throughout the paper, \(X\) and \(Y\) will always be finite and then the infimum can be replaced by a minimum. For \(x\in P\) and \(Y\subseteq P\), we simply write \(d(x,Y)\) for \(d(\{x\},Y)\). Finally, for \(X\subseteq P\), we denote by \(\operatorname{diam}(X)=\sup_{x,y\in X}d(x,y)\) the diameter of the set \(X\). Again, \(X\) will always be finite, so the supremum can be replaced by a maximum. ## 3 Constant-Factor IP Stable Clustering In this section, we prove our main result: For a set \(P=\{x_{1},\ldots,x_{n}\}\) of \(n\) points with a metric \(d\) and every \(k\leq n\), there exists a \(k\)-clustering \(\mathcal{C}=(C_{1},\ldots,C_{k})\) of \(P\) which is \(O(1)\)-approximate IP stable. 
Moreover, such a clustering can be found in time \(\widetilde{O}(n^{2}T)\), where \(T\) is an upper bound on the time it takes to compute the distance between two points of \(P\). AlgorithmOur algorithm uses a subroutine, Algorithm 1, which takes as input \(P\) and a radius \(r\in\mathbb{R}\) and returns a \(t\)-clustering \(\mathcal{D}=(D_{1},\ldots,D_{t})\) of \(P\) with the properties that (1) for any \(1\leq i\leq t\), the maximum distance between any two points of \(D_{i}\) is \(O(r)\), and (2) for any \(x\in P\) and any \(i\) such that \(x\notin D_{i}\), the average distance from \(x\) to points of \(D_{i}\) is \(\Omega(r)\). These two properties ensure that \(\mathcal{D}\) is \(O(1)\)-approximate IP stable. However, we have no control on the number of clusters \(t\) that the algorithm produces. To remedy this, we first run a greedy \(k\)-center algorithm on \(P\) to obtain a set of centers \(\{c_{1},\ldots,c_{k}\}\) and let \(r_{0}\) denote the maximum distance from a point of \(P\) to the nearest center. We then run Algorithm 1 with input radius \(r=cr_{0}\) for some small constant \(c\). This gives a clustering \(\mathcal{D}=(D_{1},\ldots,D_{t})\) where \(t\geq k\). Moreover, we show that if we assign each cluster of \(\mathcal{D}\) to the nearest center in \(\{c_{1},\ldots,c_{k}\}\) (in terms of the minimum distance from a point of the cluster to the center), we obtain a \(k\)-clustering \(\mathcal{C}=(C_{1},\ldots,C_{k})\) which is \(O(1)\)-approximate IP stable. The combined algorithm is Algorithm 2. ``` 1:Input: A set \(P=\{x_{1},\ldots,x_{n}\}\) of \(n\) points with a metric \(d\) and a radius \(r>0\). 2:Output: Clustering \(\mathcal{D}=(D_{1},\ldots,D_{t})\) of \(P\). 3:\(Q\leftarrow\emptyset\), \(i\gets 1\) 4:while there exists \(x\in P\) with \(d(x,Q)>6r\)do 5:\(q_{i}\leftarrow\arg\max_{x\in P:d(x,Q)>6r}|B(x,r)|\) 6:\(Q\gets Q\cup\{q_{i}\}\), \(s_{i}\leftarrow|B(q_{i},r)|\), \(A_{i}\gets B(q_{i},3r)\setminus B(q_{i},2r)\) 7:if\(|A_{i}|\geq s_{i}\) 8:\(S_{i}\leftarrow\) any set of \(s_{i}\) points from \(A_{i}\) 9:\(D_{i}\gets B(q_{i},r)\cup S_{i}\) 10:else\(D_{i}\gets B(q_{i},3r_{i})\) 11:\(i\gets i+1\) 12:endwhile 13:for\(x\in P\) assigned to no \(D_{i}\)do 14:\(j\leftarrow\min\{i\mid d(x,q_{i})\leq 7r\}\) 15:\(D_{j}\gets D_{j}\cup\{x\}\) 16:endfor 17:\(t\leftarrow|Q|\) 18:return\(\mathcal{D}=(D_{1},\ldots,D_{t})\) ``` **Algorithm 1** Ball-Carving We now describe the details of Algorithm 1. The algorithm takes as input \(n\) points \(x_{1},\ldots,x_{n}\) of a metric space \((M,d)\) and a radius \(r\). It first initializes a set \(Q=\emptyset\) and then iteratively adds points \(x\) from \(P\) to \(Q\) that are of distance greater than \(6r\) from points already in \(Q\) such that \(|B(x,r)|\), the number of points of \(P\) within radius \(r\) of \(x\), is maximized. This is line 5-6 of the algorithm. Whenever a point \(q_{i}\) is added to \(Q\), we define the annulus \(A_{i}:=B(q_{i},3r)\setminus B(q_{i},2r)\). We further let \(s_{i}=|B(q_{i},r)|\). At this point the algorithm splits into two cases. * If \(|A_{i}|\geq s_{i}\), we initialize a cluster \(D_{i}\) which consists of the \(s_{i}\) points in \(B(x,r)\) and any arbitrarily chosen \(s_{i}\) points in \(A_{i}\). This is line 8-9 of the algorithm. * If, on the other hand, \(|A_{i}|<s\), we define \(D_{i}:=B(q_{i},3r)\), namely \(D_{i}\) contains all points of \(P\) within distance \(3r\) from \(q_{i}\). This is line 10 of the algorithm. 
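To make the case analysis above concrete, below is a minimal Python sketch of the Ball-Carving subroutine on a precomputed \(n\times n\) distance matrix; it also includes the assignment of the leftover points (lines 13-16 of Algorithm 1) described next. The function name, the numpy-based representation and the brute-force center search are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def ball_carving(dist, r):
    """Sketch of Algorithm 1 (Ball-Carving) on an n x n numpy distance matrix."""
    n = dist.shape[0]
    centers, clusters = [], []
    assigned = np.zeros(n, dtype=bool)

    # Lines 4-12: pick centers q_i at distance > 6r from previous centers,
    # each time maximizing the number of points in B(q_i, r).
    while True:
        eligible = [x for x in range(n)
                    if all(dist[x, q] > 6 * r for q in centers)]
        if not eligible:
            break
        q = max(eligible, key=lambda x: int(np.sum(dist[x] <= r)))
        centers.append(q)
        ball = np.where(dist[q] <= r)[0]                     # B(q_i, r)
        annulus = np.where((dist[q] > 2 * r) & (dist[q] <= 3 * r))[0]
        s = len(ball)
        if len(annulus) >= s:
            members = np.concatenate([ball, annulus[:s]])    # any s_i annulus points
        else:
            members = np.where(dist[q] <= 3 * r)[0]          # all of B(q_i, 3r)
        assigned[members] = True
        clusters.append(list(members))

    # Lines 13-16: every remaining point joins the *first* center within 7r.
    for x in range(n):
        if not assigned[x]:
            for i, q in enumerate(centers):
                if dist[x, q] <= 7 * r:
                    clusters[i].append(x)
                    assigned[x] = True
                    break
    return clusters
```

Note that the leftover points are deliberately assigned to the first (earliest-chosen) qualifying center rather than the closest one, mirroring the discussion in the technical overview.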
After iteratively picking the points \(q_{i}\) and initializing the clusters \(D_{i}\), we assign the remaining points as follows. For any point \(x\in P\setminus\bigcup_{i}D_{i}\), we find the minimum \(i\) such that \(d(x,q_{i})\leq 7r\) and assign \(x\) to \(D_{i}\). This is line 13-16 of the algorithm. We finally return the clustering \(\mathcal{D}=(D_{1},\ldots,D_{t})\). We next describe the details of Algorithm 2. The algorithm iteratively pick \(k\) centers \(c_{1},\ldots,c_{k}\) from \(P\) for each center maximizing the minimum distance to previously chosen centers. For each center \(c_{i}\), it initializes a cluster, starting with \(C_{i}=\{c_{i}\}\). This is line 4-7 of the algorithm. Letting \(r_{0}\) be the minimum distance between pairs of distinct centers, the algorithm runs Algorithm 1 on \(P\) with input radius \(r=r_{0}/15\) (line 8-9). This produces a clustering \(\mathcal{D}\). In the final step, we iterate over the clusters \(D\) of \(\mathcal{D}\), assigning \(D\) to the \(C_{i}\) for which \(d(c_{i},D)\) is minimized (line 11-13). We finally return the clustering \((C_{1},\ldots,C_{k})\). ``` 1:Input: Set \(P=\{x_{1},\ldots,x_{n}\}\) of \(n\) points with a metric \(d\) and integer \(k\) with \(2\leq k\leq n\). 2:Output: \(k\)-clustering \(\mathcal{C}=(C_{1},\ldots,C_{k})\) of \(P\). 3:\(S\leftarrow\emptyset\) 4:for\(i=1,\ldots,k\)do 5:\(c_{i}\leftarrow\arg\max_{x\in P}\{d(x,S)\}\) 6:\(S\gets S\cup\{c_{i}\},\;C_{i}\leftarrow\{c_{i}\}\) 7:endfor 8:\(r_{0}\leftarrow\min\{d(c_{i},c_{j})\mid 1\leq i<j\leq k\}\) 9:\(\mathcal{D}\leftarrow\textsc{Ball-Carving}(P,r_{0}/15)\) 10:for\(D\in\mathcal{D}\)do 11:\(j\leftarrow\arg\min_{i}\{d(c_{i},D)\}\) 12:\(C_{j}\gets C_{j}\cup D\) 13:endfor 14:return\(\;\mathcal{C}=(C_{1},\ldots,C_{k})\) ``` **Algorithm 2** IP-Clustering AnalysisWe now analyze our algorithm and provide its main guarantees. **Theorem 3.1**.: _Algorithm 2 returns an \(O(1)\)-approximate IP stable \(k\) clustering in time \(O(n^{2}T+n^{2}\log n)\). Furthermore, the solution is also a constant factor approximation to the \(k\)-center problem._ In order to prove this theorem, we require the following lemma on Algorithm 1. **Lemma 3.2**.: _Let \((D_{1},\ldots,D_{t})\) be the clustering output by Algorithm 1. For each \(i\in[t]\), the diameter of \(D_{i}\) is at most \(14r\). Further, for \(x\in D_{i}\) and \(j\neq i\), the average distance from \(x\) to points of \(D_{j}\) is at least \(\frac{r}{4}\)._ Given Lemma 3.2, we can prove the the main result. Proof of Theorem 3.1.: We first argue correctness. As each \(c_{i}\) was chosen to maximize the minimal distance to points \(c_{j}\) already in \(S\), for any \(x\in P\), it holds that \(\min\{d(x,c_{i})\mid i\in[k]\}\leq r_{0}\). By Lemma 3.2, in the clustering \(\mathcal{D}\) output by Ball-Carving\((P,r_{0}/15)\) each cluster has diameter at most \(\frac{14}{15}r_{0}<r_{0}\), and thus, for each \(i\in[k]\), the cluster \(D\in\mathcal{D}\) which contains \(c_{i}\) will be included in \(C_{i}\) in the final clustering. Indeed, in line 11 of Algorithm 2, \(d(c_{i},D)=0\) whereas \(d(c_{j},D)\geq\frac{1}{15}r_{0}\) for all \(j\neq i\). Thus, each cluster in \((C_{1},\ldots,C_{k})\) is non-empty. Secondly, the diameter of each cluster is at most \(4r_{0}\), namely, for each two points \(x,x^{\prime}\in C_{i}\), they are both within distance \(r_{0}+\frac{14}{15}r_{0}<2r_{0}\) of \(c_{i}\). 
Finally, by Lemma 3.2, for \(x\in D_{i}\) and \(j\neq i\), the average distance from \(x\) to points of \(D_{j}\) is at least \(\frac{r_{0}}{60}\). Since, \(\mathcal{C}\) is a coarsening of \(\mathcal{D}\), i.e., each cluster of \(\mathcal{C}\) is the disjoint union of some of the clusters in \(\mathcal{D}\), it is straightforward to check that the same property holds for the clustering \(\mathcal{C}\). Thus \(\mathcal{C}\) is \(O(1)\)-approximate IP stable. We now analyze the running time. We claim that Algorithm 2 can be implemented to run in \(O(n^{2}T+n^{2}\log n)\) time, where \(T\) is the time to compute the distance between any two points in the metric space. First, we can query all pairs to form the \(n\times n\) distance matrix \(A\). Then we sort \(A\) along every row to form the matrix \(A^{\prime}\). Given \(A\) and \(A^{\prime}\), we easily implement our algorithms as follows. First, we argue about the greedy \(k\)-center steps of Algorithm 2, namely, the for loop on line 4. The most straightforward implementation computes the distance from every point to new chosen centers. At the end, we have computed at most \(nk\) distances from points to centers which can be looked up in \(A\) in time \(O(nk)=O(n^{2})\) as \(k\leq n\). In line 8, we only look at every entry of \(A\) at most once so the total time is also \(O(n^{2})\). The same reasoning also holds for the for loop on line 10. It remains to analyze the runtime. Given \(r\), Algorithm 1 can be implemented as follows. First, we calculate the size of \(|B(x,r)|\) for every point \(x\) in our dataset. This can easily be done by binary searching on the value of \(r\) along each of the (sorted) rows of \(A^{\prime}\), which takes \(O(n\log n)\) time in total. We can similarly calculate the sizes of \(|B(x,2r)|\) and \(|B(x,3r)|\), and thus the number of points in the annulus \(|B(x,3r)\setminus B(x,2r)|\) in the same time to initialize the clusters \(D_{i}\). Similar to the \(k\)-center reasoning above, we can also pick the centers in Algorithm 1 which are \(>6r\) apart iteratively by just calculating the distances from points to the chosen centers so far. This costs at most \(O(n^{2})\) time, since there are at most \(n\) centers. After initializing the clusters \(D_{i}\), we finally need to assign the remaining unassigned points (line 13-16). This can easily be done in time \(O(n)\) per point, namely for each unassigned point \(x\), we calculate its distance to each \(q_{i}\) assigning it to \(D_{i}\) where \(i\) is minimal such that \(d(x,q_{i})\leq 7r\). The total time for this is then \(O(n^{2})\). The \(k\)-center guarantees follow from our choice of \(r_{0}\) and Lemma 3.2. _Remark 3.3_.: We note that the runtime can possibly be improved if we assume special structure about the metric space (e.g., Euclidean metric). See Appendix A for a discussion. We now prove Lemma 3.2. Proof of Lemma 3.2.: The upper bound on the diameter of each cluster follows from the fact that for any cluster \(D_{i}\) in the final clustering \(\mathcal{D}=\{D_{1},\ldots,D_{t}\}\), and any \(x\in D_{i}\), it holds that \(d(x,q_{i})\leq 7r\). The main challenge is to prove the lower bound on the average distance from \(x\in D_{i}\) to \(D_{j}\) where \(j\neq i\). Suppose for contradiction that, there exists \(i,j\) with \(i\neq j\) and \(x\in D_{i}\) such that the average distance from \(x\) to \(D_{j}\) is smaller than \(r/4\), i.e., \(\frac{1}{|D_{j}|}\sum_{y\in D_{j}}d(x,y)<r/4\). 
Then, it in particular holds that \(|B(x,r/2)\cap D_{j}|>|D_{j}|/2\), namely the ball of radius \(r/2\) centered at \(x\) contains more than half the points of \(D_{j}\). We split the analysis into two cases corresponding to the if-else statements in line 7-10 of the algorithm. Case 1: \(|A_{j}|\geq s_{j}\):In this case, cluster \(D_{j}\) consists of at least \(2s_{j}\) points, namely the \(s_{j}\) points in \(B(q_{j},r)\) and the set \(S_{j}\) of \(s_{j}\) points in \(A_{j}\) assigned to \(D_{j}\) in line 8-9 of the algorithm. It follows from the preceding paragraph that, \(|B(x,r/2)\cap D_{j}|>s_{j}\). Now, when \(q_{j}\) was added to \(Q\), it was chosen as to maximize the number of points in \(B(q_{j},r)\) under the constraint that \(q_{j}\) had distance greater than \(6r\) to previously chosen points of \(Q\). Since \(|B(x,r)|\geq|B(x,r/2)|>|B(q_{j},r)|\), at the point where \(q_{j}\) was chosen, \(Q\) already contained some point \(q_{j_{0}}\) (with \(j_{0}<j\)) of distance at most \(6r\) to \(x\) and thus of distance at most \(7r\) to any point of \(B(x,r/2)\). It follows that \(B(x,r/2)\cap D_{j}\) contains no point assigned during line 13- 16 of the algorithm. Indeed, by the assignment rule, such a point \(y\) would have been assigned to either \(D_{j_{0}}\) or potentially an even earlier initialized cluster of distance at most \(7r\) to \(y\). Thus, \(B(x,r/2)\cap D_{j}\) is contained in the set \(B(q_{j},r)\cup S_{j}\). However, \(|B(q_{j},r)|=|S_{j}|=s_{j}\) and moreover, for \((y_{1},y_{2})\in B(q_{j},r)\times S_{j}\), it holds that \(d(y_{1},y_{2})>r\). In particular, no ball of radius \(r/2\) can contain more than \(s_{j}\) points of \(B(q_{j},r)\cup S_{j}\). As \(|B(x,r/2)\cap D_{j}|>s_{j}\), this is a contradiction. Case 2: \(|A_{j}|<s_{j}\):In this case, \(D_{j}\) includes all points in \(B(q_{j},3r)\). As \(x\notin D_{j}\), we must have that \(x\notin B(q_{j},3r)\) and in particular, the ball \(B(x,r/2)\) does not intersect \(B(q_{j},r)\). Thus, \[|D_{j}|\geq|B(x,r/2)\cap D_{j}|+|B(q_{j},r)\cap D_{j}|>|D_{j}|/2+s_{j},\] so \(|D_{j}|>2s_{j}\), and finally, \(|B(x,r/2)\cap D_{j}|>|D_{j}|/2>s_{j}\). Similarly to case 1, \(B(x,r/2)\cap D_{j}\) contains no points assigned during line 13- 16 of the algorithm. Moreover, \(B(x,r/2)\cap B(q_{j},3r)\subseteq A_{j}\). In particular, \(B(x,r/2)\cap D_{j}\subseteq S_{j}\), a contradiction as \(|S_{j}|=s_{j}\) but \(|B(x,r/2)\cap D_{j}|>s_{j}\). ## 4 Min and Max-IP Stable Clustering The Min-IP stable clustering aims to ensure that for any point \(x\), the _minimum_ distance to a point in the cluster of \(x\) is at most the minimum distance to a point in any other cluster. We show that a Min-IP stable \(k\)-clustering always exists for any value of \(k\in[n]\) and moreover, can be found by a simple algorithm (Algorithm 3). ``` 1:Input: Pointset \(P=\{x_{1},\ldots,x_{n}\}\) from a metric space \((M,d)\) and integer \(k\) with \(2\leq k\leq n\). 2:Output:\(k\)-clustering \(\mathcal{C}=(C_{1},\ldots,C_{k})\) of \(P\). 3:\(L\leftarrow\{(x_{i},x_{j})\}_{1\leq i<j\leq n}\) sorted according to \(d(x_{i},x_{j})\) 4:\(E\leftarrow\emptyset\) 5:while\(G=(P,E)\) has \(>k\) connected components do 6:\(e\leftarrow\) an edge \(e=(x,y)\) in \(L\) with \(d(x,y)\) minimal. 7:\(L\gets L\setminus\{e\}\) 8:if\(e\) connects different connected components of \(G\)then\(E\gets E\cup\{e\}\) 9:endwhile 10:return the connected components \((C_{1},\ldots,C_{k})\) of \(G\). 
``` **Algorithm 3** Min-IP-Clustering The algorithm is identical to Kruskal's algorithm for finding a minimum spanning tree except that it stops as soon as it has constructed a forest with \(k\) connected components. First, it initializes a graph \(G=(V,E)\) with \(V=P\) and \(E=\emptyset\). Next, it computes all distances \(d(x_{i},x_{j})\) between pairs of points \((x_{i},x_{j})\) of \(P\) and sorts the pairs \((x_{i},x_{j})\) according to these distances. Finally, it goes through this sorted list adding each edge \((x_{i},x_{j})\) to \(E\) if it connects different connected components of \(G\). After computing the distances, it is well known that this algorithm can be made to run in time \(O(n^{2}\log n)\), so the total running time is \(O(n^{2}(T+\log n))\) where \(T\) is the time to compute the distance between a single pair of points. **Theorem 4.1**.: _The \(k\)-clustering output by Algorithm 3 is a Min-IP stable clustering._ Proof.: Let \(\mathcal{C}\) be the clustering output by the algorithm. Conditions (1) and (2) in the definition of a min-stable clustering are trivially satisfied. To prove that (3) holds, let \(C\in\mathcal{C}\) with \(|C|\geq 2\) and \(x\in C\). Let \(y_{0}\neq x\) be a point in \(C\) such that \((x,y_{0})\in E\) (such an edge exists because \(C\) is the connected component of \(G\) containing \(x\)) and let \(y_{1}\) be the closest point to \(x\) in \(P\setminus C\). When the algorithm added \((x,y_{0})\) to \(E\), \((x,y_{1})\) was also a candidate choice of an edge between connected components of \(G\). Since the algorithm chose the edge of minimal length with this property, \(d(x,y_{0})\leq d(x,y_{1})\). Thus, we get the desired bound: \[\min_{y\in C\setminus\{x\}}d(x,y)\leq d(x,y_{0})\leq d(x,y_{1})=\min_{y\in P \setminus C}d(x,y).\qed\] **Theorem 4.2**.: _The solution output by the greedy algorithm of \(k\)-center is a \(3\)-approximate Max-IP stable clustering._ Proof.: To recall, the greedy algorithm of \(k\)-center (aka Gonzalez algorithm [14]) starts with an arbitrary point as the first center and then goes through \(k-1\) iterations. In each iteration, it picks a new point as a center which is furthest from all previously picked centers. Let \(c_{1},\cdots,c_{k}\) denote the selected centers and let \(r:=\max_{v\in P}d(v,\{c_{1},\cdots,c_{k}\})\). Then, each point is assigned to the cluster of its closest center. We denote the constructed clusters as \(C_{1},\cdots,C_{k}\). Now, for every \(i\neq j\in[k]\) and each point \(v\in C_{i}\), we consider two cases: * \(d(v,c_{i})\leq r/2\). Then \[\max_{u_{i}\in C_{i}}d(v,u_{i}) \leq d(v,c_{i})+d(u_{i},c_{i})\leq 3r/2,\] \[\max_{u_{j}\in C_{j}}d(v,u_{j}) \geq d(v,c_{j})\geq d(c_{i},c_{j})-d(v,c_{i})\geq r/2.\] * \(d(v,c_{i})>r/2\). Then \[\max_{u_{i}\in C_{i}}d(v,u_{i}) \leq d(v,c_{i})+d(u_{i},c_{i})\leq 3d(v,c_{i}),\] \[\max_{u_{j}\in C_{j}}d(v,u_{j}) \geq d(v,c_{j})\geq d(v,c_{i}).\] In both cases, \(\max_{u_{i}\in C_{i}}d(v,u_{i})\leq 3\max_{u_{j}\in C_{j}}d(v,u_{j})\). ## 5 Experiments While the goal and the main contributions of our paper are mainly theoretical, we also implement our optimal Min-IP clustering algorithm as well as extend the experimental results for IP stable clustering given in [1]. 
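Since Algorithm 3 is simply Kruskal's procedure stopped at \(k\) components, it admits a very short implementation. The following Python sketch, with a simple union-find structure, illustrates it; the function and variable names are ours and this is not necessarily the exact code used in the experiments.

```python
def min_ip_clustering(dist, k):
    """Sketch of Algorithm 3: Kruskal-style merging, stopped at k components.
    `dist` is an n x n distance matrix (nested lists or a numpy array)."""
    n = len(dist)
    parent = list(range(n))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Line 3: all pairs sorted by distance.
    pairs = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))

    components = n
    for d, i, j in pairs:                 # lines 5-9
        if components == k:
            break
        ri, rj = find(i), find(j)
        if ri != rj:                      # edge connects two different components
            parent[ri] = rj
            components -= 1

    clusters = {}
    for x in range(n):                    # line 10: read off connected components
        clusters.setdefault(find(x), []).append(x)
    return list(clusters.values())
```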
Our experiments demonstrate that our optimal Min-IP stable clustering algorithm is superior to \(k\)-means++, the strongest baseline in [1], and show that our IP clustering algorithm for average distances is practical on real world datasets and is competitive to \(k\)-means++ (which fails to find good stable clusterings in the worst case [1]). We give our experimental results for Min-IP stability and defer the rest of the empirical evaluations to Section C. All experiments were performed in Python 3. The results shown below are an average of 10 runs for \(k\)-means++. MetricsWe measure the quality of a clustering using the same metrics used in [1] for standardization. Considering the question of \(f\)-IP stability (Definition 1.4), let the violation of a point \(x\) be defined as \(\mathrm{Vi}(x)=\max_{C_{i}\neq C(x)}\frac{f(x,C(x)\setminus\{x\})}{f(x,C_{i})}\). For example, setting \(f(x,C)=\sum_{y\in C}d(x,y)/|C|\) corresponds to the standard IP stability objective and \(f(x,C)=\min_{y\in C}d(x,y)\) is the Min-IP formulation. Note point \(x\) is stable iff \(\mathrm{Vi}(x)\leq 1\). We measure the extent to which a \(k\)-clustering \(\mathcal{C}=(C_{1},\ldots,C_{k})\) of \(P\) is (un)stable by computing \(\mathrm{MaxVi}=\max_{x\in P}\mathrm{Vi}(x)\) (maximum violation) and \(\mathrm{MeanVi}=\sum_{x\in P}\mathrm{Vi}(x)/|P|\) (mean violation). ResultsFor Min-IP stability, we have an optimal algorithm; it always returns a stable clustering for all \(k\). We see in Figures 1 that for the max and mean violation metrics, our algorithm outperforms \(k\)-means++ by up to a factor of \(\mathbf{5x}\), consistently across various values of \(k\). \(k\)-means ++ can return a much worse clustering under Min-IP stability on real data, motivating the use of our theoretically-optimal algorithm in practice. ## 6 Conclusion We presented a deterministic polynomial time algorithm which provides an \(O(1)\)-approximate IP stable clustering of \(n\) points in a general metric space, improving on prior works which only guaranteed an \(O(n)\)-approximate IP stable clustering. We also generalized IP stability to \(f\)-stability and provided an algorithm which finds an exact Min-IP stable clustering and a 3-approximation for Max-IP stability, both of which hold for all \(k\) and in general metric spaces. Future directionsThere are multiple natural open questions following our work. * Note that in some cases, an \(\alpha\)-IP stable clustering for \(\alpha<1\) may exist. On the other hand, in the hard example on \(n=4\) from [1], we know that there some constant \(C>1\) such that no \(C\)-IP stable clustering exists. For a given input, let \(\alpha^{*}\) be the minimum value such that an \(\alpha^{*}\)-IP stable clustering exists. Is there an efficient algorithm which returns an \(O(\alpha^{*})\)-IP stable clustering? Note that our algorithm satisfies this for \(\alpha=\Omega(1)\). An even stronger result would be to find a PTAS which returns a \((1+\varepsilon)\alpha^{*}\)-IP stable clustering. Figure 1: Maximum and mean violation for Min-IP stability for the Adult dataset, as used in [1]; lower values are better. * For what specific metrics (other than the line or tree metrics with \(k=2\)) can we get 1-IP stable clusterings efficiently? * In addition to stability, it is desirable that a clustering algorithm also achieves strong global welfare guarantee. Our algorithm gives constant approximation for \(k\)-center. What about other standard objectives, such as \(k\)-median and \(k\)-means?
2309.07795
Late-time phenomenology required to solve the $H_0$ tension in view of the cosmic ladders and the anisotropic and angular BAO data sets
The $\sim 5\sigma$ mismatch between the value of the Hubble parameter measured by SH0ES and the one inferred from the inverse distance ladder (IDL) constitutes the biggest tension afflicting the standard model of cosmology, which could be pointing to the need of physics beyond $\Lambda$CDM. In this paper we study the background history required to solve the $H_0$ tension if we consider standard prerecombination physics, paying special attention to the role played by the data on baryon acoustic oscillations (BAO) employed to build the IDL. We show that the anisotropic BAO data favor an ultra-late-time (phantom-like) enhancement of $H(z)$ at $z\lesssim 0.2$, accompanied by a transition in the absolute magnitude of supernovae of Type Ia $M(z)$ in the same redshift range. This agrees with previous findings in the literature. The effective dark energy (DE) density must be smaller than in the standard model at higher redshifts. Instead, when angular BAO data (claimed to be less subject to model dependencies) is employed in the analysis, we find that the increase of $H(z)$ starts at much higher redshifts, typically in the range $z\sim 0.5-0.8$. In this case, $M(z)$ could experience also a transition (although much smoother) and the effective DE density becomes negative at $z\gtrsim 2$. Both scenarios require a violation of the weak energy condition (WEC), but leave an imprint on completely different redshift ranges and might also have a different impact on the perturbed observables. They allow for the effective crossing of the phantom divide. Finally, we employ two alternative methods to show that current data from cosmic chronometers do not exclude the violation of the WEC, but do not add any strong evidence in its favor neither. Our work puts the accent on the utmost importance of the choice of the BAO data set in the study of the possible solutions to the $H_0$ tension.
Adrià Gómez-Valent, Arianna Favale, Marina Migliaccio, Anjan A. Sen
2023-09-14T15:32:51Z
http://arxiv.org/abs/2309.07795v2
Late-time phenomenology required to solve the \(H_{0}\) tension in view of the cosmic ladders and the anisotropic and angular BAO data sets ###### Abstract The \(\sim 5\sigma\) mismatch between the value of the Hubble parameter measured by SH0ES and the one inferred from the inverse distance ladder (IDL) constitutes the biggest tension afflicting the standard model of cosmology, which could be pointing to the need of physics beyond \(\Lambda\)CDM. In this paper we study the background history required to solve the \(H_{0}\) tension if we consider standard prerecombination physics, paying special attention to the role played by the data on baryon acoustic oscillations (BAO) employed to build the IDL. We show that the anisotropic BAO data favor an ultra-late-time (phantom-like) enhancement of \(H(z)\) at \(z\lesssim 0.2\) to solve the tension, accompanied by a transition in the absolute magnitude of supernovae of Type Ia \(M(z)\) in the same redshift range. This agrees with previous findings in the literature. The effective dark energy (DE) density must be smaller than in the standard model at higher redshifts. Instead, when angular BAO data (claimed to be less subject to model dependencies) is employed in the analysis, we find that the increase of \(H(z)\) starts at much higher redshifts, typically in the range \(z\sim 0.6-0.9\). In this case, \(M(z)\) could experience also a transition (although much smoother) and the effective DE density becomes negative at \(z\gtrsim 2\). Both scenarios require a violation of the weak energy condition (WEC), but leave an imprint on completely different redshift ranges and might also have a different impact on the perturbed observables. They allow for the effective crossing of the phantom divide. Finally, we employ two alternative methods to show that current data from cosmic chronometers do not exclude the violation of the WEC, but do not add any strong evidence in its favor neither. Our work puts the accent on the utmost importance of the choice of the BAO data set in the study of the possible solutions to the \(H_{0}\) tension. cosmological parameters - Cosmology: observations - Cosmology: theory - dark energy ## I Introduction It is not possible to constrain the Hubble parameter, \(H_{0}\), with uncalibrated data on supernovae of Type Ia (SNIa) or baryon acoustic oscillations (BAO). One needs to calibrate first these data sets with a measurement of the absolute magnitude of SNIa, \(M\), and the comoving sound horizon at the baryon-drag epoch, \(r_{d}\), respectively. These are the calibrators of the so-called direct and inverse cosmic distance ladders, which are two of the main ways we have of measuring the current expansion rate of the Universe. The latter is of course a very important quantity, since it enters the computation of cosmic times and distances. The SH0ES Team has measured \(M^{R22}=(-19.253\pm 0.027)\) mag using calibrated Cepheid variable stars in galaxies that also host SNIa. This leads to the measurement of \(H_{0}^{R22}=(73.04\pm 1.04)\) km/s/Mpc making use of the supernovae in the Hubble flow [1]. On the other hand, the _Planck_ Collaboration finds \(r_{d}^{P18}=(147.09\pm 0.26)\) Mpc and \(H_{0}^{P18}=(67.36\pm 0.54)\) km/s/Mpc under the assumption of \(\Lambda\)CDM and using the TT,TE,EE+lowE+lensing cosmic microwave background (CMB) likelihood [2]. This implies a \(\sim 5\sigma\) mismatch between the local measurement by SH0ES and the _Planck_/\(\Lambda\)CDM inference. 
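As a rough orientation (and not a substitute for the full statistical analyses discussed in this work), the significance of this mismatch can be estimated by treating the two determinations as independent Gaussian measurements. A minimal Python check with the numbers quoted above:

```python
import numpy as np

# SH0ES (R22) and Planck 2018 LCDM determinations of H0 [km/s/Mpc]
H0_shoes, sig_shoes = 73.04, 1.04
H0_planck, sig_planck = 67.36, 0.54

tension = abs(H0_shoes - H0_planck) / np.sqrt(sig_shoes**2 + sig_planck**2)
print(f"Gaussian tension: {tension:.1f} sigma")   # ~4.8 sigma, i.e. the ~5 sigma mismatch
```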
A similar level of tension with SH0ES is found when \(r_{d}^{P18}\) is employed to calibrate the anisotropic (or 3D) BAO data in fitting analyses of the standard model (see e.g. [3]) or by using these calibrated data with SNIa apparent magnitudes in parametric and cosmographical analyses [4; 5; 6; 7; 8]. We refer the reader to the reviews [9; 10; 11] for further details. The discrepancy between the two cosmic ladders in the context of the standard model could be due to some issue with their calibrators [12; 13; 14; 15], either by unaccounted for systematic errors or by new physics. In order to grasp the possible origin of the Hubble tension it is useful to analyze the constraints in the \(M-r_{d}\) plane that are obtained by studying the compatibility of uncalibrated SNIa and 3D BAO data. This forces the calibrators of the direct and inverse distance ladders to lie in a quite narrow degeneracy band, see the grey contours in Fig. 1. In that figure we also plot the SH0ES constraint \(M^{R22}\) (in cyan) and the _Planck_/\(\Lambda\)CDM constraint \(r_{d}^{P18}\) (in purple). The mismatch between the overlapping region of these two bands and the SNIa+BAO degeneracy band is nothing else but the clear manifestation of the Hubble tension. In the absence of systematics in the data, one pos sibility to try to solve the tension is to consider a departure from the standard model before the decoupling of the CMB photons, capable of decreasing the distance traveled by the sound waves in the photo-baryon fluid1. This can be achieved in different ways: by changing the strength of gravity [17; 18; 19; 20; 21; 22; 23; 24; 25], modifying the recombination time with the help of primordial magnetic fields [26] or varying atomic constants [27; 28; 29], altering the shape of the primordial power spectrum [30], or by considering different forms of early dark energy [31; 32; 33; 34; 35; 36; 37; 38; 39] or dark radiation [12]. See the reviews [40; 41; 42] for a more extended bibliography. These options are appealing, but it is important to mention that their ability to loosen the tension is in general limited by e.g. their impact on photon diffusion [43] or the early integrated Sachs-Wolfe effect [44], and typically require much larger values of the spectral index of the primordial power spectrum of scalar perturbations [19; 25; 32; 44], closer to \(n_{s}\sim 1\). This is welcome by small-scale CMB experiments as the Atacama Cosmology Telecope (ACT), but not by _Planck_[45; 46], especially when CMB polarization data is included in the fitting analyses, see e.g. [12; 23; 24]. Footnote 1: This has to be complemented also by changes in the late Universe with respect to the _Planck_/\(\Lambda\)CDM best-fit cosmology, of course. To understand why see e.g. [16]. Fig. 1 tells us that if there is no new physics before recombination, i.e. if \(r_{d}\sim 147\) Mpc, the absolute magnitude of SNIa, considered to be constant in time, must be \(M\sim-19.4\) mag. This value is in clear tension with the one measured by SH0ES in the second rung of the direct distance ladder (at \(z\lesssim 0.02\)). According to this very well-known result and taking for granted the absence of systematics in the SH0ES measurement of \(M\), _Planck_ and the SNIa data sets, it seems that there are only three possible routes to try to explain the \(H_{0}\) (or \(M\)) tension with low-\(z\) solutions: 1. _Systematics in the anisotropic BAO data:_ The 3D BAO data could be affected by biases due e.g. 
to some model dependence introduced in the reconstruction of the BAO peak. It is therefore interesting to explore alternative BAO data sets, such as the transversal or angular BAO (a.k.a. 2D BAO) data set, which is perhaps less subject to model dependencies [48], see also [49; 50]. This could open the window to solutions that keep a large value of \(M\sim M^{R22}\) in a wider redshift range and introduce some new physics at \(z\gtrsim 2\) to keep the location of the first peak of the CMB temperature power spectrum unaltered. This would move the grey band upwards in Fig. 1, towards the overlapping region of SH0ES and _Planck_. In this work we will study how the low-redshift solution to the Hubble tension changes when the 2D BAO data set is considered instead of the 3D BAO data set. The work [51] already reported the existing tension between these two data sets and showed that there is a better agreement between SH0ES and the 2D BAO data, considering a constant \(M\) (see also [52]). Here we will show the shape of \(H(z)\) required not to spoil the description of the CMB, and the possible implications for the effective dark energy fluid. In addition, we will see that the data does not exclude a mild evolution of \(M(z)\). Figure 1: Degeneracy band at 68% and 95% C.L. in the \(M\)-\(r_{d}\) plane (in grey) inferred from uncalibrated BAO and SNIa data. We have obtained these constraints using the method previously employed in [15], which is based on the Index of Inconsistency by Lin and Ishak [47], see Appendix A for details. We also include the vertical band with the SH0ES measurement of \(M\) (\(M^{R22}\)) [1] and the horizontal band with the \(\Lambda\)CDM value of \(r_{d}\) obtained by _Planck_ (\(r_{d}^{P18}\)) [2], both at \(1\sigma\) C.L. It is evident that the intersection of these two bands lies far away from the region preferred by the 3D BAO and SNIa data. 2. _A sudden transition in \(M\) at some point of the second rung of the direct distance ladder (at \(z\lesssim 0.01\)) leading to \(M\sim-19.4\) mag in the Hubble flow:_ This would automatically reestablish the concordance between the SH0ES measurement and the value \(H_{0}\sim 67.5\) km/s/Mpc obtained from the fit of the \(\Lambda\)CDM to the 3D BAO and _Planck_ 2018 data, essentially erasing the Hubble tension. The study [53] does not exclude this possibility, but does not find any compelling evidence in its favor either, as is clear from Fig. 16 of that reference. There is actually room for this transition to happen at higher redshifts and this is a possibility that we will discuss below, in point 3. Such a rapid transition could be due, for instance, to some difference of color correction between the SNIa in the calibration and cosmological samples, or to another unaccounted for refinement of the SNIa standardization procedure (the so-called Tripp calibration [54]) [55; 56; 57; 58]. The ultimate physical origin of these effects remains unknown, though. Alternatively, this transition in \(M\) could also be caused by an ultra-late-time change of \(G\) achieved through some modified theory of gravity [59; 60; 61; 62], see also [63; 64]. Nevertheless, it seems unnatural that such a transition in redshift, at both cosmological and local scales, leads to Newton's constant in the local environment, exactly as in the cosmological regime at higher redshifts. 
We would expect this transition to alter the gravitational strength entering the Friedmann equation, therefore changing the \(\Lambda\)CDM background evolution and the ability of the model to fit the CMB and the anisotropic BAO data2. Explaining the phenomenology as due only to local (screening) effects is also contrived, since one would expect them to be present also at higher redshifts, leading to a more stable value of \(M\). A natural implementation of these ideas might not be easy3. Footnote 2: Notice that \(H_{0}^{2}=8\pi G\rho_{c}^{0}/3\). Hence, if we keep \(H_{0}\) as in the _Planck_/\(\Lambda\)CDM model and consider a \(G\) different from Newton’s constant we need to change also the critical energy density \(\rho_{c}^{0}\) to properly fit the 3D BAO and CMB data. The only way of doing so by keeping \(\Omega_{m}=\rho_{m}^{0}/\rho_{c}^{0}\) to the value preferred by the SNIa data is by changing both, the value of the cosmological constant and the matter density \(\rho_{m}^{0}\). This would alter the amplitude and location of the CMB peaks. 3. _A smoother transition in \(M\) together with an increase of \(H(z)\) happening at \(z\lesssim 0.2\):_ If we assume, as in the previous scenario, that the 3D BAO data are not affected by significant biases, but instead assume that the value of \(M\) at the beginning of the third rung of the cosmic distance ladder is given by the SH0ES measurement, how should we change the shape of \(H(z)\) to explain the CMB and 3D BAO observations while keeping \(H_{0}^{R22}\sim 73\) km/s/Mpc? Two conditions have to be satisfied: (i) \(H(z)\) has to increase at small redshifts to reach the region of \(H_{0}\) preferred by SH0ES; and (ii) at some redshift, \(H(z)\) has to go below the \(\Lambda\)CDM curve preferred by _Planck_ in order to compensate the previous increase and leave the value of the angular diameter distance to the last scattering surface \(D_{A}(z_{*})\) intact. These modifications must respect the good description of the 3D BAO data. In addition, taking into account that the latter, when calibrated with \(r_{d}^{P18}\), lead to angular diameter distances and values of \(H(z)\) in good agreement with the standard model, we expect our result for \(H(z)\) to automatically force the redshift evolution of \(M\) in order to match the low-\(z\) and high-\(z\) estimates of this quantity. Otherwise, the inverse distance ladder keeps the tension between CMB and SH0ES high at the level of \(M\), even if the model is able to produce a value close to \(H_{0}^{R22}\). Therefore, we need to consider, on top of a modified Hubble expansion, a variation in \(M\). This is a very important result, firstly noted in [65] and further explored in [66]. If the inverse distance ladder (built with 3D BAO) and SH0ES are free from systematic errors, in the context of standard pre-recombination physics, we cannot avoid the variation of \(M\), regardless of the nature of the late-time new physics required at cosmological level to solve the Hubble tension. The data still give room for this variation to happen at \(z>0.01\). We will see in Sec. III that the transition in \(H(z)\) and \(M(z)\) is allowed to happen at \(z\lesssim 0.2\) at most. Apart from that, we will also assess in this work the real need of a crossing of the phantom divide of the effective dark energy fluid. We will show that there is no clear evidence for a deviation from a cosmological constant before the transition. Our conclusions are in some sense aligned with [65; 67; 68]. 
The aim is to proceed in a model-independent fashion to try to constrain the shape of \(H(z)\) at low redshifts using 3D BAO data calibrated with the measurement of \(r_{d}\) from _Planck_ 2018, the SH0ES prior on \(H_{0}\) and the CMB distance prior on \(D_{A}(z_{*})\) from _Planck_. We will also study the impact of cosmic chronometers (CCH). The reconstruction of \(H(z)\) will be used then to reconstruct the shape of \(M(z)\) needed to avoid the inverse distance ladder bottleneck. A few comments about other references on these matters are now in order. Based on [69], in [70; 71; 72] the authors explored transitions happening at \(z=0.1\) in the context of models with a change from a cosmological constant to phantom dark energy. They found that although these models can alleviate the Hubble tension, they are unable to reconcile the values of \(M\) measured with the direct and inverse distance ladders. Similar transitions, from quintessence to phantom, were also studied in [73; 74], and in [75] the authors explored a modified gravity parametrization with a transition at \(z=0.1\) as well and found evidence in its favor, with an increase of \(H_{0}\). However, they used only BAO and CMB data to constrain their model. The inclusion of SNIa data in their analysis would produce exactly the same problems found in [70; 71; 72], since the resulting value of \(M\) would be at odds with \(M^{R22}\). These works do not solve the \(M\) tension, since they did not consider the redshift evolution of this quantity. As already mentioned in the first point, in this work we will study how the shape of \(H(z)\) required to solve the Hubble tension changes with the BAO data set, assuming standard physics before recombination. In particular, we will show that the shape preferred by the 2D BAO data set is quite different from the one preferred by the 3D BAO data. In the former case, the deviations from the \(\Lambda\)CDM appear also at much higher redshifts (\(z\gtrsim 2\)) and the transition is much smoother. In addition, we will also reconstruct the shape of the deceleration parameter. This work is organized as follows. In Sec. II we explain the methodology used in the reconstruction of \(H(z)\) and \(M(z)\), as well as the data sets employed in our analyses. We also describe two methods to test the violation of the weak energy condition (WEC) with the help of cosmic chronometers. In Sec. III we present the results obtained using the anisotropic and angular BAO data sets in combination with CMB priors, with and without the addition of cosmic chronometers. We also discuss in detail the violation of the WEC required by these low-\(z\) solutions. Finally, in Sec. IV we provide our conclusions. Appendices A and B complement the content of the main body of the paper. ## II Methodology and data ### Fitting function for \(H(z)\) We assume throughout this paper a flat Friedmann-Lemaitre-Robertson-Walker (FLRW) Universe and use the following fitting expression for the Hubble function, \[H(z)=\left\{\begin{array}{ll}\bar{H}(z)+\delta H_{1}(z)&\mbox{if}\quad 0<z \leq z_{\rm p}\\ \bar{H}(z)+\delta H_{2}(z)&\mbox{if}\quad z_{p}<z<z_{\rm max}\\ H_{\Lambda}(z)&\mbox{if}\quad z\geq z_{\rm max}\end{array}\right. 
\tag{1}\] with \[\bar{H}(z)\equiv\bar{H}_{0}\sqrt{1+\bar{\Omega}_{m}[(1+z)^{3}-1]}\,, \tag{2}\] \[H_{\Lambda}(z)\equiv\tilde{H}_{0}\sqrt{1+\bar{\Omega}_{m}[(1+z)^{3}-1]+\bar{ \Omega}_{r}[(1+z)^{4}-1]}\,, \tag{3}\] and \[\delta H_{1}(z)\equiv a+bz+cz^{2}\quad;\quad\delta H_{2}(z)\equiv d+ez+fz^{2} \tag{4}\] It is a versatile fitting expression, which can reproduce phantom and quintessence behaviors, and also permits the crossing of the phantom divide4. It allows us to avoid the use of any DE or modified gravity model, and also the choice of how to split the energy budget of the dark sector at late times, while sticking to \(\Lambda\)CDM at higher redshifts. Footnote 4: For more complicated forms of \(H(z)\) see e.g. [76]. The parameters \((\bar{\Omega}_{m},\bar{H}_{0})=(0.3153,67.36)\) entering Eq. (2) are fixed to the best-fit values obtained in the TT,TE,EE+lowE+lensing \(\Lambda\)CDM analysis by _Planck_[2]. The quadratic polynomials \(\delta H_{1}(z)\) and \(\delta H_{2}(z)\) in the first and second rows of Eq. (1) (given in Eq. (4)) parametrize deviations with respect to the mean Hubble function in the \(\Lambda\)CDM at \(z<z_{\rm max}\)5. All the freedom of the model in this redshift range is transferred to the parameters \(\{a,b,c,d,e,f\}\). For the part of \(H(z)\) at \(z\geq z_{\rm max}\) (see Eq. (3)) we use a Gaussian prior from _Planck_ on \((\bar{\Omega}_{m},\tilde{H}_{0})\), taking into account their correlation. In this way we make sure that at \(z>z_{\rm max}\) the shape of the Hubble function does not depart from the standard one and keep also the physics at recombination untouched. The uncertainties of \((\bar{\Omega}_{m},\tilde{H}_{0})\) have an impact on the observables at \(z>z_{\rm max}\), and also propagate to the uncertainties of the parameters entering the Hubble function at smaller redshifts. This is why it is important to consider them. The radiation parameter appearing in the last row of Eq. (1) reads \(\tilde{\Omega}_{r}=4.18343\cdot 10^{-5}/\tilde{h}^{2}\), with \(\tilde{h}=\tilde{H}_{0}/(100\) km/s/Mpc). The numerical coefficient is fixed by the current CMB temperature [77], assuming for the sake of simplicity three relativistic neutrino species. Footnote 5: In Appendix B we explore a similar parametrization of \(\delta H_{i}(z)\), a second order polynomial in terms of \(a-1\) instead of \(z\), with \(a=(1+z)^{-1}\) the scale factor. We impose the following six constraints, which allow us to compute the parameters \(\{a,b,c,d,e,f\}\) entering the functions \(\delta H_{1}(z)\) and \(\delta H_{2}(z)\), \[\delta H_{1}(z=0)=H_{0}-\bar{H}_{0}\equiv\delta H_{0}\,, \tag{5}\] \[\delta H_{1}(z_{p})=\delta H_{2}(z_{p})\equiv\delta H_{p}\,, \tag{6}\] \[\left.\frac{\partial\delta H_{1}}{\partial z}\right|_{z=z_{p}}=\left.\frac{ \partial\delta H_{2}}{\partial z}\right|_{z=z_{p}}=0\,, \tag{7}\] \[\delta H_{2}(z_{\rm max})=H(z_{\rm max})-\bar{H}(z_{\rm max})\equiv\delta H _{\rm max}\,, \tag{8}\] with \(z_{p}\) the pivot redshift at which we have the extrema of \(\delta H_{1}\) and \(\delta H_{2}\). The two conditions in Eqs. (6) and (7) are obtained by demanding at \(z_{p}\) the continuity of \(H(z)\) and its derivative, respectively, whereas the condition (8) enforces the continuity of the Hubble function at \(z_{\rm max}\). We consider the suite of parameters \(\{\tilde{\Omega}_{m},\tilde{H}_{0},H_{0},z_{p},\delta H_{p},z_{\rm max}\}\). 
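As an illustration of how the fitting function is built in practice, the following Python sketch (our own minimal version, not the Mathematica implementation used for the actual Monte Carlo analyses) solves the constraints (5)-(8) for the coefficients \(\{a,b,c,d,e,f\}\) and evaluates the piecewise \(H(z)\) of Eq. (1). The function names and the trial parameter values in the example are purely illustrative, not fitted values.

```python
import numpy as np

# Planck 2018 LCDM best-fit values fixing the reference history of Eq. (2)
Hbar0, Ombar = 67.36, 0.3153

def Hbar(z):
    return Hbar0 * np.sqrt(1.0 + Ombar * ((1.0 + z)**3 - 1.0))

def H_lcdm_high(z, Ht0, Omt):
    """Eq. (3), used at z >= z_max (three relativistic neutrino species assumed)."""
    Omr = 4.18343e-5 / (Ht0 / 100.0)**2
    return Ht0 * np.sqrt(1.0 + Omt * ((1.0 + z)**3 - 1.0) + Omr * ((1.0 + z)**4 - 1.0))

def coefficients(H0, zp, dHp, zmax, Ht0, Omt):
    """Solve the linear constraints (5)-(8) for (a,b,c) and (d,e,f)."""
    dH0 = H0 - Hbar0
    dHmax = H_lcdm_high(zmax, Ht0, Omt) - Hbar(zmax)   # continuity at z_max, Eq. (8)
    A1 = np.array([[1.0, 0.0, 0.0], [1.0, zp, zp**2], [0.0, 1.0, 2.0 * zp]])
    abc = np.linalg.solve(A1, [dH0, dHp, 0.0])
    A2 = np.array([[1.0, zmax, zmax**2], [1.0, zp, zp**2], [0.0, 1.0, 2.0 * zp]])
    def_ = np.linalg.solve(A2, [dHmax, dHp, 0.0])
    return abc, def_

def H_of_z(z, H0, zp, dHp, zmax, Ht0, Omt):
    """Piecewise Hubble rate of Eq. (1) for a scalar redshift z."""
    (a, b, c), (d, e, f) = coefficients(H0, zp, dHp, zmax, Ht0, Omt)
    if z < zp:
        return Hbar(z) + a + b * z + c * z**2
    if z < zmax:
        return Hbar(z) + d + e * z + f * z**2
    return H_lcdm_high(z, Ht0, Omt)

# Example with illustrative (not fitted) parameter values:
print(H_of_z(0.0, H0=73.0, zp=0.1, dHp=-1.0, zmax=0.5, Ht0=67.36, Omt=0.3153))  # -> 73.0
```

Higher-order cosmographical functions, such as the deceleration parameter, can then be obtained by numerical differentiation of this function.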
The pair \(\{\tilde{\Omega}_{m},\tilde{H}_{0}\}\) and \(\tilde{H}_{0}\) will be mainly controlled by the _Planck_ and SH0ES priors, respectively. The triad \(\{z_{p},\delta H_{p},z_{\rm max}\}\) is a priori more uncertain. However, we can already understand that if we set \(z_{\rm max}\) to a value smaller than the smallest redshift in the BAO data set, \(z_{\rm BAO,min}\), we will be unable to put strong constraints on the pair \(\{z_{p},\delta H_{p}\}\), since in this case there exists a full degeneracy between these two parameters. This degeneracy is essentially fixed by the CMB prior on the angular diameter distance to the last scattering surface, see Sec. II.4. Thus, for \(z_{\rm max}\leq z_{\rm BAO,min}\) we can only constrain the direction of the degeneracy line. For \(z_{\rm max}>z_{\rm BAO,min}\) the situation could be different, of course. The conditions (5)-(8) can be written in a very simple way, \[\begin{pmatrix}\delta H_{0}\\ \delta H_{p}\\ 0\end{pmatrix}=\begin{pmatrix}1&0&0\\ 1&z_{p}&z_{p}^{2}\\ 0&1&2z_{p}\end{pmatrix}\begin{pmatrix}a\\ b\\ c\end{pmatrix} \tag{9}\] and \[\begin{pmatrix}\delta H_{\rm max}\\ \delta H_{p}\\ 0\end{pmatrix}=\begin{pmatrix}1&z_{\rm max}&z_{\rm max}^{2}\\ 1&z_{p}&z_{p}^{2}\\ 0&1&2z_{p}\end{pmatrix}\begin{pmatrix}d\\ e\\ f\end{pmatrix}\,, \tag{10}\] so the constants \(\{a,b,c,d,e,f\}\) needed to compute \(H(z)\) can be obtained straightforwardly. We reconstruct the Hubble function by means of Monte Carlo analyses carried out with Mathematica[78], making use of Eq. (1) and the Baseline data sets described in Sec. II.46. We set \(z_{\rm max}\) to different values in order to study its impact in our analyses, and sample the five parameters contained in the vector \(\{\bar{\Omega}_{m},\bar{H}_{0},H_{0},z_{p},\delta H_{p}\}\). Footnote 6: Our approach is similar to the one employed in [79], but in this study we will show explicitly the need of an evolving \(M(z)\) on top of the new physics at cosmological level. In addition, it is very important to notice that the authors of [79] did not employ a CMB prior on \(D_{A}(z_{*})\). They made use of a prior on \(H(z_{i})\) at several high redshifts (\(z>4\)) obtained with the _Planck_/\(\Lambda\)CDM cosmology. This is insufficient if we want to study the Hubble tension in a robust and unbiased way, since by considering only the constraints on \(H(z_{i})\) at \(z>4\) from _Planck_ we obtain curves of \(H(z)\) that do not respect in general the very tight constraint we have on \(D_{A}(z_{*})\) and, hence, spoil the good description of the CMB data. Using the resulting \(H(z)\) it is also possible to reconstruct in a trivial way higher order cosmographical functions as the deceleration parameter, which reads, \[q(z)=-1+\frac{(1+z)}{H(z)}\frac{dH}{dz}\,. \tag{11}\] We will show results for \(q(z)\) too in Sec. III. ### Reconstruction of \(M(z)\) Let us consider the relation between the luminosity distance to a given object in a flat FLRW Universe, \[D_{L}(z)=c(1+z)\int_{0}^{z}\frac{d\tilde{z}}{H(\tilde{z})}\,, \tag{12}\] its apparent magnitude \(m\) and its absolute magnitude \(M\), which is given by \[M=m-25-5\log_{10}\left(\frac{D_{L}}{1\,{\rm Mpc}}\right)\,. \tag{13}\] For standardizable objects, the standardized absolute magnitude is just a constant and, hence, does not depend on the position nor the redshift. This is what it is usually assumed for SNIa. 
Here, though, we abandon this assumption and consider that the usual standardization method of SNIa can still receive an unknown correction, making the absolute magnitude to evolve with the redshift. We use Gaussian Processes [80] to generate samples of \(m(z)\) from the Pantheon+ SNIa compilation (see Sec. II.4), and combine them with our Markov chains of \(H(z)\) to reconstruct \(M(z)\). This allows us to assess whether the low-\(z\) solutions to the \(H_{0}\) tension require an evolution of \(M\). For the reconstruction of \(m(z)\) we use the public package _Gaussian Processes in Python_ (GaPP)7[81]. In particular, we use the Matern 32 kernel and the optimization of its hyperparameters. This procedure has been already tested and employed in [82]. Footnote 7: [https://github.com/carlosandrepaes/GaPP](https://github.com/carlosandrepaes/GaPP) We present the results obtained using anisotropic and angular BAO data in Secs. III.1 and III.2, respectively. ### Assessing the fulfillment of the weak energy condition with the aid of CCH The weak energy condition is fulfilled if \[T_{\mu\nu}t^{\mu}t^{\nu}\geq 0\,, \tag{14}\] where \(T_{\mu\nu}\) is the energy-momentum tensor and \(t^{\mu}\) is a time-like vector. For a perfect fluid with density \(\rho\) and pressure \(p\) in a FLRW universe, Eq. (14) translates into the two conditions \[\rho\geq 0\qquad\text{and}\qquad\frac{d\rho}{dz}\geq 0\,, \tag{15}\] which essentially stand for the positivity of the energy density and its constant or decaying nature. They are satisfied by all the fluids considered in the \(\Lambda\)CDM model, so the violation of the WEC would automatically imply the existence of physics beyond the standard model8. Here we are interested in testing the fulfillment of the WEC by the effective dark energy fluid in charge of the accelerated expansion of the universe, considering that it is covariantly self-conserved and, hence, that matter is diluted according to the usual law9, Footnote 9: Scenarios with a coupling with matter are also interesting, but the analysis would depend on the choice of the source vector, which controls the transfer of energy and momentum between the dark components. \[\rho_{m}(z)=\rho_{m}^{0}(1+z)^{3}\,. \tag{16}\] It was shown in [89] that in this case the Friedmann equation together with the second inequality of Eq. (15) lead to \[\Omega_{m}\leq\frac{E^{2}(z)-1}{(1+z)^{3}-1}\,, \tag{17}\] with \(E(z)=H(z)/H_{0}\) the normalized Hubble rate. We denote the right-hand side of this inequality as \(\Omega_{m}^{\rm max}\). It is an upper bound that cannot be surpassed if the effective dark energy fluid is not phantom. As already shown in [89], one can use cosmic chronometers in combination with a prior on \(H_{0}\) to obtain as many estimates of \(\Omega_{m}^{\rm max}\) as data points we have on CCH10. One can sample the Gaussian distributions of \(H_{0}\) and the CCH data to obtain a chain with \(\Omega_{m}^{\rm max}(z_{i})\). Footnote 10: Similar methods, as the \(Om\) and \(Omh^{2}\) diagnostics, have also proved useful to test the \(\Lambda\)CDM model [90; 91]. We aim to improve the analysis of [89] in several ways, namely: (i) we take advantage of the larger sample of CCH measurements (we have now 33 data points instead of 9), see Sec. 
II.4; (ii) we also employ their corresponding covariance matrix; and (iii) we apply an advanced method to get a single representative value of \(\Omega_{m}^{\rm max}\), duly accounting for the correlations and assessing the impact of non-Gaussian features in the multivariate distribution. The latter is done by means of the so-called Edgeworth expansion. It allows us to compute an analytical approximation of the underlying (exact) distribution, \[\begin{split} f(\vec{x})=G(\vec{x},\lambda)[1+&\frac {1}{6}k^{ijk}h_{ijk}(\vec{x},\lambda)\\ &+\frac{1}{24}k^{ijkl}h_{ijkl}(\vec{x},\lambda)+...]\,,\end{split} \tag{18}\] see [92] and references therein. Here \(x^{i}=d^{i}-\mu^{i}\), with \(d^{i}=\Omega_{m}^{\rm max}(z_{i})\) and \(\vec{\mu}\) the mean vector, i.e. \(\mu^{i}=<\Omega_{m}^{\rm max}(z_{i})>\). \(\lambda=C^{-1}\) is the inverse of the covariance matrix, with elements \(C^{ij}=<x^{i}x^{j}>\). \(G(\vec{x},\lambda)\) is the multivariate Gaussian distribution built from that mean and covariance matrix, and \(k^{ijk}=<x^{i}x^{j}x^{k}>\) and \(k^{ijkl}=<x^{i}x^{j}x^{k}x^{l}>-C^{ij}C^{kl}-C^{ik}C^{jl}-C^{il}C^{jk}\) are the elements of the higher-order cumulant matrices, called skewness and kurtosis matrices, respectively. On the other hand, \[h_{ij...}(\vec{x},\lambda)=(-1)^{r}G^{-1}(\vec{x},\lambda)\partial_{ij...}G( \vec{x},\lambda)\,, \tag{19}\] are the Hermite tensors of order \(r\), with \(r\) the number of indices. The Hermite tensors of order 3 and 4 appearing in Eq. (18) read, respectively, \[h_{ijk}(\vec{x})=\lambda_{in}\lambda_{jl}\lambda_{kl}x^{n}x^{t}x^{l}-(\lambda _{ij}\lambda_{kt}+\lambda_{ik}\lambda_{jt}+\lambda_{jk}\lambda_{it})x^{t}\,, \tag{20}\] \[\begin{split} h_{lijk}(\vec{x})=&\lambda_{ln}x^{n} h_{ijk}(\vec{x})+\lambda_{ij}\lambda_{kl}+\lambda_{ik}\lambda_{jl}+\lambda_{jk} \lambda_{il}\\ &-(\lambda_{il}\lambda_{jt}\lambda_{kn}+\lambda_{in}\lambda_{jl} \lambda_{kt}+\lambda_{in}\lambda_{jl}\lambda_{kl})x^{n}x^{t}\,.\end{split} \tag{21}\] In this calculation we have made use of the fact that \(\partial_{i}G=-G\lambda_{ij}x^{j}\) and of Einstein's summation convention. \begin{table} \begin{tabular}{c c c c c} \hline Survey & \(z\) & Observable & Measurement & References \\ \hline \hline 6dFGS+SDSS MGS & 0.122 & \(D_{V}(r_{d}^{lat}/r_{d})\) & \(539\pm 17\) [Mpc] & [83] \\ \hline WiggleZ & 0.44 & \(D_{V}(r_{d}^{lat}/r_{d})\) & \(1716.4\pm 83.1\) [Mpc] & [84] \\ & 0.60 & \(D_{V}(r_{d}^{lat}/r_{d})\) & \(2220.8\pm 100.6\) [Mpc] & \\ & 0.73 & \(D_{V}(r_{d}^{lat}/r_{d})\) & \(2516.1\pm 86.1\) [Mpc] & \\ \hline BOSS DR12 & 0.32 & \(r_{d}H/(10^{8}km/s)\) & \(11.549\pm 0.385\) & [85] \\ & & \(D_{A}/r_{d}\) & \(6.5986\pm 0.1337\) & \\ & 0.57 & \(r_{d}H/(10^{3}km/s)\) & \(14.021\pm 0.225\) & \\ & & \(D_{A}/r_{d}\) & \(9.389\pm 0.103\) & \\ \hline DES Y3 & 0.835 & \(D_{M}/r_{d}\) & \(18.92\pm 0.51\) & [86] \\ \hline Quasars eBOSS DR16 & 1.48 & \(D_{M}/r_{d}\) & \(30.21\pm 0.79\) & [87] \\ & & \(c/(Hr_{d})\) & \(13.23\pm 0.47\) & \\ \hline Ly\(\alpha\)-Forests eBOSS DR16 & 2.334 & \(D_{M}/r_{d}\) & \(37.5_{-1.1}^{+1.1}\) & [88] \\ & & \(c/(Hr_{d})\) & \(8.99_{-0.19}^{+0.20}\) & \\ \hline \end{tabular} \end{table} Table 1: List with the 13 anisotropic BAO data points used in this work. The fiducial values of the comoving sound horizon appearing in the third column are \(r_{d}^{fid}=147.5\) Mpc for [83] and \(r_{d}^{fid}=148.6\) Mpc for [84]. \(D_{M}(z)=(1+z)D_{A}(z)\) is the comoving angular diameter distance and \(D_{V}(z)=[D_{M}^{2}(z)cz/H(z)]^{1/3}\) is the so-called dilation scale. 
We have duly taken into account the existing internal correlations between the data points of WiggleZ, BOSS DR12, and QSOs and Ly\(\alpha\) eBOSS DR16. See the quoted references for details. All these objects can be directly computed from the chain of \(\Omega_{m}^{\rm max}(z_{i})\), with \(i\) the index that runs over the CCH data points. Once we have Eq. (18), we can sample it treating it as a one-dimensional distribution for \(\Omega_{m}^{\rm max}\) (instead of a multivariate distribution for the array \(\{\Omega_{m}^{\rm max}(z_{i})\}\)). If the non-Gaussian features are negligible, then it reduces of course to a Gaussian with the following weighted mean and variance, \[\bar{\Omega}_{m}^{\rm max}=\frac{\sum\limits_{i,j=1}^{33}\mu^{i}\lambda_{ij}}{ \sum\limits_{i,j=1}^{33}\lambda_{ij}}\qquad;\qquad\sigma^{2}=\frac{1}{\sum \limits_{i,j=1}^{33}\lambda_{ij}}\,. \tag{22}\] Otherwise, one has to take into account the corrections entering Eq. (18) to compute the central value and confidence intervals of \(\Omega_{m}^{\rm max}\). Another possibility to test the WEC is to consider the following relation, \[\frac{\rho_{\rm de}(z)}{\rho_{\rm de}^{0}}=\frac{E^{2}(z)-\Omega_{m}(1+z)^{3}}{ 1-\Omega_{m}}\,, \tag{23}\] which is directly obtained from the Friedmann equation. Sampling \(H(z)\) from the CCH data and combining this information with a prior on \(H_{0}\) and \(\omega_{m}\) that lets us compute \(\Omega_{m}\), it is possible to constrain Eq. (23) at the redshifts of the CCH. Negative values of this quantity hint at a violation of the first WEC of Eq. (15). Instead, \(0<\rho_{\rm de}(z)/\rho_{\rm de}^{0}<1\) means that the second condition is not fulfilled. We provide the results of these analyses in Sec. III.3. ### The data In this work we make use of the following data sets: * The SH0ES prior on the Hubble parameter, \(H_{0}^{R22}\)[1]. * The anisotropic (3D) BAO data listed in Table 1. * The transversal (angular, 2D) BAO data listed in Table 2. Angular BAO might be less model-dependent than 3D BAO, but have larger error bars [49; 50]. * The _Planck_ 2018 CMB TT,TE,EE+lowE+lensing \(\Lambda\)CDM Gaussian priors on the quantities \(\{D_{A}(z_{*}),r_{d},\bar{\Omega}_{m},\bar{H}_{0}\}\), including the corresponding covariance matrix. This information can \begin{table} \begin{tabular}{c c c c} \hline \(z\) & \(\theta_{BAO}\) [deg] & \(\sigma_{BAO}\) [deg] & References \\ \hline \hline 0.11 & 19.8 & 3.26 & [49] \\ \hline 0.235 & 9.06 & 0.23 & [93] \\ 0.365 & 6.33 & 0.22 & [94] \\ \hline 0.45 & 4.77 & 0.17 & [94] \\ 0.47 & 5.02 & 0.25 & \\ 0.49 & 4.99 & 0.21 & \\ 0.51 & 4.81 & 0.17 & \\ 0.53 & 4.29 & 0.30 & \\ 0.55 & 4.25 & 0.25 & \\ \hline 0.57 & 4.59 & 0.36 & [95] \\ 0.59 & 4.39 & 0.33 & \\ 0.61 & 3.85 & 0.31 & \\ 0.63 & 3.90 & 0.43 & \\ 0.65 & 3.55 & 0.16 & \\ \hline 2.225 & 1.77 & 0.31 & [96] \\ \hline \end{tabular} \end{table} Table 2: List with the 15 2D BAO data points used in this work, with \(\theta_{\rm BAO}(z)\,[{\rm rad}]=r_{d}/[(1+z)D_{A}(z)]\). We employ a diagonal covariance matrix. See the quoted references for details. \begin{table} \begin{tabular}{c c c} \hline \(z\) & \(H(z)\) [Km/s/Mpc] & References \\ \hline \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 3: List with the 33 CCH data points on \(H(z)\) used in this work, obtained from the references quoted in the last column. In the case of Refs. 
[100; 101], the central values of \(H(z)\) are computed by performing the arithmetic mean of the measurements obtained with the BC03 [107] and MaStro [108] stellar population synthesis models1. The covariance matrix is computed using the method presented in [110], which incorporates both the statistical and systematic errors. See the quoted references for details. be obtained from the _Planck_ Legacy Archive11. The use of this prior is fully justified by our assumption of standard physics before recombination. By generating samples of these parameters and the BAO data out of the respective multivariate Gaussian distributions we can obtain a joint data vector which incorporates the BAO angular diameter distances, dilation scales, \(H(z_{i})\), together with \(\{D_{A}(z_{*}),\tilde{\Omega}_{m},\tilde{H}_{0}\}\), and their joint covariance matrix. The prior from SH0ES can be easily added, considering 0 correlation with the other parameters, since it is independent. We build the resulting vector of data and covariance matrix before performing the fitting analysis, of course. This will constitute our Baseline_3D or Baseline_2D data sets, depending on the BAO data set that we consider. They are 17- and 19-dimensional, respectively. Footnote 11: [http://pla.esac.esa.int/pla/#home](http://pla.esac.esa.int/pla/#home) * The data on cosmic chronometers and covariance matrix used in [82]. We add the new data point \(H(z=1.26)=(135\pm 65)\) km/s/Mpc [105], for completeness. See Table 3 and references therein. In some of our analyses we have tried the combination Baseline+CCH. In Sec. III.3 we employ CCH together with the SH0ES or _Planck_ priors on the Hubble parameter (\(H_{0}^{R22}\) and \(H_{0}^{P18}\), respectively) to study the weak energy condition applying the methods described in Sec. II.3. * The SNIa data from the Pantheon+ compilation [111]. As explained in Sec. II.2, we do not employ them in the fitting analyses, but to reconstruct \(M(z)\) with the help of the shapes of \(H(z)\) obtained in the Monte Carlo runs. ## III Results and discussion In Fig. 2 we plot the cosmic distances and values of \(H(z)\) obtained from the angular and anisotropic BAO data after their calibration with \(r_{d}^{P18}\), together with the best-fit _Planck_/\(\Lambda\)CDM curves of the corresponding cosmological functions. The agreement between the latter and the vast majority of the 3D BAO data is clear, and this is why in order to accommodate the SH0ES measurement one has to modify the model at redshifts below \(\sim 0.1\), which is the minimum BAO redshift in the 3D data Figure 2: Calibrated 3D and 2D BAO data, in red and blue, respectively. The calibration is carried out with the _Planck_/\(\Lambda\)CDM value of the sound horizon \(r_{d}^{P18}\), see Sec. II.4. We also plot (in black) the curves of the various observables computed with the best-fit _Planck_/\(\Lambda\)CDM cosmology. See the comments in the first paragraph of Sec. III. set, cf. Table 212. Conversely, there is an obvious tension between the best-fit _Planck_/\(\Lambda\)CDM cosmology and the 2D BAO data in the redshift range \(0.4\lesssim z\lesssim 0.7\), since all the blue points of the upper-left plot in that redshift range fall below the black curve. These 2D BAO angular diameter distances are smaller than preferred by the standard model. This implies that 2D BAO data prefers larger values of the Hubble function, more in agreement with SH0ES [51]. 
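The reconstruction of \(M(z)\) outlined in Sec. II.2 can be illustrated with the short sketch below. It is a stand-in for the GaPP-based pipeline: scikit-learn's GaussianProcessRegressor with a Matern \(\nu=3/2\) kernel plays the role of GaPP's Matern32 kernel, and both the SNIa points and the \(H(z)\) curves are toy placeholders rather than the actual Pantheon+ data and Monte Carlo chains.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

rng = np.random.default_rng(0)

# Toy stand-in for the Pantheon+ Hubble diagram: z, apparent magnitude m, error.
z_sn = np.linspace(0.01, 1.0, 40)
m_sn = -19.3 + 5.0 * np.log10(z_sn * 299792.458 / 70.0) + 25.0 + rng.normal(0, 0.15, z_sn.size)
sig_m = 0.15 * np.ones_like(m_sn)

# GP regression of m(z) with a Matern-3/2 kernel; hyperparameters are optimized by fit().
kernel = ConstantKernel(1.0) * Matern(length_scale=0.3, nu=1.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=sig_m**2, normalize_y=True)
gp.fit(z_sn[:, None], m_sn)

z = np.linspace(0.01, 1.0, 100)
m_samples = gp.sample_y(z[:, None], n_samples=500, random_state=1).T      # (500, 100)

# Toy stand-in for the Monte Carlo curves of H(z) [km/s/Mpc] on the same grid.
H_samples = (70.0 + rng.normal(0, 0.5, (500, 1))) * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)

# Luminosity distance for each curve (flat FLRW; the z < z[0] piece of the integral is
# approximated by z[0]/H(z[0])), then M(z) = m(z) - 5 log10(D_L/Mpc) - 25.
c = 299792.458
inv_H = 1.0 / H_samples
D_C = c * (z[0] * inv_H[:, :1] + np.concatenate(
    [np.zeros((500, 1)),
     np.cumsum(0.5 * (inv_H[:, 1:] + inv_H[:, :-1]) * np.diff(z), axis=1)], axis=1))
D_L = (1 + z) * D_C
M = m_samples - 5.0 * np.log10(D_L) - 25.0
print(f"M(z=0.5) = {M[:, 50].mean():.2f} +/- {M[:, 50].std():.2f}")
```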
Hence, in order to reestablish the concordance in the context of the distance ladder built with _Planck_ and the 2D BAO data we need to modify the shape of \(H(z)\) at much higher redshifts too, \(z\sim 1\). We devote Secs. III.1 and III.2 to show all this explicitly, together with the shapes of \(M(z)\) needed to keep the consistency with the SNIa data. Footnote 12: There are only mild tensions with the DES Y3 data point at \(z=0.835\) and the Ly\(\alpha\)-Forest data at \(z=2.334\) from eBOSS DR16, but they are much less significant than the Hubble tension. ### Analysis with Baseline_3D We present in the left plot of Fig. 3 the reconstructed curves of \(H(z)\) obtained from the fit to the Baseline_3D data set, following the method explained in Sec. II.1. More concretely, we show only the 68% curves with lowest \(\chi^{2}\). The right plot of Fig. 3, instead, contains the associated curves of \(M(z)\), obtained as described in Sec. II.2. In this analysis we have fixed \(z_{\rm max}=1\), but we have checked that values of \(z_{\rm max}=0.5,2,4\) do not introduce any significant change in the shape of the reconstructed function and the minimum values of the \(\chi^{2}\). Therefore, the results obtained with the baseline_3D data set are stable and basically insensitive to the parameter \(z_{\rm max}\). We find in all cases a phantom-like increase of \(H(z)\) at \(z\lesssim 0.2\) with respect to the best-fit _Planck_/\(\Lambda\)CDM model, and slightly smaller values of \(H(z)\) at higher redshifts. This is consistent with the results reported in [65]. This upper bound is close to the minimum redshift of the 3D BAO data, as expected from Fig. 2. In contrast, there is no lower bound on the redshift at which the non-standard growth of \(H(z)\) starts. This means that the Baseline_3D data set is not capable of arbitrating between an ultra-late-time transition at \(z\lesssim 0.01\) and a late-time transition at \(0.01\lesssim z\lesssim 0.2\), corresponding to the scenarios 2 and 3 discussed in the Introduction, respectively. We denote this (transition) redshift as \(z_{t}\), and define it as the redshift at which \(H(z)\) becomes larger than the best-fit _Planck_/\(\Lambda\)CDM curve. The smaller is \(z_{t}\) the faster the increase of \(H(z)\) and \(M(z)\) in the transition, of course. Although there is a clear trend towards the value of \(M\) measured by SH0ES at \(z\sim 0\), \(M^{R22}\), it is not easy to grasp the details of the transition for a fixed \(z_{t}\) due to the large number of curves contained in the plots of Fig. 3. This is why we choose to show in Fig. 4 only those curves with \(0.09<z_{t}<0.15\), as an illustrative example. Also, we study an additional consequence of the fast increase of the Hubble function at those very low redshifts: a natural drop in the deceleration parameter, given by Eq. (11). Its shape is shown in the bottom plot of Fig. 4. The values of \(q(z=0)\equiv q_{0}\) are typically 3 or 4 times lower than the one preferred by \(\Lambda\)CDM, \(q_{0}\approx-0.55\), and can be even larger for lower values of \(z_{t}\)13. We also Figure 3: On the left, curves of \(H(z)\) obtained with the Baseline_3D data set, using Eq. (1) and fixing \(z_{\rm max}=1\), cf. Sec. II.1. On the right, the corresponding absolute magnitude of SNIa, \(M(z)\), see Sec. II.2. We zoom in the region \(z<0.5\) to better appreciate the increase of \(H(z)\) and \(M(z)\) at small redshifts. At \(z\gtrsim 0.2\)\(H(z)\simeq H_{\Lambda}(z)\) (see Eq. 
3) and \(M(z)\) becomes compatible with the value obtained with the inverse distance ladder assuming the \(\Lambda\)CDM, \(M\sim-19.4\) mag. observe a positive correlation between the transition redshift and \(q_{0}\), as expected. We show this in the inner plot. The smaller the value of \(z_{t}\) the more accelerated has to be the Universe to transition to \(H_{0}^{R22}\) and, hence, the more negative the value of \(q_{0}\). It is also a good exercise to study what is the typical shape of the density of a hypothetical effective dark energy fluid in these low-redshift solutions, provided that such a fluid is covariantly self-conserved. We define \[\begin{split} H^{2}(z)&\equiv\frac{8\pi G}{3}[ \tilde{\rho}_{m}^{0}(1+z)^{3}+\rho_{\text{de}}(z)]\\ &=\tilde{\Omega}_{m}\tilde{H}_{0}^{2}(1+z)^{3}+\frac{8\pi G}{3} \rho_{\text{de}}(z)\,,\end{split} \tag{24}\] with \(H(z)\) given by Eq. (1), and also use \[\begin{split} H_{\Lambda}^{2}(z)&=\frac{8\pi G}{3}[ \tilde{\rho}_{m}^{0}(1+z)^{3}+\tilde{\rho}_{\Lambda}]\\ &=\tilde{\Omega}_{m}\tilde{H}_{0}^{2}(1+z)^{3}+\frac{8\pi G}{3} \tilde{\rho}_{\Lambda}\,,\end{split} \tag{25}\] where \(H_{\Lambda}(z)\) takes the same form already assumed for \(H(z)\) at \(z>z_{\text{max}}\) (cf. Eq. (3)) and \(\tilde{\rho}_{\Lambda}\) is the energy density associated to the cosmological constant in the \(\Lambda\)CDM model. Using these two expressions we find the following relative difference between the effective dark energy density \(\rho_{\text{de}}(z)\) and \(\tilde{\rho}_{\Lambda}\), \[\Delta(z)\equiv\frac{\rho_{\text{de}}(z)-\tilde{\rho}_{\Lambda}}{\tilde{\rho }_{\Lambda}}=\frac{H^{2}(z)-H_{\Lambda}^{2}(z)}{H_{\Lambda}^{2}(z)-\tilde{ \Omega}_{m}\tilde{H}_{0}^{2}(1+z)^{3}}\,. \tag{26}\] In Fig. 5 we show some typical shapes of \(\Delta(z)\) obtained with \(z_{\text{max}}=0.5\) and \(z_{\text{max}}=1\). By construction of Eq. (1), we always find a crossing of the phantom divide, of course. Nevertheless, we want to remark that the steepness of the quintessence evolution between \(z_{p}\) and \(z_{\text{max}}\) depends on both parameters. Larger values of \(z_{p}\) and smaller values of \(z_{\text{max}}\) favor a transition from quintessence to phantom, but the Baseline_3D data set cannot exclude a solution with an almost constant Figure 4: Same as in Fig. 3, but only for the curves with \(0.09<z_{t}<0.15\). The vertical red band indicates the range of values of \(z_{t}\) covered. In the middle plot, we include the constant values \(M^{R22}\) (in cyan) and \(M=-19.40\) (in purple), the latter being close to the \(\Lambda\)CDM best-fit value obtained from a CMB+BAO+SNIa analysis (see e.g. [112]). In the bottom plot, we present the corresponding shapes of the deceleration parameter \(q(z)\), Eq. (11), and include an inner plot with the positive correlation between its value at \(z=0\), \(q_{0}\), and \(z_{t}\). Figure 5: Typical curves of \(\Delta(z)\) (Eq. (26)) obtained in the analysis of the baseline_3D data set with \(z_{\text{max}}=0.5\) (upper plot) and \(z_{\text{max}}=1\) (lower plot). The transition happens in both cases at \(z_{t}\lesssim 0.2\). before the transition for some combinations of these two parameters. What is always needed is a very fast phantom evolution in the last stages of the cosmic expansion, which is faster for smaller values of \(z_{t}\). Our results resonate well with the conclusions of [65, 67, 68], and also with [73, 74]. In addition, if we decrease \(z_{\rm max}\) we can find more negative values of \(\Delta\) at its minimum. 
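The quantities discussed above, namely the transition redshift \(z_t\), the deceleration parameter and the relative dark-energy difference \(\Delta(z)\) of Eq. (26), can all be computed directly from a reconstructed curve of \(H(z)\). The sketch below uses a toy reconstruction with a late-time bump and assumes the standard relation \(q(z)=-1+(1+z)\,d\ln H/dz\) for the deceleration parameter (Eq. (11) is not reproduced in this excerpt).

```python
import numpy as np

z = np.linspace(0.0, 1.0, 1001)
H0_bar, Om_bar = 67.4, 0.315                       # Planck-like fiducial values
H_lcdm = H0_bar * np.sqrt(Om_bar * (1 + z) ** 3 + 1 - Om_bar)

# Toy reconstructed curve: slightly below LCDM at z > z_t and above it at z < z_t.
H_rec = H_lcdm * (0.995 + 0.085 * np.exp(-(z / 0.08) ** 2))

def transition_redshift(H, H_ref, z):
    """Redshift below which H(z) exceeds the reference (Planck/LCDM) curve."""
    above = np.where(H > H_ref)[0]
    return z[above.max()] if above.size else None

def deceleration(H, z):
    """q(z) = -1 + (1+z) dlnH/dz, evaluated with finite differences."""
    return -1.0 + (1 + z) * np.gradient(np.log(H), z)

def delta_de(H, z, H0_bar, Om_bar, H_lcdm):
    """Relative difference of the effective DE density w.r.t. rho_Lambda, Eq. (26)."""
    matter = Om_bar * H0_bar**2 * (1 + z) ** 3
    return (H**2 - H_lcdm**2) / (H_lcdm**2 - matter)

z_t = transition_redshift(H_rec, H_lcdm, z)
q = deceleration(H_rec, z)
Delta = delta_de(H_rec, z, H0_bar, Om_bar, H_lcdm)
print(f"z_t = {z_t:.3f}, q0 = {q[0]:.2f}, Delta(0) = {Delta[0]:.2f}, Delta(1) = {Delta[-1]:.2f}")
```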
In order to study the goodness of fit, we have computed the reduced \(\chi^{2}\), \[\chi^{2}_{\rm red}=\chi^{2}_{\rm min}/(N-n)\,, \tag{27}\] with \(N=17\) the number of data points and \(n=5\) the number of fitting parameters, cf. Secs. II.1 and II.4. We obtain \(\chi^{2}_{\rm red}\sim 1.3\), slightly above 1. This can be explained essentially by the \(\sim 2\sigma\) tension between \(\Lambda\)CDM and the BAO data from DES and Ly\(\alpha\) eBOSS DR16, and also by the fact that the effective number degrees of freedom can be significantly larger due to the existence of strong correlations between the parameters, see e.g. [114]. If we remove the aforesaid data points, we get \(\chi^{2}_{\rm red}\lesssim 1\). Finally, we have also checked that the inclusion of the CCH data does not alter our results in a significant way. We will explain why in Sec. III.3. In the context of concrete cosmological models, one can in principle study the evolution of perturbations to put tighter constraints on the class of viable ultra-low-redshift solutions to the \(H_{0}\) tension. We expect the growth data to prefer an effective crossing of the phantom divide in order not to worsen the tension with galaxy clustering and weak lensing observations [67, 68]14. For the sake of generality, this study is left for future research, but we remark that a joint solution to the Hubble and growth tensions in the context of models with a transition in the redshift range \(0.01\lesssim z\lesssim 0.2\) implies the existence of a late-time effective phantom regime accompanied by a fast increase of the absolute magnitude of SNIa, and most probably also a crossing of the phantom divide. These effects at very small redshifts would most probably introduce a new coincidence (or "why now") problem. The simultaneous increase of \(H\) and \(M\) could indicate a gravitational origin of these transitions and a hint of deviations from General Relativity. If, instead, the transition happens at \(z\lesssim 0.01\) we retrieve the scenario 2 of the Introduction, and the possible solutions discussed therein. Footnote 14: For details on this tension see e.g. [10, 115], and references therein. Figure 7: Values of the minimum \(\chi^{2}\) obtained in the fitting analyses with the baseline_2D data set, as a function of \(z_{\rm max}\). We also show the reduced \(\chi^{2}\) (Eq. (27)) in the inner plot. The goodness of fit reaches a plateau at \(z_{\rm max}\sim 4\). This explains why the transition redshift remains stable for \(z_{\rm max}\gtrsim 4\), with \(z_{t}\sim 1\). The value \(\chi^{2}_{\rm red}\sim 1\) in the plateau proves the appropriateness of the fitting function, Eq. (1). See the comments in the main text. Figure 6: Same as in Fig. 3, but with the baseline_2D data, using \(z_{\rm max}=4\). The curves of \(H(z)\) deviate from the _Planck_/\(\Lambda\)CDM prediction in a broader redshift range, with a transition from \(H<H_{\Lambda\rm CDM}\) to \(H>H_{\Lambda\rm CDM}\) happening now at \(z_{t}\sim 0.6-0.9\) (see the zoomed-in middle plot). This is reflected also in the transition of \(M(z)\) (bottom plot), which happens also at much higher \(z\) than in the 3D BAO case, cf. again Fig. 3. ### Analysis with Baseline_2D The reconstructions of \(H(z)\) and \(M(z)\) obtained with the baseline_2D data set are presented in Fig. 6. We set \(z_{\rm max}=4\) and show again the 68% of the total number of curves saved in the Monte Carlo Markov chain, only those with smallest \(\chi^{2}\). In Fig. 
7 one can see what is the decrease of the \(\chi^{2}\) as a function of \(z_{\rm max}\). The former remains stable for values of \(z_{\rm max}\gtrsim 4\). Indeed, we find that in these cases the transition happens always at \(z_{t}\sim 1\) and, hence, at a much higher redshift than in the case of the 3D BAO analysis studied in the preceding section. It is easy to understand why. Transversal BAO data calibrated with \(r_{d}^{P18}\) leads to smaller angular diameter distances than in \(\Lambda\)CDM (and larger values of \(H(z)\)) at \(z\lesssim 0.7\). Thus, the shape of the Hubble function has to go below the standard one after these redshifts in order to compensate these effects and respect the CMB preferred value of \(D_{A}(z_{*})\). For sufficiently large values of \(z_{\rm max}\) one can increase the compatibility with \(M^{R22}\) within a larger redshift range, up to \(z\gtrsim 0.5\), and the deviation at higher redshifts cannot be considered to be statistically significant. The shapes of \(H(z)\) and \(M(z)\) leading to a potential solution to the Hubble tension are therefore quite different from those required by the 3D BAO data. In this case we do not find significant departures of the deceleration parameter from the \(\Lambda\)CDM at \(z\ll 1\). Again, our results remain stable under the inclusion of CCH data. If interpreted in terms of an effective self-conserved dark energy fluid, the change of \(H(z)\) with respect to the \(\Lambda\)CDM requires in this case negative values of the dark energy density at \(z\gtrsim 2\). The matter energy density at these redshifts is very large, so we need also a large value of \(|\tilde{\rho}_{\rm de}|\) to have a sizable effect on the Hubble function, and \(\tilde{\rho}_{\rm de}\) has to be negative in order the change to happen in the right direction. A couple of representative plots showing this characteristic behavior are presented in Fig. 8. The crossing of the phantom divide is also allowed by the angular BAO data set. Models with a negative DE density at these redshifts are available in the literature. Some examples are the sign-switching cosmological constant model of [116; 117; 118], the self-conserved dark energy model of [119] or models that consider dynamical dark energy on top of an anti-de Sitter vacuum with negative cosmological constant [120; 121; 122; 123]. In addition, it is interesting to note that several works have pointed to the possibility of the presence of unaccounted systematics in the standardization method of SNIa [124; 125; 126]15. This could potentially explain the smooth evolution of \(M(z)\) hinted by our analysis when we consider the SH0ES prior and 2D BAO data is employed instead of 3D BAO to build the inverse cosmic ladder. Footnote 15: See e.g. Fig. 6 in [125], in which the authors report an evolution of the mean SNIa stretch parameter as a function of redshift at \(z<1\). ### Wec The results presented in Secs. III.1 and III.2 tell us that all the low-\(z\) solutions to the Hubble tension that involve new physics at \(z\gtrsim 0.01\) require a violation of the weak energy condition, regardless of the BAO data set employed to build the inverse distance ladder. In both cases there must be a phantom-like evolution of the effective dark energy component, which manifests itself at very different moments of the cosmic expansion. Baseline_3D requires this to happen at \(z\lesssim 0.2\), whereas for Baseline_2D it happens somewhere in the range \(0.6\lesssim z\lesssim 1\). 
Moreover, in the latter case, the effective DE density takes negative values at \(z\gtrsim 2\). Now we apply the method of [89] (see Sec. II.3) to determine whether CCH, which are independent from the Baseline data sets, require the violation of the WEC given by Eq. (17), assuming a self-conserved effective dark energy fluid. This condition must be obeyed if its energy density does not grow with the expansion, i.e. if DE is not phantom. We study how the result changes when we employ the SH0ES and _Planck_ priors on \(H_{0}\). The 33 constraints on \(\Omega_{m}^{\rm max}\) obtained from the sampling of the CCH data listed in Table 3 and the priors on \(H_{0}\) are shown Figure 8: Typical curves of \(\Delta(z)\) (Eq. (26)) obtained in the analysis of the baseline_2D data set with \(z_{\rm max}=4\) (upper plot) and \(z_{\rm max}=7\) (lower plot). The transition happens in both cases at \(z_{t}\sim 1\), with the dark energy density becoming negative (\(\Delta<-1\)) at \(z\sim 2\). in Fig. 9. In this calculation we have duly accounted for the correlations between the CCH data points. From the list of values of \(\Omega_{m}^{\rm max}\) we extract a single representative upper bound on the matter parameter \(\Omega_{m}\). Some of the error bars in Fig 9 are quite asymmetric, which means that the underlying multivariate distribution has some non-Gaussian features. Nevertheless, the deviation from Gaussianity in the case of the most precise values of \(\Omega_{m}^{\rm max}\) is small, so we do not expect this to have a big effect on the final result. We will quantify its impact making use of the Edgeworth expansion, Eq. (18), and will compare these results with those obtained assuming that the underlying distribution is a multivariate Gaussian with mean and covariance matrix given by Eq. (22). Neglecting the non-Gaussian features we obtain, using the SH0ES and _Planck_ priors on \(H_{0}\), respectively, \[\Omega_{m}^{\rm max}=0.250\pm 0.031\qquad[{\rm CCH}+H_{0}^{R22}]\,, \tag{28}\] \[\Omega_{m}^{\rm max}=0.314\pm 0.036\qquad[{\rm CCH}+H_{0}^{P18}]\,. \tag{29}\] We want to know whether current constraints on \(\Omega_{m}\) fall below or above these upper bounds in order to determine whether they force the violation of the WEC, according to the CCH data set and the local values of \(H_{0}\) considered in the computation of \(\Omega_{m}^{\rm max}\). It is natural to check this for the value of \(\Omega_{m}\) derived from CMB. If we consider standard physics before recombination, we can take _Planck_'s constraint \(\omega_{m}^{P18}=0.1415\pm 0.0009\) and combine it with SH0ES and _Planck_ priors on \(H_{0}\), yielding \[\Omega_{m}=0.265\pm 0.008\qquad[\omega_{m}^{P18}+H_{0}^{R22}]\,, \tag{30}\] \[\Omega_{m}=0.315\pm 0.007\qquad[\omega_{m}^{P18}+H_{0}^{P18}]\,. \tag{31}\] These two results represent the constraints on \(\Omega_{m}\) preferred by CMB, depending on whether we rely on a small or large value of the Hubble constant. To be fully consistent, we should compare these results with Eqs. (28) and (29), respectively. Although the central values of \(\Omega_{m}\) lie slightly above the upper bounds \(\Omega_{m}^{\rm max}\), there is no important evidence for the violation of the WEC according to the CCH+\(H_{0}^{R22}\) and CCH+\(H_{0}^{P18}\) data sets if we assume standard prerecombination physics, since the values (30) and (31) fall below the upper bounds (28) and (29), respectively, at \(1\sigma\) C.L. This is at odds with the results reported in [127]. 
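A compact Monte Carlo version of this test is sketched below: the WEC bound of Eq. (17) is sampled from CCH-like points together with a prior on \(H_0\), combined with the inverse-covariance weighting of Eq. (22) (reduced to a diagonal covariance here for simplicity), and compared with \(\Omega_m=\omega_m/h^2\) as in Eqs. (30)-(31). The \(H(z)\) values are placeholders for the real CCH compilation, and the numerical value \(H_0^{P18}=67.36\pm0.54\) km/s/Mpc is an assumption, since it is not quoted explicitly in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

# Toy CCH-like data (z, H, sigma_H); the real analysis uses the 33 correlated points of Table 3.
z  = np.array([0.2, 0.5, 1.0, 1.5])
H  = np.array([77.0, 92.0, 123.0, 162.0])
sH = np.array([5.0, 6.0, 15.0, 20.0])

priors = {"SH0ES  (R22)": (73.04, 1.04), "Planck (P18)": (67.36, 0.54)}
wm = rng.normal(0.1415, 0.0009, N)        # Planck omega_m, standard pre-recombination physics

for label, (H0m, H0s) in priors.items():
    H0 = rng.normal(H0m, H0s, N)
    Hz = rng.normal(H, sH, (N, z.size))
    # Eq. (17): Omega_m^max(z_i) = (E^2 - 1) / ((1+z_i)^3 - 1)
    om_max = ((Hz / H0[:, None]) ** 2 - 1.0) / ((1 + z) ** 3 - 1.0)
    # Gaussian-limit combination in the spirit of Eq. (22), with a diagonal covariance.
    mu, var = om_max.mean(axis=0), om_max.var(axis=0)
    w = 1.0 / var
    comb, sig = (w * mu).sum() / w.sum(), np.sqrt(1.0 / w.sum())
    # Eqs. (30)-(31): Omega_m preferred by the CMB for the same H0 prior.
    Om_cmb = wm / (H0 / 100.0) ** 2
    print(f"{label}: Omega_m^max = {comb:.3f} +/- {sig:.3f} | "
          f"Omega_m(CMB) = {Om_cmb.mean():.3f} +/- {Om_cmb.std():.3f}")
```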
Our conclusions still hold true if we consider the non-Gaussian corrections of Eq. (18), meaning that the impact of the non-Gaussian features in the distribution of \(\Omega_{m}^{\rm max}\) is practically negligible, as expected. Indeed, we find \(\Omega_{m}^{\rm max}=0.240\pm 0.030\) and \(\Omega_{m}^{\rm max}=0.308\pm 0.036\) at 68% C.L. using the \(H_{0}^{R22}\) and \(H_{0}^{P18}\) priors, respectively. These results are fully compatible with those in Eqs. (28) and (29). In order to further illustrate all this we have fitted the flat \(\Lambda\)CDM model using CCH and provide the 68% C.L. contours in Fig. 1016. It is clear from that plot that, according to the data on cosmic chronometers, it is possible to explain a large value of \(H_{0}\sim H_{0}^{R22}\) and a small Figure 9: Upper bounds on the present value of \(\Omega_{m}\) imposed by the WEC, see Eq. (17). We employ the method explained in Sec. II.3. Since the distributions for \(\Omega_{m}^{\rm max}\) at some redshifts are highly non-Gaussian, we indicate their corresponding peaks (with dots) and the confidence intervals at 68% C.L. We show the results when \(H_{0}\) is sampled using the _Planck_/\(\Lambda\)CDM value \(H_{0}^{P18}\) (in blue) and the SH0ES measurement \(H_{0}^{R22}\) (in orange). Figure 10: Contour plot at 68% C.L. derived from the fit of the \(\Lambda\)CDM model to CCH and CMB data (in blue and green, respectively). We also include: (i) the constraint on \(\Omega_{m}\) obtained by combining \(H_{0}^{R22}\) with the value of \(\omega_{m}^{P18}\) inferred by _Planck_ assuming standard prerecombination physics, Eq. (30) (in purple); (ii) the SH0ES measurement \(H_{0}^{R22}\) (in red); and (iii) the upper bound on \(\Omega_{m}^{\rm max}\), Eq. (28) (in black). All of them at \(1\sigma\) C.L. value of \(\Omega_{m}\) as the one in Eq. (30) within \(\Lambda\)CDM. In the standard model the WEC is automatically fulfilled due to the constancy of the DE density. Hence, we can explain the CCH data with the aforesaid values of \(H_{0}\) and \(\Omega_{m}\) without requiring phantom DE. This is the same conclusion reached applying the method of [89]. The CCH+\(H_{0}\) data themselves do not exclude the violation of the weak energy condition, but do not require its fulfilment either. At the moment, they cannot be used to strongly discriminate among possible solutions to the \(H_{0}\) tension, and this is why the addition of CCH on top of Baseline_2D and Baseline_3D does not have a major impact on our results17. Footnote 17: See also [15; 82], where it is shown that cosmic chronometers do not offer yet a very competitive calibration of the cosmic distance ladders. For instance, the reconstruction of \(M(z)\) using CCH and SNIa is compatible with a constant, but does not exclude an evolution in redshift [82]. Fig. (11) gives more support to our conclusions. We present there the constraints on the ratio \(\rho_{\rm de}(z)/\rho_{\rm de}^{0}\) at 68% C.L. obtained by using Eq. (23) and sampling the CCH data together with the priors \(\omega_{m}^{P18}\) and \(H_{0}^{R22}\). Any of these measurements points to a clear violation of the WEC. We note, though, that the vast majority of the central points below \(z=1\) fall in the range \(0<\rho_{\rm de}(z)/\rho_{\rm de}^{0}<1\), which could be an indication of the preference of the data for a phantom behavior of the effective DE fluid. 
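The constraints of Fig. 11 can be reproduced, schematically, by sampling Eq. (23) at the CCH redshifts together with the \(\omega_m^{P18}\) and \(H_0^{R22}\) priors, as in the sketch below. The central values and errors of \(H(z)\) are placeholders and the errors are treated as diagonal, whereas the real analysis uses the full CCH covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

z  = np.array([0.2, 0.5, 1.0, 1.53])               # toy CCH redshifts
H  = np.array([77.0, 92.0, 123.0, 165.0])          # toy H(z) central values [km/s/Mpc]
sH = np.array([5.0, 6.0, 15.0, 20.0])

H0 = rng.normal(73.04, 1.04, N)                    # SH0ES prior H_0^{R22} (assumed numbers)
wm = rng.normal(0.1415, 0.0009, N)                 # Planck prior omega_m^{P18}
Om = wm / (H0 / 100.0) ** 2
Hz = rng.normal(H, sH, (N, z.size))                # diagonal errors for simplicity

E2 = (Hz / H0[:, None]) ** 2
ratio = (E2 - Om[:, None] * (1 + z) ** 3) / (1 - Om[:, None])   # Eq. (23)

for zi, r in zip(z, ratio.T):
    print(f"z = {zi:.2f}: rho_de/rho_de^0 = {np.mean(r):.2f} +/- {np.std(r):.2f}")
# Values below 0 would violate the first condition of Eq. (15); values between 0 and 1
# would point to a growing (phantom-like) effective DE density.
```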
Moreover, the central point at \(z=1.53\) is one sigma away from the positive region, and two sigma away from the border between the phantom and quintessence regions. This resonates very well with Fig. 9 because the point at \(z=1.5\) is the one that leads to the lowest upper bound on \(\Omega_{m}^{\rm max}\), which is \(\sim 2\sigma\) below the best-fit _Planck_/\(\Lambda\)CDM value, and hence in small tension with the standard model. However, as already mentioned above, these hints of new physics are still mild. ## IV Conclusions The discovery of the cosmic acceleration in the late nineties meant a major breakthrough [128; 129]. It brought to the stage the need of adding a new component to the energy budget of the Universe, which must violate the strong energy condition. Its fundamental nature is largely unknown, but cosmological observations have let us infer some of its basic phenomenological properties, which are mimicked in the simplest scenario by a cosmological constant, see e.g. [130; 131]. Despite the theoretical problems associated to it [132; 133], it is a very important building block of the standard model of cosmology due to its good ability to fit the data. However, in the last decade, with the advent of precision cosmology, some mismatches between \(\Lambda\)CDM and observations have irrupted into the scene [10]. The Hubble tension stands by far as the most significant one, since it already reaches the \(\sim 5\sigma\) C.L. [1; 2]. Its solution could have serious theoretical implications, and this explains why understanding its origin has become one of the most pursued goals by the cosmological community [40; 41; 42]. In this paper we have devoted our efforts to study in detail the low-redshift phenomenology required to get rid of the \(H_{0}\) tension, keeping standard physics before recombination. We have given special emphasis to the role played by the data on baryon acoustic oscillations, which is crucial in the construction of the cosmic inverse distance ladder. We have shown that anisotropic and angular BAO data (combined with CMB and SH0ES priors) lead to very different solutions, as expected [51]. The former require a phantom-like increase of the Hubble function and the absolute magnitude of supernovae of Type Ia at \(0.01\lesssim z\lesssim 0.2\), whereas the latter (which in principle are less affected by model-dependent issues) need this increase to happen much earlier in the cosmic expansion, at \(z_{t}\sim 0.6-0.9\), and more smoothly. \(M(z)\) is in this case still compatible with a constant, but an evolution is not excluded18. In this scenario, if we consider that the new physics can be explained by an effective self-conserved dark energy component, the dark energy density has to be negative during, at least, some period of the cosmic history at \(z\gtrsim 2\)19. Hence, any low-\(z\) solution to the Hubble tension with conserved effective dark energy demands a violation of the weak energy condition, and the possibility of a crossing of the phantom divide is not excluded. Coupled dark energy scenarios do not have to follow the conclusions that we have found assuming the conservation of DE, but have to give rise to forms Figure 11: Constraints on the ratio \(\rho_{\rm de}(z)/\rho_{\rm de}^{0}\) obtained by using Eq. (23) and sampling the CCH data together with the priors \(\omega_{m}^{P_{18}}\) and \(H_{0}^{R22}\). We report the most probable values and 68% confidence intervals. 
The dashed red lines at 0 and 1 denote the lower bounds for which the two conditions of Eq. (15) are satisfied, i.e. the positivity of the effective DE density and the non-phantom nature of the DE, respectively. of \(H(z)\) and \(M(z)\) compatible with our reconstructions. Another option to solve the Hubble tension is a local (ultra-late-time) change in \(M\), in the second rung of the direct distance ladder, i.e. at \(z\lesssim 0.01\), leaving intact the \(\Lambda\)CDM expansion history. Our results are in agreement with previous works that made use of anisotropic BAO data [65] and also support the conclusions of [67, 68]. Here, though, we make more definite statements than the latter, e.g. about the redshift ranges in which the phantom-like evolution should be active. They are extracted directly from the data. We also perform for the first time in the literature an analysis on similar lines using 2D BAO. In passing, we have also shown in several ways that current data on cosmic chronometers are not capable of determining whether the weak energy condition must be violated or not to solve the \(H_{0}\) tension. They hint only very mildly to such a violation. Therefore, it is safe to assert that current CCH data do not help that much to constrain the form of the low-redshift solutions. This resonates well with [82, 15]. The degree of naturalness of these solutions varies with the solutions themselves, but in this paper we wanted to focus on the phenomenology required to solve the Hubble tension and leave for future research the exploration of some of these routes in the context of concrete theoretical setups, considering also the evolution of perturbations and studying the symbiosis between the \(H_{0}\) and growth tensions. The door for a late- and an ultra-late time solution to the Hubble tension is still open and, interestingly, the concrete form of the solutions depends crucially on the BAO data set that we consider. Future background and BAO data as those from _Euclid_[137] are meant to be pivotal on the discussion and eventual solutions to the cosmic tensions. Particularly important will be the methods that the various collaborations employ to extract the information from the galaxy catalogs. We foresee the use of model-independent techniques to be relevant to obtain data sets as robust as possible, even if this comes at the expense of a decrease in precision [138, 139]. ## Acknowledgements AGV has been funded by the Istituto Nazionale di Fisica Nucleare (INFN) through the project of the InDark INFN Special Initiative: "Dark Energy and Modified Gravity Models in the light of Low-Redshift Observations" (n. 22425/2020). AGV, AF and MM acknowledge support by the INFN project "InDark". MM is also supported by the ASI/LiteBIRD grant n. 2020-9-HH.0 and by the Fondazione ICSC, Spoke 3 Astrophysics and Cosmos Observations, National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) Project ID CN_0000013 "Italian Research Center on High-Performance Computing, Big Data and Quantum Computing" funded by MUR Mission 4 Componente 2 Investimento 1.4: Potenziamento strutture di ricerca e creazione di "campioni nazionali di R&S (M4C2-19 )" - Next Generation EU (NGEU). AAS acknowledges the funding from SERB, Govt of India under the research grant no: CRG/2020/004347. AGV, AF and AAS acknowledge the participation in the COST Action CA21136 "Addressing observational tensions in cosmology with systematics and fundamental physics" (CosmoVerse). 
## Appendix A Constraints in the \(M\)-\(r_{d}\) plane from uncalibrated BAO and SNIa We dedicate this appendix to explain how we obtained the degeneracy band in Fig. 1, which shows the big anti-correlation between the two calibrators of the direct and inverse distance ladders, \(M\) and \(r_{d}\), respectively. For this purpose, we make use of the so-called Index of Inconsistency (IOI) by Lin and Ishak [47]. It is usually employed to quantify the level of inconsistency (or tension) between two data sets in the context of a concrete model when the posterior distributions of the parameters of the model are Gaussian in good approximation. However, it can also be employed to calibrate Gaussian data sets in a model-independent way [15]. The IOI between two data sets takes the following form, \[\text{IOI}[\text{i},\text{j}]=\frac{1}{2}\mu^{\text{T}}(\text{C}^{(\text{i})} +\text{C}^{(\text{j})})^{-1}\mu\,, \tag{10}\] where \(i\) and \(j\) label the two data sets under consideration, whereas \(\mu\) and \(C\) denote, respectively, the difference between the corresponding data vectors and the covariance matrices. One can constrain the calibrators of two data sets by minimizing the IOI between them. Here, in particular, we want to study the correlation between \(M\) and \(r_{d}\) by minimizing the IOI between the SNIa and anisotropic BAO data sets. For each pair \((M,r_{d})\), one can calibrate these data sets and extract measurements of angular diameter distances at several redshifts. In the case of SNIa, we do so by using Eq. (13) and the Etherington relation [140], \[D_{A}(z)=\frac{D_{L}(z)}{(1+z)^{2}}\,. \tag{11}\] In the case of BAO, we extract angular diameter distances from the ratios \(D_{A}(z)/r_{d}\) and the dilation scales \(D_{V}(z)/r_{d}\) collected in Table 12. For the latter, we use the expansion Footnote 2: We do not employ here radial BAO data. \[D_{V}\simeq\frac{3}{4}D_{L}\left(\frac{4}{3}z\right)\left(1+\frac{4}{3}z \right)^{-1}(1-0.0245z^{3}+0.0105z^{4})\,, \tag{12}\] which is quite accurate in the range \(z<1\) for plausible accelerating cosmological models [141]. It allows us to compute the luminosity distance \(D_{L}(4z/3)\) (and \(D_{A}(4z/3)\), through Eq. (10)) from a measurement of \(D_{V}(z)\). See also [142] for a recent application of this formula. We explore the space of the calibrators by employing the weights \[w=\exp(-\mathrm{IOI}[\mathrm{BAO},\mathrm{SNIa}])\,, \tag{12}\] and a simple two-dimensional grid in the \(M-r_{d}\) plane. Notice, though, that the data vectors entering Eq. (10) need to correspond to the same redshifts. As this is not the case in the BAO and SNIa data sets, we choose to take the SNIa data points that fall right below and above the angular BAO data points in redshift and marginalize over the SNIa data that do not belong to this ensemble. Then, we sample the resulting distribution of SNIa apparent magnitudes \(m(z)\) and compute the values at the BAO redshifts using a simple linear interpolation formula. This is licit because the difference in redshift of two consecutive SNIa points in the BAO redshift range is very small, so we can neglect the contribution of higher-order corrections. In this way, we end up with the central values of \(m(z_{BAO})\) and the corresponding covariance matrix. This allows us to evaluate the weights using Eqs. (10) and (12) and draw the distribution of \(M\) and \(r_{d}\), from which we get the contours at 68% and 95% C.L., see the grey band in Fig. 1. 
## Appendix B Parameterization of \(\delta H_{i}\) in terms on (1-a). Fitting formulae and results In this appendix we test the robustness of the results of Secs. III.1 and III.2 by comparing them with those obtained with an alternative fitting function. Instead of using Eq. (4), here we employ \[\delta H_{1}(a) =A+B(1-a)+C(1-a)^{2}\] \[\delta H_{2}(a) =D+E(1-a)+F(1-a)^{2}\,, \tag{13}\] with \(\{A,B,C,D,E,F\}\) the new fitting parameters. The main reason in doing this is that expansion in terms of \((1-a)\) has better convergence compared to expansion in terms of \(z\) and hence neglecting terms of the order \((1-a)^{3}\) and higher is better justified compared to neglecting terms of the order \(z^{3}\) and higher. So the question is whether parametrization in terms of scale factor gives similar results as already obtained. The corresponding expressions in terms of the redshift are obviously computed by doing \(1-a=z/(1+z)\). Conditions (5)-(8) take now the following matrix form, \[\begin{pmatrix}\delta H_{0}\\ \delta H_{p}\\ 0\end{pmatrix}=\begin{pmatrix}1&0&0\\ 1&\frac{z_{p}}{1+z_{p}}&\frac{z_{p}^{2}}{(1+z_{p})^{2}}\\ 0&1&\frac{2z_{p}}{1+z_{p}}\end{pmatrix}\begin{pmatrix}A\\ B\\ C\end{pmatrix} \tag{14}\] and \[\begin{pmatrix}\delta H_{\mathrm{max}}\\ \delta H_{p}\\ 0\end{pmatrix}=\begin{pmatrix}1&\frac{z_{\mathrm{max}}}{1+z_{\mathrm{max}}}& \frac{z_{\mathrm{max}}^{2}}{(1+z_{\mathrm{max}})^{2}}\\ 1&\frac{z_{p}}{1+z_{p}}&\frac{z_{p}^{2}}{(1+z_{p})^{2}}\\ 0&1&\frac{2z_{p}}{1+z_{p}}\end{pmatrix}\begin{pmatrix}D\\ E\end{pmatrix}\,. \tag{15}\] Figure 12: Reconstructed shapes of \(H(z)\) (upper plot) and \(M(z)\) (lower plot) obtained with the Baseline_2D data set and the parametrization of \(\delta H_{i}(z)\) provided in Eq. (13). We have set \(z_{\mathrm{max}}=4\). See the comments in Appendix B. Figure 13: Reduced \(\chi^{2}\) as a function of \(z_{\mathrm{max}}\) for the analyses with Baseline_2D, using the parametrized form of \(\delta H_{i}(z)\) given in Eq. (13). As in Fig. 7, \(\chi^{2}_{\mathrm{red}}\) reaches a plateau. However, now \(\chi^{2}_{\mathrm{red}}\sim 1.7\) instead of \(\chi^{2}_{\mathrm{red}}\sim 1\), which means that the performance of Eq. (13) is much worse than Eq. (4). See the comments in Appendix B. The results obtained from the fitting analysis with the Baseline_3D data set are essentially the same as those obtained with the fitting function of Eq. (1), see Sec. III.1. The shapes of \(H(z)\) and \(M(z)\) have the same characteristic features shown in Fig. 3. Thus, the conclusions of our main analysis with anisotropic BAO data hold true also for this new parametrization of the Hubble function. The situation with the Baseline_2D data is a bit different and deserves some detailed explanations. We present in Fig. 12 the reconstructions of \(H(z)\) and \(M(z)\) obtained making use of Eq. (14). We choose to fix \(z_{\rm max}=4\) in this figure, but we have explicitly checked that the results remain stable for larger values of this parameter, as we found also in the analysis of Sec. III.2. This is reflected on the fact that the reduced \(\chi^{2}\) reaches a plateau for \(z_{\rm max}\gtrsim 4\), see Fig. 13. However, now, in contrast to what we found with the parametrization (4), the transition happens at smaller redshifts, at \(z_{t}\sim 0.4\) (instead of \(z_{t}\sim 0.6-0.9\)). This is true for both, the Hubble parameter and the absolute magnitude of SNIa. 
Nevertheless, it is very important to notice that the goodness of fit found with the new parametrization is much worse than the one offered by the original parametrization. Notice that \(\chi^{2}_{\rm red}\sim 1.7\) in the plateau of Fig. 13. Conversely, we obtained \(\chi^{2}_{\rm red}\sim 1\) with parametrization (4) although the results are similar in both parametrizations. This justifies the use of the latter (in terms of \(z\)) in the main body of the paper.
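For completeness, a minimal numerical sketch of how the coefficients of the \((1-a)\) parametrization follow from the two linear systems of the appendix is given below. The coefficient vector of the second system is taken to be the full \(\{D,E,F\}\), since the quadratic branch has three coefficients, and the boundary values \(\delta H_0\), \(\delta H_p\), \(\delta H_{\rm max}\) together with \(z_p\) and \(z_{\rm max}\) are illustrative inputs rather than fitted values.

```python
import numpy as np

def coefficients(dH0, dHp, dHmax, z_p, z_max):
    """Solve the two 3x3 systems for {A,B,C} and {D,E,F}, with u = 1 - a = z/(1+z)."""
    u_p, u_max = z_p / (1 + z_p), z_max / (1 + z_max)
    M1 = np.array([[1.0, 0.0,    0.0],
                   [1.0, u_p,    u_p**2],
                   [0.0, 1.0, 2 * u_p]])
    M2 = np.array([[1.0, u_max,  u_max**2],
                   [1.0, u_p,    u_p**2],
                   [0.0, 1.0, 2 * u_p]])
    ABC = np.linalg.solve(M1, [dH0, dHp, 0.0])
    DEF = np.linalg.solve(M2, [dHmax, dHp, 0.0])
    return ABC, DEF

def delta_H(z, coeffs):
    """delta H(z) = c0 + c1 (1-a) + c2 (1-a)^2 evaluated at redshift z."""
    c0, c1, c2 = coeffs
    u = z / (1 + z)
    return c0 + c1 * u + c2 * u**2

# Toy usage: +5 km/s/Mpc at z=0, an extremum of +2 at z_p = 0.1, and 0 at z_max = 4.
ABC, DEF = coefficients(dH0=5.0, dHp=2.0, dHmax=0.0, z_p=0.1, z_max=4.0)
print("A,B,C =", np.round(ABC, 3), " D,E,F =", np.round(DEF, 3))
print("check, dH(z_p):", round(delta_H(0.1, ABC), 3), round(delta_H(0.1, DEF), 3))
```

The third row of each system enforces a vanishing derivative with respect to \((1-a)\) at \(z_p\), so both branches meet at an extremum there, mirroring the construction used for the main parametrization.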
2306.17465
FedBone: Towards Large-Scale Federated Multi-Task Learning
Heterogeneous federated multi-task learning (HFMTL) is a federated learning technique that combines heterogeneous tasks of different clients to achieve more accurate, comprehensive predictions. In real-world applications, visual and natural language tasks typically require large-scale models to extract high-level abstract features. However, large-scale models cannot be directly applied to existing federated multi-task learning methods. Existing HFML methods also disregard the impact of gradient conflicts on multi-task optimization during the federated aggregation process. In this work, we propose an innovative framework called FedBone, which enables the construction of large-scale models with better generalization from the perspective of server-client split learning and gradient projection. We split the entire model into two components: a large-scale general model (referred to as the general model) on the cloud server and multiple task-specific models (referred to as the client model) on edge clients, solving the problem of insufficient computing power on edge clients. The conflicting gradient projection technique is used to enhance the generalization of the large-scale general model between different tasks. The proposed framework is evaluated on two benchmark datasets and a real ophthalmic dataset. Comprehensive results demonstrate that FedBone efficiently adapts to heterogeneous local tasks of each client and outperforms existing federated learning algorithms in most dense prediction and classification tasks with off-the-shelf computational resources on the client side.
Yiqiang Chen, Teng Zhang, Xinlong Jiang, Qian Chen, Chenlong Gao, Wuliang Huang
2023-06-30T08:19:38Z
http://arxiv.org/abs/2306.17465v1
# FedBone: Towards Large-Scale Federated Multi-Task Learning ###### Abstract Heterogeneous federated multi-task learning (HFMTL) is a federated learning technique that combines heterogeneous tasks of different clients to achieve more accurate, comprehensive predictions. In real-world applications, visual and natural language tasks typically require large-scale models to extract high-level abstract features. However, large-scale models cannot be directly applied to existing federated multi-task learning methods. Existing HFML methods also disregard the impact of gradient conflicts on multi-task optimization during the federated aggregation process. In this work, we propose an innovative framework called FedBone, which enables the construction of large-scale models with better generalization from the perspective of server-client split learning and gradient projection. We split the entire model into two components: a large-scale general model (referred to as _the general model_) on the cloud server and multiple task-specific models (referred to as _the client model_) on edge clients, solving the problem of insufficient computing power on edge clients. The conflicting gradient projection technique is used to enhance the generalization of the large-scale general model between different tasks. The proposed framework is evaluated on two benchmark datasets and a real ophthalmic dataset. Comprehensive results demonstrate that FedBone efficiently adapts to heterogeneous local tasks of each client and outperforms existing federated learning algorithms in most dense prediction and classification tasks with off-the-shelf computational resources on the client side. ## 1 Introduction The progress of Natural Language Processing (NLP) and Computer Vision (CV) has been significantly driven by the evolution of large-scale pre-trained models. Nonetheless, the efficacy of these extensive models predominantly hinges on substantial computational resources and extensive datasets. This presents challenges in situations where such resources are not readily accessible or practically feasible to employ. Furthermore, the sensitivity and privacy concerns associated with medical data render it impractical and insecure to entrust such data to external high-computational organizations for training large models. In this context, the concept of Federated Learning [14] emerges as a promising solution. Federated learning enables collaborative model training across distributed edge devices or institutions while keeping the data local and confidential. By leveraging the potential of distributed learning, federated learning enables institutions to participate in model training without compromising data privacy or necessitating significant computational resources. Training large-scale models using federated learning faces two primary challenges. Firstly, there is the issue of resource constraints at the edge. Many edge devices, such as edge computing machines and Internet of Things (IoT) devices, have limited computational capabilities, making it difficult to train and deploy large models directly on these devices. Secondly, optimizing multiple heterogeneous tasks among participating parties becomes challenging especially in a federated setting. Existing personalized federated and multi-task federated methods [17, 18] have mainly focused on addressing the heterogeneity of data distributions but have disregarded the consideration of task het Figure 1: The overview of our proposed framework erogeneity. 
The heterogeneity of tasks results in different optimization objectives for each task, and simply aggregating models trained on diverse tasks can lead the federated model to optimize in a biased direction, which results in a decrease in the generalization ability of federated large-scale models, consequently diminishing their practical utility. In light of these challenges, we propose FedBone, a federated multi-task learning framework as shown in Figure 1, which takes advantage of the server-client split learning paradigm to enable the edge clients to participate in large-scale federated training with low memory footprints. The FedBone framework is designed to execute a multi-stage process for handling heterogeneous client tasks which entail client-side data embedding, server-side universal model feature extraction, and client-side task-specific processing. Throughout the process, edge clients are only responsible for computing data embedding and propagating it to the cloud server for feature extraction of the large-scale general model. The resulting latent representations are then dispatched back to the clients to perform task output. To enhance the general model's generalization, we introduce a gradient projection method and gradient rescaling based on historical gradient attention to reduce the negative impact of conflicting gradient components on gradient aggregation. The task output module on the client is tailored to specific task types but is generally concise due to the assumption of feature extraction having already been fulfilled. Latent representations extracted from the large-scale general model are usually low-level features for various tasks. Therefore, we propose a task adaptation module, which utilizes deformable convolutions and a self-attention mechanism to focus on low-level features in the task-specific region and perform task interactions. Our main contributions can be summarized as follows: * We propose FedBone, a novel federated multi-task learning framework via split learning for large-scale federated training on edge clients and heterogeneous task adaptation. * We propose GPAgregation to alleviate optimization challenges of the general model posed by task heterogeneity among clients, which rescales client gradients with historical gradients attention and merge gradient conflict between clients. * We conduct extensive experiments on two public multi-task datasets. The results show that our proposed FedBone outperforms existing federated learning algorithms in heterogeneous tasks with much smaller computational resource requirements. The experiments on 13 real-world Ophthalmic tasks reveal the potential capability of FedBone in real medical and healthcare applications. ## 2 Related Work ### Federated Multi-task Learning Multi-task learning methods can be divided into centralized and distributed computing methods according to the data collection method. The former collects the data in advance to a central node and then runs the model [16, 17], while the distributed method collects heterogeneous data from different tasks in a distributed way, but often faces the problems of high communication cost and privacy issues that prevent copying to the central node. Federated multi-task learning proposes to train different models directly on each node using knowledge sharing [14], focusing on the fault tolerance and privacy of the node datasets. 
The initial federated multi-task learning had a central node [14], and the latest research also achieved decentralized federated multi-task learning [15]. Regularization-based multi-task learning methods capture complex relationships between personalized models to achieve aggregation of different tasks but lose the ability to grasp complex relationships between tasks [13]. Ditto and other federated multi-task methods, although sacrificing the complexity of the regularization term to train more complex models, also lose the ability to capture complex relationships between tasks [12]. The FATHOM framework leveraged the attention mechanism to extract input features and learn a shared temporal representation across different devices, thereby achieving knowledge transfer and performance improvement [15]. FedMSplit was proposed to use a dynamic multi-view graph structure to address the modality incongruity problem among sensor devices and to promote local model relations through neighborhood message passing in the graph [15]. The SpreadGNN framework has solved the non-I.I.D. problem of graph data and uses a dynamic multi-task optimization method to ensure model convergence [10]. FedICT achieved personalized services for multi-task clients by using federated prior distillation and local knowledge adjustment[20]. However, none of the above approaches enables local training on clients with heterogeneous tasks, and they all require full model training and evaluating on local clients, which means that clients must equally have large computation capability when training large-scale models. ### Personalized federated learning In federated learning scenarios, a single global shared model faces the problem of significant differences in data distribution among clients. Personalized federated learning tries to solve this problem. There are two different strategies for personalized federated learning: personalization of the global model and individual personalized models. The former performs local adaptation for each client based on the trained global federated model to achieve personalized processing, while the latter trains individual personalized models on each client. In the data augmentation aspect, the self-balancing learning framework Astraea uses Z-score-based data augmentation and mediator-based multi-client rescheduling to mitigate the impact of data distribution differences [13]. In FedHome, each client performs personalized adaptation on a locally enhanced class-balanced dataset [20]. Some studies use the method of adding a local loss regularization term, such as FedProx introduced an approximation term for the local subproblem, taking into account the dissimilarity between the global FL model and the local model to adjust the impact of local updates [11], FedCL used the regularization term of elastic weight consolidation (EWC) from the continual learning domain[14]. Other methods such as transfer learning FedMM [11], meta-learning MAML [15], and so on are used to improve the performance of the global shared model trained on heterogeneous data in federated learning. The personalized solutions for clients mainly include methods such as parameter decoupling LG-FedAvg [10], model interpolation HeteroFL [14], and clustering pFedBayes [13]. Specifically, the importance of parameters in FedCurv was estimated by the Fisher information matrix, and a penalty step is performed to retain important parameters, which can reduce the catastrophic forgetting problem between multiple tasks [21]. 
The concept of personalized federated learning highlights the necessity of adapting a model for local data distribution. However, existing methods have failed to consider the potential for personalization in the event of heterogeneity of client tasks. ## 3 Method ### Problem Formulation We consider a set of federated clients \(\mathcal{K}=\{1,2,...,K\}\), each client \(k\in\mathcal{K}\) has a local dataset \(\mathcal{D}_{k}=\{(x_{k},y_{k})_{i},i=1,2,...,N_{k}\}\) and collaboratively trains models with other federated learning clients, with the goal of training personalized local models \(f_{k}\) that can adapt to the distinct local task. The goal is to solve the following optimization problems: \[\forall k\in\mathcal{K},\ \ \ \min_{f_{k}\in\mathcal{F}}\mathcal{L}_{k}(f_{k}) \tag{1}\] where \(\mathcal{F}\) denotes the set of all personalized local models, \(\mathcal{L}_{k}\) denotes local loss function. ### Overall Architecture Our proposed framework FedBone aims to enable the participation of heterogeneous task clients in federated learning, thereby facilitating federated training of large-scale models. To achieve this, we adopt a split federated learning approach [15], which involves the computation of a large-scale general model on the cloud server and lightweight computation of data embedding/task head output Figure 2: The workflow of FedBone framework. Clients perform patch embedding locally and (1) send embeddings to the cloud server for feature extraction using the general model, and the cloud server (2) sends extracted features back to clients. Clients complete the loss computation and (3) send backward intermediate results to the cloud server for backward propagation of the general model, and the cloud server (4) sends results back to clients. The clients can now update the local task adaptation module. For each client, the cloud server maintains a distinct general model, which is updated during every client’s mini-batch. When all clients finish a local training epoch, the cloud server will (5) aggregate these general models to finalize one communication round. on the edge clients. FedBone aggregates large-scale general models using a task gradients projection method, which prevents gradient conflicts and improves model generalization performance, as opposed to the direct federated averaging aggregation methods. In order to enhance the performance of client local tasks, we introduce a task adaptation module, which comprises the deformable convolution and self-attention mechanism, that adapts to irregularly shaped feature maps through deformable convolution and captures the task interaction features. The full framework is illustrated in Figure 2. In the following, we will outline the workflow of split federated multi-task learning and elaborate on the comprehensive design of federated aggregation via task gradient projection and task adaptation module. ### Split Federated Multi-Task Learning FedBone follows the split learning [14] approach, but it only requires one cloud server for high-performance computation and model aggregation. All clients perform patch embedding [13] computations in parallel and then send the local results to the cloud server for feature extraction using a large-scale general model. After receiving client results, the cloud server responds to the client with general latent representations. Using these representations, clients then complete the task adaptation module and task output head forward propagation and immediately begin backward propagation. 
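A minimal single-process PyTorch sketch of this forward/backward handoff is given below (it corresponds to the per-batch client-server steps later formalized in Algorithm 1). The module sizes and the plain classification head standing in for the task adaptation module and task output head are illustrative assumptions; in the real system the marked tensors travel over the network, and the server defers the general-model update to the aggregation step at the end of the round.

```python
import torch
import torch.nn as nn

# Client-side modules: patch embedding e(.) and a simplified task head o(l(.)).
embed = nn.Conv2d(3, 64, kernel_size=16, stride=16)
head  = nn.Linear(256, 10)
# Server-side general model f(.), kept deliberately small here.
general = nn.Sequential(nn.Flatten(), nn.Linear(64 * 14 * 14, 256), nn.ReLU())

opt_client = torch.optim.SGD(list(embed.parameters()) + list(head.parameters()), lr=0.01)
opt_server = torch.optim.SGD(general.parameters(), lr=0.01)

x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))

# (1) client forward: patch embedding, "sent" to the server
x_e = embed(x)
x_e_srv = x_e.detach().requires_grad_()           # server-side copy of the activation
# server forward: general features, "returned" to the client
x_h = general(x_e_srv)
x_h_cli = x_h.detach().requires_grad_()           # client-side copy of the features
# (2) client loss and backward through the task-specific part
loss = nn.functional.cross_entropy(head(x_h_cli), y)
loss.backward()
# (3) feature gradient "sent" back; server backward through the general model
x_h.backward(x_h_cli.grad)
# (4) embedding gradient "returned"; client backward through the patch embedding
x_e.backward(x_e_srv.grad)

opt_client.step()     # client updates its modules immediately
opt_server.step()     # in FedBone the server would instead store these gradients
                      # and update the general model only after GPAggregation
```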
After the cloud server receives the gradients of the general model, it stores them and then sends the subsequent gradients of the task adaptation module to the original client. Clients with complete gradients can now update the parameters of the local patch embedding, task adaptation module, and task output head. When all selected clients have sent the gradients of the general model, the cloud server aggregates the gradients and updates the parameters of the general model. The specific details can be found in Algorithm 1. In Algorithm 1, clients compute the patch embedding \(e(\cdot)\), task adaptation \(l(\cdot)\) and task output head \(o(\cdot)\) with parameters \(\zeta,\eta\) and \(\phi\). The patch embedding module transforms raw data patches into flattened patch embeddings with a single convolution operation. The task adaptation module is built with deformable convolution and multi-head self-attention, which will be described in more detail in Section 3.5. The task output head can vary for heterogeneous tasks, but it typically contains convolution, normalization, and deconvolution operations. Computation on the clients has relatively low resource requirements and can be conducted on low-power edge devices. During client updates, the cloud server gradually gathers the task gradients \(\nabla_{k}^{t}\) for the subsequent gradient aggregation and general model update.

```
Require: Client set \(\mathcal{K}\) with local datasets \(\mathcal{D}_{k},\forall k\in\mathcal{K}\)
Require: General model \(\theta\), client task-specific modules \(\zeta,\eta,\phi\)
1: Server initializes \(\theta^{0}\); \(\forall\) client \(k\in\mathcal{K}\) initializes \(\zeta_{k}^{0},\eta_{k}^{0},\phi_{k}^{0}\)
2: for round \(t=0,...,T-1\) do
3:   for client \(k=1,...,K\) do
4:     Client patch embedding \(x_{k,e}^{t}\leftarrow e(x_{k};\zeta_{k}^{t})\)
5:     Server feature extraction \(x_{k,h}^{t}\leftarrow f(x_{k,e}^{t};\theta^{t})\)
6:     \(\frac{\partial\mathcal{L}_{k}}{\partial f}\leftarrow\textsc{ClientUpdate}(x_{k,h}^{t})\)
7:     \(\nabla_{k}^{t}=\frac{\partial\mathcal{L}_{k}}{\partial f}\frac{\partial f}{\partial\theta^{t}}\)
8:     Server sends \(\frac{\partial\mathcal{L}_{k}}{\partial e}\) to the client
9:     Client completes backward propagation
10:    Client optimizes \(\zeta_{k}^{t+1},\eta_{k}^{t+1},\phi_{k}^{t+1}\)
11:  end for
12:  Server gathers \(\nabla_{\mathcal{K}}^{t}=\{\nabla_{1}^{t},\nabla_{2}^{t},...,\nabla_{K}^{t}\}\)
13:  \(\nabla^{t}\leftarrow\textsc{GPAggregation}(\nabla_{\mathcal{K}}^{t},\nabla^{t-1})\)
14:  \(\theta^{t+1}\leftarrow\textsc{Optimizer}(\theta^{t},\nabla^{t})\)
15: end for
16: function ClientUpdate(\(x_{k,h}^{t}\))
17:   Task adaptation \(x_{k,l}^{t}\leftarrow l(x_{k,h}^{t};\eta_{k}^{t})\)
18:   Task output \(\hat{y}_{k}^{t}\leftarrow o(x_{k,l}^{t};\phi_{k}^{t})\)
19:   Task-specific loss computation \(\mathcal{L}_{k}(\hat{y}_{k}^{t},y_{k})\)
20:   \(\frac{\partial\mathcal{L}_{k}}{\partial f}\leftarrow\) backward propagation to the task adaptation module
21:   return \(\frac{\partial\mathcal{L}_{k}}{\partial f}\)
22: end function
```
**Algorithm 1** FedBone
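To make the exchange in Algorithm 1 concrete, the following is a minimal single-process sketch of one client mini-batch in PyTorch. The module names (`patch_embed`, `general_model`, `task_adapter`, `task_head`) and the exact tensor handling are illustrative assumptions rather than the authors' released implementation; in a deployed system the detached tensors and gradients would be exchanged over the network between client and cloud server.

```python
import torch
from torch import nn

def fedbone_minibatch(patch_embed: nn.Module, general_model: nn.Module,
                      task_adapter: nn.Module, task_head: nn.Module,
                      criterion, client_opt, x, y):
    """One client mini-batch of the split workflow (single-process sketch)."""
    # (1) client: patch embedding (zeta); the detached copy is "sent" to the server
    emb = patch_embed(x)
    emb_cut = emb.detach().requires_grad_(True)

    # server: feature extraction with the general model (theta)
    feats = general_model(emb_cut)

    # (2) the server "returns" the features; cut the graph again on the client side
    feats_cut = feats.detach().requires_grad_(True)

    # client: task adaptation (eta), task output head (phi), task-specific loss
    loss = criterion(task_head(task_adapter(feats_cut)), y)
    loss.backward()                      # gradients for eta, phi and feats_cut

    # (3) client sends d(loss)/d(features); server backpropagates the general model
    general_model.zero_grad()
    feats.backward(feats_cut.grad)
    grad_theta = [p.grad.detach().clone() for p in general_model.parameters()]

    # (4) server returns d(loss)/d(embedding); client finishes backprop for zeta
    emb.backward(emb_cut.grad)
    client_opt.step()
    client_opt.zero_grad()

    # (5) the per-client general-model gradients are merged later by GPAggregation
    return grad_theta
```

The two `detach().requires_grad_(True)` cuts correspond to the transfers in steps (1)-(2) of Figure 2, and the two backward calls to steps (3)-(4); the returned gradients are what the server aggregates once all selected clients have finished their mini-batches.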
### Gradients Aggregation via Conflicting Gradients Projection

The cloud server conducts gradient aggregation to optimize the parameters of the general model, which integrates the knowledge of all client tasks and improves the generalization capability of the general model. Learning multiple tasks simultaneously is a challenging optimization problem that can sometimes lead to poorer model performance [21], and the underlying optimization challenges are not well understood. In the federated learning scenario, things become even trickier, since most existing methods require access to raw data to build the relationships between tasks and determine the strategy for aggregating task gradients. To ease the need for raw data, one feasible approach is to attribute the difficulty of the multi-objective optimization problem to the existence of gradient conflicts, i.e., gradients from different tasks conflicting with one another [22], and to resolve them by correcting the gradients. We now define conflicting gradients formally.

**Definition 1** (Conflicting gradients).: _Define the angle between the gradients \(\nabla_{i}\) of client \(i\) and the gradients \(\nabla_{j}\) of client \(j\) as \(\omega_{ij}\); \(\nabla_{i}\) and \(\nabla_{j}\) are **conflicting gradients** when \(\cos\omega_{ij}<0\)._

As shown in Figure 3(a), conflicting gradients \(\nabla_{i}\) and \(\nabla_{j}\) have a negative impact on each other, and direct aggregation causes a reduction of the final gradients. An intuitive idea is to project one gradient \(\nabla_{i}\) onto the normal plane of another gradient \(\nabla_{j}\) with \[\nabla_{i}^{\prime}=\nabla_{i}-\frac{\nabla_{i}\cdot\nabla_{j}}{\|\nabla_{j}\|^{2}}\nabla_{j} \tag{2}\] to eliminate the opposing component, as shown in Figure 3(b). This method works well when the general model converges towards flatter minima, but certain clients may fall into a sharp valley, and the weight of such clients should be decreased in the aggregation procedure. To dampen the influence of clients that converge towards sharp minima, we propose a novel gradient aggregation method, **GPAggregation**, which makes use of the historical aggregated gradients. We rescale the gradients by calculating attention values against the historical aggregated gradients. A simple example is shown in Figure 3(c): the gradient \(\nabla_{i}\) is scaled by the attention \(\alpha_{i}\) and then projected onto the normal plane of the scaled gradient \(\nabla_{j}\). The gradient projection method is described in Algorithm 2. The task gradients \(\nabla_{k}\) are scaled by an attention mechanism with the historical aggregated gradients \(\nabla^{\prime}\): \[\nabla_{k}=softmax(\frac{\nabla_{k}\nabla^{\prime T}}{d_{\nabla}})\nabla_{k}. \tag{3}\] Iterating through the gradients of all other clients and projecting onto every normal plane, we obtain the de-conflicted task gradients, which can then be aggregated by averaging.

Figure 3: The gradient projection process of two task gradients \(\nabla_{i}\) and \(\nabla_{j}\). (a) The two gradients with conflicting gradient directions are aggregated directly, which can lead to interference. (b) The gradient \(\nabla_{i}\) is first projected onto the normal plane of the gradient \(\nabla_{j}\), and then they are aggregated. (c) The two gradients \(\nabla_{i}\) and \(\nabla_{j}\) are scaled with attention \(\alpha\), and the projection-aggregation procedure continues. (d) The red, yellow, and green lines represent the aggregated gradients of the three cases, respectively.
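A minimal sketch of this aggregation step is given below, assuming flattened per-client gradient vectors. The softmax-over-clients reading of the attention scaling in Eq. (3) and the function name `gp_aggregation` are assumptions made for illustration; Algorithm 2 remains the authoritative description.

```python
import torch

def gp_aggregation(client_grads, hist_grad, eps=1e-12):
    """Attention-rescale task gradients against the historical aggregated gradient,
    then project away pairwise conflicts (Eq. 2) before averaging.
    client_grads: list of 1-D tensors (flattened per-client gradients);
    hist_grad: previously aggregated gradient of the same shape."""
    d = hist_grad.numel()
    # attention weights from similarity to the historical aggregated gradient (Eq. 3)
    scores = torch.stack([g @ hist_grad / d for g in client_grads])
    alphas = torch.softmax(scores, dim=0)
    grads = [a * g for a, g in zip(alphas, client_grads)]

    # project each gradient onto the normal plane of every conflicting gradient
    projected = []
    for i, gi in enumerate(grads):
        gi = gi.clone()
        for j, gj in enumerate(grads):
            if i == j:
                continue
            dot = gi @ gj
            if dot < 0:  # conflicting gradients: cos(omega_ij) < 0
                gi = gi - dot / (gj @ gj + eps) * gj
        projected.append(gi)

    # averaged, de-conflicted gradient used to update the general model
    return torch.stack(projected).mean(dim=0)
```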
### Heterogeneous Task Adaptation

The large-scale general model can extract latent representations with sufficient information for handling heterogeneous tasks on the client side. In general multi-task learning, similar task output head structures are used for different tasks to reduce the complexity of optimizing the model parameters [17]. In a federated multi-task learning scenario, distribution shifts exist among clients, and the data are partitioned more unevenly than in centralized multi-task learning. As a result, the latent representations produced by the general model are more generalized and decoupled from the specific distribution of the client data. Relying solely on a lightweight task output head makes it challenging to extract further task-specific information from the general latent representations and apply it to accomplish the tasks, leading to a more obvious distribution shift. Inspired by the successful Deformable Convolutional Network [16] and Convolutional Transformer joint structure [21], we propose a heterogeneous task adaptation module that adaptively captures unique receptive regions specific to each task as well as task interactions. The heterogeneous task adaptation module uses channel-wise pooling, spatial-wise sampling, and intra-task attention to learn relevant task-specific features. Utilizing the reconstructed feature representations enables the task output head to perform downstream tasks more effectively and efficiently. As shown in Figure 4, the heterogeneous task adaptation module mainly consists of a \(1\times 1\) convolution, deformable convolution and a self-attention mechanism. The module takes the general latent representation \(x_{h}\) received from the cloud server, which is initially fed into a linear layer to reduce the channel dimension. A \(1\times 1\) convolution is then applied to the feature map to communicate between channels. Following a GELU activation, the resulting feature map is denoted as \(x^{\prime}_{h}\). Following [16], we first sample a regular grid \(\mathcal{R}\) over the input feature map \(x^{\prime}_{h}\) and then take the summation of the sampled values weighted by \(w\). To generate the relative offsets with respect to the reference point \(\mathbf{p}\), the full feature map \(x^{\prime}_{h}\) is fed to a convolution operator that learns the corresponding offsets \(\delta_{\mathbf{p}}\). \[x_{d}(\mathbf{p})=\sum_{\delta_{\mathbf{p}}\in\mathcal{R}}w(\mathbf{p})\cdot x^{\prime}_{h}(\mathbf{p}+\delta_{\mathbf{p}}). \tag{4}\]

Figure 4: Illustration of the task adaptation module.
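As a rough sketch of how such a block could be assembled, the snippet below combines channel reduction, a \(1\times 1\) convolution, a deformable convolution implementing the sampling of Eq. (4), and self-attention over spatial positions. The channel sizes, the \(3\times 3\) kernel, and the way attention is applied are illustrative assumptions, not the authors' exact architecture.

```python
import torch
from torch import nn
from torchvision.ops import DeformConv2d

class TaskAdapter(nn.Module):
    """Sketch of a heterogeneous task adaptation block (channel sizes are assumptions)."""
    def __init__(self, in_ch=768, mid_ch=256, heads=4):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, mid_ch, kernel_size=1)        # channel reduction
        self.mix = nn.Sequential(nn.Conv2d(mid_ch, mid_ch, 1), nn.GELU())
        self.offset = nn.Conv2d(mid_ch, 2 * 3 * 3, 3, padding=1)     # learns the offsets delta_p
        self.deform = DeformConv2d(mid_ch, mid_ch, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(mid_ch, heads, batch_first=True)

    def forward(self, x_h):                        # x_h: (B, in_ch, H, W) from the server
        x = self.mix(self.reduce(x_h))             # x'_h
        x = self.deform(x, self.offset(x))         # spatially adaptive sampling (Eq. 4)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C) for intra-task attention
        tokens, _ = self.attn(tokens, tokens, tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```

In practice the sequence of latent tokens returned by the backbone would first be reshaped into a 2-D feature map before being passed to such a module; that reshaping is omitted here.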
## 4 Experiments

### Experimental Setup

#### Implementation

We evaluate the performance of FedBone on two multi-task dense prediction datasets and compare the results with the common FL method FedAvg [15], the personalized FL methods FedProx [15] and pFedMe [15], and the multi-task FL method FedEM [14]. We determine the number of clients by the tasks of each dataset. For each task in a dataset, we randomly split it into 4 clients with equal data volumes, and partition the training and testing sets in an 8:2 ratio on the clients. For the FL methods used for comparison, we designed a fully convolutional [14] task-specific output head for each task. For every FL method, we set the number of communication rounds to 200. For FedBone, FedAvg and FedEM, we use SGD as the optimizer. For FedProx and pFedMe, we modified the SGD optimizer to fit the optimization process of each algorithm. The batch size is set to 16 and the learning rate is set to 0.01, scheduled to decay by a factor of 0.1 every 50 epochs. All our experiments are conducted with the PyTorch framework on 8 NVIDIA A800 80GB GPUs and 1TB of system memory.

#### Datasets

We adopt two publicly accessible datasets, NYUDv2 [21] and PASCAL-Context [15]. NYUDv2 contains 1,449 RGB images and provides dense labels for semantic segmentation, depth estimation, normal estimation, and boundary detection tasks. PASCAL-Context contains 10,180 training RGB images with dense labels for semantic segmentation, saliency estimation, normal estimation, and boundary detection tasks. Meanwhile, Xu [10] provides extra human parts annotations for 3,589 images, which act as the labels for the human part segmentation task.

#### Metrics

The chosen datasets comprise a total of 5 different types of tasks. For semantic segmentation tasks (including human part segmentation), we use mean Intersection over Union (mIoU) as the metric. For normal estimation tasks, mean Error (mErr) is adopted, and for boundary detection tasks, the optimal dataset scale F-measure (odsF) is used. For depth and saliency estimation, the Root Mean Square Error (RMSE) and the maximum F-measure (maxF) are used, respectively.

#### Backbones

We employ Swin Transformer Small (Swin-S) [13] pre-trained on ImageNet-22K [16] as the backbone for all experiments except the analysis of computational resource requirements. In order to accommodate the model-scale demands of a production environment, we use the larger Swin Transformer Base (Swin-B) model as the backbone to analyze the computational and memory resources required by the various FL methods on the client side.

### Dense Prediction Tasks

Table 1 and Table 2 present the performance of FedBone on the NYUDv2 dataset and the PASCAL-Context dataset. These tables compare FedBone against four methods, namely FedAvg, FedProx, pFedMe, and FedEM, across multiple tasks. For tasks such as segmentation, human-part segmentation, saliency, and boundary detection, higher values indicate better performance, whereas for depth and normal estimation, lower values indicate superior performance. Our method, FedBone, consistently outperforms all the comparative methods in the segmentation, human-part, saliency, and boundary tasks, demonstrating its superiority with higher metric values. Conversely, for the depth and normal tasks, FedBone consistently achieves lower values, indicating its better performance compared to the comparative methods. These results highlight the effectiveness and generalization of FedBone across diverse tasks. Overall, FedBone exhibits clear advantages over the comparative methods in terms of average accuracy. While there are a few isolated cases where FedBone, trained on a single client, falls slightly short compared to the comparative methods, it consistently outperforms them when considering the overall average accuracy. This underscores the significant advantage of our approach in terms of overall performance. The findings demonstrate the effectiveness of FedBone in achieving superior performance across multiple tasks, validating its potential as a robust method in federated learning scenarios.

### Analysis

#### Ablation Study

We conducted an ablation study of FedBone to evaluate the contribution of each component and setting.
The results are \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Segmentation(mIoU)\(\uparrow\)} & \multicolumn{4}{c}{Depth(RMSE)\(\downarrow\)} & \multicolumn{4}{c}{Normal(Err)} & \multicolumn{4}{c}{Bound(odsF)\(\uparrow\)} \\ & 1 & 2 & 3 & 4 & Avg & 1 & 2 & 3 & 4 & Avg & 1 & 2 & 3 & 4 & Avg & 1 & 2 & 3 & 4 & Avg \\ \hline FedAvg & 38.97 & 38.29 & **43.30** & 37.48 & 39.51 & 0.4283 & 0.5294 & 0.5571 & 0.710 & 0.5580 & 27.01 & 25.52 & 25.64 & 28.40 & 26.89 & 62.57 & 62.23 & 63.67 & 59.86 & 62.08 \\ FedProx & 29.25 & 31.92 & 31.68 & 24.66 & 29.38 & 0.4846 & 0.6095 & 0.6155 & 0.7892 & 0.6241 & 27.68 & 27.33 & 26.12 & 28.75 & 27.47 & 61.01 & 61.27 & 62.70 & 58.15 & 60.78 \\ FedMe & 33.14 & 33.32 & 33.90 & 26.74 & 31.78 & 0.4340 & 0.5314 & 0.5466 & 0.7457 & 0.5644 & 27.39 & 25.97 & 22.76 & **27.50** & 25.04 & 62.29 & 62.92 & 64.57 & 60.77 & 62.63 \\ FedEM & 41.57 & 38.33 & 41.97 & 0.341 & 0.505 & **0.02432** & 0.5221 & **0.5346** & 0.7215 & 0.5451 & 26.44 & 26.41 & 25.11 & 28.16 & 25.53 & 62.81 & **63.91** & 63.82 & 60.43 & 62.74 \\ \hline **FedBone(ours)** & **42.92** & **40.73** & 43.22 & **42.47** & **42.34** & 0.4594 & **0.5190** & 0.5407 & **0.6136** & **0.5332** & **23.67** & **25.32** & **22.57** & 27.78 & **24.84** & **63.46** & 63.74 & **65.04** & **61.32** & **63.39** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of FL methods on NYUDv2 dataset, \(\uparrow\) means higher is better, and \(\downarrow\) means lower is better. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Segment} & HumanPart Saliency & Normal & Bound \\ & (mIOU)\(\uparrow\) & (mIOU)\(\uparrow\) & (maxF)\(\uparrow\) & (mErr)\(\downarrow\) \\ \hline FedAvg & 52.71 & 56.12 & 82.97 & 17.66 & 61.23 \\ FedProx & 61.69 & 53.21 & 81.48 & 15.69 & 62.32 \\ pFedMe & 59.16 & 57.04 & 80.90 & 15.67 & 66.59 \\ FedEM & 51.10 & 53.79 & 82.15 & 19.64 & 59.27 \\ \hline **FedBone(ours)** & **62.74** & **58.09** & **84.36** & **15.13** & **66.42** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of FL methods on PASCAL-Context dataset \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Methods} & Parameters & GFLOPS & Memory \\ & (m) & (G) & (GB) \\ \hline FedAvg & 95.04 & 123.30 & 33.32 \\ FedProx & 95.04 & 123.30 & 33.68 \\ pFedMe & 285.13 & 369.91 & 36.13 \\ **FedBone(ours)** & **1.92**(+88.67) & **11.74**(+87.05) & **3.31**(+32.12) \\ \hline \hline \end{tabular} \end{table} Table 3: Computational resources required by FL methods on the client-side when training the Swin-B model. The + number in brackets means the required resources on the cloud server. presented in Figure 5. The baseline FedAvg is shown in the first row, while the +GPA and +TA indicate the addition of the GPAggregation and heterogeneous task adaptation module to the baseline. The table shows that compared to FedAvg, the addition of either the GPAggregation or task adaptation module results in improved performance, with the task adaptation module providing a more significant gain. This finding supports that heterogeneity between different tasks is a critical factor to consider when applying federated learning across tasks. The last bar of Figure 5 shows the performance of the proposed FedBone. By integrating both the GPAggregation and task adaptation module, FedBone achieves the best performance among the different settings evaluated. 
**Computational Resource Requirements** We conduct an analysis of the computational resource requirements to compare FedBone with the other FL methods. In Table 3, the FL methods FedAvg, FedProx, and pFedMe, which employ the fully convolutional task-specific head, have a total number of parameters similar to that of FedBone. However, FedBone utilizes the split learning paradigm, which places most computations on the cloud server; thus, the majority of parameters are not stored locally, resulting in a vast disparity in local computation and memory usage during training. FedEM implements ensemble learning and has triple the parameters of the common FL methods; nevertheless, its total memory usage is comparable since, in effect, it trains sequentially.

### Real-world Ophthalmic Tasks

To further investigate the effectiveness of our proposed method FedBone in real-world applications, we collect 12,912 color fundus images and label the images according to ophthalmic diseases, including high myopia maculopathy (HMM), retinal vein occlusion (RVO), proliferative retinopathy (PR), diabetic macular edema (DME), pathological myopia (PM), hypertensive retinopathy (HR), glaucoma (G), macular epiretinal membrane (MEM), and macular hole (MH). We label images that show potential pathological changes but cannot be diagnosed as any specific disease as needing further examination (FE). These ten labels, combined with the normal fundus label, form 10 binary classification (disease diagnosis) tasks. In addition to labeling for disease diagnosis, we also conduct labeling for two types of disease grading, i.e., age-related macular degeneration (AMD) grading and diabetic retinopathy (DR) grading. Together with the Retinal-Lesions [11] retinal lesion segmentation dataset, we build up a 13-task real-world ophthalmic dataset; the results are shown in Table 4. All FL methods perform well on the simple binary classification tasks in Table 4. Overall, the personalized FL methods, including FedProx, pFedMe, and FedEM, perform better than the common FL method FedAvg. Additionally, our proposed FedBone achieves the best performance on the vast majority of tasks. For the ophthalmic semantic segmentation task LS, FedBone outperforms all other FL methods, which shows the potential of FedBone in real medical scenarios.

## 5 Conclusion

In this paper, we proposed FedBone, a novel federated multi-task learning framework based on split learning for large-scale federated training on edge clients and heterogeneous task adaptation. To enhance the generalization of the general model, we introduced the aggregation method GPAggregation, which rescales client gradients with attention to historical gradients and resolves gradient conflicts between clients. The extensive experiments show that FedBone outperforms existing federated learning algorithms on heterogeneous tasks with off-the-shelf computational resources on the client side. The real ophthalmic experiment also indicates a promising future in using FedBone for real medical and healthcare applications. In the future, we may further extend FedBone to more data modalities and reduce the communication cost.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline Methods & HMM & RVO & RP & DME & PM & FE & HR & G & MEM & MH & AMD & DR & LS(mIOU) \\ \hline FedAvg & 94.33 & 93.72 & 96.07 & 93.52 & 91.27 & 78.27 & 94.35 & 94.22 & 92.25 & 94.56 & 79.63 & 63.86 & 49.71 \\ FedProx & 97.91 & 97.86 & 94.3 & 94.61 & 92.36 & 80.2 & 93.20 & 95.88 & 94.58 & 97.64 & 88.47 & 92.98 & 50.69 \\ pFedMe & 98.62 & 96.57 & 94.94 & 98.01 & 92.88 & 81.3 & 94.17 & 96.08 & 95.99 & 97.68 & 90.31 & 92.06 & 52.63 \\ FedEM & 98.79 & **99.41** & 96.14 & 96.22 & 91.25 & 83.3 & 95.23 & 96.22 & 95.92 & 97.03 & 90.15 & **94.68** & 54.21 \\ \hline **FedBone(Ours)** & **98.87** & 98.91 & **99.24** & **99.13** & **93.71** & **84.57** & **95.90** & **96.75** & **96.15** & **98.93** & **91.18** & 94.57 & **55.82** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of FL methods on real ophthalmic dataset Figure 5: The overview of our proposed framework ## Acknowledgments This work is supported by Beijing Municipal Science & Technology Commission (No.Z2211000002722009), National Natural Science Foundation of China (No.62202455), Youth Innovation Promotion Association CAS, and the Science Research Foundation of the Joint Laboratory Project on Digital Ophthalmology and Vision Science (No. SZYK202201).
2307.16452
A continuous Structural Intervention Distance to compare Causal Graphs
Understanding and adequately assessing the difference between a true and a learnt causal graph is crucial for causal inference under interventions. As an extension to the graph-based structural Hamming distance and structural intervention distance, we propose a novel continuous-valued metric that considers the underlying data in addition to the graph structure for its calculation of the difference between a true and a learnt causal graph. The distance is based on embedding intervention distributions over each pair of nodes as conditional mean embeddings into reproducing kernel Hilbert spaces and estimating their difference by the maximum (conditional) mean discrepancy. We show theoretical results which we validate with numerical experiments on synthetic data.
Mihir Dhanakshirur, Felix Laumann, Junhyung Park, Mauricio Barahona
2023-07-31T07:20:26Z
http://arxiv.org/abs/2307.16452v1
# A continuous Structural Intervention Distance to compare Causal Graphs

###### Abstract

Understanding and adequately assessing the difference between a true and a learnt causal graph is crucial for causal inference under interventions. As an extension to the graph-based structural Hamming distance and structural intervention distance, we propose a novel continuous-valued metric that considers the underlying data in addition to the graph structure for its calculation of the difference between a true and a learnt causal graph. The distance is based on embedding intervention distributions over each pair of nodes as conditional mean embeddings into reproducing kernel Hilbert spaces and estimating their difference by the maximum (conditional) mean discrepancy. We show theoretical results which we validate with numerical experiments on synthetic data.

## 1 Introduction

In causal learning settings, we assume that data are generated according to a Structural Causal Model (SCM). The directional relationships between variables in an SCM originate from an underlying directed acyclic graph (DAG) under the causal Markov assumption (Peters et al., 2017, Section 6.5). The data-generating DAG may thus be called the _true_ DAG. The task in any causal learning problem is to derive (or learn) this true DAG given access to the observational data generated by the underlying SCM. Hence, we call the result of the effort to derive the causal relationships embedded in the observational data the _learnt_ DAG. In the present work, we are concerned with the problem of estimating the performance of a causal structure learning, or causal discovery, algorithm by measuring its ability to accurately recover the true DAG, including its potentially varying edge weights. Many widely used metrics exist (Peyrard and West, 2020; Acharya et al., 2018; Singh et al., 2017; Garant and Jensen, 2016; Peters and Buhlmann, 2015; Acid and de Campos, 2003). However, the most prominent ones, the Structural Hamming Distance and the Structural Intervention Distance, are dominated by graph properties only and do not directly take the underlying data into account. The Structural Hamming Distance (SHD) is the square of the Frobenius norm of the difference between the two (binary) adjacency matrices (of the true and learnt DAGs), i.e., it counts the number of edges in the learnt DAG that need to be added and removed so that it equals the true DAG. On the other hand, the Structural Intervention Distance (SID) counts the number of pairwise interventional distributions on which the true DAG and the learnt DAG differ. Our proposed distance, the _continuous Structural Intervention Distance_ (contSID), is based on both graph and data properties: it computes the distance between each pairwise interventional distribution implied by the observational distribution in the true and learnt DAGs. The continuous SID has the following advantages over the SHD and SID:

1. Advantage over SHD: The goal of estimating a DAG from observational data is to later use it to estimate effects under interventions. However, the SHD merely calculates the number of changes in edges that are required to transform one DAG to another. Hence, two DAGs having the same SHD may still differ significantly in the interventional effects they imply.
2. Advantage over SID: The SID is computed based on a binary count (whether there is a difference in the effect or not) and cannot quantify the difference in the interventional distributions inferred by the two DAGs--important when weights are expected to vary across edges. This poses a problem when practitioners are interested in the quantitative discrepancies between interventions. The effect of an intervention beyond a binary count cannot be assessed without observational data, to which we have access because the original causal structure learning is conducted on observational data.

We demonstrate the issues of the SHD and SID by considering the following introductory example. We assume that data are synthetically generated by a linear model with additive Gaussian noise (1) according to the DAG \(\mathcal{G}_{1}\) (Figure 1(a)). \[V_{1},V_{2}\sim\mathcal{N}(0,1) \tag{1}\] \[V_{3}\sim\mathcal{N}(10V_{1}+V_{2},1)\] The edge connecting \(V_{1}\) and \(V_{3}\) has a mean "weight" of 10. Now, suppose \(\mathcal{G}_{2}\) (Figure 1(b)) and \(\mathcal{G}_{3}\) (Figure 1(c)) are two learnt DAGs (they could be the outcomes of two different causal discovery algorithms). We benchmark the quality of the learnt DAGs by comparing them across different metrics: Table 1 reports the SHD, SID and contSID evaluated for the pairs of DAGs \((\mathcal{G}_{1},\mathcal{G}_{2})\) and \((\mathcal{G}_{1},\mathcal{G}_{3})\).

\begin{table} \begin{tabular}{l l l} \hline Metric & \(d(\mathcal{G}_{1},\mathcal{G}_{2})\) & \(d(\mathcal{G}_{1},\mathcal{G}_{3})\) \\ \hline SHD & 1 & 1 \\ SID & 1 & 1 \\ contSID & 0.23 & 0.39 \\ \hline \end{tabular} \end{table} Table 1: SHD, SID and contSID calculated on \(d(\mathcal{G}_{1},\mathcal{G}_{2})\) and \(d(\mathcal{G}_{1},\mathcal{G}_{3})\).

Intuitively, missing the edge \(V_{1}\to V_{3}\) should be penalized more than missing the edge \(V_{2}\to V_{3}\), since an intervention on \(V_{1}\) would lead to a larger difference in the distribution of \(V_{3}\) than the same intervention on \(V_{2}\) (see Table 1). Hence, an appropriate metric should indicate that \(\mathcal{G}_{2}\) is a more accurate approximation of \(\mathcal{G}_{1}\) than \(\mathcal{G}_{3}\). However, both the SHD and the SID weigh missing the edges \(V_{1}\to V_{3}\) and \(V_{2}\to V_{3}\) equally. For a pair of DAGs, contSID quantifies the pairwise difference in the interventional distributions by using the observational distribution (via the valid adjustment set/backdoor set formula) as a mean embedding, that is, a unique representation of the interventional distribution in a reproducing kernel Hilbert space (RKHS). As previously described in Peters and Buhlmann (2015), the SHD does not take into account the importance of an edge in terms of its impact on the interventional distributions, whereas the SID does. However, the SID of \((\mathcal{G}_{1},\mathcal{G}_{2})\) and of \((\mathcal{G}_{1},\mathcal{G}_{3})\) is still the same, although missing the edge \(V_{1}\to V_{3}\) is clearly more influential on the resulting interventional distribution of \(V_{3}\) than missing \(V_{2}\to V_{3}\).

We structure the paper as follows. After this Introduction, we provide sufficient Background in Section 2 to understand how we can use intervention mean embeddings (Section 3) to derive the Continuous Structural Intervention Distance in Section 4. We demonstrate numerically the validity of our proposed metric (Section 5) and conclude with a brief discussion (Section 6).
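The example can be reproduced in a few lines. The following numpy sketch, which is not part of the paper, simulates the SCM of Eq. (1) and computes the SHD of the two learnt DAGs from their adjacency matrices (row = parent, column = child is our own convention); it shows that the SHD cannot distinguish the two mistakes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Simulate the SCM of Eq. (1): V3 = 10*V1 + V2 + Gaussian noise
V1 = rng.normal(0.0, 1.0, N)
V2 = rng.normal(0.0, 1.0, N)
V3 = 10.0 * V1 + V2 + rng.normal(0.0, 1.0, N)
data = np.column_stack([V1, V2, V3])

# Adjacency matrices: A[i, j] = 1 iff V_{i+1} -> V_{j+1}
G1 = np.array([[0, 0, 1],   # true DAG: V1 -> V3 and V2 -> V3
               [0, 0, 1],
               [0, 0, 0]])
G2 = np.array([[0, 0, 1],   # learnt DAG missing V2 -> V3
               [0, 0, 0],
               [0, 0, 0]])
G3 = np.array([[0, 0, 0],   # learnt DAG missing V1 -> V3
               [0, 0, 1],
               [0, 0, 0]])

def shd(A, B):
    """Structural Hamming Distance: number of edge additions/removals."""
    return int(np.abs(A - B).sum())

print(shd(G1, G2), shd(G1, G3))  # 1 1 -- identical, despite the very different
                                 # interventional impact of the missing edges
```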
## 2 Background We consider a finite collection of random variables \(X_{1},\ldots,X_{D}\) with an index set \(\mathbf{V}=\{1,\ldots,D\}\). A graph \(\mathcal{G}=(\mathbf{V},\mathcal{E})\) then consists of nodes \(\mathbf{V}\) and edges \(\mathcal{E}\subseteq\mathbf{V}\times\mathbf{V}\). We identify a node \(V_{j}\in\mathbf{V}\) with its corresponding random variable \(X_{j}\). We denote the parent set of a node \(X_{i}\) by \(\mathbf{PA}_{i}:=\{X_{j}|(V_{i},V_{j})\in\mathcal{E},1\leq j\leq D\}\). We will use variables, nodes and vertices interchangeably depending on the context. We assume that the observational data \(\mathcal{D}=\{x_{1}^{(n)},\ldots,x_{D}^{(n)}\}_{n=1}^{N}\) are sampled from a distribution \(P\) which has a density \(p(\cdot)\) with respect to the Lebesgue or counting measure. Additionally, we require that the distribution is Markov with respect to the graph \(\mathcal{G}\). **Definition 2.1** (Causal Markov assumption (Peters et al. (2017), Definition 6.21)).: _The distribution \(P\) is Markov with respect to a DAG \(\mathcal{G}\) if \(\textbf{A}\perp\!\!\!\perp_{\mathcal{G}}\textbf{B}|\textbf{C}\implies\textbf{A} \perp\!\!\!\perp\textbf{B}|\textbf{C}\) for all disjoint vertex sets \(\textbf{A},\textbf{B},\textbf{C}\), where \(\perp\!\!\!\perp_{\mathcal{G}}\) denotes \(d\)-separation (Peters et al., 2017, Definition 6.1)._ The converse of the causal Markov assumption is known as the faithfulness assumption which links conditional independence in \(P\) to d-separation in \(\mathcal{G}\). Both assumptions together imply the required intrinsic link between the existence of edges in a causal DAG and the joint distribution of the observed variables. **Definition 2.2** (Faithfulness assumption (Peters et al. (2017), Definition 6.33)).: _If two random variables are (conditionally) independent in the observed distribution \(P\), then they are d-separated in the underlying DAG \(\mathcal{G}\)._ We also assume causal sufficiency, i.e., there are no hidden, or unobserved, variables that play a causal role in the system. ### Interventional distribution and _do_-calculus Given random variables \(X_{i}\) and \(X_{j}\) where \(i\neq j\), we try to estimate the distribution \(P_{X_{j}|do(X_{i})=\hat{x}_{i}}\), where \(do(X_{i})=\hat{x}_{i}\) represents an intervention on \(X_{i}\) whose value is set to \(\hat{x}_{i}\). This distribution is not directly observed since we are usually only given observational data. The _do_-calculus (Pearl, 2009) enables us to estimate interventional distributions from observational distributions using a known DAG through valid adjustment sets (Peters and Buhlmann, 2015). **Definition 2.3** (Valid adjustment set).: _Let \(X_{j}\notin\mathbf{PA}_{i}\) (otherwise we have \(P_{X_{j}|do(X_{i})}=P_{X_{j}}\), meaning interventions have no effect). We call a set \(\mathbf{Z}\subseteq\mathbf{V}\setminus\{V_{i},V_{j}\}\) a valid adjustment set for the ordered pair \((X_{i},X_{j})\) if_ \[p(x_{j}|do(X_{i})=\hat{x}_{i})=\int_{\mathbf{z}}p(x_{j}|\hat{x}_{i},\mathbf{z })p(\mathbf{z}). \tag{2}\] For discrete distributions, Equation (2) becomes a summation instead of an integration. We can characterize valid adjustment sets using the following theorem. **Theorem 2.4** (Characterization of valid adjustment sets (Peters and Buhlmann, 2015; Shpitser et al., 2012)).: _Consider a pair of variables \((X_{i},X_{j})\) and a subset \(\mathbf{Z}\subseteq\mathbf{V}\setminus\{V_{i},V_{j}\}\). 
Suppose \(\mathbf{Z}\) satisfies the following property: In \(\mathcal{G}\), no \(Z\in\mathbf{Z}\) is a descendant of any \(X_{k}\) which lies on a directed path from \(X_{i}\) to \(X_{j}\)(except for any descendants of \(X_{i}\) that are not on a directed path from \(X_{i}\) to \(X_{j}\)) and \(\mathbf{Z}\) blocks all non-directed paths from \(X_{i}\) to \(X_{j}\). Then_ * _If_ \(\mathbf{Z}\) _satisfies this property with respect to_ \((\mathcal{G},X_{i},X_{j})\)_, then_ \(\mathbf{Z}\) _is a valid adjustment set for_ \(P_{X_{j}|do(X_{i})}\)_._ * _If_ \(\mathbf{Z}\) _does not satisfy this property with respect to_ \((\mathcal{G},X_{i},X_{j})\)_, then there exists a distribution_ \(P^{\prime}\) _(not necessarily equal to_ \(P\)_), with density_ \(p^{\prime}\)_, that is Markov with respect to_ \(\mathcal{G}\) _and leads to_ \(p^{\prime}(x_{j}|do(X_{i}=\hat{x}_{i})\neq\int_{\mathbf{z}}p^{\prime}(x_{j}|x_{ i},\mathbf{z})p^{\prime}(\mathbf{z})\)_, i.e.,_ \(\mathbf{Z}\) _is not a valid adjustment set._ Note that for a pair of nodes \((X_{i},X_{j})\) there exist many valid adjustment sets. The parent adjustment set, formed by taking \(\mathbf{Z}\) to be the set of parents \(\mathbf{PA}_{i}\) of \(X_{i}\) is a valid adjustment set that can be easily read off from a graph. ### Conditional mean embeddings and the MCMD A mean embedding is a mapping of a probability distribution into an RKHS by a kernel k. This mapping is one-to-one if the kernel is characteristic (Fukumizu et al., 2007). We adopt the measure-theoretic approach to kernel conditional mean embeddings (Park and Muandet, 2020), rather than the definition based on operators between RKHSs as introduced by (Song et al., 2009). The measure-theoretic approach has the advantage of not relying on stringent assumptions for the population version of the embedding to exist, and comes with a natural regression interpretation for empirical estimates. The maximum (conditional) mean discrepancy (MMD) is a measure of discrepancy between distributions that is widely-used in the machine learning community due to its elegance, attractive theoretical properties and ease of empirical estimation, and forms the backbone of our approach in this paper; however, we do note that there are many other measures of discrepancy between distributions, and leave it as interesting future research direction to investigate how those can be utilised for the problem we tackle in this paper. In this section, we present the preliminaries of the conditional mean embedding and discuss its empirical estimates in Section 2.3. The results presented here hold generally--we adapt them to our setting in Section 3. As in Park and Muandet (2020), let \((\Omega,\mathcal{F},\mathcal{P})\) be the underlying probability space, let \((\mathcal{X},\mathfrak{X})\) and \((\mathcal{Z},\mathfrak{Z})\) be separable measurable spaces, and let \(X:\Omega\to\mathcal{X}\) and \(Z:\Omega\to\mathcal{Z}\) be random variables with distributions \(P_{X}\) and \(P_{Z}\). Let \(\mathcal{H}_{\mathcal{X}}\) be a vector space of \(\mathcal{X}\to\mathbb{R}\) functions endowed with a Hilbert space structure via an inner product \(\langle\cdot,\cdot\rangle_{\mathcal{H}_{\mathcal{X}}}\). 
A symmetric function \(k_{\mathcal{X}}:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) is a reproducing kernel of \(\mathcal{H}_{\mathcal{X}}\) if and only if (i) \(\forall x\in\mathcal{X},k_{\mathcal{X}}(x,\cdot)\in\mathcal{H}_{\mathcal{X}}\); and (ii) \(\forall x\in\mathcal{X}\) and \(\forall f\in\mathcal{H}_{\mathcal{X}},f(x)=\langle f,k_{\mathcal{X}}(x,\cdot) \rangle_{\mathcal{H}_{\mathcal{X}}}\). **Definition 2.5** (Kernel mean embedding).: _Given a distribution \(P_{X}\) on \(\mathcal{X}\) and assuming \(\mathbb{E}_{X}[k_{\mathcal{X}}(X,X)]<\infty\), we define the kernel mean embedding of \(P_{X}\) as \(\mu_{P_{X}}(\cdot)=\mathbb{E}_{X}[k_{\mathcal{X}}(X,\cdot)]\)_ **Definition 2.6** (Characteristic kernel).: _A positive definite kernel \(k_{\mathcal{X}}\) is characteristic to a set \(\mathcal{P}\) of probability measures on \(\mathcal{H}_{\mathcal{X}}\) if the map \(\mathcal{P}\to\mathcal{H}_{\mathcal{X}}:P_{X}\mapsto\mu_{P_{X}}\) is injective._ Popular kernels like the Gaussian and Laplacian kernel are characteristic. The RKHS associated with a characteristic kernel is rich enough to enable us to distinguish between different distributions using their embeddings. In other words, we can define the MMD, on \(\mathcal{P}\): for \(P_{X},P_{X^{\prime}}\in\mathcal{P}\), let \(||\mu_{P_{X}}-\mu_{P_{X^{\prime}}}||\) be their MMD. **Definition 2.7** (Conditional mean embedding (Park and Muandet, 2020)).: _Suppose \(X\) satisfies \(\mathbb{E}_{X}[k_{\mathcal{X}}(X,X)]<\infty\). Then, we define the conditional mean embedding of \(X\) given \(Z\) as:_ \[\mu_{P_{X|Z}}\coloneqq\mathbb{E}_{X|Z}\left[k_{\mathcal{X}}(X,\cdot)|Z\right] \tag{3}\] The conditional mean embedding \(\mu_{P_{X|Z}}\) is a \(Z\)-measurable random variable taking values in \(\mathcal{H}_{\mathcal{X}}\). The following theorem is used in estimating the conditional mean embedding (CME) of the conditional distribution \(P_{X|Z}\). **Theorem 2.8** (Deterministic function of conditional mean embedding (Park and Muandet, 2020)).: _Denote the Borel \(\sigma\)-algebra of \(\mathcal{H}_{\mathcal{X}}\) by \(\mathcal{B}(\mathcal{H}_{\mathcal{X}})\). Then we can write \(\mu_{P_{X|Z}}=F_{P_{X|Z}}\circ Z\), where \(F_{P_{X|Z}}:\mathcal{Z}\to\mathcal{H}_{\mathcal{X}}\) is some deterministic function, measurable with respect to \(\mathfrak{Z}\) and \(\mathcal{B}(\mathcal{H}_{\mathcal{X}})\)._ For \(z\in\mathcal{Z}\), \(F_{P_{X|Z}}(z)=\mathbb{E}_{X}[k_{\mathcal{X}}(X,\cdot)|Z=z]=\mu_{P_{X|Z=z}}\) which is the kernel mean embedding of the distribution \(P_{X|Z=z}\). Consider the random variables \(X^{\prime}:\Omega\to\mathcal{X}\) and \(Z^{\prime}:\Omega\to\mathcal{Z}\) with \(E_{X^{\prime}}[k_{\mathcal{X}}(X^{\prime},X^{\prime})]<\infty\). By Theorem 2.8, \(\mu_{P_{X^{\prime}|Z^{\prime}}}=F_{P_{X^{\prime}|Z^{\prime}}}\circ Z^{\prime}\). The analog to the MMD for conditional distributions \(P_{X|Z}\) and \(P_{X^{\prime}|Z^{\prime}}\), the maximum conditional mean discrepancy (MCMD), is defined below: **Definition 2.9** (Maximum conditional mean discrepancy (Park and Muandet, 2020)).: _The maximum conditional mean discrepancy (MCMD) between \(P_{X|Z}\) and \(P_{X^{\prime}|Z^{\prime}}\) is the function from \(\mathcal{Z}\to\mathbb{R}\) defined by_ \[\text{MCMD}_{P_{X|Z},P_{X^{\prime}|Z^{\prime}}}(z)=||F_{P_{X|Z}}(z)-F_{P_{X^{ \prime}|Z^{\prime}}}(z)||_{\mathcal{H}_{\mathcal{X}}} \tag{4}\] Note that the MCMD at \(z\in\mathcal{Z}\) is equal to the MMD between the distributions \(P_{X|Z=z}\) and \(P_{X^{\prime}|Z^{\prime}=z}\). 
We use this later in section 2.3 to construct a plug-in estimate of the MMD. ### Empirical estimates By Theorem 2.8, the task of estimating \(\mu_{P_{X|Z}}\) has been simplified to estimating \(F_{P_{X|Z}}:\mathcal{X}\to\mathcal{H}_{\mathcal{X}}\). This is precisely the setting of vector-valued regression with input space \(\mathcal{X}\) and output space \(\mathcal{H}_{\mathcal{X}}\). The problem of estimating \(F_{P_{X|Z}}\) can be reformulated as finding the vector-valued function that minimizes the loss \(\mathcal{E}_{X|Z}(F)\coloneqq E_{Z}\left[||F_{P_{X|Z}}(Z)-F(Z)||_{\mathcal{H}_ {\mathcal{X}}}^{2}\right]\) among all \(F\in\mathcal{G}_{\mathcal{X}\mathcal{Z}}\), where \(\mathcal{G}_{\mathcal{X}\mathcal{Z}}\) is a vector-valued RKHS of functions \(\mathcal{Z}\to\mathcal{H}_{\mathcal{X}}\). For simplicity, we endow \(\mathcal{G}_{\mathcal{X}\mathcal{Z}}\) with a kernel \(l_{\mathcal{X}\mathcal{Z}}(z,z^{\prime})=k_{\mathcal{Z}}(z,z^{\prime})\)\(I^{\prime}\) where \(k_{\mathcal{Z}}(\cdot,\cdot)\) is a scalar kernel on \(\mathcal{Z}\) and \(I^{\prime}\) is the identity operator. We cannot minimize \(\mathcal{E}_{X|Z}\) directly, since we do not observe samples from \(\mu_{P_{X|Z}}\), but only the pairs \((x_{i},z_{i})\) from \((X,Z)\). We bound this with a surrogate loss \(\tilde{\mathcal{E}}_{X|Z}\) that has a sample-based version: \[\mathcal{E}_{X|Z}(F) =E_{Z}\left[||E_{X|Z}\left[k_{\mathcal{X}}(X,\cdot)-F(Z)|Z\right] ||_{\mathcal{H}_{\mathcal{X}}}^{2}\right]\] \[\leq E_{Z}E_{X|Z}\left[||k_{\mathcal{X}}(X,\cdot)-F(Z)||_{\mathcal{ H}_{\mathcal{X}}}^{2}|Z\right]\] \[=E_{X,Z}\left[||k_{\mathcal{X}}(X,\cdot)-F(Z)||_{\mathcal{H}_{ \mathcal{X}}}^{2}\right]\] \[=:\tilde{\mathcal{E}}_{X|Z}(F)\] For details regarding the use of the surrogate loss function and its meaning, see Park and Muandet (2020). We empirically estimate the surrogate population loss \(\tilde{\mathcal{E}}_{X|Z}\) using a regularized loss function \(\tilde{\mathcal{E}}_{X|Z,N,\lambda}\) for \(\{(x^{(n)},z^{(n)})\}_{n=1}^{N}\) from the joint distribution \(P_{XZ}\), \[\tilde{\mathcal{E}}_{X|Z,N,\lambda}(F)\coloneqq\frac{1}{N}\sum_{n=1}^{N}||k_{ \mathcal{X}}(x^{(n)},\cdot)-F(z^{(n)})||_{\mathcal{H}_{\mathcal{X}}}^{2}+ \lambda||F||_{\mathcal{G}_{\mathcal{X}}z}^{2}\, \tag{5}\] where \(\lambda\) is a regularization parameter. We use the following theorem. **Theorem 2.10** (Loss function (Micchelli and Pontil, 2005)).: _Suppose we want to perform regression with input space \(\mathcal{Z}\) and output space \(\mathcal{H}\), by minimizing_ \[\frac{1}{N}\sum_{n=1}^{N}||h^{(n)}-F(z^{(n)})||_{\mathcal{H}}^{2}+\lambda||F||_{ \mathcal{G}}^{2}\] _where \(\lambda>0\) is a regularization parameter, \(\mathcal{G}\) is an \(\mathcal{H}\)-valued RKHS on \(\mathcal{Z}\) with \(\mathcal{H}\)-kernel \(\Gamma\) and \(\{(z^{(n)},h^{(n)}):n=1,\ldots,N\}\subseteq\mathcal{Z}\times\mathcal{H}\). If \(\tilde{F}\) minimizes the above equation in \(\mathcal{G}\), it is unique and has the form \(\tilde{F}=\sum_{n=1}^{N}\Gamma(\cdot,z^{(n)})(u^{(n)})\) where the coefficients \(\{u^{(n)}:n=1,\ldots,N\}\subseteq\mathcal{H}\) are the unique solution of the linear equations \(\sum_{n^{\prime}=1}^{N}\left(\Gamma(z^{(n)},z^{(n^{\prime})})+N\lambda\delta _{n,n^{\prime}}\right)(u^{(n^{\prime})})=h^{(n)},n=1,\ldots,N\) (\(\delta_{n,n^{\prime}}\) is the Kronecker delta)._ Our loss function matches the form in the Theorem 2.10. 
Therefore, by the Theorem 2.10, the minima \(\hat{F}_{P_{X|Z,N,\lambda}}\) of \(\tilde{\mathcal{E}}_{X|Z,N,\lambda}\) is \(\hat{F}_{P_{X|Z,N,\lambda}}(\cdot)=\mathbf{k}_{Z}^{T}(\cdot)\mathbf{f}\) where \(\mathbf{k}_{Z}(\cdot)\coloneqq(k_{\mathcal{Z}}(z^{(1)},\cdot),\ldots,k_{ \mathcal{Z}}(z^{(N)},\cdot))^{T}\), \(\mathbf{f}\coloneqq(f^{(1)},\ldots,f^{(N)})^{T}\) and the coefficients \(f^{(n)}\in\mathcal{H}_{\mathcal{X}}\) are the unique solutions of the linear equations \((\mathbf{K}_{Z}+N\lambda\mathbf{I})\mathbf{f}=\mathbf{k}_{X}\), where \(\left[\mathbf{K}_{Z}\right]_{ij}\coloneqq k_{\mathcal{Z}}(z^{(i)},z^{(j)})\), \(\mathbf{k}_{X}\coloneqq(k_{\mathcal{X}}(x^{(1)},\cdot),\ldots,k_{\mathcal{X}} (x^{(N)},\cdot))^{T}\) and \(\mathbf{I}\) is the \(N\times N\) identity matrix. Hence, the coefficients are \(\mathbf{f}=\mathbf{W}\mathbf{k}_{X}\), where \(\mathbf{W}=(\mathbf{K}_{Z}+N\lambda\mathbf{I})^{-1}\). Finally, we get \[\hat{F}_{P_{X|Z,N,\lambda}}(\cdot)=\mathbf{k}_{Z}^{T}(\cdot)\mathbf{W}\mathbf{ k}_{X}\in\mathcal{G}_{X\mathcal{Z}}\] We now construct the empirical estimator of the MCMD between the distributions \(P_{X|\mathcal{Z}}\) and \(P_{X^{\prime}|Z^{\prime}}\). Given samples \(\{(x^{(n)},z^{(n)})\}_{n=1}^{N}\),\(\{(x^{(n)},z^{(n)})\}_{n=1}^{N}\) from distributions \(P_{XZ},P_{X^{\prime}Z^{\prime}}\), we estimate the MCMD as \[\widehat{\mathrm{MCMD}}_{P_{X|Z},P_{X^{\prime}|Z^{\prime}}}(\cdot) =||\hat{F}_{P_{X|Z},N,\lambda}(\cdot)-\hat{F}_{P_{X^{\prime}|Z^{ \prime}},N,\lambda}(\cdot)||_{\mathcal{H}_{X}} \tag{6}\] \[=\left(\mathbf{k}_{Z}^{T}(\cdot)\mathbf{W}_{Z}\mathbf{K}_{X} \mathbf{W}_{Z}\mathbf{k}_{Z}(\cdot)+\mathbf{k}_{Z^{\prime}}^{T}(\cdot)\mathbf{ W}_{Z^{\prime}}\mathbf{K}_{X^{\prime}}\mathbf{W}_{Z^{\prime}}\mathbf{k}_{Z^{\prime}}(\cdot)\right.\] \[\left.-2\mathbf{k}_{Z}^{T}(\cdot)\mathbf{W}_{Z}\mathbf{K}_{XX^{ \prime}}\mathbf{W}_{Z^{\prime}}\mathbf{k}_{Z^{\prime}}(\cdot)\right)^{1/2}\] where \([\mathbf{K}_{X}]_{st}=k_{\mathcal{X}}(x^{(s)},x^{(t)})\), \([\mathbf{K}_{X^{\prime}}]_{st}=k_{\mathcal{X}}(x^{\prime(s)},x^{\prime(t)})\), \([\mathbf{K}_{XX^{\prime}}]_{st}=k_{\mathcal{X}}(x^{(s)},x^{\prime(t)})\), \([\mathbf{K}_{Z^{\prime}}]_{st}=k_{\mathcal{Z}}(z^{\prime(s)},z^{\prime(t)})\), \(\mathbf{k}_{Z^{\prime}}(\cdot)=(k_{\mathcal{Z}}(z^{\prime(1)},\cdot),\ldots,k_ {\mathcal{Z}}(z^{\prime(N)},\cdot))\), \(\mathbf{W}_{Z}=(\mathbf{K}_{Z}+N\lambda\mathbf{I})^{-1}\) and \(\mathbf{W}_{Z^{\prime}}=(\mathbf{K}_{Z^{\prime}}+N\lambda\mathbf{I})^{-1}\). ## 3 Intervention mean embeddings ### Definition We derive the mean embedding for the interventional distribution given in Equation (1). Recall that \(X_{d}:\Omega\to\mathcal{X}_{d},1\leq d\leq D\) are random variables where \((\mathcal{X}_{d},\mathfrak{X}_{d})\) are separable measurable spaces. For \(1\leq d\leq D\), \(\mathcal{H}_{\mathcal{X}_{d}}\) denotes the RKHS of functions on \(\mathcal{X}_{d}\) with reproducing kernel \(k_{\mathcal{X}_{d}}(\cdot,\cdot)\). For an intervened node \(X_{i}\), target node \(X_{j}\) and a valid adjustment set \(\mathbf{Z}\) for the pair \((X_{i},X_{j})\), \(j\neq i\), let \(\mu_{P_{X_{j}|d\mathcal{X}_{(i)}=i_{i}}}\) denote the intervention mean embedding (IME) corresponding to the interventional distribution \(P_{X_{j}|d\mathcal{X}_{(i)}=\hat{x}_{i}}\). Let \(\mu_{P_{X_{j}|X_{i},\mathbf{Z}}}=\mathbb{E}_{X_{j}|X_{i},\mathbf{Z}}[k_{ \mathcal{X}_{j}}(X_{j},\cdot)|X_{i},\mathbf{Z}]\). 
Then, by Theorem 2.8, we can write \(\mu_{P_{X_{j}|X_{i},\mathbf{Z}}}=F_{P_{X_{j}|X_{i},\mathbf{Z}}}\circ(X_{i}, \mathbf{Z})\), where \(F_{P_{X_{j}|X_{i},\mathbf{Z}}}:\mathcal{X}_{i}\times\boldsymbol{\mathcal{Z}} \to\mathcal{H}_{\mathcal{X}_{j}}\) is some deterministic function measur able with respect to \(\mathbf{\tilde{x}}_{i}\times\mathbf{\tilde{3}}\) and \(\mathcal{B}(\mathcal{H}_{\mathcal{X}_{j}})\). \[\mu_{P_{X_{j}|do(X_{i})=\hat{x}_{i}}} \coloneqq\int_{\mathcal{X}_{j}}k_{\mathcal{X}_{j}}(x_{j},\cdot)p( x_{j}|do(X_{i})=\hat{x}_{i})dx_{j} \tag{7}\] \[=\int_{\mathcal{X}_{j}}k_{\mathcal{X}_{j}}(x_{j},\cdot)\left(\int _{\mathbf{\mathcal{Z}}}p(x_{j}|\hat{x}_{i},\mathbf{z})p(\mathbf{z})d\mathbf{z} \right)dx_{j}\] (8) \[=\int_{\mathbf{\mathcal{Z}}}\left(\int_{\mathcal{X}_{j}}k_{\mathcal{ X}_{j}}(x_{j},\cdot)p(x_{j}|\hat{x}_{i},\mathbf{z})dx_{j}\right)p(\mathbf{z})d \mathbf{z}\] (9) \[=\int_{\mathbf{\mathcal{Z}}}F_{P_{X_{j}|X_{i},\mathbf{z}}}(\hat{x}_{ i},\mathbf{z})p(\mathbf{z})d\mathbf{z}\] (10) \[=\mathbb{E}_{\mathbf{Z}}\left[F_{P_{X_{j}|X_{i},\mathbf{z}}}( \hat{x}_{i},\mathbf{Z})\right] \tag{11}\] Equation (7) follows from the definition of mean embedding of a distribution in Equation (3), Equation (8) follows from the expression for interventional distribution in Equation (2), Equation (9) involves interchanging the order of integration and Equation (10) follows from Theorem 2.8. Let \(G_{P_{X_{j}|do(X_{i})}}(\cdot)=\mathbb{E}_{\mathbf{Z}}[F_{P_{X_{j}|X_{i}, \mathbf{z}}}(X_{i},\mathbf{Z})]\), then \(G_{P_{X_{j}|do(X_{i})}}:\mathcal{X}_{i}\rightarrow\mathcal{H}_{\mathcal{X}_{ j}}\) is a measurable, deterministic function and maps each possible intervention \(\hat{x}_{i}\in\mathcal{X}_{i}\) to the embedding of its interventional distribution \(P_{X_{j}|do(X_{i})=\hat{x}_{i}}\), i.e., it is the family of embeddings of interventional distributions. Let \(P_{X_{j}|do(X_{i})}\) and \(P^{\prime}_{X_{j}|do(X_{i})}\) be the interventional distributions for two different valid adjustment sets (as is the case when we consider the distribution of \(X_{j}\) after intervening on \(X_{i}\) in two different DAGs). The MCMD between these distributions is \(\text{MCMD}_{P_{X_{j}|do(X_{i})},P^{\prime}_{X_{j}|do(X_{i})}}(\cdot)=||G_{P_ {X_{j}|do(X_{i})}}(\cdot)-G_{P^{\prime}_{X_{j}|do(X_{i})}}(\cdot)||_{\mathcal{ H}_{\mathcal{X}_{j}}}\) where \(\text{MCMD}_{P_{X_{j}|do(X_{i})},P^{\prime}_{X_{j}|do(X_{i})}}(\cdot):\mathcal{ X}_{i}\rightarrow\mathbb{R}\). ### Empirical estimate First we compute the empirical estimate for \(F_{P_{X_{j}|X_{i},\mathbf{z}}}\). This follows based on the derivation in section 2.3 where instead of conditioning only on one variable, we condition on \(X_{i}\) and \(\mathbf{Z}\). We aim to find the minima of the loss function \(\mathcal{E}_{X_{j}|X_{i},\mathbf{z}}(F)=\mathbb{E}_{X_{i},\mathbf{Z}}\left[||F (X_{i},\mathbf{Z})-F_{P_{X_{j}|X_{i},\mathbf{z}}}(X_{i},\mathbf{Z})||^{2}_{ \mathcal{H}_{\mathcal{X}_{j}}}\right]\) among all \(F\in\mathcal{G}_{\mathcal{X}_{j},\mathcal{X}_{i}\mathbf{\mathcal{Z}}}\) where \(\mathcal{G}_{\mathcal{X}_{j},\mathcal{X}_{i}\mathbf{\mathcal{Z}}}\) is the RKHS of functions from \(\mathcal{X}_{i}\times\mathbf{\mathcal{Z}}\) to \(\mathcal{H}_{\mathcal{X}_{j}}\). 
We endow \(\mathcal{G}_{\mathcal{X}_{j},\mathcal{X}_{i}\mathbf{\mathcal{Z}}}\) with the kernel \(l_{\mathcal{X}_{j},\mathcal{X}_{i}\mathbf{\mathcal{Z}}}((x_{i},\mathbf{z}),(x^{ \prime}_{i},\mathbf{z}^{\prime}))=k_{\mathcal{X}_{i}\mathbf{\mathcal{Z}}}((x_{i}, \mathbf{z}),(x^{\prime}_{i},\mathbf{z}^{\prime}))\mathbf{Id}\) where \(k_{\mathcal{X}_{i}\mathbf{\mathcal{Z}}}\) is a kernel on \(\mathcal{X}_{i}\times\mathbf{\mathcal{Z}}\) (see Remark 3.1). \[\mathcal{E}_{X_{j}|X_{i},\mathbf{Z}}(F) =\mathbb{E}_{X_{i},\mathbf{Z}}\left[||\mathbb{E}_{X_{j}|X_{i}, \mathbf{Z}}\left[k_{\mathcal{X}_{j}}(X_{j},\cdot)-F(X_{i},\mathbf{Z})\right]| X_{i},\mathbf{Z}||^{2}_{\mathcal{H}_{\mathcal{X}_{j}}}\right]\] \[\leq\mathbb{E}_{X_{i},\mathbf{Z}}\mathbb{E}_{X_{j}|X_{i},\mathbf{ Z}}\left[||k_{\mathcal{X}_{j}}(X_{j},\cdot)-F(X_{i},\mathbf{Z})||^{2}_{ \mathcal{H}_{\mathcal{X}_{j}}}|X_{i},\mathbf{Z}\right]\] \[=\mathbb{E}_{X_{i},X_{j},\mathbf{Z}}\left[||k_{\mathcal{X}_{j}}(X _{j},\cdot)-F(X_{i},\mathbf{Z})||^{2}_{\mathcal{H}_{\mathcal{X}_{j}}}\right]\] \[=:\tilde{\mathcal{E}}_{X_{j}|X_{i},\mathbf{Z}}(F)\] Since we do not observe samples from \(\mu_{P_{X_{j}|X_{i},\mathbf{z}}}\), instead of directly finding the minima of \(\mathcal{E}_{X_{j}|X_{i},\mathbf{z}}\), we solve for the minima of the surrogate loss function \(\tilde{\mathcal{E}}_{X_{j}|X_{i},\mathbf{Z}}\). The empirical regularized version of the surrogate loss function is given by \(\tilde{\mathcal{E}}_{X_{j}|X_{i},\mathbf{Z},N,\lambda}(F)\coloneqq\frac{1}{N} \sum_{n=1}^{N}||k_{\mathcal{X}_{j}}(x_{j}^{(n)},\cdot)-F(x_{i}^{(n)},\mathbf{z} ^{(n)})||^{2}_{\mathcal{H}_{\mathcal{X}_{j}}}+\lambda||F||^{2}_{\mathcal{G}_{ \mathcal{X}_{j},\mathcal{X}_{i}\mathbf{\mathcal{Z}}}}\) where \(\{x_{i}^{(n)},x_{j}^{(n)},\mathbf{z}^{(n)}\}_{n=1}^{N}\) are samples from the joint distribution \(P_{X_{i}X_{j}\mathbf{Z}}\). From Theorem 2.10, the minima \(\hat{F}_{P_{\mathcal{X}_{j}|X_{i},\mathbf{z},N,\lambda}}\) of \(\hat{\mathcal{E}}_{X_{j}|X_{i},\mathbf{Z},N,\lambda}\) is \(\hat{F}_{P_{X_{j}|X_{i},\mathbf{z},N,\lambda}}(\cdot,\cdot)=\mathbf{k}_{X_{i} \mathbf{Z}}^{T}(\cdot,\cdot)\mathbf{f}\) where \[\mathbf{k}_{X_{i}\mathbf{Z}}(\cdot,\cdot)\coloneqq(k_{\mathcal{X}_{i}\mathbf{ \mathcal{Z}}}((x_{i}^{(1)},\mathbf{z}^{(1)}),(\cdot,\cdot)),\ldots,k_{ \mathcal{X}_{i}\mathbf{\mathcal{Z}}}((x_{i}^{(N)},\mathbf{z}^{(N)}),(\cdot,\cdot)) )^{T} \tag{12}\] \(\mathbf{f}\coloneqq(f^{(1)},\ldots,f^{(N)})^{T}\) and \(f^{(i)}\in\mathcal{H}_{\mathcal{X}_{j}}\) are unique solutions of the linear equation \[(\mathbf{K}_{X_{i}Z}+N\lambda\mathbf{I})\mathbf{f}=\mathbf{k}_{X_{j}}\] where \([\mathbf{K}_{X_{i}\mathbf{Z}}]_{st}\coloneqq k_{\mathcal{X}_{i}\mathbf{Z}}((x_{ i}^{(s)},\mathbf{z}^{(s)}),(x^{(t)},\mathbf{z}^{(t)}))\) and \(\mathbf{k}_{X_{j}}\coloneqq(k_{\mathcal{X}_{j}}(x_{j}^{(1)},\cdot),\ldots,k_{ \mathcal{X}_{j}}(x_{j}^{(N)},\cdot))^{T}\). Hence \(\mathbf{f}=\mathbf{W}\mathbf{k}_{X_{j}}\) where \(\mathbf{W}=(\mathbf{K}_{X_{i}\mathbf{Z}}+N\lambda\mathbf{I})^{-1}\). Therefore, \(\hat{F}_{P_{X_{j}|X_{i},\mathbf{z},N,\lambda}}(\cdot,\cdot)=\mathbf{k}_{X_{i} \mathbf{Z}}(\cdot,\cdot)\mathbf{W}\mathbf{k}_{X_{j}}\). Using \(\hat{F}_{P_{X_{j}|Ac_{i},\mathbf{z},N,\lambda}}\), we obtain the empirical estimate for \(G_{P_{X_{j}|do(X_{i})}}:\mathcal{X}_{i}\to\mathcal{H}_{\mathcal{X}_{j}}\). 
\[\hat{G}_{P_{X_{j}|do(X_{i})}}(\cdot)=\frac{1}{N}\sum_{n=1}^{N}\mathbf{k}_{X_{ i}\mathbf{Z}}^{T}(\cdot,\mathbf{z}^{(n)})\mathbf{W}\mathbf{k}_{X_{j}}\] If \(P_{X_{j}|do(X_{i})}\) and \(P^{\prime}_{X_{j}|do(X_{i})}\) are the interventional distributions for two different valid adjustment sets \(Z\) and \(Z^{\prime}\), their MCMD can be computed as follows: given samples \(\{(x_{i}^{(n)},x_{j}^{(n)},z^{(n)})\}_{n=1}^{N}\) and \(\{(x_{i}^{(n)},x_{j}^{(n)},z^{\prime(n)})\}_{n=1}^{N}\) from \(P_{X_{i}X_{j}Z}\) and \(P_{X_{i}X_{j}Z^{\prime}}\), the MCMD can be estimated as: \[\widehat{MCMD}_{P_{X_{j}|do(X_{i})},P^{\prime}_{X_{j}|do(X_{i})}} (\cdot)=||\hat{G}_{P_{X_{j}|do(X_{i})}}(\cdot)-\hat{G}_{P^{\prime} _{X_{j}|do(X_{i})}}(\cdot)||_{\mathcal{H}_{X_{j}}}\] \[=\left[\left(\frac{1}{N}\sum_{n=1}^{N}\mathbf{k}_{X_{i}\mathbf{Z} }^{T}(\cdot,\mathbf{z}^{(n)})\right)\mathbf{W}_{\mathbf{Z}}\mathbf{K}_{X_{j} }\mathbf{W}_{\mathbf{Z}}\left(\frac{1}{N}\sum_{n=1}^{N}\mathbf{k}_{X_{i} \mathbf{Z}}(\cdot,\mathbf{z}^{(n)})\right)\right.\] \[\left.+\left(\frac{1}{N}\sum_{n=1}^{N}\mathbf{k}_{X_{i}\mathbf{Z} ^{\prime}}^{T}(\cdot,\mathbf{z}^{\prime(n)})\right)\mathbf{W}_{\mathbf{Z}^{ \prime}}\mathbf{K}_{X_{j}}\mathbf{W}_{\mathbf{Z}^{\prime}}\left(\frac{1}{N} \sum_{n=1}^{N}\mathbf{k}_{X_{i}\mathbf{Z}^{\prime}}(\cdot,\mathbf{z}^{\prime(n )})\right)\right.\] \[\left.-2\left(\frac{1}{N}\sum_{n=1}^{N}\mathbf{k}_{X_{i}\mathbf{Z} }^{T}(\cdot,\mathbf{z}^{(n)})\right)\mathbf{W}_{\mathbf{Z}}\mathbf{K}_{X_{j} }\mathbf{W}_{\mathbf{Z}^{\prime}}\left(\frac{1}{N}\sum_{n=1}^{N}\mathbf{k}_{X_{ i}\mathbf{Z}^{\prime}}(\cdot,\mathbf{z}^{\prime(n)})\right)\right]^{1/2} \tag{13}\] where \([\mathbf{K}_{X_{j}}]_{st}=k_{\mathcal{X}_{j}}(x_{j}^{(s)},x_{j}^{(t)})\), \(\mathbf{W}_{\mathbf{Z}}=(\mathbf{K}_{X_{i}\mathbf{Z}}+N\lambda\mathbf{I})^{-1}\), \(\mathbf{W}_{\mathbf{Z}^{\prime}}=(\mathbf{K}_{X_{i}\mathbf{Z}^{\prime}}+N \lambda\mathbf{I})^{-1}\), \([\mathbf{K}_{X_{i}\mathbf{Z}^{\prime}}]_{st}\coloneqq k_{\mathcal{X}_{i} \mathbf{Z}}((x_{i}^{(s)},\mathbf{z}^{\prime(s)}),(x^{(t)},\mathbf{z}^{\prime (t)}))\) and \(\mathbf{k}_{X_{i}\mathbf{Z}^{\prime}}(\cdot,\cdot)\coloneqq(k_{\mathcal{X}_{i} \mathbf{Z}}((x_{i}^{(1)},\mathbf{z}^{\prime(1)}),(\cdot,\cdot))),\ldots,\) **Remark 3.1** (Product kernels).: _We can choose \(k_{\mathcal{X}_{i}\mathbf{Z}}\) to be the product kernel:_ \[k_{\mathcal{X}_{i}\mathbf{Z}}((x_{i},\mathbf{z}),(x_{i}^{\prime},\mathbf{z}^{ \prime}))=k_{\mathcal{X}_{i}}(x_{i},x_{i}^{\prime})k_{\mathbf{Z}}(\mathbf{z}, \mathbf{z}^{\prime}) \tag{14}\] _Let \(|Z|=M\) so that \(\mathbf{Z}=\{X_{i_{1}},\ldots,X_{i_{M}}\}\). Given reproducing kernels \(k_{\mathcal{X}_{d}}\) of RKHSs \(\mathcal{H}_{\mathcal{X}_{d}}\), \(1\leq d\leq D\), we can also choose \(k_{\mathbf{Z}}\) to be the product kernel:_ \[k_{\mathbf{Z}}(\mathbf{z},\mathbf{z}^{\prime})=k_{\mathcal{X}_{i_{1}}}(x_{i_{1} },x_{i_{1}}^{\prime})\ldots k_{\mathcal{X}_{i_{M}}}(x_{i_{M}},x_{i_{M}}^{\prime}) \tag{15}\] ## 4 Continuous structural intervention distance Consider the setting where we have a true DAG \(\mathcal{G}_{1}=(\mathbf{V},\mathcal{E}_{\mathcal{G}_{1}})\), a learnt DAG \(\mathcal{G}_{2}=(\mathbf{V},\mathcal{E}_{\mathcal{G}_{2}})\) and observational data \(\mathcal{D}\) sampled from an unknown distribution \(P\) with density \(p(\cdot)\) that is Markov with respect to \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) (see Definition 2.1). Note that the true and learnt DAGs have a common set of vertices but differ in their edges. 
Let \(P_{X_{j}|do(X_{i});\mathcal{G}_{1}}\) and \(P_{X_{j}|do(X_{i});\mathcal{G}_{2}}\) denote the interventional distributions corresponding to intervening on \(X_{i}\) and observing \(X_{j}\) in the true DAG \(\mathcal{G}_{1}\) and the learnt DAG \(\mathcal{G}_{2}\), respectively. The densities of both these distributions can be calculated from \(p(\cdot)\) using the adjustment formula (2) and taking \(\mathbf{Z}\) to be \(\mathbf{PA}_{i}\), the parent set of \(X_{i}\). First, we generate the set \(\mathbf{V}^{2}\coloneqq(\mathbf{V}\times\mathbf{V})\), which consists of all ordered pairs of nodes from the common vertex set of the true DAG and the learnt DAG. For each pair \((X_{i},X_{j})\in\mathbf{V}^{2},i\neq j\), we compare the distribution of \(X_{j}\) obtained by intervening on \(X_{i}\) in \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) (this can be extended to multiple simultaneous interventions--see Remark 4.1). Unless otherwise stated, we use the observational data of \(X_{i}\) as our interventions while comparing the interventional distributions between the true DAG and the learnt DAG (one may specify a different distribution on the interventions--see Remark 4.2). We record the difference in a function \(d:\tilde{\mathbf{V}}^{2}\rightarrow\mathbb{R}_{\geq 0}\), which we describe below by examining the various possible cases.

_Case 1:_ There is no directed path from \(X_{i}\) to \(X_{j}\) in DAGs \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) (in Algorithm 1 denoted as "checkDirectedPath\((X_{i},X_{j},\mathcal{G})\)"). In the absence of a directed path from the intervened node to the target node, an intervention has no effect on the target node. So, in \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) the distribution of \(X_{j}\) obtained by intervening on \(X_{i}\) is equal to the observational distribution of \(X_{j}\), i.e., \(P_{X_{j}|do(X_{i});\mathcal{G}_{1}}=P_{X_{j}|do(X_{i});\mathcal{G}_{2}}=P_{X_{j}}\). This in turn implies \(d(X_{i},X_{j})=0\).

_Case 2:_ There is a directed path from \(X_{i}\) to \(X_{j}\) in \(\mathcal{G}_{1}\) but not in \(\mathcal{G}_{2}\). The same argument used in Case 1 can be applied here to obtain \(P_{X_{j}|do(X_{i});\mathcal{G}_{2}}=P_{X_{j}}\).
Intervening on \(X_{i}\) has an effect on \(X_{j}\) in \(\mathcal{G}_{1}\) due to the presence of the directed path \(X_{i}\to X_{j}\) and the resulting distribution can be computed by adjusting for the parent set of \(X_{i}\) in \(\mathcal{G}_{1}\), i.e., \(\mathbf{PA}_{i,\mathcal{G}_{1}}\). We compare the two distributions \(P_{X_{j}|do(X_{i});\mathcal{G}_{1}}\) and \(P_{X_{j}}\) by computing the average over their MMDs for each observed \(x_{i}\). We then divide by the norm of the embedding of the observational distribution \(X_{j}\) to make contSID scale-invariant. The resulting distance \(d\) is defined as we state in Equation (16), where we denote \(\sum_{m,m^{\prime}=1}^{N}k_{\mathcal{X}_{j}}(x_{j}^{(m)},x_{j}^{(m^{\prime})})\) by \(C_{X_{j}}\). \[\begin{split} d(X_{i},X_{j})&=\frac{1}{N}\sum_{n=1}^ {N}||\tilde{\mu}_{P_{X_{j}|do(X_{i})=x_{i}^{(n)};\mathcal{G}_{1}}}-\tilde{\mu} _{P_{X_{j}}}||_{\mathcal{H}_{X_{j}}}\\ &=\frac{1}{N}\sum_{n=1}^{N}||\frac{1}{N}\sum_{m=1}^{N}\mathbf{k} _{X_{i}\mathbf{P}\mathbf{A}_{i,\mathcal{G}_{1}}}^{T}(x_{i}^{(n)},\mathbf{p} \mathbf{a}_{i,\mathcal{G}_{1}}^{(m)})\mathbf{W}_{\mathcal{G}_{1}}\mathbf{k}_{ X_{j}}(\cdot)-\frac{1}{N}\sum_{m^{\prime}=1}^{N}k_{\mathcal{X}_{j}}(x_{j}^{(m^{ \prime})},\cdot)||_{\mathcal{H}_{X_{j}}}\\ &=\frac{1}{N\sqrt{C_{X_{j}}}}\sum_{n=1}^{N}\left[\left(\sum_{m=1} ^{N}\mathbf{k}_{X_{i}\mathbf{P}\mathbf{A}_{i,\mathcal{G}_{1}}}^{T}(x_{i}^{(n )},\mathbf{p}\mathbf{a}_{i,\mathcal{G}_{1}}^{(m)})\right)\mathbf{W}_{ \mathcal{G}_{1}}\mathbf{K}_{X_{j}}\mathbf{W}_{\mathcal{G}_{1}}\left(\sum_{m=1} ^{N}\mathbf{k}_{X_{i}\mathbf{P}\mathbf{A}_{i,\mathcal{G}_{1}}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i,\mathcal{G}_{1}}^{(m)})\right)\right.\\ &\left.+C_{X_{j}}-2\left(\sum_{m=1}^{N}\mathbf{k}_{X_{i}\mathbf{P }\mathbf{A}_{i,\mathcal{G}_{1}}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i, \mathcal{G}_{1}}^{(m)})\right)\mathbf{W}_{\mathcal{G}_{1}}\left(\sum_{m=1}^{N }\mathbf{k}_{X_{j}}(x_{j}^{(m)})\right)\right]^{1/2}\end{split} \tag{16}\] Similarly, if there is a directed path from \(X_{i}\) to \(X_{j}\) in \(\mathcal{G}_{2}\) but not in \(\mathcal{G}_{1}\), the resulting distance \(d\) is: \[\begin{split} d(X_{i},X_{j})&=\frac{1}{N\sqrt{C_{X_{j }}}}\sum_{n=1}^{N}\left[\left(\sum_{m=1}^{N}\mathbf{k}_{X_{i}\mathbf{P}\mathbf{A }_{i,\mathcal{G}_{2}}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i,\mathcal{G}_{2}}^ {(m)})\right)\mathbf{W}_{\mathcal{G}_{2}}\mathbf{K}_{X_{j}}\mathbf{W}_{ \mathcal{G}_{2}}\left(\sum_{m=1}^{N}\mathbf{k}_{X_{i}\mathbf{P}\mathbf{A}_{i, \mathcal{G}_{2}}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i,\mathcal{G}_{2}}^{(m)} )\right)\right.\\ &\left.+C_{X_{j}}-2\left(\sum_{m=1}^{N}\mathbf{k}_{X_{i}\mathbf{P }\mathbf{A}_{i,\mathcal{G}_{2}}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i, \mathcal{G}_{2}}^{(m)})\right)\mathbf{W}_{\mathcal{G}_{2}}\left(\sum_{m=1}^{N} \mathbf{k}_{X_{j}}(x_{j}^{(m)})\right)\right]^{1/2}\end{split} \tag{17}\] _Case 3:_ There is a directed path from \(X_{i}\) to \(X_{j}\) in DAG \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\). The distribution of \(X_{j}\) after intervening on \(X_{i}\) in \(\mathcal{G}_{1}\) can be computed by adjusting for the parent set of \(X_{i}\) in \(\mathcal{G}_{1}\) - \(\mathbf{PA}_{i;\mathcal{G}_{1}}\). Similarly, we obtain the interventional distribution of \(X_{j}\) in \(\mathcal{G}_{2}\) by adjusting for the parent set of \(X_{i}\) in \(\mathcal{G}_{2}\) - \(\mathbf{PA}_{i;\mathcal{G}_{2}}\). 1. 
If \(\mathbf{PA}_{i;\mathcal{G}_{1}}\) is a valid adjustment set (Definition 2.3) in \(\mathcal{G}_{2}\) or \(\mathbf{PA}_{i;\mathcal{G}_{2}}\) is a valid adjustment set in \(\mathcal{G}_{1}\), then by (2), \(P_{X_{j}|do(X_{i});\mathcal{G}_{1}}=P_{X_{j}|do(X_{i});\mathcal{G}_{2}}\), hence \(d(X_{i},X_{j})=0\).1 Footnote 1: In general, the above condition is not necessary for \(P_{X_{j}|do(X_{i});\mathcal{G}_{1}}=P_{X_{j}|do(X_{i});\mathcal{G}_{2}}\). It is sufficient that there is a common valid adjustment set—not just a parent adjustment set—for the pair \((X_{i},X_{j})\) in \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\). However, it is not straightforward and beyond the scope of this article to compare the validity of an adjustment in different DAGs. Thus, we resort to the simple and inexpensive graphical task of checking if the parent sets in one DAG are valid adjustment sets in the other DAG. 2. If \(\mathbf{PA}_{i;\mathcal{G}_{1}}\) is not a valid adjustment set in \(\mathcal{G}_{2}\) or \(\mathbf{PA}_{i;\mathcal{G}_{2}}\) is not a valid adjustment set in \(\mathcal{G}_{1}\), then the interventional distributions \(P_{X_{j}|do(X_{i});\mathcal{G}_{1}}\) and \(P_{X_{j}|do(X_{i});\mathcal{G}_{2}}\)_may not_ be equal. To assess the difference, we compute the average over their MMDs for each \(x_{i}\sim\mathcal{D}_{i}\). We divide by the norm of the embedding of the observational distribution \(X_{j}\) to make contSID scale-invariant. The resulting distance \(d\) is defined as we state in Equation (18). \[d(X_{i},X_{j}) =\frac{1}{N}\sum_{n=1}^{N}||\tilde{\mu}_{P_{X_{j}|do(X_{i}=x_{i}^ {(n)});\mathcal{G}_{2}}}-\tilde{\mu}_{P_{X_{j}|do(X_{i}=x_{i}^{(n)});\mathcal{G }_{1}}}||_{\mathcal{H}_{X_{j}}}\] \[=\frac{1}{N}\sum_{n=1}^{N}||\frac{1}{N}\sum_{m=1}^{N}\mathbf{k}_ {X_{i}\mathbf{PA}_{i},\mathcal{G}_{2}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{ i,\mathcal{G}_{2}}^{(m)})W_{\mathcal{G}_{2}}\mathbf{k}_{X_{j}}\] \[\qquad\qquad\qquad-\frac{1}{N}\sum_{m^{\prime}=1}^{N}\mathbf{k}_ {X_{i}\mathbf{PA}_{i},\mathcal{G}_{1}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{ i,\mathcal{G}_{1}}^{(m^{\prime})})W_{\mathcal{G}_{1}}\mathbf{k}_{X_{j}}||_{ \mathcal{H}_{X_{j}}}\] \[=\frac{1}{N^{2}}\sum_{n=1}^{N}\left[\left(\sum_{m=1}^{N}\mathbf{ k}_{X_{i}\mathbf{PA}_{i},\mathcal{G}_{2}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{ i,\mathcal{G}_{2}}^{(m)})\right)\mathbf{W}_{\mathcal{G}_{2}}\mathbf{K}_{X_{j}} \mathbf{W}_{\mathcal{G}_{2}}\left(\sum_{m=1}^{N}\mathbf{k}_{X_{i}\mathbf{PA}_{ i},\mathcal{G}_{2}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i,\mathcal{G}_{2}}^{(m)})\right)\right.\] \[\left.+\left(\sum_{m=1}^{N}\mathbf{k}_{X_{i}\mathbf{PA}_{i}, \mathcal{G}_{1}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i,\mathcal{G}_{1}}^{(m) })\right)\mathbf{W}_{\mathcal{G}_{1}}\mathbf{K}_{X_{j}}\mathbf{W}_{\mathcal{G} _{1}}\left(\sum_{m=1}^{N}\mathbf{k}_{X_{i}\mathbf{PA}_{i},\mathcal{G}_{1}}^{T}( x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i,\mathcal{G}_{1}}^{(m)})\right)\right.\] \[\left.-2\left(\sum_{m=1}^{N}\mathbf{k}_{X_{i}\mathbf{PA}_{i}, \mathcal{G}_{2}}^{T}(x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i,\mathcal{G}_{2}}^{(m) })\right)\mathbf{W}_{\mathcal{G}_{2}}\mathbf{K}_{X_{j}}\mathbf{W}_{\mathcal{G} _{1}}\left(\sum_{m=1}^{N}\mathbf{k}_{X_{i}\mathbf{PA}_{i},\mathcal{G}_{1}}^{T} (x_{i}^{(n)},\mathbf{p}\mathbf{a}_{i,\mathcal{G}_{1}}^{(m)})\right)\right]^{1/2}\] (18) We summarise the various cases and the applicable equations in Algorithm 1. In Algorithm 2, we describe that the contSID is calculated over each ordered pair \((X_{i},X_{j})\in\mathbf{V}^{2},i\neq j\). 
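The case analysis only requires two inexpensive graphical primitives per pair of nodes: testing for a directed path (checkDirectedPath) and reading off parent sets. A possible sketch using networkx is given below (the use of networkx is an assumption; any DAG library would do). The adjustment-validity test of case 3 (Definition 2.3) needs a d-separation check and is deliberately left out of this snippet.

```python
import networkx as nx

def check_directed_path(G: nx.DiGraph, xi, xj) -> bool:
    # True iff there is a directed path xi -> ... -> xj (edge directions respected)
    return nx.has_path(G, xi, xj)

def parent_set(G: nx.DiGraph, xi):
    # PA_i: all nodes with an edge pointing into xi
    return set(G.predecessors(xi))

# toy true and learnt DAGs over the same vertex set {0, 1, 2}
G1 = nx.DiGraph([(0, 1), (1, 2), (0, 2)])   # "true" DAG
G2 = nx.DiGraph([(0, 1), (2, 1)])           # "learnt" DAG
for i, j in [(0, 2), (1, 2)]:
    print(i, j,
          check_directed_path(G1, i, j), check_directed_path(G2, i, j),
          parent_set(G1, i), parent_set(G2, i))
```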
**Remark 4.1** (Interventions on multiple variables).: _As in Peters and Buhlmann (2015), we have considered intervening on single variables only. However, the contSID can be extended to account for interventions on multiple variables as well. Since the union of parent sets of the intervened variables is not necessarily a valid adjustment set, one would need to define a valid adjustment set for the intervened variables and the observed variable. Then, using a modified version of Equation (2), we can compute the interventional distribution and its corresponding embedding. This can be achieved by replacing the one intervened variable \(X_{i}:\Omega\to\mathcal{X}\) with the set of variables \(\boldsymbol{X}_{i}:\Omega\to\boldsymbol{X}_{i}\) that we intervene on, and defining the corresponding kernel \(l_{\boldsymbol{X}_{i}}:\boldsymbol{X}_{i}\times\boldsymbol{X}_{i}\to\mathbb{R}\)._ **Remark 4.2** (Prior distribution on interventions).: _Unless specified, the computation of the contSID uses the empirical distribution of \(X_{i}\) to compute the average of the MMDs in Equations (16), (17) and (18). If required, however, one may specify an alternative distribution on the intervention, e.g., assigning measure 1 to a single intervention, and evaluate the contSID with that interventional distribution._ ``` 1:Input: Intervened node \(X_{i}\), target node \(X_{j}\), true DAG \(\mathcal{G}_{1}=(\mathbf{V},E_{\mathcal{G}_{1}})\), learnt DAG \(\mathcal{G}_{2}=(\mathbf{V},E_{\mathcal{G}_{2}})\) and the observational data \(\mathcal{D}\) 2:\(c_{\mathcal{G}_{1}}\leftarrow\text{checkDirectedPath}(X_{i},X_{j},\mathcal{G}_{1})\) 3:\(c_{\mathcal{G}_{2}}\leftarrow\text{checkDirectedPath}(X_{i},X_{j},\mathcal{G}_{2})\) 4:if\(c_{\mathcal{G}_{1}}==\text{False}\) and \(c_{\mathcal{G}_{2}}==\text{False}\)then 5:return\(0\) 6:else 7:\(Z_{\mathcal{G}_{1}}\leftarrow\mathbf{PA}_{i,\mathcal{G}_{1}}\) 8:\(Z_{\mathcal{G}_{2}}\leftarrow\mathbf{PA}_{i,\mathcal{G}_{2}}\) 9:\(K\leftarrow\sum_{m,m^{\prime}}^{N}k(x_{j}^{(m)},x_{j}^{(m^{\prime})})\) 10:if\(c_{\mathcal{G}_{1}}==\text{True}\) and \(c_{\mathcal{G}_{2}}==\text{False}\)then 11:return (16) 12:elseif\(c_{\mathcal{G}_{1}}==\text{False}\) and \(c_{\mathcal{G}_{2}}==\text{True}\)then 13:return (17) 14:else 15:if\(Z_{\mathcal{G}_{1}}\) is a valid adjustment set in \(\mathcal{G}_{2}\) or \(Z_{\mathcal{G}_{2}}\) is a valid adjustment set in \(\mathcal{G}_{1}\)then 16:return 0 17:else 18:return (18) 19:endif 20:endif ``` **Algorithm 1**\(d(X_{i},X_{j},\mathcal{G}_{1},\mathcal{G}_{2},\mathcal{D})\) ``` 1:True DAG \(\mathcal{G}_{1}=(\mathbf{V},E_{\mathcal{G}_{1}})\), learnt DAG \(\mathcal{G}_{2}=(\mathbf{V},E_{\mathcal{G}_{2}})\) and the observational data \(\mathcal{D}\) 2:\(\text{sum}\gets 0\) 3:for\((X_{i},X_{j})\in\mathbf{V}^{2},\quad i\neq j\)do 4:\(\text{sum}=\text{sum}+d(X_{i},X_{j},\mathcal{G}_{1},\mathcal{G}_{2},\mathcal{ D})\) 5:endfor 6:return sum ``` **Algorithm 2**\(\text{contSID}(\mathcal{G}_{1},\mathcal{G}_{2},\mathcal{D})\) Experiments For each number of nodes \(p\in\{5,10,20\}\), we generate 100 DAGs by an Erdos-Renyi model with the probability of the existence of an edge equal to 0.25. 100 _iid_ samples \(\mathcal{D}\in\mathbb{R}^{p}\) are generated for each DAG according to a linear SEM with non-Gaussian (exponential) noise. Linear coefficients are sampled uniformly from the interval \([-10,10]\) and the exponential noise has scale \(\beta=1\). 
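A minimal sketch of this data-generating process is shown below (Erdős–Rényi DAGs with edge probability 0.25, a linear SEM with coefficients drawn uniformly from \([-10,10]\), and exponential noise of scale 1); sampling edges only from earlier to later nodes in a random ordering is an implementation choice used here to guarantee acyclicity.

```python
import numpy as np

def random_dag(p, edge_prob=0.25, rng=None):
    """Erdős–Rényi DAG: edges only go from earlier to later positions in a
    random node ordering, which guarantees acyclicity."""
    rng = rng or np.random.default_rng()
    order = rng.permutation(p)
    A = np.zeros((p, p))                       # A[i, j] = 1 means edge X_i -> X_j
    for a in range(p):
        for b in range(a + 1, p):
            if rng.random() < edge_prob:
                A[order[a], order[b]] = 1.0
    return A, order

def sample_linear_sem(A, order, n=100, rng=None):
    """n iid samples from a linear SEM with coefficients ~ U[-10, 10] and
    exponential noise of scale beta = 1."""
    rng = rng or np.random.default_rng()
    p = A.shape[0]
    W = A * rng.uniform(-10, 10, size=(p, p))  # weighted adjacency matrix
    X = np.zeros((n, p))
    for j in order:                            # parents are always sampled first
        X[:, j] = X @ W[:, j] + rng.exponential(scale=1.0, size=n)
    return X

A, order = random_dag(5)
X = sample_linear_sem(A, order, n=100)
print(A.astype(int))
print(X.shape)
```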
For each simulated DAG, we obtain predicted DAGs by running the PC (constraint-based), GES (score-based) and ICALiNGAM (function-based) causal discovery algorithms (Spirtes et al., 2000; Chickering, 2002; Shimizu et al., 2006, respectively) on the synthetically generated data. We compute the average SHD, SID and contSID values, as well as their standard deviations, over the true/learnt DAG pairs. The ICALiNGAM algorithm outperforms the PC and GES algorithms across all node counts and all metrics (SHD, SID and contSID). However, while both SHD and SID indicate that the GES algorithm outperforms the PC algorithm (for \(p=10,20\)), contSID suggests the opposite, namely, that the PC algorithm is more accurate than the GES algorithm.
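For reference, the SHD reported above simply counts the edge insertions, deletions and reversals needed to turn one adjacency matrix into the other. A small sketch follows (conventions differ on whether a reversal counts as one error or two; here it counts once, and both inputs are assumed to be DAG adjacency matrices):

```python
import numpy as np

def shd(A_true, A_pred):
    """Structural Hamming distance between two DAG adjacency matrices
    (A[i, j] = 1 for an edge i -> j); a reversed edge counts as one error."""
    A_true = np.asarray(A_true, dtype=bool)
    A_pred = np.asarray(A_pred, dtype=bool)
    diff = A_true ^ A_pred              # entries present in exactly one graph
    reversed_pair = diff & diff.T       # (i, j) and (j, i) both differ -> reversal
    return int(diff.sum() - reversed_pair.sum() // 2)

A_true = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])   # 0 -> 1 -> 2
A_pred = np.array([[0, 0, 0], [1, 0, 1], [0, 0, 0]])   # edge 0 -> 1 reversed
print(shd(A_true, A_pred))                              # 1
```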
2309.14931
Interaction-Aware Sampling-Based MPC with Learned Local Goal Predictions
Motion planning for autonomous robots in tight, interaction-rich, and mixed human-robot environments is challenging. State-of-the-art methods typically separate prediction and planning, predicting other agents' trajectories first and then planning the ego agent's motion in the remaining free space. However, agents' lack of awareness of their influence on others can lead to the freezing robot problem. We build upon Interaction-Aware Model Predictive Path Integral (IA-MPPI) control and combine it with learning-based trajectory predictions, thereby relaxing its reliance on communicated short-term goals for other agents. We apply this framework to Autonomous Surface Vessels (ASVs) navigating urban canals. By generating an artificial dataset in real sections of Amsterdam's canals, adapting and training a prediction model for our domain, and proposing heuristics to extract local goals, we enable effective cooperation in planning. Our approach improves autonomous robot navigation in complex, crowded environments, with potential implications for multi-agent systems and human-robot interaction.
Walter Jansma, Elia Trevisan, Álvaro Serra-Gómez, Javier Alonso-Mora
2023-09-26T13:36:45Z
http://arxiv.org/abs/2309.14931v2
# Interaction-Aware Sampling-Based MPC ###### Abstract Motion planning for autonomous robots in tight, interaction-rich, and mixed human-robot environments is challenging. State-of-the-art methods typically separate prediction and planning, predicting other agents' trajectories first and then planning the ego agent's motion in the remaining free space. However, agents' lack of awareness of their influence on others can lead to the freezing robot problem. We build upon Interaction-Aware Model Predictive Path Integral (IA-MPPI) control and combine it with learning-based trajectory predictions, thereby relaxing its reliance on communicated short-term goals for other agents. We apply this framework to Autonomous Surface Vessels (ASVs) navigating urban canals. By generating an artificial dataset in real sections of Amsterdam's canals, adapting and training a prediction model for our domain, and proposing heuristics to extract local goals, we enable effective cooperation in planning. Our approach improves autonomous robot navigation in complex, crowded environments, with potential implications for multi-agent systems and human-robot interaction. Dataset, Prediction Model, Video and Code available at: autonomousrobots.nl/pubpage/IA_MPPI_LBM.html ## I Introduction Cities characterized by dense networks of urban canals, such as Amsterdam, could greatly benefit from deploying Autonomous Surface Vessels (ASVs) for various tasks including deliveries, transportation of people, and garbage collection [1]. However, navigating autonomously in urban canals amidst mixed human-robot crowds presents a significant challenge. Urban canals are typically narrow, frequently congested, and lack the structured nature of roads. While not as strictly enforced as on roads, navigation principles like right-of-way and right-hand conventions should still be considered. Thus, akin to autonomous ground robots among pedestrian crowds, successful navigation in urban canals relies on cooperation and awareness of interactions [2]. Recently, a sampling-based Model Predictive Control (MPC) called Interaction-Aware Model Predictive Path Integral (IA-MPPI) control has been developed for generating cooperative motion plans in urban canals among multiple non-communicating vessels while maintaining awareness of navigation rules [3]. This algorithm assumes rational and homogeneous agents, exact sensing of states, and knowledge of local goals. In real-time, the algorithm samples thousands of input sequences to approximate the optimal input sequence that enables all agents to progress toward their goals cooperatively. In scenarios where the local goals of other vessels are unavailable, such as in mixed human-robot environments or due to lack of communication, this previous approach has approximated these goals using a constant velocity model over a given horizon. However, in narrow and crowded environments, vessels often need to execute complex maneuvers to navigate tight intersections and avoid collisions while adhering to navigation rules. In such situations, relying solely on a constant velocity approximation can lead to inaccurate predictions, which can adversely affect the performance of the motion planner in terms of deadlocks, collisions, navigation rule violations, traveled distance, and travel time. In this paper, we present a framework (see Fig. 1) that utilizes a learning-based trajectory prediction method to improve the estimation of agents' intended destinations. 
We introduce heuristics to extract local goals from the predicted trajectories and provide the motion planner with the flexibility to influence the behavior of other agents while expecting cooperation in collision avoidance. ### _Related Work_ Robot motion planning in dynamic environments is a challenging problem for which a series of classical and Fig. 1: Overview of the proposed framework. Firstly, the prediction model utilizes information from all elements in the scene to forecast trajectories for obstacle agents. Meanwhile, the global planner, equipped with the map, start, and goal positions, generates a path for the ego agent. Subsequently, the local goal extractor leverages this information to determine appropriate local goals for the motion planner. With inputs derived from the scene and the local goals, the Interaction-Aware Model Predictive Path Integral (IA-MPPI) algorithm simultaneously plans and predicts input sequences for all agents in the scene. The first input of the sequence is then assigned to the ego agent, and the algorithm iterates. heuristic-based approaches have been developed [4], such as the Dynamic Window Approach [5] or Reciprocal Velocity Obstacles [6, 7]. Despite their successful applications, e.g. to non-holonomic robots [8] or vessels in open waters [9], the motions planned by this class of methods are often reactive. This, especially in crowded environments, can lead to unsafe and unpredictable behaviors. Model Predictive Control (MPC) has become a popular approach to trajectory planning for autonomous vehicles [10] because of its ability to optimize accounting for the system's dynamics and constraints. Moreover, by planning over a sufficiently large horizon, MPC can anticipate dynamic obstacles resulting in trajectories that are less reactive. To anticipate other agents, however, the free space over the entire planning horizon needs to be computed [11], which requires knowledge about other agents' positions in the future. If all the agents in the environment are autonomous, communication and distributed optimization can be used to plan trajectories in multi-agent environments [12]. In mixed human-robot environments, however, such communication is not possible and predictions of the future motion of the other agents have to be employed. For instance, recent work on MPC for rule-aware navigation in urban canals uses constant velocity to model the future behavior of other vessels [13]. In interaction-rich scenarios, however, constant velocity can be an inaccurate approximation which may lead to unsafe motion plans [14]. Therefore, several works rely on learning-based models to predict the future motion of other agents [15] and can include prediction confidence [16] and multimodality [17]. These methods, however, decouple prediction and planning which, in high-interaction environments, may lead the ego agent to wrongly assume that no collision-free path exists [18]. To avoid the so-called freezing robot problem the robot has to expect cooperation in collision avoidance from the other agents [19]. Coupled prediction and planning can be done with MPC by modeling the interacting agents as a system, but it quickly becomes expensive to solve via constrained optimization leading to long computation times and short planning horizons [20, 21]. 
Building upon a novel sampling-based Model Predictive Control (MPC) framework [22], Interaction-Aware Model Predictive Path Integral (IA-MPPI) control [3] has successfully demonstrated decentralized coupled predictions and planning in real-time, accommodating long prediction horizons, nonlinear dynamics, and discontinuous cost functions in multi-agent environments. While IA-MPPI has exhibited superior performance compared to optimization-based MPC approaches that rely on fixed predictions of other agents' motion, it necessitates knowledge of their near-term local goals, which can either be communicated or estimated. ### _Contribution_ This paper presents a novel framework for interaction-aware decentralized motion planning in urban canals without relying on communication. Our framework encompasses the following contributions: * Realistic Dataset: We generate and publish a realistic dataset of simulated rule-abiding vessel trajectories in real sections of Amsterdam's urban canals. * Learning-Based Trajectory Prediction: We adapt a pedestrian prediction model [23] to vessels and train it specifically for urban canals. This approach enables us to generate trajectory predictions for other agents. * Local Goal Extraction: We propose heuristics to extract local goals from the predicted trajectories, thereby providing the motion planner with information about where agents intend to go. * Communication-Free Coupled Prediction and Planning: By combining the local goal extraction with the IA-MPPI control [3], we achieve coupled prediction and planning without the need for communication. This approach ensures that the ego agent can influence the behavior of other agents while anticipating cooperation in collision avoidance. We validate our planning framework through extensive simulated experiments, comparing it against baseline approaches and providing insights into the benefits of coupled prediction and planning over decoupled methods. The framework can be adapted to other robot types beyond vessels. ## II Interaction-Aware MPPI In this section, we introduce the main ideas of IA-MPPI [3], upon which our proposed framework is built. For details on the method, models used and cost function please refer to the original paper. For insights on the underlying sampling-based MPC, one can refer to the work on Information-Theoretic MPC [22]. In short, IA-MPPI assumes that all the agents are homogenous and rational, i.e. have the same model and cost function. Under this assumption, we can create a large multi-agent system and plan input sequences resulting in cooperative trajectories for the ego agent as well as all the obstacle agents. This being a decentralized planning framework, we then apply the first input of the sequence to our ego agent, observe the environment and plan again. In more detail, IA-MPPI models the ego-agent \(i\) as a discrete-time dynamical system, \[\mathbf{q}_{i,t+1}=\mathcal{F}(\mathbf{q}_{i,t},\mathbf{u}_{i,t}) \tag{1}\] where \(\mathbf{q}_{i,t}\) and \(\mathbf{u}_{i,t}\) are, respectively, the state and the input of the ego-agent at timestep \(t\). The state \(\mathbf{q}_{i,t}=[\mathbf{p}_{i,t},\mathbf{v}_{i,t}]\) contains the position and velocity of the agent. IA-MPPI assumes that all agents in the environment are homogenous. 
The state and the input of the multi-agent system consisting of the ego-agent and the obstacle agents can therefore be stacked, resulting in, \[\mathbf{q} =\begin{bmatrix}\mathbf{q}_{i}^{\top}&\mathbf{q}_{j}^{\top} \end{bmatrix}^{\top},\quad\forall j\in\mathcal{M}\setminus i, \tag{2}\] \[\mathbf{u} =\begin{bmatrix}\mathbf{u}_{i}^{\top}&\mathbf{u}_{j}^{\top} \end{bmatrix}^{\top},\quad\forall j\in\mathcal{M}\setminus i,\] where \(\left(.\right)_{j}\) is a variable that the ego-agent \(i\) estimates of agent \(j\) and \(\mathcal{M}=\{0,1,...,m\}\) is the set of all agents in the scene. By also stacking the state transition functions \(\mathcal{F}\) over all agents, we obtain a model for the multi-agent system \(\mathbf{q}_{t+1}=\mathcal{G}(\mathbf{q}_{t},\mathbf{u}_{t})\). Given a planning horizon \(T\) and a prior input sequence \(\mathbf{U}=[\mathbf{u}_{0},\mathbf{u}_{1},\dots,\mathbf{u}_{T-1}]\), IA-MPPI samples \(K\) input sequences for the entire multi-agent system, \[\tilde{\mathbf{U}}_{k}=[\tilde{\mathbf{u}}_{0,k},\tilde{\mathbf{u}}_{1,k},\dots,\tilde{\mathbf{u}}_{T-1,k}],\quad\tilde{\mathbf{u}}_{t,k}=\mathcal{N}( \mathbf{u}_{t},\nu\mathbf{\Sigma}) \tag{3}\] with \(k=1,\dots,K\), variance \(\Sigma\) and scaling parameter \(\nu\). At the first iteration, the prior input sequence \(\mathbf{U}\) is initialized at zero. By the end of this section, it will become clear how this prior input sequence is updated in subsequent iterations. Having a model for the multi-agent system, we can forward simulate the \(K\) input sequences into \(K\) state trajectories \(\mathbf{Q}_{k}\) for the multi-agent system, \[\mathbf{Q}_{k}=\big{[}\mathbf{q}_{0},\,\mathcal{G}(\mathbf{q}_{0},\tilde{ \mathbf{u}}_{k,0}),\,\dots,\,\mathcal{G}(\mathbf{q}_{k,T-1},\,\tilde{\mathbf{ u}}_{k,T-1})\big{]}. \tag{4}\] Each of the resulting state trajectories is evaluated with respect to both an agent-centric cost as well as a system-wide cost, resulting in a total sample cost \(S_{k}\). The reader can refer to the original publication for details on the cost function [3]. For the scope of our paper, it is important to know that the agent-centric cost includes a tracking cost to encourage progress towards a local goal \(p_{g}\) computed as, \[C_{\text{tracking}}=k_{\text{tracking}}\frac{||\mathbf{p}_{g}-\mathbf{p}_{t}|| _{2}}{||\mathbf{p}_{g}-\mathbf{p}_{t_{0}}||_{2}}, \tag{5}\] where \(p_{t}\) is the position of the agent at timestep \(t\), \(p_{t_{0}}\) is the position of the agent at the beginning of the planning horizon and \(k_{tracking}\) is a tuning parameter. Notice that we need to know the position of the local goal of each agent. For the ego agent, the local goal is extracted from a global plan. For all the other agents, the local goal has to be either communicated or estimated. We propose in the following section how this goal can be estimated. Once \(S_{k}\), \(\forall k\in[1,\dots,K]\) has been computed, importance sampling weights \(w_{k}\) can be calculated as, \[w_{k}=\frac{1}{\eta}\exp\biggl{(}\frac{-1}{\lambda}(S_{k}-S_{min})\biggr{)}, \quad\sum_{k=0}^{K-1}w_{k}=1, \tag{6}\] where \(S_{min}\) is the minimum sampled cost, \(\eta\) a normalization factor and \(\lambda\) a tuning parameter. We then compute an approximation of the optimal control sequence through a weighted average of the sampled control sequences, \[\mathbf{U}^{*}=\sum_{k=0}^{K-1}w_{k}\tilde{\mathbf{U}}_{k} \tag{7}\] and apply the first input \(\mathbf{u}_{t,0}^{*}\) to the ego-agent. 
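A compact sketch of this sampling, rollout and weighting procedure (Equations (3)-(7)) is given below; the dynamics, the cost and all parameter values are generic placeholders, not the vessel model or the cost terms of [3], and the covariance is reduced to a scalar for brevity.

```python
import numpy as np

def mppi_step(q0, U_prior, dynamics, cost, K=1000, nu=1.0, Sigma=0.1, lam=1.0,
              rng=None):
    """One MPPI iteration for the stacked multi-agent system.
    q0: stacked state, U_prior: (T, n_u) prior input sequence."""
    rng = rng or np.random.default_rng()
    T, n_u = U_prior.shape
    # Eq. (3): sample K input sequences around the prior
    U = U_prior[None] + rng.normal(scale=np.sqrt(nu * Sigma), size=(K, T, n_u))
    # Eq. (4): roll out each sample through the dynamics and accumulate the cost
    S = np.zeros(K)
    for k in range(K):
        q = q0.copy()
        for t in range(T):
            q = dynamics(q, U[k, t])
            S[k] += cost(q, U[k, t])
    # Eq. (6): importance-sampling weights
    w = np.exp(-(S - S.min()) / lam)
    w /= w.sum()
    # Eq. (7): weighted average approximates the optimal input sequence
    return np.einsum('k,ktu->tu', w, U)

# toy usage: a 2D single integrator tracking a local goal
goal = np.array([5.0, 0.0])
dyn = lambda q, u: q + 0.1 * u                  # state = position, input = velocity
cst = lambda q, u: np.linalg.norm(goal - q)     # tracking-like stage cost
U_star = mppi_step(np.zeros(2), np.zeros((20, 2)), dyn, cst, K=500)
print(U_star[0])                                # first input, applied to the ego agent
```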
We can now use a time-shifted version of \(\mathbf{U}^{*}\) as the prior input sequence \(\mathbf{U}\) to warm-start the sampling strategy at the next iteration. ## III Predicting goal positions In Fig. 1 we provide an overview of the proposed framework. In Section III-A, we outline the prediction model. In Section III-B, we describe the dataset we have collected to train a prediction model that is interaction and rule-aware. In Section III-C, we present the steps taken to port the prediction model to urban vessel environments. In Section III-D, we propose a heuristic to extract a local goal suitable for IA-MPPI using the predicted trajectories. ### _Interaction-aware trajectory prediction method_ Our approach leverages interaction-aware trajectory prediction for goal estimation. We employ an adapted version of _Social-VRNN_[23], which was originally designed for pedestrians, to obtain trajectory predictions. However, we remark that our framework is agnostic to the choice of trajectory predictor as long as it accounts for obstacles and interactions between agents in the environment. Social-VRNN [23] is an interaction-aware trajectory prediction method that leverages a generative model based on Variational Recurrent Neural Networks (VRNNs) [24]. The model combines three types of contextual cues to define a joint representation of an agent's current state: information on the past trajectory of the agent of interest, environment context, and agent-agent interactions. The input to predict the trajectory of agent \(i\) is denoted as: \[\mathbf{x}=\{\mathbf{v}_{-T_{0}:0}^{i},\mathbf{O}_{env}^{i},\mathbf{O}_{int}^ {-i}\}, \tag{8}\] where \(\mathbf{v}_{-T_{0}:0}^{i}\) corresponds to the sequence of velocity states over the previous observed horizon \(T_{O}\) of the agent of interest \(i\). The environment information \(\mathbf{O}_{env}^{i}\) is represented in the form of a grid map extracted around the agent of interest. Then, \(\mathbf{O}_{int}^{-i}\) represents the information on agent-agent interactions. It is a vector with the relative positions and velocities of all other agents from agent \(i\)'s perspective, listed in ascending order based on the absolute distance to it. The output of the model is a sequence of velocity probability distributions represented by \(T_{H}\) diagonal gaussian distributions \(\mathcal{N}(\mu_{\mathbf{v},k},\text{diag}(\sigma_{\mathbf{v},k}^{2}))\). For details on the method and its architecture, please refer to the original paper [23]. ### _Artificial Dataset_ In the absence of a publicly available dataset for short-term vessel trajectory prediction, an artificial dataset of vessel interactions is collected in a simulation environment. In order to obtain trajectories that resemble those of real vessels in urban canals, four real canal section maps in Amsterdam: the Herengracht (HG), the Prinsengracht (PG) and the Bloemgracht (BG) are used to collect data. The Open Crossing (OC) environment is created to collect vessel interactions in open water. Data on an additional environment, the Amstel (AM), is included only for testing our framework's generalization to environments not seen during training. Figure 2 depicts two of these canal sections. The yellow rectangles correspond to the areas in which start and goal locations are randomly initialized. These areas are placed around the entire map and in each canal section to improve the diversity of the trajectories and interactions. 
To collect the data, more than four thousand experiments are conducted by initializing up to four vessels simultaneously in the mentioned environments. Each vessel is assigned a randomized start and goal location in one of the predefined areas. All sampled locations are ensured to be collision-free. The vessels run a centralized IA-MPPI to sail toward their respective goals while accounting for navigation rules. This ensures that the recorded trajectories are safe, interaction-aware, and mostly rule-abiding. ### _Model Training and Adaptation_ We adapt the variational inference architecture presented in [23] to generate unimodal trajectory probability predictions of vessels. In contrast to humans, vessels are slower and have lower-order dynamics, which results in less reactive behaviors and smoother trajectories. To take this into account and avoid overfitting to the dataset, we reduce the dimensionality of the method's latent space. We also add an L2-regularization term to the loss function and weight it with a hyperparameter we define as \(\gamma\). #### Iii-C1 Hyperparameters The model is trained using backpropagation through time and the RMSProp [25] optimizer. With a time step of \(\Delta T=0.4\) seconds, the prediction horizon is set to \(T_{H}=24\) steps (9.6 seconds) and the previous horizon to \(T_{O}=14\) steps (5.6 seconds). Furthermore, we employ learning rate starting at \(\alpha\) = \(1\mathrm{e}{-4}\) that decays by a factor of 0.9 after every gradient step. The regularization weight is kept at \(\gamma=0.0001\). Finally, the model is trained for \(4e4\) training steps, using early stopping. ### _Local Goal Extraction_ In eq. (5) we show that the IA-MPPI needs to know the local goal \(p_{g}\) of each agent. There are two requirements for a goal to be suitable: it has to lie within a radius \(r_{p_{g}}\) from the agent it corresponds to and cannot be in space occupied by static obstacles. Therefore, we first search the predicted trajectory backward until we obtain a position \(p_{\leq r_{p_{g}}}\) within the desired radius. If \(p_{\leq r_{p_{g}}}\) is in collision with a static obstacle, we construct a circle centered on the agent's position \(p_{a}\) with radius \(p_{a}-p_{\leq r_{p_{g}}}\) and find the point on the circle closest to \(p_{\leq r_{p_{g}}}\) which is not in collision with static obstacles. This goal extraction method is illustrated in Fig. 3. Once the goals for all agents are predicted, IA-MPPI can plan interaction-aware trajectories in a decentralized fashion. ## IV Experiments The experiments are conducted in real maps of Amsterdam's canals, namely the Herengracht (HG), Bloemgracht (BG), Prinsengracht (PG), and the Amstel (AM). In addition, experiments are conducted in an Open Crossing (OC) map without static obstacles. In Section IV-A we evaluate the prediction model, in Section IV-B we show the performances of the proposed framework for motion planning, and in Section IV-C we highlight the benefits of coupled prediction and planning with respect to a decoupled approach. ### _Prediction Accuracy_ In Fig. 4 we compare the proposed Learning-Based Model (LBM) to a Constant Velocity Model (CVM) on test data. We evaluate the methods against the displacement error at each prediction step, which is defined as the Euclidean distance between a prediction and the ground truth. In all maps the LBM outperforms the CVM, showing a lower average displacement error and a smaller standard deviation. 
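For reference, the constant-velocity baseline and the per-step displacement error underlying this comparison can be computed along the following lines (a sketch; the 24-step horizon and the 0.4 s time step follow the settings above, while the ground-truth trajectory is an illustrative toy):

```python
import numpy as np

def cvm_predict(p_last, v_last, horizon=24, dt=0.4):
    """Constant Velocity Model: propagate the last observed position with the
    last observed velocity for `horizon` steps of length dt."""
    steps = np.arange(1, horizon + 1)[:, None] * dt
    return p_last[None, :] + steps * v_last[None, :]        # (horizon, 2)

def displacement_error(pred, truth):
    """Euclidean distance between prediction and ground truth at every step."""
    return np.linalg.norm(pred - truth, axis=-1)             # (horizon,)

dt, horizon = 0.4, 24
t = np.arange(1, horizon + 1) * dt
truth = np.stack([1.0 * t, 0.05 * t ** 2], axis=-1)          # slowly turning vessel
pred = cvm_predict(np.zeros(2), np.array([1.0, 0.0]), horizon, dt)
print(displacement_error(pred, truth).mean())                # average displacement error
```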
Note that the Amstel map was previously unseen during training, demonstrating generalization capabilities. ### _Interaction-Aware Motion Planning with Predictions_ In this study, we evaluate the performance of the proposed decentralized framework that uses a Learning-Based prediction Model to extract local goals (IA-MPPI-LBM), by comparing it against a decentralized approach that extracts \begin{table} \begin{tabular}{r c c c} \hline \hline **Scenario** & **Exp.** & **Frames** & **Vessels** \\ \hline Herengracht & 1000 & 406229 & 2499 \\ Prinsengracht & 1247 & 420285 & 4122 \\ Bloemgracht & 1188 & 372173 & 3564 \\ Open Crossing & 1182 & 417515 & 3544 \\ Amstel & 79 & 23468 & 316 \\ **Total** & 4696 & **1639670** & **14045** \\ \hline \hline \end{tabular} \end{table} TABLE I: Specifications of the artificial dataset. _Exp._ refers to the number of experiments done in each scenario. _Frames_ and _Vessels_ refer to the total number of frames and vessels present in the data set, respectively. All data is recorded at a rate of \(10\,\mathrm{Hz}\). Fig. 3: A visual illustration of how the local goal is extracted from a colliding trajectory prediction. Fig. 2: Canal sections of the Bloemgracht and Prinsengracht. The black areas are the canals. The yellow rectangles correspond to the initialization areas in which goals and starting locations were randomly initialized for each agent during the simulations. local goals from a Constant Velocity Model (IA-MPPI-CVM) and decentralized with communication (IA-MPPI-w/comm.), which assumes perfect knowledge of other agents' local goals. It is important to stress that, in similar experiments, the IA-MPPI-CVM which serves as the communication-free baseline in our comparisons has already been demonstrated to outperform an optimization-based Model Predictive Control (MPC) approach that relies on fixed predictions [3]. In the simulated experiments taking place in real sections of the canals of Amsterdam, we randomize the initial positions and goals of four interacting agents, all running the same algorithm. To challenge each method, we design regions within which each agent's start and goal position are randomly initialized in a way that forces all four agents to interact in a narrow section of the map. These _high-interaction_ scenarios are discussed in Section IV-B1. For completeness, we also design experiments where agents' starting and goal positions are randomized across much larger spaces. In these experiments, however, vessels don't often interact and usually have larger free spaces to avoid each other. These _low-interaction_ scenarios are discussed in Section IV-B2. An example of experiments in low- and high-interaction scenarios is shown in Fig. 5. The IA-MPPI plans with a time horizon \(T\) of 100 time steps with step size \(\delta T=0.1s\) and \(K=4500\) samples. Each method is evaluated on the same set of randomly initialized experiments. For fairness, metrics such as rule violations, goal displacement error, total traveled distance, and time are only displayed for experiments that ended successfully with all methods. #### Iv-B1 High-Interaction Scenario The experiments in high-interaction scenarios are conducted in narrow intersections in the Bloemgracht, Herengracht, and Prinsengracht. Since the Amstel canal is very wide and the Open Crossing has no static map constraints, it is difficult to generate experiments with high-interactions, and thus these two maps are excluded from this experiment section. 
The results of the experiments are summarized in Table II and Figure 6. It can be seen that in these high-interaction scenarios, the LBM consistently outperforms the CVM in terms of the goal displacement error (Goal DE). As a consequence, the motion planning framework that estimates other agents' local goals using predictions from the LBM outperforms the framework that uses the CVM on all the metrics. Moreover, we demonstrate our framework with the LBM has negligible performance losses compared to the method with perfect communication. #### Iv-B2 Low-Interaction Scenarios Table III and Fig. 7 summarize the results in low-interaction scenarios. Note that we here also test on the Open Crossing maps and the Amstel, \begin{table} \begin{tabular}{c l l l l} \hline \hline & \multicolumn{1}{c}{**Method**} & \multicolumn{1}{c}{**Succ. / Deadl.**} & \multicolumn{1}{c}{**Rule Viol.**} & \multicolumn{1}{c}{**Goal DE**} \\ \hline \multirow{4}{*}{_Deep_} & IA-MPPI-CVM & 18 / 0 / 2 & 16 & 5.82 m \\ & IA-MPPI-LBM (ours) & 19 / 1 / 0 & 16 & 5.26 m \\ \cline{2-5} & IA-MPPI-w/comm. & 20 / 0 / 0 & 16 & \\ \hline \multirow{4}{*}{_Deep_} & IA-MPPI-CVM & 19 / 0 / 1 & 11 & 7.30 m \\ & IA-MPPI-LBM (ours) & 19 / 1 / 0 & 5 & 4.11 m \\ \cline{2-5} & IA-MPPI-w/comm. & 20 / 0 / 0 & 5 & \\ \hline \multirow{4}{*}{_Deep_} & IA-MPPI-CVM & 17 / 0 / 3 & 7 & 4.98 m \\ & IA-MPPI-LBM (ours) & 20 / 0 / 0 & 4 & 4.13 m \\ \cline{1-1} \cline{2-5} & IA-MPPI-w/comm. & 20 / 0 / 0 & 3 & \\ \hline \hline \end{tabular} \end{table} TABLE II: Successes (Succ.), Deadlocks (Deadl.), Collisions (Coll.), Rule Violations (Rule Viol.) and Goal Displacement Error (Goal DE) for all methods in high-interaction scenarios per canal sections. Fig. 4: The displacement error of the predictions from CVM and the LBM over the prediction horizon for each canal section. The solid line represents the mean error and the shaded area represents 30% of the standard deviation. Fig. 5: Examples of experiments in the low-interaction scenario (left) and high-interaction scenario (right). Fig. 6: This figure displays the distribution of the total traveled distance and total traveled time of the vessels during the experiments in the high-interaction scenarios. The results are displayed per map and for each method. which the LBM has not previously seen in training. The results show that also when the start and goal positions of all agents are randomly initialized over large areas, our proposed communication-free framework with the LBM performs just as well as the baseline with full communication, even in a map unseen in training. However, perhaps unsurprisingly, the framework that approximates the local goals with a CVM can also achieve the same performance as the framework with full communication. Intuitively, in low-interaction scenarios where agents mostly navigate straight to their goal, CVM is a reasonably good approximator. ### _Decoupled Prediction and Planning_ The framework we proposed utilizes a Learning-Based Model (LBM) to predict trajectories for obstacle agents and extract local goals while employing Interaction-Aware Model Predictive Path Integral (IA-MPPI) for coupled predictions and planning. To assess the advantages of this framework, we compare it to a planner without interaction awareness (MPPI-LBM), which decouples prediction and planning. Like other state-of-the-art methods, MPPI-LBM treats the predicted future trajectories of obstacle agents as occupied space and plans the ego agent's motion without considering interaction awareness. 
This approach reduces the system size and computational burden by minimizing the space to be sampled. However, apart from this difference, MPPI-LBM shares the same sampling strategy and cost function as the proposed IA-MPPI-LBM. We conducted 100 low-interaction experiments across the Amstel, Bloemgracht, Herengracht, Open Crossing, and Prinsengracht, comparing different methods. Table IV presents the outcomes, including total successes, dead-locks, collisions, and rule violations. Again, the proposed IA-MPPI-LBM shows similar performances to the method with communication (IA-MPPI-w/comm). However, MPPI-LBM exhibited a significantly lower success rate and a higher number of rule violations. The LBM, while trained to be somewhat rule- and interaction-aware in its predictions, occasionally struggles to capture complex reciprocal collision avoidance maneuvers when agents are in close proximity. This, combined with the motion planner's unawareness of the ego agent's influence on other agents' motion and their cooperation in collision avoidance, often led the MPPI-LBM to wrongly assume that no feasible solution existed. Consequently, this resulted in agents drifting into collisions due to their large inertia. ## V Conclusions In this paper, we introduced a framework that combines a learning-based trajectory prediction model with Interaction Aware MPPI, enabling decentralized and communication-free coupled prediction and planning. Our experimental results demonstrated the superiority of our Learning-Based Model (LBM) over the Constant Velocity Model (CVM) in accurately predicting the trajectories of interacting vessels, even in unseen maps. Through simulated experiments in Amsterdam's canals, we showed that our motion planning framework achieved comparable performance to a method with ground truth knowledge of local goals, which was shown to outperform classical optimization-based MPC approaches with decoupled prediction and planning in previous work [3]. Additionally, we highlighted the limitations of the CVM in tight environments with multiple interacting agents. Finally, by comparing our approach with a non-interactive planner, we emphasized the advantages of coupled planning and predictions. \begin{table} \begin{tabular}{l c c} \hline \hline **Method** & **Succ. / Deadl. / Coll.** & **Rule Viol.** \\ \hline MPPI-LBM. & 72 / 1 / 27 & 40 \\ IA-MPPI-LBM (ours) & 97 / 1 / 2 & 30 \\ IA-MPPI-w/comm. & 98 / 0 / 2 & 28 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Successes (Succ.), Deadlocks (Deadl.), Collisions (Coll.), and Rule Violations (Rule Viol.) for the non-interactive MPPI and the IA-MPPI baseline in low-interaction scenarios. \begin{table} \begin{tabular}{l l l l l} \hline \hline & \multirow{2}{*}{**Method**} & \multicolumn{1}{c}{**Succ. / Deadl.**} & \multirow{2}{*}{**Rule Viol.**} & \multirow{2}{*}{**Goal DE**} \\ \cline{3-3} \cline{5-6} & & **/ Coll.** & & & \\ \hline \multirow{3}{*}{**LBM**} & IA-MPPI-LVM & 38 / 0 / 2 & 22 & 5.31 m \\ & IA-MPPI-LBM (ours) & 37 / 0 / 3 & 26 & 5.42 m \\ \cline{2-6} & IA-MPPI-w/comm. & 37 / 2 / 1 & 26 & \\ \hline \multirow{3}{*}{**LBM**} & IA-MPPI-CVM & 38 / 0 / 2 & 14 & 3.12 m \\ & IA-MPPI-LBM (ours) & 38 / 0 / 2 & 13 & 2.94 m \\ \cline{2-6} & IA-MPPI-w/comm. & 38 / 0 / 2 & 14 & \\ \hline \multirow{3}{*}{**LBM**} & IA-MPPI-CVM & 38 / 0 / 2 & 17 & 5.36 m \\ & IA-MPPI-LBM (ours) & 39 / 0 / 1 & 20 & 4.93 m \\ \cline{2-6} & IA-MPPI-w/comm. 
& 40 / 0 / 0 & 20 & \\ \hline \multirow{3}{*}{**LBM**} & IA-MPPI-CVM & 40 / 0 / 0 & 27 & 3.98 m \\ & IA-MPPI-LBM (ours) & 40 / 0 / 0 & 29 & 4.04 m \\ \cline{2-6} & IA-MPPI-w/comm. & 40 / 0 / 0 & 27 & \\ \hline \multirow{3}{*}{**LBM**} & IA-MPPI-CVM & 40 / 0 / 0 & 22 & 3.12 m \\ & IA-MPPI-LBM (ours) & 39 / 1 / 0 & 18 & 3.49 m \\ \cline{1-1} \cline{2-6} & IA-MPPI-w/comm. & 40 / 0 / 0 & 21 & \\ \hline \hline \end{tabular} \end{table} TABLE III: Successes (Succ.), Deadlocks (Deadl.), Collisions (Coll.), Rule Violations (Rule Viol.), and Goal Displacement Error (Goal DE) for all methods in the various canal sections. Fig. 7: This figure displays the distribution of the total traveled distance and total traveled time by vessels during the experiments in the low-interaction scenarios. The results are displayed per map and for each method.
2306.00131
Assessment of a Physics-based Retrieval of Exoplanet Atmospheric Temperatures from Infrared Emission Spectra
Atmospheric temperatures are to be estimated from thermal emission spectra of Earth-like exoplanets orbiting M-stars as observed by current and future planned missions. To this end, a line-by-line radiative transfer code is used to generate synthetic thermal infrared (TIR) observations. The range of 'observed' intensities provides a rough hint of the atmospheric temperature range without any a priori knowledge. The equivalent brightness temperature (related to intensities by Planck's function) at certain wavenumbers can be used to estimate the atmospheric temperature at corresponding altitudes. To exploit the full information provided by the measurement we generalize Chahine's original approach and infer atmospheric temperatures from all spectral data using the wavenumber-to-altitude mapping defined by the weighting functions. Chahine relaxation allows an iterative refinement of this 'first guess'. Analysis of the 4.3 μm and 15 μm carbon dioxide TIR bands enables an estimate of atmospheric temperatures for rocky exoplanets even for low signal to noise ratios of 10 and medium resolution. Inference of Trappist-1e temperatures is, however, more challenging especially for CO2 dominated atmospheres: the 'standard' 4.3 μm and 15 μm regions are optically thick and an extension of the spectral range towards atmospheric window regions is important. If atmospheric composition (essentially CO2 concentration) is known, temperatures can be estimated remarkably well; quality measures such as the residual norm provide hints on incorrect abundances. In conclusion, temperature in the mid atmosphere of Earth-like planets orbiting cooler stars can be quickly estimated from thermal IR emission spectra with moderate resolution.
Franz Schreier, J. Lee Grenfell, Fabian Wunderlich, Thomas Trautmann
2023-05-31T19:12:00Z
http://arxiv.org/abs/2306.00131v1
Assessment of a Physics-based Retrieval of Exoplanet Atmospheric Temperatures from Infrared Emission Spectra ###### Abstract Atmospheric temperatures are to be estimated from thermal emission spectra of Earth-like exoplanets orbiting M-stars as observed by current and future planned missions. To this end, a line-by-line radiative transfer code is used to generate synthetic thermal infrared (TIR) observations. The range of "observed" intensities provides a rough hint of the atmospheric temperature range without any a priori knowledge. The equivalent brightness temperature (related to intensities by Planck's function) at certain wavenumbers can be used to estimate the atmospheric temperature at corresponding altitudes. To exploit the full information provided by the measurement we generalize Chahine's original approach and infer atmospheric temperatures from all spectral data using the wavenumber-to-altitude mapping defined by the weighting functions. Chahine relaxation allows an iterative refinement of this "first guess". Analysis of the 4.3 \(\mu\)m and 15 \(\mu\)m carbon dioxide TIR bands enables an estimate of atmospheric temperatures for rocky exoplanets even for low signal to noise ratios of 10 and medium resolution. Inference of Trappist-1e temperatures is, however, more challenging especially for CO\({}_{2}\) dominated atmospheres: the "standard" 4.3 \(\mu\)m and 15 \(\mu\)m regions are optically thick and an extension of the spectral range towards atmospheric window regions is important. If atmospheric composition (essentially CO\({}_{2}\) concentration) is known temperatures can be estimated remarkably well; quality measures such as the residual norm provide hints on incorrect abundances. In conclusion, temperature in the mid atmosphere of Earth-like planets orbiting cooler stars can be quickly estimated from thermal IR emission spectra with moderate resolution. keywords: Astrobiology - Radiative transfer - Techniques: spectroscopic - Planets and satellites: atmospheres - Infrared: planetary systems; Methods: data analysis ## 1 Introduction A quarter century after the detection of the first planet orbiting a main sequence star (Mayor & Queloz, 1995) exoplanetary science has developed astonishing quickly and is likely to continue on this path (Mfarand et al., 2021). Some 5000 exoplanets are known today1, several dedicated space missions are already in orbit for detection (Kepler/K2, Transiting Exoplanet Survey Satellite (TESS),...), and characterisation (CHaracterising Exoplanet Satellite (CHEOPS, Benz et al., 2018)), and some others are in development, e.g. PLAnetary Transits and Oscillations of stars (PLATO, Rauer et al., 2014), Atmospheric Remote-sensing Infrared Exoplanet Large-survey (ARIEL, Tinetti et al., 2018) or the proposed Large Interferometer for Exoplanets (LIFE, Defrere et al., 2018; Quanz et al., 2022, 2022). The successfully launched James Webb Space Telescope (JWST) with its several infrared (IR) instruments will considerably expand our knowledge for a wide range of exoplanets (Greene et al., 2016). Footnote 1: [http://exoplanet.eu/](http://exoplanet.eu/) Atmospheric characterisation by means of microwave, IR, or ultraviolet spectroscopy is performed routinely for Earth as well as for the Solar System planets and moons (Hanel et al., 2003) and its feasibility has also been demonstrated for extrasolar planets, mostly hot Jupiters (e.g. Madhusudhan & Seager, 2009). 
For atmospheric remote sensing "optimal estimation" (Rodgers, 1976, 2000) is by far the most common inversion technique and this Bayesian method has also been used in an exoplanet context, e.g. Irwin et al. (2008); Lee et al. (2012); Barstow et al. (2013, 2016) or more recently Shulyak et al. (2019). Bayesian methods such as optimal estimation (OE) heavily rely on a priori knowledge that is in general readily available for Earth's atmosphere as well as many Solar System bodies, but which is less well-known for exoplanets. Grid based search methods have become a standard approach for exoplanet characterisation; because of the need to perform millions to billions of radiative transfer forward model runs, sophisticated methods based on Monte Carlo Markov Chains and Nested Sampling are exploited to speed up the search (for recent reviews see e.g. Madhusudhan, 2018; Barstow & Heng, 2020; MacDonald & Batalha, 2023). Line et al. (2013) have performed an intercomparison of three retrieval codes utilising OE and two Monte Carlo methods and concluded that for good measurements (high spectral resolution and little noise) the estimates agree quite well. More recently, Barstow et al. (2020) compared three retrieval codes CHIMERA (Line et al., 2012), NEMESIS (Irwin et al., 2008), and Tau-REx (Waldmann et al., 2015) and reported mostly consistent results but emphasised the important role of radiative transfer modeling since the same inverse problem solver (nested sampling MultiNest, Feroz et al., 2009) is used by all three codes; hence the testing of additional inverse solvers is desirable. Likewise, the critical impact of the forward model was also emphasised by Barstow et al. (2022) who presented an intercomparison of five codes (ARCIS, Min et al., 2020, NEMESIS, Pyrat BAY, Cubillos and Blecic, 2021, Tau-REx, and POSEIDON, MacDonald and Madhusudhan, 2017). Pressure, temperature, and molecular concentrations depend on space and time), and a discretisation is mandatory for numerical analysis. The latitudinal and longitudinal dependencies are largely ignored (for a discussion of 2D or 3D effects on emission and transmission see e.g. Feng et al., 2016; Blecic et al., 2017; Taylor et al., 2020 and Caldas et al., 2019; MacDonald et al., 2020, respectively; see also Pluriel, 2023 for a recent review). Whereas assuming molecular concentrations which are constant in altitude is likely acceptable as a first step, the assumption of isothermal temperature profiles (e.g. Barstow et al., 2022) is problematic. Layer-by-layer representations (standard for Earth remote sensing) have been criticised as troublesome due to the limited information content (Line et al., 2013; Parmentier and Guillot, 2014), and parameterised representations of the vertical dependence of temperature are quite common (e.g. Madhusudhan and Seager, 2009; von Paris et al., 2013; Morley et al., 2014). A novel function expansion approach along with a standard nonlinear least squares solver has been studied by Schreier et al. (2020). Barstow and Heng (2020) recommended to "conduct retrievals... with a variety of temperature structures and... to investigate alternative approaches." Here we examine the feasibility of temperature sounding by means of an iterative relaxation developed in the late sixties by Chahine (1968, 1970, 1972) that is presented in several textbooks on atmospheric radiation (e.g. Goody and Yung, 1989; Liou, 1980; Hanel et al., 2003; Zdunkowski et al., 2007) but rarely used today. 
According to "Subsection 6.5.2 -- A physical approach to retrieval" in Goody and Yung (1989) it is a "simple idea easy to visualize and to extend to new circumstances" and exploits the properties of the weighting functions (the derivatives of the transmission w.r.t. altitude). Extensions and/or refinements of this approach were presented by Smith (1970) and Twomey et al. (1977). These relaxation methods have been used for analysis of IR and microwave observations of Earth's atmosphere, and for temperature sounding of Venus (e.g. Taylor et al., 1980), Mars (Lellouch et al., 1991, 1991; Haus and Titov, 2000), and Jupiter (Gautier et al., 1977, 1979). The organisation of the paper is as follows: The following section describes our methodology (radiative transfer, Chahine relaxation,...) along with the code and data. We demonstrate the feasibility to estimate the temperature for Earth-like exoplanets in Section 3: Computationally fast estimates exploiting a "mapping" of the wavenumber space to the altitude space as well as iterative refinements based on a comparison of observed and model spectra are presented. We continue with a discussion in Section 4 and give our conclusions in Section 5. ## 2 Theory ### Forward model -- infrared radiative transfer In a gaseous atmosphere with local thermodynamic equilibrium the upwelling intensity (radiance) \(I\) at wavenumber \(\nu\) is described by the Schwarzschild equation of radiative transfer (Goody and Yung, 1989; Hanel et al., 2003) \[I(\nu) = \mathcal{T}(\nu,0)\ B(\nu,T_{\mathrm{surf}})\ +\ \int_{0}^{\tau(\nu,z)}\ B(\nu,T(\tau^{\prime}))\ \exp\left(-\tau^{\prime}(\nu)\right)\mathrm{d}\tau^{\prime}\] \[= \mathcal{T}(\nu,0)\ B(\nu,T_{\mathrm{surf}})\ -\ \int_{0}^{\infty}B(\nu,T(\tau^{ \prime}))\ \frac{\partial\mathcal{T}(\nu,z^{\prime})}{\partial z^{\prime}}\ \mathrm{d}z^{\prime} \tag{2}\] where \(B=2hc^{2}\nu^{3}/\left[\exp\left(hc\nu/k_{\mathrm{B}}T\right)-1\right]\) is Planck's function at temperature \(T\) (\(h\), \(c\), \(k_{\mathrm{B}}\) are the Planck constant, speed of light, and Boltzmann constant, respectively). For the surface contribution \(I_{\mathrm{surf}}(\nu)\equiv\mathcal{T}(\nu,0)\ B(\nu,T_{\mathrm{surf}})\) we assume a Planck black body emission attenuated by the intermediate atmosphere and a temperature identical to the bottom of the atmosphere (BoA) temperature, i.e. \(T_{\mathrm{surf}}=T_{\mathrm{BoA}}=T(z=0)\). The monochromatic transmission \(\mathcal{T}\), closely related to the optical depth \(\tau\), between observer and altitude \(z\) is given by Beer's law \[\mathcal{T}(\nu,z) = \exp(-\tau(\nu,z))\] \[= \exp\left[-\int_{z}^{\infty}\sum_{m}k_{m}\big{(}\nu,p(z^{\prime}),T(z^{\prime})\big{)}\ n_{m}(z^{\prime})\ dz^{\prime}\right]\,\] with \(n_{m}\) the density of molecule \(m\), and \(k_{m}\) the pressure and temperature dependent absorption cross section obtained by summing over the contributions from many lines. For simplicity a vertical path is assumed; for a faint path with angle \(\theta\) in a plane-parallel atmosphere replace \(z^{\prime}\longrightarrow z^{\prime}/\cos(\theta)\). The finite spectral resolution of the instrument is taken into account by convolution of the monochromatic intensity (1) (or transmission (3)) with a spectral response function (SRF, e.g. Gaussian). 
The upper atmosphere has a low abundance of absorbers and is therefore almost transparent (i.e., transmission close to one); with an increasingly longer atmospheric path (decreasing \(z\)) attenuation becomes stronger especially at wavenumbers with strong absorption in the band or line center (Fig. 1 top-right). Here path length \(s\) refers to the distance to the observer, essentially at "infinity", in practice at top-of-atmosphere (ToA), and is linked to altitude via \(s=z_{\mathrm{ToA}}-z\), cf. Eq. (3). Viewed as a function of altitude the transmission decays rapidly to zero for these wavenumbers (Fig. 1 top-left), i.e. photons from the lower atmosphere cannot penetrate to space and the ToA radiation arises mainly from the upper atmospheric layers. The so-called weighting function,2 the partial derivative \(K(\nu,z)\equiv\partial\mathcal{T}(\nu,z)/\partial z\) in (2), quantifies the dominant contribution (or weight) of an altitude layer to the outgoing radiation (Fig. 1 bottom panels, see also Liou (1980, Fig. 7.6), Goody and Yung (1989, Fig. 6.17), and Hanel et al. (2003, Fig. 8.2.1)). Note that the weighting function is similar, but not identical to the temperature Jacobian \(\partial I(\nu)/\partial\mathcal{T}(z)\) that measures the radiation's sensitivity to changes of temperature (see Schreier et al., 2020, Fig. 10). Both the weighting functions and the Jacobian clearly demonstrate that the radiation carries little information of the lowermost and upper atmospheric layers. Footnote 2: A note on terminology: different terms are used for the derivative \(\partial\mathcal{T}/\partial z\): weighting functions Liou (1980); Hanel et al. (2003); Zdunkowski et al. (2007), kernel functions Goody and Yung (1989), or contribution functions. Moreover, "Jacobian" and “weighting function” are often used interchangeably; however, for temperature retrievals using nonlinear least squares the Jacobian is the partial derivative of the radiance w.r.t. the state vector \(\vec{x}\), \(\partial I/\partial x\), where \(\vec{x}\) is a discrete representation of the temperature profile. ### Inversion -- Chahine relaxation According to Rodgers (1976) "the intensity to be measured is \(\ldots\) a weighted mean of the Planck function profile with the weighting function". The bell-shape of the weighting function (Fig. 1 bottom-left) can be exploited for analysis of TIR spectra. Assuming a delta-function-like weighting function the Schwarzschild equation (1) reduces (Hanel et al., 2003) to \[I(\tilde{\nu})\ \approx\ I_{\rm surf}(\tilde{\nu})\ +\ B(\tilde{\nu},T(\tilde{ z}_{\nu})) \tag{4}\] with \(\tilde{z}_{\nu}\) the altitude where \(K(\nu,z)\) has a maximum for a given \(\nu\). A first approximation of the atmospheric temperature can thus be inferred from the observed Equivalent Brightness Temperature (EBT), i.e. 
\[T(\tilde{z}_{\nu})\ \approx\ T_{\rm B}(\tilde{\nu})\ \equiv\ B^{-1}\Big{(}I_{\rm obs}(\tilde{\nu})\ -\ I_{\rm surf}(\tilde{\nu})\Big{)} \tag{5}\]
\[=\ \frac{hc\tilde{\nu}/k_{\rm B}}{\log\Bigl{(}1\,+\,2hc^{2}\tilde{\nu}^{3}/\big{(}I_{\rm obs}(\tilde{\nu})\ -\ I_{\rm surf}(\tilde{\nu})\big{)}\Bigr{)}}\]

This estimate can be iteratively improved using a relaxation scheme originally proposed by Chahine (1968, 1970)

\[T_{i+1}(\tilde{z}_{\nu})\ \approx\ B^{-1}\left(\frac{I_{\rm obs}(\tilde{\nu})\ -\ I_{\rm surf}(\tilde{\nu})}{I_{\rm mod}(\tilde{\nu},T_{i})\ -\ I_{\rm surf}(\tilde{\nu})}\ B(\tilde{\nu},T_{i}(\tilde{z}_{\nu}))\right) \tag{6}\]

where \(T_{i}\) denotes the temperature for iteration \(i\) and \(I_{\rm mod}(\tilde{\nu},T_{i})\) the corresponding modelled radiance according to (1).

### Implementation

For our forward model we use Python for Computational ATmospheric Spectroscopy (Py4CATS, Schreier et al., 2019, available at [https://atmos.eec.dlr.de/tools/py4cats/](https://atmos.eec.dlr.de/tools/py4cats/)), a Python implementation of the Generic Atmospheric Radiation Line-by-line Infrared Code (GARLIC, Schreier et al., 2014). GARLIC has been thoroughly verified by intercomparison with other codes (e.g. Schreier et al., 2018) and validated by comparison to effective height spectra (Schreier et al., 2018) generated from Earth observations of the Atmospheric Chemistry Experiment -- Fourier transform spectrometer (ACE-FTS, Bernath, 2017). Py4CATS and GARLIC compute molecular absorption cross sections \(k_{m}\) assuming a (default) Voigt line shape (Schreier, 2018), where the wavenumber grid point spacing is adjusted automatically for each molecule, pressure and temperature to a fraction of the typical line width. Next, cross sections scaled by molecular number densities are summed up to absorption coefficients; then standard quadrature schemes are used to compute optical depths and radiances. Both observed and modeled spectra are convolved with a Gaussian spectral response function of constant width (with a default sampling of 4 points per half width at half maximum (HWHM)). Py4CATS makes heavy use of Numpy (van der Walt et al., 2011; Harris et al., 2020) (and occasionally SciPy (Virtanen et al., 2020) and MatPlotLib (Hunter, 2007)). The synthetic measurement spectrum is generated by adding generic Gaussian noise (generated by the Numeric Python numpy.random.randn function) independent of wavenumber. The equivalent brightness temperature spectrum is obtained by "inversion" (5) of Planck's function and depicted in Fig. 2 for noise-free simulated observations (see next subsection).

Figure 1: Transmission and weighting functions in the longwave TIR (CO\({}_{2}\) \(\nu_{2}\) band at 15 \(\mu\)m, nadir view, Earth's midlatitude summer (MLS) atmosphere (Anderson et al., 1986), Gaussian response function of half width \(\Gamma=0.25\,{\rm cm}^{-1}\) corresponding to a resolution \(R\approx 2800\)). The upper panels show the transmission as a function of wavenumber for several atmospheric paths (right) and transmission vs. altitude (related to path length \(s=z_{\rm ToA}-z\)) for selected wavenumbers (left). The lower panels show individual weighting functions for selected wavenumbers (left) and a contour plot. Numbers in the left legends indicate the wavenumber [\({\rm cm}^{-1}\)] and the corresponding peak altitude [km] (bottom-left).

### Data

Atmospheric data for Earth are taken from the "AFGL Atmospheric Constituent Profiles" atmospheres (Anderson et al., 1986) providing pressure and temperature vs.
altitude along with concentration profiles for 28 gases including water vapor, carbon dioxide (with 360 ppm volume mixing ratio (VMR)), ozone, and methane. Atmospheric data for the assumed Earth-like planets around M-dwarfs (henceforth called "M-Earths") are taken from Wunderlich et al. (2019). These temperatures and concentrations are inferred from a 1D photochemistry model (Gebauer et al., 2018) coupled to a climate model and are defined on 64 levels with a ToA pressure of about 0.08 mb. The former work assumed hypothetical planets with Earth's properties orbiting different M-dwarf stars placed at the location where the planet receives modern Earth's instellation. For all M-Earths the surface temperature is approximately 288 K by appropriate selection of the orbital radius, and the CO\({}_{2}\) VMR is about 355 ppm in the lower atmosphere.

Data for Trappist-1e from Wunderlich et al. (2020) are derived from the Berlin 1D steady-state, cloud-free, radiative-convective photochemical model 1D-TERRA. This dataset comprises dry&dead, wet&dead, and wet&live scenarios for CO\({}_{2}\) surface partial pressures of \(10^{-3},10^{-2},10^{-1},\) and \(10^{0}\) bar (corresponding to VMRs of approximately \(10^{-3},10^{-2},10^{-1},\) and \(0.5\cdot 10^{0}\); see Table 11 in Wunderlich et al. (2020); the CO\({}_{2}\) level is indicated by the exponent in subsection 3.7). For comparison, the Trappist-1 planet of the M-Earth dataset (Wunderlich et al., 2019) has a VMR of 355 ppm in the lower atmosphere. Note that this planet is a purely hypothetical "Earth" orbiting Trappist-1, whereas the Trappist-1e scenarios are based on orbital and stellar data (see Section 3.2 and Table 9 in Wunderlich et al. 2020).

The Chahine approach delivers temperatures only at a small set of altitudes, but data on a moderately dense grid from BoA to ToA are required for the radiative transfer modeling, and we will use function expansion for inter/extrapolation (see subsection 3.5). For the generation of the singular vectors to be used as base vectors for this expansion we use the set of 42 Earth atmospheric profiles collected by Garand et al. (2001) augmented with the eleven M-Earth temperatures regridded to a uniform altitude grid with 2 km steps (Schreier et al., 2020, subsection 3.2). Note that the first six Garand atmospheres correspond to the AFGL data.

Figure 2: "Observed" equivalent brightness temperatures \(T_{\rm B}\) (5) of hypothetical Earth-like planets orbiting the M-dwarfs indicated in the legend. Earth's MLS atmosphere is shown for comparison (see subsection 2.4). Gaussian response function of HWHM \(\Gamma=0.25\) cm\({}^{-1}\) (longwave, left) and \(\Gamma=1.0\) cm\({}^{-1}\) (shortwave, right). For clarity no noise has been added. The TIR-LW and TIR-SW spectra comprise 1441 and 281 pixels, respectively. The numbers in the legend list the minimum and maximum atmospheric temperature [K]. Wavelengths are given at the top (numerically \(\lambda[\mu\)m] = \(10^{4}/\nu[\) cm\({}^{-1}]\)).

Figure 3: ToA intensities for M-Earths. The gray-shaded area indicates the Planck function \(B(\nu,T)\) for the minimum and maximum atmospheric temperatures \(T_{\rm min}\approx 171\) K for GJ 832 and \(T_{\rm max}\approx 286\) K by "construction". (Computed with GARLIC using atmospheric data from 1D-TERRA (Wunderlich et al., 2020); Gaussian response function with resolution \(R=1000\).)
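A minimal sketch of the singular-vector basis construction follows (illustrative only; whether the profile matrix is mean-centred before the decomposition is an assumption, and the random stand-in data merely emulate the 42 Garand plus eleven M-Earth profiles on the common 2 km grid):

```python
import numpy as np

def temperature_basis(profiles, n_base=4):
    """Build base vectors for temperature inter-/extrapolation from a matrix of
    representative profiles (rows: atmospheres, columns: altitude levels).
    Returns the mean profile and the leading right singular vectors of the
    mean-subtracted matrix."""
    mean_profile = profiles.mean(axis=0)
    _, _, vt = np.linalg.svd(profiles - mean_profile, full_matrices=False)
    return mean_profile, vt[:n_base]

# Illustration with stand-in random data; the actual input would be the
# 42 Garand profiles augmented with the eleven M-Earth temperatures.
rng = np.random.default_rng(1)
demo_profiles = 250.0 + 25.0 * rng.standard_normal((53, 31))
mean_T, base_vectors = temperature_basis(demo_profiles, n_base=4)
```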
Molecular line parameters are taken from the Hitran database; instead of the most recent data (Gordon et al., 2022) (clearly mandatory for analysis of real observations) we use data from the initial 1986 release (Rothman et al., 1987) to speed up the computations. (See the further discussion in subsection 4.3.) Only the main IR absorbers are considered, i.e. CO\({}_{2}\) and the interfering species H\({}_{2}\)O, CH\({}_{4}\), and O\({}_{3}\). See also Schreier et al. (2020) for more details on atmospheric and molecular data and a discussion of some of our approximations and assumptions.

## 3 Results

### First preliminary constraints

Inspection of the Schwarzschild equation (1) can be used for a first estimate of the range of atmospheric temperatures. Replacing the height-dependent temperature \(T(z)\) in the Planck function by the minimum atmospheric value \(T_{\rm min}\), the equation simplifies to \(I(\nu)=B_{\rm BoA}\,{\rm e}^{-\tau}+\int B\,{\rm e}^{-\tau^{\prime}}\,{\rm d}\tau^{\prime}\geq B_{\rm min}\left({\rm e}^{-\tau}+(1-{\rm e}^{-\tau})\right)=B_{\rm min}\), hence the ToA radiance is greater than (or equal to) the minimum Planck function. The upper limit can be derived in a similar manner, hence \(B_{\rm min}\leq I(\nu)\leq B_{\rm max}\). The intensities shown in Fig. 3 confirm these constraints, i.e. the lower and upper "Planck envelope" can be used as a preliminary estimate of the temperature range. The minimum and maximum atmospheric temperatures can then be readily estimated from the corresponding EBT minima and maxima. The EBT spectra in Fig. 2 are, except for resolution, essentially a zoom-in on the intensity spectra of Fig. 3 transformed via Eq. (5). In addition to the strong absorption at 15 \(\mu\)m due to CO\({}_{2}\) the ozone fundamental band at 1042 cm\({}^{-1}\) (9.6 \(\mu\)m) is clearly visible in Fig. 3.

### Mapping Wavenumbers to Altitudes

Before estimating atmospheric temperatures from IR spectra according to the recipe of Eq. (5) several issues have to be addressed. First a set of appropriate wavenumber-altitude pairs has to be identified. Suitable wavenumbers are conveniently searched for in absorption band(s) of a molecule with well-known concentration and ideally little variability, e.g. the shortwave or longwave TIR bands of CO\({}_{2}\). The corresponding altitudes are given by the location of the weighting function maxima, hence depend on the properties of the transmission \(\mathcal{T}\) and as a consequence on atmospheric temperature, pressure, and composition. Obviously these data are unknown for exoplanets, but fortunately the weighting functions of Earth-like exoplanets (with N\({}_{2}\)-O\({}_{2}\) dominated atmospheres) are rather similar and closely resemble weighting functions of typical Earth climates, see Fig. 4. Accordingly we will use the \(\nu\leftrightarrow z\) mapping of Earth's atmosphere, e.g. for midlatitude and subarctic summer or winter (MLS, MLW, SAS, SAW) in the \(\nu_{2}\) longwave (LW, wavelength \(\lambda\approx 15\,\mu\)m) and \(\nu_{3}\) shortwave (SW, \(\lambda\approx 4.3\,\mu\)m) bands of CO\({}_{2}\). Note that these spectra also depend on resolution (compare Figures 1 and 2 of Schreier et al. (2020)): a lower resolution leads to smoother spectra and reduces the sensitivity to upper atmospheric layers (see the discussion in subsection 4.1).
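The preliminary "Planck envelope" constraint and the \(\nu\leftrightarrow z\) mapping can be sketched as follows (illustrative Python in cgs units; the transmission array `trans` is an assumed precomputed input, e.g. from a line-by-line code, and for the envelope the raw intensity is inverted, i.e. without the surface correction of Eq. (5)):

```python
import numpy as np

H_PLANCK, C_LIGHT, K_BOLTZ = 6.62607015e-27, 2.99792458e10, 1.380649e-16  # cgs constants

def brightness_temperature(nu, intensity):
    """Equivalent brightness temperature: inversion of the Planck function,
    T_B = (h c nu / k_B) / ln(1 + 2 h c^2 nu^3 / I)."""
    return (H_PLANCK * C_LIGHT * nu / K_BOLTZ) / np.log1p(
        2.0 * H_PLANCK * C_LIGHT**2 * nu**3 / intensity)

def planck_envelope(nu, intensity):
    """Preliminary constraint: the minimum and maximum EBT of the observed
    spectrum bracket the range of atmospheric temperatures."""
    t_b = brightness_temperature(nu, intensity)
    return t_b.min(), t_b.max()

def nu_z_map(z, trans):
    """nu <-> z mapping: for every spectral pixel the altitude where the
    weighting function dT/dz peaks; trans has shape (n_nu, n_z), z is the
    ascending altitude grid."""
    weights = np.gradient(trans, z, axis=1)
    return z[np.argmax(weights, axis=1)]
```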
### First Guesses -- Selected Data in TIR-LW

Having identified the translation from wavenumber to altitude space (henceforth called "mapping") it appears to be straightforward to infer the temperatures from the observed spectrum using Eq. (5). However, both real spectra as well as our theoretical spectra simulating planned instrumental measurements (see Fig. 2) are contaminated by noise, and exploiting single data pairs (\(\bar{\nu},T_{\rm B}\)) is likely to lead to a "noisy" temperature profile. This is confirmed by Fig. 5 where the temperature retrieval for a hypothetical Earth-like planet orbiting AD Leo (Wunderlich et al., 2019) is illustrated: the inset shows the eight temperature values taken from the observed noise-free observation (intensity spectrum \(I(\nu)\) converted to EBT \(T_{\rm B}(\nu)\) according to (5)). However, estimating these temperatures from the noisy EBT spectrum would clearly lead to a zigzag temperature profile. To compensate for the noise the average of some neighboring pixels from the observed EBT spectrum can be used instead. Figure 6 depicts temperature estimates for the eleven M-Earth atmospheres (taken from Wunderlich et al. (2019) and already used in Schreier et al. (2020)) and clearly demonstrates that averaging EBTs from larger windows (e.g. 10 or 20 pixels) leads to the inferred temperature profile becoming smoother. Averaging 20 pixels leads to a significantly reduced zigzag; however, profiles estimated from two different measurements (model spectrum contaminated by two randomly generated noise vectors) are still distinct.

Figure 4: Weighting function maxima in the longwave TIR (CO\({}_{2}\) \(\nu_{2}\) band at 15 \(\mu\)m with Gaussian response function of half width \(\Gamma=0.25\,\)cm\({}^{-1}\), left) and shortwave TIR (CO\({}_{2}\) \(\nu_{3}\) band at 4.3 \(\mu\)m with \(\Gamma=1.0\) cm\({}^{-1}\), right). The length of these spectra (i.e. number of data points) is identical to those of Fig. 2. The black dots indicate manually selected \(\nu\leftrightarrow z\) pairs (as also shown in Fig. 1 lower left).

For the M-Earths these first guess temperatures are encouraging, for some cases almost "perfect" (e.g. AD Leo, GJ 644, and GJ 832 even without averaging), but in a few other cases (e.g. GJ 551, GJ 876, and Trappist-1) deviations to the true profile are clearly visible. Moreover, temperatures of Trappist-1 in the lowermost altitudes are significantly underestimated. The profiles of GJ 551, GJ 876, and Trappist-1 show oscillations of up to a few Kelvin. The cold trap of Trappist-1 is not very strong, i.e. the minimum temperature is only 20 K cooler than the maximum temperature (288 K at BoA). The tropopause temperatures of GJ 551 and GJ 876 are slightly cooler (258 K and 251 K, respectively). Moreover, the local temperature maximum of Trappist-1 in the mid atmosphere at about 38 km is only modest and not reproduced by the EBT estimate. Overall however, the results are rather encouraging.

### First Guesses -- Exploiting the entire TIR-LW

Despite the promising results there are some caveats. Exploiting just a few data points implies that the majority of data remain unused. Moreover, the hand-picked selection of \(\nu\leftrightarrow z\) mapping pairs is somewhat arbitrary, and the estimated atmospheric temperatures are therefore likely to change with different mappings. Consider for example the mapping pairs (677 cm\({}^{-1}\), 18 km), (698 cm\({}^{-1}\), 12 km), (711 cm\({}^{-1}\), 7 km) (cf. Fig.
4): One issue is whether the mapping wavenumber should be chosen to lie in the valley or on the peak of the absorption line. Figure 5: Estimate of AD Leo atmospheric temperature using a \(\nu\leftrightarrow z\) mapping with 8 pairs for MLS (see Fig. 4). The main plot shows the ideal noise-free “observed” EBT (blue), the noise contaminated EBT spectrum (\(S/N=10\), cyan), and selected EBT values taken from the noise-free spectrum; the inset compares the estimated temperatures with the true profile. (TIR-LW 660–750 cm\({}^{-1}\); Gaussian with \(\Gamma=0.25\) cm\({}^{-1}\).) Figure 6: First estimate of M-Earths’ atmospheric temperatures using a \(\nu\leftrightarrow z\) mapping with 8 pairs for MLS (see Fig. 4). The red and magenta curves labeled “avg=20” are identical except for the different noise vector. (Spectra as in Fig. 5) Figure 7: Estimate of AD Leo atmospheric temperatures using all \(\nu\leftrightarrow z\) pairs for MLS. The colors and symbols for the \(S/N=10\) observation indicate the altitude range where radiation is mainly coming from according to the weighting functions. (For example green and cyan triangles for the lowest and highest altitude. Inset similar to Fig. 5, spectra as above.) Choosing a point somewhere in-between is less sensitive to resolution and less likely sensitive to noise. Furthermore, the number of pixels which are used for averaging to account for the noise is limited, in order to avoid overlap of spectral regions. This is especially the case near the band center. Figure 4 indicates that (except for the bottom and top altitudes) several wavenumbers are sensitive to a particular altitude. For example, the atmospheric layer around 32 km influences the radiance at several wavenumbers in the 660 - 670 cm\({}^{-1}\) interval and near 720 cm\({}^{-1}\) (for some planets only). In order to address this issue, we therefore use the mean EBT of all pixels contributing to a particular altitude according to the weighting function peak height spectrum (Fig. 4). Because these peak altitudes rarely coincide exactly to a given grid point we accept all pixels within a given _tolerance interval_\(\delta z\), i.e. to estimate the atmospheric temperature at an altitude \(z\) we take the average of all EBTs at wavenumbers with a weighting function peak height in the interval \([z-\delta z,z+\delta z]\). This concept is illustrated in Fig. 7 for a 5 km tolerance: Altitudes beyond 40 km are seen only in a narrow interval around 667 cm\({}^{-1}\), and the average of all EBT's in this interval is interpreted as atmospheric temperature in the 40-50 km altitude range. Likewise, temperatures for altitudes below 10 km are estimated from EBT's at wavenumbers beyond 700 cm\({}^{-1}\) (except for the peak at 720 cm\({}^{-1}\)). Figure 8 compares results for all M-Earths and for various \(\delta z\)-intervals. In accordance with Fig. 4 no mappings are found for the upper atmosphere, and the highest altitude point estimated depends on the magnitude of the tolerance \(\delta z\). In the "middle atmosphere" the estimated temperatures are roughly equivalent. In some cases (e.g. GJ 664 or GJ 832) where temperatures were slightly underestimated using 8 (\(\nu,z\)) pairs only, the deviation is reduced or eliminated. For the generalised Chahine case the retrieved temperatures are often smoother compared to Fig. 6. 
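A minimal sketch of this tolerance-interval averaging is given below (illustrative; the array names are assumptions, `peak_alt` being the weighting-function peak-height spectrum and `ebt` the observed equivalent brightness temperatures):

```python
import numpy as np

def first_guess_temperatures(z_grid, peak_alt, ebt, dz=5.0):
    """For each altitude grid point, average the EBTs of all spectral pixels whose
    weighting-function peak height falls in the tolerance interval [z-dz, z+dz];
    grid points without any contributing pixel are left as NaN."""
    t_guess = np.full(z_grid.shape, np.nan)
    for i, z in enumerate(z_grid):
        inside = np.abs(peak_alt - z) <= dz
        if inside.any():
            t_guess[i] = ebt[inside].mean()
    return t_guess
```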
Moreover, the extended retrievals are less sensitive to noise: temperatures estimated with the 5 km tolerance from two observations with different noise vectors are largely identical except for the highest values (at 35 km) of GJ 551 and Trappist-1 (compare the two "avg=20" estimates in Fig. 6). The warm M-Earths (GJ 551, GJ 876, and Trappist-1) remain problematic with clear oscillations above about 30 km and considerably underestimate near BoA especially for the 1 and 2 km tolerances, but otherwise the retrieved and true temperature profiles are in good agreement with deviations to \(T_{\rm true}\) less than ten Kelvin. The oscillations of the profiles estimated with the 1 or 2 km tolerances can be interpreted as follows: A closer look to Fig. 1 shows that the bell-shaped weighting functions have a finite width of several kilometers in the lower atmosphere and almost 10 kilometers in mid to the upper atmosphere. Hence retrieving temperatures with one kilometer resolution is questionable. ### Iterative Refinements -- TIR-Lw The temperature estimates presented in the previous subsection are not always satisfactory, but they can be used as initial guesses for the Chahine relaxation scheme (6) or other iterative solvers like nonlinear least squares (e.g. Schreier et al., 2020). Clearly the main advantage is the computational speed, i.e. the whole "retrieval" is simply the inversion (5) of the Planck function and therefore can be performed within fractions of a second. However, because the peak heights of the weighting functions do not cover the entire altitude range, this initial guess temperature values cannot readily be used as an input for radiative transfer modeling. Moreover, with the 4 or 5 km tolerances only few temperature values are estimated in the mid atmosphere. Extrapolation appears to be a tempting solution, but this is well known to be problematic. In Schreier et al. (2020) (Fig. 5) we have shown that Earth-like temperature profiles can be represented as a linear combination of some base vectors resulting from a singular value decomposition (SVD) of a large matrix comprising "representative" temperatures (comprising the 42 Garand atmospheres and the 11 M-Earth atmospheres introduced in subsection 2.4). Hence we will use here a linear least squares fit to determine the expansion coefficients for the "Chahine initial guess" profile and then "extrapolate" this profile to the entire altitude range. In addition this is used for interpolation to a dense altitude grid in the lower and mid atmosphere appropriate for radiative transfer modeling. (For brevity this will be called "extrapolation" henceforth.) Before starting the iterations according to (6) one more issue has to be discussed: how to stop the process. For nonlinear least squares solvers such as MINPACK (More, 1978) or NL2SOL (Dennis, Jr. et al., 1981, 1982) two convergence criteria are usually employed: firstly the change of the estimated state vector (here temperature) is small and the change of the residual norm (the norm of the model minus observed signal vector, here the radiance spectrum) is small (where "small" should be related to \(S/N\)). Inspection of the radiance residual norm is clearly a natural choice for least squares. Exploiting the deviation of the fitted temperature to the true temperature or the deviation of the model to the "true" spectrum is clearly impossible for analysis of true observations and is hence not used as a convergence criterions. 
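The following sketch outlines the singular-vector "extrapolation" and the relaxation loop (illustrative only; `forward_model`, `planck` and `inv_planck` are placeholder callables -- in our study Py4CATS provides the radiative transfer -- and only the temperature-change criterion of the two stopping criteria is shown):

```python
import numpy as np

def expand_profile(t_sparse, mean_profile, base_vectors):
    """'Extrapolation': least-squares fit of expansion coefficients to the sparse
    (NaN-padded) Chahine first guess, then evaluation of the expansion on the
    full altitude grid of the singular-vector basis."""
    ok = np.isfinite(t_sparse)
    coeff, *_ = np.linalg.lstsq(base_vectors[:, ok].T,
                                t_sparse[ok] - mean_profile[ok], rcond=None)
    return mean_profile + coeff @ base_vectors

def chahine_relaxation(nu, i_obs, i_surf, t_first, forward_model, planck, inv_planck,
                       max_iter=10, dt_stop=5.0):
    """Relaxation according to Eq. (6): scale the Planck emission at each mapped
    altitude by the ratio of observed to modelled (surface-corrected) radiance;
    stop when the largest temperature update is below dt_stop [K]."""
    temp = t_first.copy()
    for _ in range(max_iter):
        i_mod = forward_model(temp)      # modelled radiances at the mapped wavenumbers
        ratio = (i_obs - i_surf) / (i_mod - i_surf)
        temp_new = inv_planck(nu, ratio * planck(nu, temp))
        if np.nanmax(np.abs(temp_new - temp)) < dt_stop:
            return temp_new
        temp = temp_new
    return temp
```

In this sketch the forward model is assumed to apply the expansion internally whenever a complete profile on a dense grid is needed.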
For the analysis of the TIR-LW spectra using iterative Chahine relaxation (6) we start with, e.g., Earth's SAW atmospheric data; the Figure 8: Estimate of M-Earths’ atmospheric temperatures using all MLS \(\nu\leftrightarrow 2\) pairs. For the 1.0 km and 5.0 km tolerance mappings 18 and 4 altitudes have been found with \(5\leq z\leq 39\) km and \(5\leq z\leq 35\) km, respectively. (Spectra as above.) SAW temperature is needed to compute the atmospheric transmission \(\mathcal{T}\) and surface emission \(I_{\text{surf}}\) in Eq. (5). The following stopping criteria have been used: a maximum change of the updated temperature less than 5 K or a maximum change of the EBT of less than 5 K. For all M-Earths the relaxation stops after two or three iterations (see Fig. 9 for an illustrative example). The results (Fig. 10) are consistent with those of Fig. 8: for most planets the temperature is retrieved quite well for altitudes below about 40 km, but GJ 551 and Trappist-1 (and to a lesser extent GJ 1214) appear to be problematic. Nevertheless, for all planets the cold trap temperatures closely resemble the true one, and even for Trappist-1 the absolute temperature differences are small. Further runs with other Earth model data (MLS, MLW, SAS, and tropical) essentially confirm these findings, see Fig. 10. For planets with very low minimum temperatures (AD Leo, GJ 644, or GJ 832) all models lead to almost identical temperatures with small differences only in the upper atmosphere. Differences are clearly visible for GJ 1214, GJ 551, GJ 876, and Trappist-1, i.e. planets with relatively high minimum temperatures. Although the most favorable model is unknown for real observations we can nevertheless use the minimum radiance residuum norm \(\|\Delta I\|=\|I_{\text{obs}}-I_{\text{mod}}\|\) as a hint for selection of the "best" fit. In all but two cases this selected solution corresponds to the optimum solution according to the minimum norm of the EBT difference spectrum \(\Delta T_{\text{B}}\): for GJ 551 and GJ 876 MLS give the smallest EBT difference, whereas SAS and TRO yield the smallest radiance difference (however, both norms are identical within four digits). Using the mean EBT difference as criterion gives also different solutions for GJ 581. In addition to the retrievals using the five Earth models Fig. 10 also shows the temperature fitted with the correct exoplanet pressure and composition (clearly unknown in case of real observations); the initial guess temperature required to compute the weighting functions and total atmospheric transmission \(\mathcal{T}\left(\nu,0\right)\) is set to the mean EBT. Interestingly the norm of the radiance difference with the correct exoplanet composition is smaller than the norm with the best Earth composition only for some of the planets. However, Table 1 indicates that the variability of these residual norms is always small, the difference of the largest and smallest norm is less than a few percent, and \begin{table} \begin{tabular}{l c c c c c c c} \hline & & \multicolumn{3}{c}{\(S/N=10\)} & \multicolumn{3}{c}{\(S/N=100\)} \\ & \(\Delta T\) & corr. & min & max & corr. 
& min & max \\ \hline AD Leo & 100.6 & 249.7 & 245.3 & 249.5 & 42.3 & 41.2 & 49.2 \\ GJ1214 & 35.1 & 330.1 & 328.2 & 331.7 & 58.94 & 60.4 & 71.5 \\ GJ176 & 62.0 & 28.33 & 282.9 & 285.5 & 40.73 & 43.8 & 51.6 \\ GJ436 & 56.7 & 286.8 & 286.4 & 288.7 & 32.14 & 36.6 & 47.4 \\ GJ551 & 27.3 & 348.4 & 350.1 & 351.3 & 71.32 & 72.8 & 83.1 \\ GJ581 & 51.5 & 304.7 & 302.7 & 305.1 & 34.03 & 39.0 & 48.7 \\ GJ644 & 106.9 & 249.7 & 245.9 & 250.1 & 47.77 & 45.0 & 54.7 \\ GJ667c & 69.8 & 281.3 & 276.4 & 282.1 & 36.33 & 34.8 & 48.5 \\ GJ832 & 114.2 & 255.8 & 248.4 & 258.7 & 60.31 & 47.8 & 67.6 \\ GJ876 & 35.2 & 313.2 & 312.4 & 313.7 & 55.08 & 57.8 & 67.7 \\ Trappist1 & 20.2 & 371.1 & 372.0 & 376.1 & 90.76 & 92.8 & 101.8 \\ \hline \end{tabular} \end{table} Table 1: Comparison of the radiance residuum norms (in radiance units \(\text{erg}/\text{s}/\text{(cm}^{2}\,\text{sr}\,\text{cm}^{-1})\)) for iterative Chahine relaxation with the correct atmospheric pressure and concentrations with fits using one of Earth’s model atmospheres. The ”min” and ”max” columns give the range of residuum norms for these fits. The second column lists the range of atmospheric temperatures, i.e. \(\max\left(T\right)-\min\left(T\right)\) in Kelvin. Figure 10: Comparison of atmospheric temperatures using Chahine relaxation for all Earth data (specified in the legend). The red solid line indicates the best fit according to the norm \(\|\Delta I\|\) of the residual radiance spectrum (in units \(\text{erg}/\text{s}/\text{(cm}^{2}\,\text{sr}\,\text{cm}^{-1})\), see legend). The green temperature shows the fit using the correct atmospheric densities. The last subplot shows the temperature differences (true – fit) for the best Earth model. (All \(\nu\leftrightarrow z\) pairs with 5 km tolerance, extrapolation with 4 base vectors, TIR-LW spectra as above.) Figure 9: Chahine relaxation for AD Leo starting with SAW pressure, temperature (green), and concentrations (\(S/N=100\), TIR-LW). The cyan crosses show the initial guess (5) (similar to Fig. 7) and the cyan dotted line its extrapolation. Intermediate temperature profiles (6) are shown in yellow, the numbers in the legend indicate the maximum temperature change (\(T_{t+1}-T_{1}\)) and maximum EBT deviation (observed - model). The red curve shows the final temperature profile with the maximum EBT deviation in the legend. the residual of all fits is very large because of the significant noise of the synthetic intensities (i.e. the residual is dominated by noise). For synthetic measurements with less noise the residual norm of fits with different atmospheric models shows larger variations up to 41% for \(S/N=100\) (because of the smaller noise the stopping criterium has been tightened to 3 K temperature change). Note that the variation of the norms is especially large for planets with a large range of atmospheric temperatures. Again the correct atmosphere does not always deliver the best fit, but the best Earth model and the correct densities always give similar temperature profiles. Table 1 shows that the correct densities do not yield a model spectrum closer to the observed spectrum for the four M-Earths with the largest temperature gradients \(\Delta T\) (AD Leo, GJ 644, GJ 667c, and GJ 832). At first glance this "failure" appears to be quite disturbing and we interpret this as follows: The residual norm is clearly the essential number characterising least squares, i.e. the quantity to be minimised. 
However, it is not the decisive quantity for Chahine relaxation; obviously the ratio of observed to model spectra in (6) should finally be close to one, but there is no uniquely defined number for the progress of the relaxation and quality of the solution. These results may also be considered as a hint that Chahine relaxation relies on a strong simplification, i.e. the intensity (1) at a particular wavenumber can be approximated by the Planck emission at a particular altitude according to (4). This assumption clearly ignores the shape of the weighting functions (compare Fig. 1 bottom-left) and is apparently problematic when the temperature differences are large. Furthermore, large temperature differences make the extrapolation more difficult. Nevertheless, the last subplot of Fig. 10 demonstrates that in the mid atmosphere (about 15 to 35 km) the temperature can be estimated within \(\pm 5\) K.

### TIR-SW

The shortwave TIR appears to be less favourable for temperature retrieval for several reasons: The radiance values in the TIR-LW are higher compared to the TIR-SW (Fig. 11 in Schreier et al. (2020)), the TIR-LW is more favourable because of the higher star-planet contrast, and the TIR-LW is also less affected by scattering. Moreover, the TIR-SW weighting function peak heights do not cover altitudes above 30 km (Fig. 4), and for shorter wavelengths reflection of thermal radiation at the surface is becoming increasingly important. On the other hand, the TIR-SW weighting functions indicate some more sensitivity to the lowest atmosphere, and in fact IR instruments of meteorological satellites used for sounding of Earth's temperature (Menzel et al., 2018), e.g. AIRS (Chahine et al., 2006) and IASI (Hilton et al., 2012), exploit both regions. Figure 11 depicts the temperature inferred from the equivalent brightness temperatures (5) in the TIR-SW using all \(\nu\leftrightarrow z\) pairs for MLS. Considering altitudes within one or two kilometers around a weighting function peak height gives zigzag temperature profiles as in the longwave analysis, cf. Fig. 8. For the 5 km tolerance the profiles are smoother, however only four temperature values for the upper troposphere and lower stratosphere are estimated. Compared to the TIR-LW estimates, Fig. 8, the profiles appear to be somewhat smoother and closer to the true temperature. Trappist-1, however, is reasonable only in the 10 - 20 km range. For other Earth mappings (MLW, SAS, SAW, tropical) temperatures are almost identical (not shown). In Fig. 12 results from temperature estimates using the SW and LW interval individually are compared with those from the combined spectrum (with 1722 data points, cf. Fig. 2). The "data fusion" product clearly benefits from the sensitivity of the longwave spectrum beyond 30 km; however, it also inherits the underestimated temperature in the lowest atmospheric levels, especially for GJ 1214, GJ 551, GJ 876, and Trappist-1. A more sophisticated data fusion approach might possibly be able to avoid the shortcomings of the TIR-SW in the upper atmosphere and TIR-LW in the lower atmosphere.

### Trappist-1e

The Trappist-1 planetary system (Gillon et al., 2017) orbiting a nearby M-dwarf has attracted considerable attention because several terrestrial-type exoplanets lie in the circumstellar habitable zone (e.g. Barstow and Irwin, 2016; Grimm et al., 2018; Krissansen-Totton et al., 2018; Lustig-Yaeger et al., 2019; Fauchez et al., 2020; Krissansen-Totton and Fortney, 2022).
Planets e and f have been the object of numerous studies (e.g. Barstow and Irwin, 2016; Morley et al., 2017; Mikal-Evans, 2021). Recently, Wunderlich et al. (2020) have used the newly developed radiation-convection-photochemistry model 1D-TERRA to study the feasibility of atmospheric characterisation, in particular the possibility of finding any evidence for an ocean or biosphere. Here we use these dry&dead, wet&dead, and wet&live scenarios with varying CO\({}_{2}\) levels (see subsection 2.4) for further tests of the Chapline methodology. We generate synthetic observations in the same way as for the M-Earths (Fig. 2), i.e. monochromatic intensity spectra according to (1) are convolved with a Gaussian and contaminated with noise. Figure 13 is similar to Fig. 12 and shows atmospheric temperatures retrieved from the TIR-LW and TIR-SW equivalent brightness Figure 11: Estimate of atmospheric temperatures using all \(\nu\leftrightarrow z\) pairs for MLS and TIR-SW (2350 – 2420 cm\({}^{-1}\), \(S/N=10\), Gaussian with \(\Gamma=1.0\) cm\({}^{-1}\).) temperatures (5) using the MLS weighting function peak altitudes (cf. Fig. 4) with a 4 km tolerance (without iteration). The LW and SW estimates are quite similar and relatively smooth. For the CO\({}_{2}\) VMR = \(10^{-3}\) atmospheres (bottom row) the temperature near BoA is somewhat underestimated, but in the middle atmosphere up to about 40 km the estimates are close to the truth. However, for the atmospheres with larger CO\({}_{2}\) concentrations the temperatures are clearly too cool. Due to the large mixing ratios the atmospheres are optically thick in the spectral windows considered here, and radiation (photons) from the lower warm altitudes cannot propagate upwards to ToA (and the observer). This interpretation is confirmed by the range of equivalent brightness temperatures given in the plot, i.e. the warm low atmosphere does not show up in the EBT spectra. Analysis of effective height spectra (Fig. 14) that might be available from primary transit spectroscopy also indicates that the TIR-LW interval (660 - 750 cm\({}^{-1}\)) is insensitive to the lower atmosphere. Furthermore, the success of the temperature estimates for the VMR = \(10^{-3}\) cases suggests that the \(\nu\leftrightarrow z\) mappings of Earth's atmospheres are not adequate for Trappist-1e atmospheres with more carbon dioxide. Fig. 15 shows that with higher CO\({}_{2}\) concentrations the location of the weighting function maxima move upwards by 5 or even 10 km, whereas the "nature" of the planet (wet vs. dry, dead vs. alive) does not have a big impact. In summary, Figures 14 and 15 suggest that atmospheres with abundant CO\({}_{2}\) are not well reproduced from TIR-LW and TIR-SW spectra especially in the lower regions (Fig. 13). Therefore we also considered an extended spectral range along with the correct weighting function peak heights (i.e. weighting functions in the extended TIR-LW computed with the true concentrations) which considerably improved the estimate (esp. in the mid atmosphere, Fig. 13 red dots; however, the sensitivity to the lower atmosphere is lost, i.e. no mappings with altitudes below 10 or 20 km for the 10% and 50% CO\({}_{2}\) planets). Having demonstrated that atmospheric temperatures can be estimated from equivalent brightness temperatures (5) using an extended TIR-LW interval if the composition is known, we now examine iterative Chahine relaxation (6). 
The assumption of known abundances appears reasonable since some objects will have information from transmission spectra. In particular quantifying CO\({}_{2}\) might be possible due to e.g. its dominant absorption bands (clearly visible around 660 cm\({}^{-1}\) and 2400 cm\({}^{-1}\) in Fig. 14); concentration estimates for other gases however might be more difficult. First we examine whether we can then distinguish the temperature profiles for the three scenarios (wet vs. dry, dead vs. live), i.e. we compare retrievals using different surface and lower atmosphere scenarios (for example, for the wet&live planet "observation" (right column in Fig. 16) we also investigate fits using boundary parameters of the dead scenarios with the correct CO\({}_{2}\)). Figure 12: Comparison of atmospheric temperatures estimated from the TIR-SW, the TIR-LW, and the concatenated spectrum. (All \(\nu\leftrightarrow z\) pairs with a 4 km tolerance for MLS, \(S/N=10\)) Figure 13: Trappist-1e temperatures from TIR-LW (660 – 750 cm\({}^{-1}\) with \(\Gamma=0.25\) cm\({}^{-1}\), cyan diamonds) and TIR-SW (2350 – 2420 cm\({}^{-1}\) with \(\Gamma=1.0\) cm\({}^{-1}\), green squares) with entire \(\nu\leftrightarrow z\) MLS map. Third estimate with an extended TIR-LW (LWX, 660 – 850 cm\({}^{-1}\), red circles) and the correct map (i.e. correct CO\({}_{2}\)). (All spectra with \(S/N=10\)). The number \(n\) immediately following the planetary scenario in each subplot title indicates the CO\({}_{2}\) surface partial pressure, i.e. \(p_{\rm CO2}\approx 10^{-n}\) bar. (Regarding notation see subsection 2.4.) Minimum and maximum atmospheric temperatures are listed in the title, the range of the EBTs of the noise-free LW, SW, and LWX spectra are given inside the plot. The gray shaded area shows the EBT range for the SW and LWV combination. Figure 16 (top) shows that the effect of changing the scenario is largest for the high CO\({}_{2}\) abundance: in the lower atmosphere temperature is significantly underestimated for the two dead scenarios which is likely related to the fact that the effective heights (Fig. 14) and weighting functions (Fig. 15) do not show any sensitivity to the lowest atmosphere (esp. for the wet cases). For the observation of the wet&live VMR = 0.5 planet (top right) both wet models yield an only moderately underestimated temperature. The difficulty of the high CO\({}_{2}\) atmospheres is also demonstrated by the increased number of iterations required for convergence. Fits with large residuum norm show the largest deviations between true and retrieved temperature in the mid atmosphere. For moderate and low CO\({}_{2}\) (other rows in Fig. 16) the impact of the scenarios is only weak, and only one or two iterations are necessary. Results here are close to the true profile (blue line) in the low and mid atmosphere, although deviations are clearly evident in the upper atmosphere where the weighting function exhibits little sensitivity. A second set of runs has been conducted assuming MLS molecular profiles scaled to the correct Trappist-1e column density, see Fig. 17. Similar to Earth the Trappist-1e atmospheres have CO\({}_{2}\) VMR profiles almost constant in altitude, and the H\({}_{2}\)O VMRs are strongly decreasing for pressures below 100 mb (see Fig. 7 in Wunderlich et al.2020). (In contrast the CO\({}_{2}\) VMR of all M-Earths increases by \(10-20\%\) for \(p<100\) mb (see Fig. 4 in Wunderlich et al.2019).) 
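Scaling an Earth (e.g. MLS) profile to a prescribed column density, as used above, can be sketched as follows (illustrative; units and array names are assumptions):

```python
import numpy as np

def scale_to_column(z_km, density, target_column):
    """Scale a molecular number-density profile (molecules/cm^3 on an altitude
    grid in km) so that its vertical column matches target_column [molecules/cm^2]."""
    column = np.trapz(density, z_km * 1.0e5)   # convert km to cm for the integral
    return density * (target_column / column)
```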
However, at least for Earth, the total optical depth is essentially dominated by the CO\({}_{2}\) contribution (see Fig. A3 in Schreier et al.2020), which suggests that for these Trappist-1e atmospheres with even higher CO\({}_{2}\) concentrations this dominance will be even stronger. Hence, the assumption of isopfofiles does not have a strong impact on the quality of the retrievals, i.e. temperature in the mid atmosphere can be estimated with little deviation from the truth. Figure 17 also shows the importance of the "extrapolation" scheme based on the expansion using singular vectors. Although a temperature profile representation using only two base vectors delivers temperatures close to the true even for the lowest atmosphere (including the VMR = \(0.5\cdot 10^{0}\) cases, top row), a profile expansion with three or four base vectors appears to be more reasonable. Using five base vectors works well for VMR \(\leq 10^{-2}\), but clearly fails for very high CO\({}_{2}\) concentrations. The radiance residual norms shown in the plot also indicate that three or four base vectors can be used reliably except for the high CO\({}_{2}\) concentrations. Figure 16: Chahine iterative estimate of Trappist-1e temperatures with correct CO\({}_{2}\) abundance and varying scenarios: wet & live (red), wet & dead (cyan long dashed), and dry & dead (green dashed). SVD “extrapolation” with four base vectors. Extended TIR-LW spectra as above. The legend lists the number of iterations and the radiance residual norm. Figure 14: Trappist-1e effective heights for a Gaussian response function with \(R=1000\). (Computed with GARLIC.) Figure 15: Trappist-1e altitudes of weighting function maxima in the extended LW-TIR (Gauss \(\Gamma=0.25\) cm\({}^{-1}\)). The wet&dead weighting function maxima are largely similar to the wet&live maxima. Further tests have also been conducted with reduced or increased CO\({}_{2}\) concentrations. For moderate changes with quarter, half, double and quadruple isopffiles the retrieved temperature profiles are still close to the true temperature. (See the further discussion in subsection 4.7.) ## 4 Discussion ### Spectral resolution The retrievals reported so far have been conducted assuming moderate resolution TIR-LW and TIR-SW spectra. The motivation for \(R>2500\) has been discussed in Schreier et al. (2020, Figure 1) where we showed that for decreasing resolution the sensitivity to upper atmospheric layers decreases: for a Gaussian response function with HWHM \(\Gamma=0.25\,\mathrm{cm}^{-1}\) (corresponding to a resolution \(R=2800\) at \(700\,\mathrm{cm}^{-1}\) (wavelength \(14.3\,\mathrm{\mu m}\))) the maximum of the weighting function peak height spectrum (cf. Fig. 4) reaches altitudes above \(40\,\mathrm{km}\); a coarser resolution reduces the peak height (for \(\Gamma=1.0\,\mathrm{cm}^{-1}\) (\(R=700\)), \(\Gamma=2.0\,\mathrm{cm}^{-1}\) (\(R=350\)), and \(\Gamma=7.0\,\mathrm{cm}^{-1}\) (\(R=100\)) the maxima lie at \(40\,\mathrm{km}\), \(24\,\mathrm{km}\), and \(18\,\mathrm{km}\), respectively). The TIR-LW and TIR-SW CO\({}_{2}\) bands can be observed by the Medium Resolution Spectrometer (MRS) of the JWST Mid Infrared Instrument (MIRI) with a resolving power \(R\) of about 2500 (Rieke et al., 2015) (the Low Resolution Spectrometer (LRS) only sees the TIR-SW band with a resolution \(R=100\)). However, Morley et al. (2017) caution that "for temperate planets spectroscopy with JWST-MIRI will likely be unrealistically expensive". 
In the JWST Guaranteed Time Observations (GTO) program several exoplanets have already been observed with the Near Infrared Imager and Slitless Spectrograph (NIRISS), NIR Spectrograph (NIRSPEC), and NIR Camera (NIRCam) instruments, that can deliver spectra with wavelengths up to \(5\,\mathrm{\mu m}\) at low and medium resolution. For an assessment of the impact of resolution on the retrieval, synthetic observations have been generated with different resolutions for selected exoplanets. Noise has been adjusted assuming a square root relationship between resolution and \(S/N\). The results depicted in Fig. 18 confirm the expectations discussed above, i.e. with decreasing resolution information on the upper atmosphere is diminishing. In particular, for \(R=500\) Chahine estimates according to (6) are only available for altitudes below \(20\,\mathrm{km}\) (shown as cyan circles) and "extrapolation" is clearly problematic. ### Surface emission Apart from the type of inversion (least squares vs. Chahine relaxation) the methodology used here is largely identical to that of our previous feasibility study. However, there is one large difference worth discussing: in Schreier et al. (2020) we have ignored surface emission. Obviously the first term in (1) is mandatory for modeling IR spectra in atmospheric window regions where the atmosphere is relatively transparent such as the \(800-1200\,\mathrm{cm}^{-1}\) interval for Earth (except for O\({}_{3}\) absorption around \(9.6\,\mathrm{\mu m}\)). For the terrestrial exoplanets, transmission \(\mathcal{T}\approx 0\) in the CO\({}_{2}\) bands considered in Schreier et al. (2020) (LW: \(660-720\,\mathrm{cm}^{-1}\)), hence the surface contribution has no impact on mid atmospheric temperature estimates. Figure 17: Chahine iterative estimate of Trappist-1e temperatures starting with MLS \(\nu\leftrightarrow z\) mapping and molecular profiles with correct column density. The retrieved temperatures correspond to SVD “extrapolation” with two (blue dash-dotted), three (red dashed), four (cyan long dashed), and five (green dotted) base vectors. (Numbers in legend list the radiance residual norm, spectra as above.) Figure 18: Impact of resolution on Chahine iterative temperatures estimates. MLS initial guess atmosphere, SVD “extrapolation” with 4 base vectors. The red diamonds and cyan circles show the final update according to the relaxation equation (6) for \(R=2800\) and \(R=500\), respectively. TIR-LW (M-Earths) or TIR-LWX (Trappist-1e). The solid, dashed etc. lines show the corresponding extrapolated temperature profiles. However, for wavenumbers beyond \(720\,\mathrm{cm}^{-1}\) corrections cannot be neglected, so an estimate of the surface temperature is required. If wavenumbers with transmission close to one are observed, the corresponding radiance can be used to estimate \(T_{\mathrm{surf}}\). Here we approximate surface temperature by the largest equivalent brightness temperature observed, assuming that the maximum atmospheric temperature corresponds to the BoA temperature and is approximately equal to the surface temperature. (Although warmer temperatures are possible at the stratopause, these do not contribute strongly to the spectrum as indicated by the weighting functions (cf. Fig. 1). Due however to noise, the largest EBT may not always be the best estimate of the surface temperature. Alternatively the maximum of a smoothed EBT spectrum (e.g. by running averages) or the mean of the largest ten etc. values could be used. 
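A simple sketch of such a smoothed-maximum estimate reads (illustrative; the window length is an arbitrary choice):

```python
import numpy as np

def surface_temperature_estimate(ebt, window=10):
    """Estimate the surface (BoA) temperature from a noisy EBT spectrum:
    smooth with a running mean and take the maximum, which is less sensitive
    to individual noisy pixels than the raw maximum."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(ebt, kernel, mode="valid")
    return smoothed.max()
```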
### Molecular spectroscopy data Similar to Schreier et al. (2020) we have used the very first edition of the Hitran database (Rothman et al., 1987) rather than the latest version to speed up the computations. This simplification appears to be justified for the feasibility study presented here, but is clearly inadequate for analysis of real observations. In our previous study we discussed the then latest 2016 version (Gordon et al., 2017), here we provide a brief update for the current Hitran 2020 (Gordon et al., 2022). For a realistic modeling of IR emission spectra in the \(660-750\,\mathrm{cm}^{-1}\) interval line data in an enlarged interval have to be considered to properly account for line wing contributions and convolution with the spectral response function. Hence lines in the range \(648.75-761.25\,\mathrm{cm}^{-1}\) are read: Hitran 2020 knows 449445 lines of 29 molecules, which reduces to 79394 lines of the five main IR absorbers (CO\({}_{2}\), O\({}_{3}\), N\({}_{2}\)O, CH\({}_{4}\), H\({}_{2}\)O). In contrast, Hitran 86 returns 16003 lines of 4 molecules; furthermore removing weak lines results in 4255 lines total including 2827 lines of carbon dioxide. Despite this drastic reduction of active lines the final spectra do not change significantly: For the MLS atmosphere the peak height of the weighting function is modified by a few hundred meters only (except for \(\Delta z\approx 0.8\,\mathrm{km}\) at the high wavenumber end of the interval); Likewise, the difference of the equivalent brightness temperature spectra is usually less than \(1\,\mathrm{K}\) (with a maximum \(|\Delta T_{\mathrm{B}}|\approx 1.8\,\mathrm{K}\) at \(718\,\mathrm{cm}^{-1}\)). Air-broadened half-widths are listed in the Hitran database since its beginnings half a century ago, whereas self-broadening was introduced later with the 1986 release (Rothman et al., 1987). Collision (pressure) broadening parameters for perturbers other than "air" (i.e. Earth's N\({}_{2}\), O\({}_{2}\)) can be important for modeling spectra of other planets and were included only recently in Hitran(Gordon et al., 2017). For the terrestrial N\({}_{2}-\)O\({}_{2}\) dominated atmospheres considered here the Hitran 86 broadening parameters are clearly appropriate (with the possible exception of the 50% CO\({}_{2}\) Trapisti-1e scenarios). In this context it might also be worth noting that the line strengths in Hitran are also tuned to Earth, i.e. the strengths are scaled by the relative natural abundance of the isotopes in Earth's atmosphere.3 Footnote 3: [https://hitran.org/docs/iso-meta/](https://hitran.org/docs/iso-meta/) In addition to line transitions continua can be also important, especially water self and foreign continua (Shine et al., 2012) and collision induced absorption (CIA, Richard et al., 2012; Karman et al., 2019). Comparison of EBT spectra (MLS atmosphere, \(R=100\)) computed with and without H\({}_{2}\)O, CO\({}_{2}\), O\({}_{2}\) and N\({}_{2}\) corrections (Clough et al., 1989) indicates negligible differences in the TIR-LW regions considered here; however, in the TIR-SW differences up to almost \(8\,\mathrm{K}\) show up around \(2400\,\mathrm{cm}^{-1}\) in the right wing (short wavelength) of the CO\({}_{2}\) band. (For the entire TIR (\(500-2500\,\mathrm{cm}^{-1}\)) maximum differences up to \(\approx 10\,\mathrm{K}\) at \(1600\,\mathrm{cm}^{-1}\) can be seen.) ### Toa 60 vs 120 km In Schreier et al. (2020) all runs have been performed with a Toa at \(60\,\mathrm{km}\), mainly because the Garand et al. 
(2001) atmospheres are defined only for pressures down to \(0.08\,\mathrm{mb}\) (about \(60\) to \(65\,\mathrm{km}\)), but also to speed up the computations. Likewise, the M-Earth atmospheres of Wunderlich et al. (2019) have ToA altitudes in the range \(61\) to \(74\,\mathrm{km}\). However, for IR radiative transfer modeling altitudes up to about \(100\,\mathrm{km}\) might be important (the AFGL data (Anderson et al., 1986) are given up to \(120\,\mathrm{km}\)). For an assessment of the importance of the upper atmosphere we have compared radiances for \(60\) and \(120\,\mathrm{km}\) ToA and the MLS atmosphere. In the TIR-LW region the EBT difference is usually less than \(1\,\mathrm{K}\) except for the strong radiance peak at \(668\,\mathrm{cm}^{-1}\) and in the right wings of the CO\({}_{2}\) \(\nu_{2}\) band.

### Geometry

Observations of exoplanet thermal emission will deliver disk averaged spectra only, whereas we have used a single line-of-sight assuming a strict nadir view. A common approximation for disk averaged spectra is to model a slant path with about \(35^{\circ}\) from nadir. The equivalent brightness spectra for the vertical and slant paths differ by less than \(2\,\mathrm{K}\) in the center of the TIR-LW band (sensitive to the mid atmosphere) with somewhat larger differences for \(\nu>700\,\mathrm{cm}^{-1}\). On the other hand, the peak altitudes of the weighting functions are shifted downwards by some \(10\,\mathrm{km}\) (see Fig. 1 lower left panel). As a consequence, the magnitude of the retrieved temperature will not change significantly, but the associated altitudes could change by several kilometers. It is also important to note that the preliminary constraints discussed in subsection 3.1 are independent of the viewing angle.

### Auxiliary data, initial guess and a priori

For the solution of "real world" inverse problems a large variety of auxiliary data are required, e.g. instrument parameters or observation geometry. In the case of atmospheric IR spectroscopy these auxiliary data also comprise molecular optical properties (e.g. line lists or k-distributions); for temperature sounding these data also include pressure and molecular concentrations. These auxiliary data are often denoted "a priori". However, for optimal estimation (OE; Rodgers, 1976, 2000) this solely refers to the knowledge of the unknown state vector (e.g. temperature) prior to any measurement (all other auxiliary parameters are treated as "model parameters"). OE provides an estimate close to the a priori state vector where the relative weight of observation and a priori is determined by the respective measurement and a priori covariance matrices. (Note that this weighting is independent of the weighting function \(\partial\mathcal{T}/\partial z\) defined in subsection 2.1.) Obviously a priori knowledge of exoplanetary atmospheric properties is scarce (see e.g. Shulyak et al., 2019 for a thorough discussion). Barstow (2020) noted that "the dependence of OE on an informative prior means that it is less appropriate for exoplanets", which motivated the upgrade of the NEMESIS code (Irwin et al., 2008) with MultiNest (Feroz et al., 2009). Monte Carlo type retrievals are less sensitive to a priori, but for robust retrievals thousands to millions of computationally expensive forward model evaluations are required (Fortney et al., 2021). Some prior idea of the atmospheric state is required for Chahine-type inversions to compute the weighting functions and the total atmospheric transmission attenuating the surface emission (cf.
Eqs. (1) and (5), (6)). However, our simulations have demonstrated that the inferred M-Earth temperatures are largely independent of the assumed Earth atmospheric model and/or initial guess temperature profile. Parameterisations as proposed by Madhusudhan and Seager (2009); von Paris et al. (2013); Morley et al. (2017a) or Fossati et al. (2020) are frequently used for temperature retrievals (for a detailed discussion see Section 3 of Barstow and Heng, 2020). Obviously, these parameterisations are motivated by physical insight and can also be considered as a kind of a priori knowledge. For the iterative solution of nonlinear inverse problems by least squares (e.g. Schreier et al., 2020), OE (e.g. Irwin et al., 2008) (technically a constrained least squares) or Chahine relaxation, an initial guess is mandatory. For the Earth-like exoplanets considered here a climatological Earth temperature profile or an isoprofile defined by the mean of the observed EBT can be used to start the iteration. Alternatively, an atmospheric characterisation resulting from coupled photochemistry-climate codes such as 1D-TERRA (Wunderlich et al., 2020) can be used. ### Atmospheric composition Thermal emission IR spectra are sensitive to temperature but provide little information on atmospheric (molecular) composition. However these data are mandatory for radiative transfer modeling. In particular knowledge of the CO\({}_{2}\) concentration is important for temperature sounding exploiting its strong TIR-LW and TIR-SW bands. Actually the concentrations of all molecules absorbing in the spectral region to be analysed are required: in the case of the two TIR bands, H\({}_{2}\)O, ozone (O\({}_{3}\), longwave only) and methane (CH\({}_{4}\), shortwave) absorption are relevant. Note that H\({}_{2}\)O does not have a large impact on the TIR-LW weighting functions. Assuming CO\({}_{2}\) concentrations which are too low or too high likely has an impact on the quality of the retrievals. Further test runs have been performed with a setup similar to Fig. 17, but with CO\({}_{2}\) isoprofiles scaled by factors from 0.25 up to 10 in the model atmosphere. Figure 19 demonstrates that in most cases the residual norm is larger for incorrect CO\({}_{2}\) mixing ratios; in some cases the iteration fails (e.g. for the 0.1% CO\({}_{2}\) Trappist-1e atmospheres, usually because of negative temperatures), the number of iterations becomes larger, or the temperature profile shows stronger oscillations (zigzag profiles are unrealistic because the observed radiance is an integral (1) that is insensitive to small-scale "perturbations" of temperature and/or molecular densities). An independent estimate of concentrations is therefore desirable. Carbon dioxide has several strong bands throughout the IR: in addition to the TIR bands there are further strong rotation-vibration bands in the shortwave and near IR (SWIR, NIR) at 2.7 \(\mu\)m, 2.0 \(\mu\)m, and 1.6 \(\mu\)m. In fact the NIR bands as well as the TIR-SW band enabled the identification of CO\({}_{2}\) in the atmosphere of the Saturn-mass exoplanet WASP-39b with various JWST instruments (Ahrer et al., 2023; Alderson et al., 2023; Rustamkulov et al., 2023) and the latter two SWIR bands are used operationally for monitoring of Earth's CO\({}_{2}\) budget by several satellite missions such as OCO-2/3 or GOSAT (Crisp et al., 2004; Kuze et al., 2009). Regarding exoplanets the analysis of effective height / transit depth spectra provided by primary transit observations is therefore valuable. 
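Such a scan over CO\({}_{2}\) scale factors, selecting the scaling whose modelled spectrum has the smallest radiance residual norm, can be sketched as follows (illustrative; `retrieval` is a placeholder for the Chahine relaxation plus forward model, and the factors shown are a subset of those tested):

```python
import numpy as np

def select_co2_scaling(i_obs, retrieval, scale_factors=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Run the (placeholder) retrieval for each scaled CO2 isoprofile and keep the
    scaling whose modelled spectrum minimises the residual norm ||I_obs - I_mod||."""
    best = None
    for factor in scale_factors:
        temperature, i_mod = retrieval(co2_scale=factor)
        norm = np.linalg.norm(i_obs - i_mod)
        if best is None or norm < best[0]:
            best = (norm, factor, temperature)
    return best   # (residual norm, selected scale factor, retrieved temperature)
```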
Moreover, the ratio of observed signals in two appropriate filter bands is a sensitive indicator of CO\({}_{2}\) atmospheric concentration (Rieke et al., 2015). For the joint analysis of primary and secondary transits see also Griffith (2014).

## 5 Summary and conclusions

Temperature profiles of Earth-like exoplanets orbiting M-dwarfs have been retrieved from synthetic thermal IR emission spectra using Chahine-type relaxation methods along with a line-by-line radiative transfer code. The essential assumption is that for a particular wavenumber the outgoing radiation arises from a corresponding, well-defined altitude: In the band centre absorption is strong and only radiation from the upper atmosphere will be seen remotely; in the band wings absorption is weak and even photons from the lower atmosphere can traverse the entire atmosphere to the ToA and beyond. The feasibility of this method has been demonstrated using synthetic noise-contaminated observations of various Earth-like planets with N\({}_{2}\)-O\({}_{2}\) dominated atmospheres orbiting M-dwarfs and for Trappist-1e planets of different surface conditions (wet/dry and dead/alive) and different carbon dioxide concentrations up to about 50%. (Note that the assumption of N\({}_{2}\)-O\({}_{2}\) dominance is questionable for the high CO\({}_{2}\) Trappist-1e scenarios.) The equivalent brightness temperature (EBT) spectrum corresponding to the observed intensity can be used to deliver temperature estimates extremely quickly: The minimum and maximum EBT values provide first constraints on the range of atmospheric temperatures independent of any a priori knowledge. The EBT in the carbon dioxide absorption bands can be readily "translated" (within seconds) to mid atmospheric temperatures. Using a handful of manually selected data points is problematic because of the noise, hence averaging of some neighboring spectral pixels is required (cf. Fig. 5).

Figure 19: Impact of CO\({}_{2}\) concentration. MLS initial temperature, SVD "extrapolation" with 4 base vectors, \(S/N=20\), TIR-LW (M-Earths) or TIR-LWX (Trappist-1e) with Gaussian \(0.25\,\mathrm{cm}^{-1}\). The legend indicates the factor used to scale the CO\({}_{2}\) VMR isoprofile of the model atmosphere and the radiance residual norm.

However, exploiting the entire intensity spectrum and the corresponding wavenumber-altitude mapping, as defined by the weighting functions, is preferable and clearly advantageous (Fig. 7). Furthermore, using both TIR-LW and TIR-SW data can be helpful to overcome the limited altitude sensitivity range of one region alone (Fig. 12). In any case, the quality of this guess is however related to an appropriate knowledge of the CO\({}_{2}\) concentrations, in particular in the case of the CO\({}_{2}\)-rich Trappist-1e planets. Iterative relaxation allows a refinement of the first guess, however, the success of the improvement relies on the inter/extrapolation used to complete the limited set of \(T\) data. Note that an update of the temperature will change cross sections, transmission, and weighting functions and hence also the wavenumber-altitude mapping. Compared to classical nonlinear least squares fitting a clear advantage of the Chahine relaxation (or the related Smith and Twomey schemes) is the fact that no Jacobian (derivatives of the intensity \(I\) w.r.t. the state vector elements) is required.
This is clearly beneficial when finite differences are used to approximate the Jacobian, as for (nonlinear) optimal estimation or in the SVEEEETIES study (Schreier et al., 2020) using Py4CAtS (GARLIC exploits algorithmic differentiation, where the overhead for temperature Jacobians is only about a factor 2, see Schreier et al. (2015)). On the other hand, the time required for nonlinear least squares fitting (proportional to the number of iterations) can be reduced if a good initial guess is provided. Of course, the "Chahine first guess" can be used as initial guess for any other iterative optimisation solver. In conclusion, we have used an extension of the classical Chahine approach in an exoplanet context for, to our knowledge, the first time. This approach can deliver stable and reasonable temperature estimates for terrestrial-type exoplanets quickly (first guess in seconds, iterative refinements in minutes), even for challenging cases such as atmospheres with weak inversions and large CO\({}_{2}\) abundances. It is also attractive in view of the growing awareness of the "carbon footprint of large scale computing" and green computing (e.g. Jahnke et al., 2020). Hence it allows interesting new insight and provides a valuable addition to existing methods. ## Acknowledgements The foundations for this research were the DFG projects SCHR 1125/3-1, RA-714/7-1, and RA 714/9-1. We acknowledge the support of the DFG priority programme SPP 1992 "Exploring the Diversity of Extrasolar Planets (GO 2610/2-1)". J.L.G. thanks ISSI Team 464 for useful discussions. We thank the German Research Foundation (DFG) for financial support via the project The Influence of Cosmic Rays on Exoplanetary Atmospheric Biosignatures (Project number 282759267). Finally, we would like to thank Joanna Barstow for the constructive review. ## Data Availability Atmospheric data have been taken from Wunderlich et al. (2019, 2020) and molecular spectroscopy data have been obtained from Hitran ([https://hitran.org/](https://hitran.org/)); the data analysis software is an extension of the Py4CATS package (Schreier et al., 2019) available at [https://atmos.eoc.dlr.de/tools/Py4CAts/](https://atmos.eoc.dlr.de/tools/Py4CAts/).
2309.05814
Reinforcement Learning for Supply Chain Attacks Against Frequency and Voltage Control
The ongoing modernization of the power system, involving new equipment installations and upgrades, exposes the power system to the introduction of malware into its operation through supply chain attacks. Supply chain attacks present a significant threat to power systems, allowing cybercriminals to bypass network defenses and execute deliberate attacks at the physical layer. Given the exponential advancements in machine intelligence, cybercriminals will leverage this technology to create sophisticated and adaptable attacks that can be incorporated into supply chain attacks. We demonstrate the use of reinforcement learning for developing intelligent attacks incorporated into supply chain attacks against generation control devices. We simulate potential disturbances impacting frequency and voltage regulation. The presented method can provide valuable guidance for defending against supply chain attacks.
Amr S. Mohamed, Sumin Lee, Deepa Kundur
2023-09-11T20:47:11Z
http://arxiv.org/abs/2309.05814v1
# Reinforcement Learning for Supply Chain Attacks Against Frequency and Voltage Control ###### Abstract The ongoing modernization of the power system, involving new equipment installations and upgrades, exposes the power system to the introduction of malware into its operation through supply chain attacks. Supply chain attacks present a significant threat to power systems, allowing cybercriminals to bypass network defenses and execute deliberate attacks at the physical layer. Given the exponential advancements in machine intelligence, cybercriminals will leverage this technology to create sophisticated and adaptable attacks that can be incorporated into supply chain attacks. We demonstrate the use of reinforcement learning for developing intelligent attacks incorporated into supply chain attacks against generation control devices. We simulate potential disturbances impacting frequency and voltage regulation. The presented method can provide valuable guidance for defending against supply chain attacks. Supply chain attacks, frequency control, voltage regulation, reinforcement learning, cyberattacks, cyber-physical security ## I Introduction The growing resources available to cybercriminals and the financial sponsoring of cyberattacks are producing advanced cyberattacks against industrial control systems. Particularly, supply chain attacks pose significant threats to critical infrastructure as they are difficult to protect against. These attacks involve cybercriminals compromising the supply chain of third-party software and equipment to inject vulnerabilities into devices either before their shipment or through subsequent firmware updates [1]. Leveraging third-party equipment makes it challenging for the targeted facility to anticipate and detect the attack. Following a breach, the malware can spread to other equipment to give the cybercriminals a stronger foothold on the system and/or autonomously disrupt the operation of the infected system or cause damage to the system over a long time. The potential impact of supply chain attacks is evident in the Stuxnet malware attack against the Iranian Natanz nuclear facility, which leveraged four zero-day exploits and vulnerabilities in Microsoft and Siemens software and equipment [2]. Stuxnet infected nuclear centrifuges' programmable logic controllers, issuing malicious control commands that caused damage to the centrifuges while hiding its activity to avoid detection [3]. Supply chain attacks can enable cyberattackers to target control loops that are infeasible to compromise remotely. This facilitates the execution of stealthy and damaging attacks. Demonstrating the impact of compromising physical devices, the Aurora generator test showed that breaching control of the circuit breaker of a generator can enable a cybercriminal to do irreparable damage to the generator - causing significant financial loss to owner-operators and the electric grid [4]. The modernization of electric grid infrastructure and replacement of outdated and obsolete equipment is expected to introduce vulnerabilities and provide opportunities for supply chain attacks against the electric grid. Given the significant potential for supply chain attacks to disrupt the electric grid, it is essential to understand the attack strategies that might be employed in a supply chain attack. Consequently, we emphasize the need to model intelligent supply chain attacks to study potential physical impacts as a step towards improving the security posture of the power system. 
Reinforcement learning (RL) presents a promising method to model and learn intelligent cyber-physical attacks. The authors in [5] used RL to develop malware that infects substations, falsifying power measurements to compromise state estimation. The malware causes voltage sags and subsequent potential cascading failure induced by low-voltage protection generation tripping. Further studying the use of RL to strategize cyber-physical power system attacks, the authors in [6, 7, 8, 9] developed RL agents to synthesize line-switching attacks, which exploit how sudden changes in grid topology can lead to cascading failures and blackout. Wang _et al._[10] proposed a combined RL-generated line-switching attack involving a physical attack that trips a transmission line and a simultaneous cyberattack that fakes its outage signal on a different line to cause improper dispatch actions. The above studies only consider the impact of attacks on state estimation and dispatch subsequent to it, without consideration of the dynamic behavior of the electric grid. Considering attack impact on dynamic frequency regulation, Mohamed _et al._[11] developed RL agents to synthesize attacks compromising load-frequency control, including load-switching and false data injection attacks. In this paper, we apply RL to model intelligent supply chain attacks. Our work expands on previous research on RL for power system cyber-physical security by addressing a gap in assessing how cyberattacks targeting voltage regulation can disrupt power system dynamics (rather than state estimation and static power flow). Since voltage control is dispatched following optimal power flow, the literature studying voltage attacks has only considered voltage dispatch falsification on a relatively long time-scale (5 minutes per state estimation). However, supply chain attacks that involve infecting devices responsible for automatic voltage regulation (AVR), power system stabilization (PSS), or voltage synchronization can execute voltage attacks at a much shorter timescale and exploit the fast voltage regulation dynamics to execute aggressive destabilizing attacks. Further, coordinated supply chain attacks - that might occur following malware propagation or within an advanced persistent threat - can cause more disruptive impacts. Hence, we develop RL-based malware to execute supply chain attacks and demonstrate potential impacts on voltage and frequency regulation and stability. Further, we demonstrate the impact of combined supply chain attacks compromising multiple devices simultaneously. The paper outline is as follows: In Section II, we discuss supply chain attacks in the context of compromising generation control devices. In Section III, we formulate supply chain attacks as a RL problem. In Section IV, we simulate the impact of several test cases on power system frequency and voltage stability. The conclusion is in Section V. ## II Problem Formulation Intelligent electronic devices (IED), including the AVR, PSS, and governor (refer to Fig. 1), regulate generation to maintain power system stability [12]. Local to their generation facility, these IEDs communicate within the plant local-area-network (LAN) via Ethernet, acquiring local measurements and sending local messages [13]. The lack of remote communication to these IEDs makes it largely infeasible for cyberattackers to remotely compromise these IEDs [13]. 
Nevertheless, these IEDs are vulnerable to cyberattacks that infiltrate the plant LAN, malware that is introduced via infected USB devices [13], or malware that is programmed to the IEDs in a supply chain attack. Similar to Stuxnet, the malware can be programmed to search for specific software vulnerabilities on the IEDs. Next, the malware can execute a rootkit attack, modifying a portion of the IED program. To remain hidden, the rootkit can include further specifications to remain dormant, sparsely falsify control to disrupt the grid, or report normal operation values to operators and device logs. The malware can disrupt the grid by inducing voltage and frequency fluctuations. Small fluctuations can degrade power quality, cause invisible damage to power system equipment and consumer digital devices over a long period of time, reduce equipment operational life and power system efficiency, and cause flicker [14]. High fluctuations can force equipment tripping, destabilize the power system, and cause cascading failures. Our research applies RL to develop the attack policy that the malware can upload into the infected IEDs. In RL, an agent is trained to learn a policy, mapping a set of observations to actions, to maximize a reward signal in an environment. The RL agent's learnt policy can be employed as the malware attack policy that will map the measurement inputs (observations) to the IED to attack actions. The agent is rewarded positively in relation to its disruption of the power system. The RL's policy involves a mathematical function that can be programmed into the malware. For example, the PSS monitors the local frequency for oscillations and sends control signals to the generator AVR to quickly respond to and dampen these oscillations. As illustrated in Fig. 2, an RL-based PSS malware would involve mapping the local voltage and frequency measurements to malicious control signals to the AVR. Similarly, RL-based rootkit infections of the governor (refer to Fig. 3) and AVR would involve mapping frequency and voltage observations, respectively, to falsified measurements or malicious control set-points. We also consider simultaneous supply chain attacks attacks, which might happen following the infection of multiple IEDs at a plant or multiple plants. The malware at one location can remain dormant for a long time until other devices or facilities are infected, and then a simultaneous attack is launched. While we do not consider mitigation strategies in this paper, the ultimate goal of modelling attack policies is to guide the design of defense strategies to enhance power system security. In this paper, we present a preliminary study to demonstrate the use of RL for devising supply chain attacks and rely on future work to scale the presented method to more complex power systems and develop defenses. ## III RL for Supply Chain Attacks We use the RL Proximal Policy Optimization (PPO) algorithm to model the malware. The PPO algorithm has a few advantages that makes it suitable for modelling the malware [15]: First, the algorithm has been shown to achieve state-of-the-art performance on a wide range of continuous control tasks. Likewise, the malware is developed for continuous observations (voltage and frequency measurement inputs to the IEDs) and continuous actions (false control signals). Second, in terms of performance, the PPO algorithm enables computationally efficient and stable policy learning. In this section, we will formulate supply chain attacks as a RL problem. Fig. 
1: Generator with associated control devices in its LAN. The PSS and AVR regulate the generator’s voltage. The governor (GOV) regulates the generator’s frequency. The HMI represents the human-machine interface that the owner would operate within the LAN. SCADA dispatch includes automatic generation control and voltage control dispatch. Fig. 2: Malware infection of PSS IED. The malware observes the voltage, frequency, and rate-of-change of frequency measurement inputs to the PSS and computes the PSS signal to the AVR. The malware can switch between normal PSS program in-dormancy and its malicious code. The training process for RL malware involves iteratively interacting with a model of the power system, which serves as the RL environment, to optimize the malware's attack policy. The process is visually represented in Fig. 4. The RL agent is composed of an actor, responsible for generating the malware's actions, specifically the false commands or data injections for the supply chain attack. These actions are determined based on the observations of the RL agent. Additionally, the RL agent comprises a critic that evaluates the actions taken by the actor. During the interaction, the RL agent receives a vector \(\mathcal{S}_{t}\) representing the current state of the power system. Using this information, the actor generates a vector \(\mathcal{A}_{t}\) containing the false data to be injected into the power system's IED device(s). The RL-based malware can be mathematically represented as a policy \[\pi(\mathcal{A}_{t}|\mathcal{S}_{t}):\mathcal{S}_{t}\rightarrow\mathcal{A}_{t} \tag{1}\] We simulate the response of the power system to the injected false data for a time-step \(T_{s}\) seconds. The power system can be represented as a non-linear system \[\dot{x} =f(x,\mathcal{A}) \tag{2}\] \[\mathcal{S} =g(x) \tag{3}\] with state and output functions \(f\) and \(g\), respectively. Based on the impact of the injected false data on the power system, we return a reward to the RL-based malware. In this study, we reward the agent based on negative impact on the power quality quantified in terms of frequency fluctuation. The general template of the reward function that we use is as follows: \[\mathcal{R}=\sum_{g\in\mathcal{G}}\gamma_{1}\hat{\psi}_{g}^{2}+\gamma_{2}\{ \texttt{trip}_{g}\} \tag{4}\] where \(\mathcal{G}\) is the set of generators in the power systems and \(\texttt{trip}_{g}\) is a Boolean value, calculated as follows: \[\texttt{trip}_{g}=\{\tilde{V}_{g}\notin[\underline{V},\overline{V}]\}\vee\{ \hat{\omega}_{g}\notin[\underline{\omega},\overline{\omega}]\}\vee\{\hat{ \left|\hat{\omega}_{g}\right|>r\} \tag{5}\] signalling when the attack has caused generator \(g\) to trip due to the triggering of voltage or frequency protection. \(\hat{V}\), \(\hat{\omega}\), and \(\hat{\omega}\) are the generator's terminal voltage, frequency, and rate-of-change of frequency measurements, respectively. The subscript \(g\) relates the measurements to generator \(g\). The relay settings corresponding to the different protection functions included in (5) are listed in Table I. Variables \(\gamma_{1},\gamma_{2}>0\) in (5) are reward scaling values. We formulate (5) to reward the agent in proportion to the magnitude of frequency fluctuations that the agent induces all over the power system to degrade power quality. Additionally, the agent receives additional rewards if its actions lead to generation loss that may destabilize the grid. The reward function can be expanded to include additional attack goals. 
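As a concrete illustration of the reward described above, the sketch below spells out Eqs. (4) and (5) in code. It is not the authors' released code: the voltage and frequency relay thresholds are placeholder values standing in for Table I, the rate-of-change-of-frequency setting and the scaling values \(\gamma_{1}=1\), \(\gamma_{2}=5\) are taken from the case-study section, and the fluctuation term is read here as the squared rate-of-change of frequency described in the text.

```python
# Illustrative sketch (not the authors' code) of the reward in Eqs. (4)-(5).
V_MIN, V_MAX = 0.9, 1.1        # voltage protection thresholds [pu] (assumed values)
F_MIN, F_MAX = 58.8, 61.2      # frequency protection thresholds [Hz] (assumed values)
ROCOF_MAX = 1.0                # rate-of-change-of-frequency relay setting [Hz/s]
GAMMA1, GAMMA2 = 1.0, 5.0      # reward scaling values gamma_1, gamma_2

def tripped(v, f, rocof):
    """Boolean trip condition of Eq. (5) for one generator."""
    return (not V_MIN <= v <= V_MAX) or (not F_MIN <= f <= F_MAX) or abs(rocof) > ROCOF_MAX

def reward(voltages, freqs, rocofs):
    """Reward of Eq. (4): fluctuation term plus tripping bonus, summed over all generators."""
    return sum(GAMMA1 * df ** 2 + GAMMA2 * tripped(v, f, df)
               for v, f, df in zip(voltages, freqs, rocofs))
```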
The training happens in episodes, during which the procedure loops between computing an RL action, simulating the power system's response, and rewarding the agent. The goal of the PPO algorithm is to optimize the policy to maximize the agent's cumulative reward. The readers are referred to [15] for more detail about the PPO algorithm. If we consider the malware's compromise of a PSS IED, the malware's policy is \(\pi(V_{PSS}|\hat{V},\hat{\omega},\dot{\hat{\omega}})\), i.e., the malware observes local voltage, frequency, and rate-of-change of frequency measurements that the PSS acquires and computes a false control signal (\(V_{PSS}\)) that is injected into the AVR of the targeted generator. This is illustrated in Fig. 2. The malware appends a piece of malicious code to the program that switches between normal PSS control and malicious control to sporadically disturb power system operation. Compromising the AVR is similar. When compromising the governor, the RL policy is \(\pi(\hat{\omega}|\hat{\omega},\dot{\hat{\omega}})\) or \(\pi(\omega_{ref}|\hat{\omega},\dot{\hat{\omega}})\). The malware observes the frequency measurement acquired by the governor IED and computes its rate-of-change. Next, the malware computes a false frequency measurement or reference (\(\omega_{ref}\)) to the governor control logic, as illustrated in Fig. 3. Fig. 3: Malware infection of governor IED. The malware observes the frequency measurement input to the governor (and can additionally estimate the rate-of-change of frequency). The malware can falsify the frequency reference (from SCADA) and/or local frequency measurement fed into the governor control logic. Fig. 4: Overview of RL training. The RL environment represents the power system and the RL agent action represents the supply chain attack. The experience buffer stores dynamic trajectories of the power system during the attacks for agent training. ## IV Case Studies We use the Kundur two-area system [18] (illustrated in Fig. 5) in this study. The two-area system has under-damped modes, which makes it suitable for assessing and demonstrating RL's use for inducing oscillations and compromising power system transient stability in supply chain attacks. The system contains two coherent groups of generators, each group containing 2 synchronous generators. We perform the study using Python. We use Andes [19] to simulate the power system dynamics and package the power system model inside an OpenGym [20] environment for training the RL agent. The power system parameters can be found in Andes documentation. The GENROU [21], TGOV [22], and EXDC2 [23] models are used for the synchronous generators, and their governors and exciters, respectively. We use PyTorch [24] and StableBaselines3 [25] for RL. Readers can find the code repository for our work here1. Footnote 1: [https://github.com/amrmsab/RL-CPS-attacks](https://github.com/amrmsab/RL-CPS-attacks) We present several test cases below. In all case studies, we use a time-step (\(T_{s}\)) of 200 milliseconds between the actions of the RL agent. We also average the rate-of-change of frequency values over each 200-millisecond interval. We train the RL agent in episodes, each 20 seconds long. The reward scaling values that we apply in the reward function are \(\gamma_{1}=1\) s\(\cdot\)Hz\({}^{-1}\) and \(\gamma_{2}=5\). The observation spaces of the RL agents in all case studies are limited to the local measurements of the generator(s) that the agent is attacking. 
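To make the setup concrete, the sketch below shows one way the tools named above could be wired together for the governor-IED case: a Gym-style environment wrapping the power system simulation, trained with Stable-Baselines3 PPO. This is illustrative only and is not the authors' repository code; `simulate_power_system` is a hypothetical placeholder for an Andes time-domain step, the Gymnasium API is assumed, and the observation and action bounds are example numbers taken from the case studies below.

```python
# Illustrative sketch only: Gym-style environment for a governor IED attack + PPO training.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class GovernorAttackEnv(gym.Env):
    """Observations: local frequency [Hz] and its rate of change [Hz/s];
    action: the falsified frequency measurement reported to the governor IED."""

    def __init__(self, ts=0.2, episode_len=20.0):
        super().__init__()
        self.observation_space = spaces.Box(np.array([55.0, -5.0], dtype=np.float32),
                                            np.array([65.0, 5.0], dtype=np.float32))
        self.action_space = spaces.Box(low=57.5, high=61.5, shape=(1,), dtype=np.float32)
        self.ts, self.max_steps = ts, int(episode_len / ts)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.k, self.obs = 0, np.array([60.0, 0.0], dtype=np.float32)
        return self.obs, {}

    def step(self, action):
        # One 0.2 s simulation step with the falsified governor input; the (placeholder)
        # simulator returns new local measurements and the reward of Eq. (4).
        self.obs, reward = simulate_power_system(self.obs, float(action[0]), self.ts)
        self.k += 1
        return self.obs, reward, self.k >= self.max_steps, False, {}

model = PPO("MlpPolicy", GovernorAttackEnv(), verbose=1)
model.learn(total_timesteps=100_000)
```

The 0.2 s action interval and 20 s episodes mirror the case-study settings given above.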
For governor IED attacks, the agent can observe the local frequency and its rate of change. For AVR and PSS IED attacks, the agent can additionally observe the voltage measurement. The RL agents' action spaces are bounded within the relay settings outlined in Table I. We impose this bound to prevent the injection of simple bias attacks that might be easily detected and prevented. Instead, the small bounds encourage the agents to learn more minimal yet sophisticated attack strategies that induce oscillations in frequency and voltage within the power system with little modification to the compromised signals. We consider a rate-of-change of frequency relay setting of 1 Hz/s in the studies. However, we suppress generation tripping during RL training and in the presented case studies to continue to demonstrate the RL attack signal and its impact on the system. ### _Governor IED supply chain attack_ In this test case, the malware infects the governor IED of generator G1 and reports false frequency measurements to the governor that are in the range of \([59.3,60.7]\) Hz. Fig. 6 shows that the malware's corruption of the frequency measurements introduces frequency fluctuations that grow to approximately \(0.6\) Hz/s within 20 seconds (Fig. 6 (d)). Fig. 6 (a) shows the reported false frequency measurements and Figs. 6 (b) and (c) show the impact on bus voltages and generator frequencies, respectively. We observe resonance behavior in the growth of the frequency fluctuations. On further inspection, we find that the RL agent is able to identify and inject an oscillatory attack signal with a frequency that is in close vicinity to the power system's dominant oscillatory eigenmode to excite resonance. Fig. 7 shows the location of the system eigenmodes. The dominant oscillatory eigenmode is located at \(4.22\) rad/s. Fourier analysis of the attack signal, as illustrated in Fig. 8, shows that the malware injects a signal at \(4.04\) rad/s, which excites this eigenmode. Note that the attack signal can be easily scaled to induce higher frequency fluctuation. Scaling the reported false frequency measurement to the range of \([57.5,61.5]\) Hz induces frequency fluctuations that exceed \(1\) Hz/s and are very likely to trip rate-of-change of frequency protection in both areas. This scaled attack is illustrated in Fig. 9. Note that while the attack targets G1 in Area 1, the frequency fluctuations in Area 2 (green and red in Fig. 9 (d)) grow faster than in Area 1, and hence, generators in Area 2 (G3 and G4) would trip sooner than generators in Area 1. Generation tripping with a rate-of-change of frequency protection setting of 1 Hz/s would happen in less than 15 seconds. This observation highlights the interconnected nature of the power system, wherein an attack on a generator has the potential to trigger failures in other areas of the system. Fig. 5: Two area testbed system. Fig. 6: Governor IED supply chain attack. The falsified frequency measurement is in the range \([59.3,60.7]\) Hz. (a) Falsified frequency measurements injected by attack. (b) Voltage measurements of the 10 buses in the two-area system. (c) Frequency measurements of the 4 generators in the two-area system. (d) Rate of change of frequency of the generators. Figure 10 presents the learning curve of the RL agent depicted in Figure 9. The plot illustrates the episode reward obtained by the agent during the training process (averaged over 40 episodes for smoothness). 
The observed growth in the curve is attributed to the agent's learning of policies that result in an increased rate-of-change of frequency, surpassing the rate-of-change of frequency protection setting. ### _Combined governor IED supply chain attack_ Combined supply chain attacks can produce more minimal (in terms of smaller range of reported false frequency measurements), yet more aggressive attacks (in terms of frequency fluctuation). In Fig. 11, we consider the case when the attack has infected the governor IEDs of generators G1 and G3. The reported false frequency measurements to the governors are limited to the range of \([58.5,61]\) Hz, which is smaller than the range considered in the test in Fig. 9 but with comparable effects. ### _PSS-governor IED supply chain attack_ Alternatively, a combined supply chain attacks against the PSS and governor of one generator (G1) can produce higher frequency fluctuations while also allowing a smaller range of reported false frequency measurements. In Fig. 12, the malware reports frequency measurements to the governor in the range of \([58.5,61.5]\) Hz and voltage measurements to the AVR in the range of \([0.95,1.12]\) pu. This amplifies the frequency fluctuations to close to \(2\) Hz/s. ### _AVR IED supply chain attack_ In this test case, the malware infects the AVR (or PSS) IED of generator G1. The malware reports voltage measurements to the AVR that are in the range of \([0.95,1.15]\) pu. Fig. 13 shows that the malware's corruption of the voltage measurements introduces frequency fluctuations that grow into approximately \(0.4\) Hz/s (Fig. 13 (d)). Fig. 8: Fourier frequency spectrum of the attack signal in Fig. 6. The peak is located at a frequency that is close to the testbed’s oscillatory eigenmode. Fig. 10: Learning curve for agent in Fig. 9. The reward is averaged over 40 episodes. Fig. 7: Root locus plot of the eigenmodes of the two area system. The oscillatory eigenmode is located at \(-0.20\pm 4.22j\). Fig. 9: Governor IED supply chain attack. The falsified frequency measurement is in the range \([57.5,61.5]\) Hz. We notice that the attack does not excite the oscillatory eigenmode of the system like previous attacks. The RL training learns that injecting the attack in Fig. 13 (a) causes larger frequency fluctuation. For comparison, Fig. 14 shows a voltage corruption attack that aims to excite the oscillatory eigenmode similar to the previous test cases. The resulting frequency fluctuations are smaller in Fig. 14 compared to Fig. 13. ### _Combined AVR IED supply chain attack_ AVR (or PSS) IED supply chain attacks can also combine and lead to more significant frequency fluctuations. In Figure 15, we explore the scenario where malware infects the AVR IEDs of generators G1 and G3. The reported voltage values to the AVR of G1 are in the range \([0.95,1.15]\) pu, as considered in Figure 13. The AVR of G3 receives reported values of \([0.94,1.14]\) pu. In Figure 15, we observe that this combined attack can induce amplified frequency fluctuations that exceed \(0.5\) Hz/s. ## V Conclusion In this study, we leveraged RL to devise supply chain attacks that compromise local generation control devices, specifically targeting governor, AVR, and PSS IEDs. The results of our research demonstrate the potential of these attacks to degrade power quality and inflict long-term invisible impacts on power system equipment. Further, these attacks can also pose a serious threat to power system stability by forcing generation tripping. 
Through several case studies, we have illustrated that RL agents can successfully learn and deploy sophisticated attack policies, including simultaneous attacks. These attack policies can then be packaged into malware, which can be maliciously uploaded to generation control IEDs during supply chain attacks, either before their installation or through system updates. Fig. 11: Combined governor IED supply chain attacks. The falsified frequency measurements are in the range \([58.5,61]\) Hz. Fig. 12: Combined governor and PSS IED supply chain attacks. The falsified frequency measurement is in the range \([58.5,61]\) Hz. The falsified voltage measurement is in the range \([0.95,1.12]\) pu. Fig. 13: PSS IED supply chain attack. The falsified voltage measurement is in the range \([0.95,1.15]\) pu. Fig. 14: PSS IED supply chain attack. The falsified voltage measurement is in the range \([0.95,1.15]\) pu. The attack frequency is near the system's oscillatory eigenmode. The implications of our findings underscore the need to employ RL-based approaches for anticipating intelligent supply chain attacks. By proactively employing RL-based defense mechanisms, we can effectively safeguard against the emerging threat landscape and mitigate potential disruptions to power systems. In conclusion, our research serves as a persuasive call-to-action for the adoption of RL techniques to anticipate and defend against intelligent supply chain attacks. ## Appendix
2301.00196
Quantum coherence can be transformed into heat
The first law of thermodynamics restates the law of conservation of energy. It partitions the change in energy of a system into two pieces, heat and work. While there is no ambiguity to define heat and work in classical thermodynamics, their classification in the quantum regime is not that obvious. Thus, the first law of thermodynamics becomes problematic in the quantum regime. However, recent studies have shown if contribution of quantum coherence is considered to the change of internal energy of the system, the first law of thermodynamics can be extended to the quantum domain. Here we investigate the new version of first law of thermodynamics for some quantum transformations by using two-level atomic system under non-dissipative channel. In our work we achieve a novel result that quantum coherence can be transformed into heat, and the heat can dissipate into the environments.
Xue-Qun Yan, Yan-Jiao Du, Wen-Tao Hou, Xiao-Ming Liu
2022-12-31T13:34:58Z
http://arxiv.org/abs/2301.00196v2
# Quantum coherence can be transformed into heat ###### Abstract The first law of thermodynamics restates the law of conservation of energy. It partitions the change in energy of a system into two pieces, heat and work. While there is no ambiguity to define heat and work in classical thermodynamics, their classification in the quantum regime is not that obvious. Thus, the first law of thermodynamics becomes problematic in the quantum regime. However, recent studies have shown if contribution of quantum coherence is considered to the change of internal energy of the system, the first law of thermodynamics can be extended to the quantum domain. Here we investigate the new version of first law of thermodynamics for some quantum transformations by using two-level atomic system under non-dissipative channel. In our work we achieve a novel result that quantum coherence can be transformed into heat, and the heat can dissipate into the environments. ## I Introduction When it comes to conservation laws, it is naturally easy to recall the first law of thermodynamics, which is a formulation of the law of energy conversion and conservation. In classical cases, the first law of thermodynamics classifies the changes of energy for macroscopic systems as the work performed by external driving and heat exchanged with the environments [1]. Recently it has been recognized that the law of thermodynamics should be redefined in the quantum cases because the coherence plays a significant and fundamental role [2, 3]. It was shown that coherence is independent of work and heat derived from their respective classical analogies, and it is a part of the first law of quantum thermodynamics. Thus, quantum coherence would participate in the conversion of energy in a quantum system. Progress in the past decade has consistently shown that it is possible to construct systems in which thermodynamics coexists with quantum effects [4]. Indeed, coherence is the signature of quantum behavior which is used to drive a wide variety of phenomena and devices. The well-known quantum phenomenon, the wave nature of particles, can be interpreted as manifestations of quantum coherence. More importantly, quantum coherence is regarded as a key ingredient to develop quantum technologies. Coherence is recently considered as an essential resource for quantum process, since it may be consumed to achieve useful tasks [4]. Many novel insights stem from the characterization of quantum coherence as a physical resource. Based on the resource theory of quantum thermodynamics, there have been lots of study works on understanding the roles of coherence in quantum thermodynamics [4-6]; Misra _et al._ reported their study on the role of coherence in quantum thermodynamics [7], in which they analyzed the physical situation that the resource theories of coherence and thermodynamics play completing roles. On the other hand, the effects of coherence on work extraction [8], and in determining the distribution of work done on a quantum system have also been presented [9,10]. There are some studies, in which coherence is treated as a key factor in the operation of quantum thermal machines, such as heat engines and refrigerators [11-15]. It has also been noted that quantum coherence plays important roles in the process of conversion from thermal to electrical power [16]. Moreover, recent works have found that coherence affects the performance of non-adiabatic work protocols [17-20]. 
Despite many works have demonstrated that quantum coherence can be used as an advantage or a resource for various thermodynamic processes [21-29], the role of quantum coherences in thermodynamics is still not fully understood. Understanding functional role of coherence is an important topic in the field of thermodynamics of quantum systems. ## II First law of quantum thermodynamics The generalization of thermodynamics to quantum regimes faces challenges ranging from the proper identification of heat and work to the clarification of the role of coherence. How to define work and heat is still a controversial topic in quantum thermodynamics, therefore in quantum regimes it is necessary to revisit these time-honored concepts. Consider a generic quantum system \(\mathcal{S}\) with reduced density matrix \(\rho_{S}\) evolving under a Hamiltonian \(H_{S}\) and coupled to an external environment. Since the internal energy of the system can be expressed as \(U=\left\langle\hat{H}_{S}\right\rangle=Tr\left(\hat{\rho}_{S}\hat{H}_{S}\right)\)[30], a common approach is based on the change of the total energy expectation value: \(\mathrm{d}U=\mathrm{Tr}\left[\hat{\rho}_{S}d\hat{H}_{S}+\hat{H}_{S}d\hat{ \rho}_{S}\right]\), defining the first term (change of Hamiltonian) as work \(\mathrm{d}W\) and the second (change of state) as heat \(\mathrm{d}Q\). This formulation gives an interpretation of the differential form of the first law of thermodynamics. The change of state may be associated with a change of entropy, _i.e._, heat. However, it is generally believed that the heat defined is not exactly equivalent to classical heat [2]. Because of its non-classical properties, it can be called quantum heat here. Since the coherence plays a unique role in the quantum thermodynamics, when there is quantum coherence in the system both the energetics and the coherence properties must be considered together. So, in quantum thermodynamics, the first law can be redefined as [2]: \(dU=dW+dQ+dC\). We can explain this by confirming that quantum coherence, as heat and work, is a form of energy exchanged between system and environment. This relation provides a quantum version of the first law for some specific quantum processes analogous to that of classical thermodynamics. The classical form is: \(dU=dW+dQ\), where \(dW\) is the amount of work done by external to the system, and \(dQ\) is the amount of heat added to the system during an infinitesimal process. Here we should note that \(C\) does not has a classical analog, unlike \(W\) and \(Q\). In contrast to the classical definitions, work and heat are also no longer absolute physical quantities. They depend on the chosen measurement basis as quantum coherence. These quantities can be in principle calculated if the density operator \(\hat{\rho}(t)\) and the Hamiltonian \(\hat{H}(t)\) of the system are given. Their time dependence in a finite quantum process can be calculated by integration. The expression of change of the internal energy of the system can be written as [2] \[\Delta U(t)=\sum_{n}\sum_{k}\int_{0}^{t}\frac{d}{dt^{\prime}}\left(E_{n}\rho_{k} \big{|}C_{n,k}\big{|}^{2}\right)dt^{\prime} \tag{1}\] where \(C_{n,k}=\left\langle n|k\right\rangle\). \(\{|k\rangle\}\) is the eigenstate basis of the density operator \(\hat{\rho}\), and \(\rho_{k}=\left\langle k|\hat{\rho}|k\right\rangle\) is the eigenvalues of \(\hat{\rho}\), i.e., \(\hat{\rho}=\sum_{k}\rho_{k}|k\rangle\langle k|\). 
Here, the Hamiltonian is expressed as \(\hat{H}_{S}=\sum_{n}E_{n}|n\rangle\langle n|\), where \(E_{n}=\left\langle n\big{|}\hat{H}_{S}|n\right\rangle\) and \(|n\rangle\) are the \(n\)th energy eigenvalue and eigenstate, respectively. The work, heat and quantum coherence in finite-time quantum processes can also be calculated by direct integration [2]: \[W(t)=\sum_{n}\sum_{k}\int_{0}^{t}\rho_{k}\big{|}C_{n,k}\big{|}^{2}\frac{dE_{n} }{dt^{\prime}}dt^{\prime} \tag{2}\] \[Q(t)=\sum_{n}\sum_{k}\int_{0}^{t}E_{n}\big{|}C_{n,k}\big{|}^{2}\frac{d\rho_{k }}{dt^{\prime}}dt^{\prime} \tag{3}\] and \[C(t)=\sum_{n}\sum_{k}\int_{0}^{t}(E_{n}\rho_{k})\frac{d}{dt^{\prime}}\big{|}C _{n,k}\big{|}^{2}dt^{\prime} \tag{4}\] These are the energetic contributions of the dynamics of work, heat and coherence, respectively. As can be seen, \(W(t)\), \(Q(t)\) and \(C(t)\) depend on the quantity \(\big{|}C_{n,k}(t)\big{|}^{2}=|\langle n(t)|k(t)\rangle|^{2}\). In a quantum process, this quantity varies only if the directions of the basis vectors \(|k\rangle\) of the density operator change with respect to the basis vectors \(|n\rangle\) of the Hamiltonian. It is known that coherence cannot be converted into work through a direct physical process; this phenomenon is known as work locking [31-33]. One may naturally ask whether it can be converted into heat in a direct way, or how it contributes to the change of the internal energy of the system. Answering this question is the purpose of our work. The basic idea being discussed is shown in Fig. 1. As shown below, we will try to give such a process that can convert coherence into heat in a direct physical way. Figure 1: (color online). Diagrammatic representation of how coherence transforms into heat. The left diagram represents the Bloch sphere. The qubit \(|\psi\rangle\) (a coherent superposition state of a two-level quantum system) is represented by a point on the surface of the Bloch sphere, and is defined by a vector having an angle \(\theta\) with the polar axis (here \(z\)), while the azimuthal angle \(\varphi\) lies in the x-y plane. The north and south poles correspond to the pure qubit states \(|0\rangle\) and \(|1\rangle\), respectively. To be more concrete, let us consider the case of a qubit under a non-dissipative channel first. ## III Physical model Before investigating this problem, we briefly recall the Kraus operator sum representation. Given an initial state for a qubit \(\rho(0)\), the evolution under external environments can be expressed in the Kraus operator sum representation [34]: \(\varepsilon[\hat{\rho}(0)]=\sum_{i}\hat{R}_{i}\hat{\rho}(0)\hat{R}_{i}^{+}= \rho(t)\). The operation elements \(\hat{R}_{i}\) are the Kraus operators associated with the decohering process of a single qubit and satisfy \(\sum_{i}\hat{R}_{i}^{+}\hat{R}_{i}=I\), so that \(\operatorname{Tr}[\rho(t)]=1\). If the qubit is under a non-dissipative channel, decoherence occurs in the absence of transfer of energy. More specifically, we first focus on the dynamical evolution of the system under the action of the phase damping channel, in which the energy eigenstates of the quantum system are invariant under the time evolution but accumulate a phase proportional to the eigenvalue. For this case, the Kraus operators are given by [34] \[\hat{R}_{1}=\begin{pmatrix}1&0\\ 0&\sqrt{1-\gamma}\end{pmatrix},\qquad\hat{R}_{2}=\begin{pmatrix}0&0\\ 0&\sqrt{\gamma}\end{pmatrix} \tag{5}\] where \(\gamma=1-\exp(-\Gamma t)\) with \(\Gamma\) denoting a decay rate. 
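Before turning to the analytic treatment below, the heat and coherence contributions of Eqs. (3) and (4) can also be checked numerically for this channel. The following minimal sketch (not from the paper) propagates the exponentially damped off-diagonal elements implied by Eq. (5), written out explicitly as Eq. (7) below, and uses the parameter choices \(E_{g}=0\), \(E_{e}=1\) and \(\theta=\pi/6\) adopted later in the text:

```python
# Minimal numerical check (not from the paper) of Eqs. (3) and (4) for phase damping.
import numpy as np

Eg, Ee, theta, Gamma = 0.0, 1.0, np.pi / 6, 1.0
E = np.array([Eg, Ee])

def rho(t):
    """Density matrix with off-diagonals damped by exp(-Gamma*t/2) (cf. Eq. (7))."""
    c, s = np.cos(theta), np.sin(theta)
    off = c * s * np.exp(-0.5 * Gamma * t)
    return np.array([[c ** 2, off], [off, s ** 2]])

Q = C = 0.0
lam_prev = overlap_prev = None
for t in np.linspace(0.0, 6.0, 30001):
    lam, vecs = np.linalg.eigh(rho(t))      # eigenvalues rho_k and eigenvectors |k>
    overlap = np.abs(vecs) ** 2             # |C_{n,k}|^2 = |<n|k>|^2 in the energy basis
    if lam_prev is not None:
        Q += np.sum(E[:, None] * overlap * (lam - lam_prev))      # Eq. (3)
        C += np.sum(E[:, None] * lam * (overlap - overlap_prev))  # Eq. (4)
    lam_prev, overlap_prev = lam, overlap

print(Q, C, Q + C)   # Q and C come out equal and opposite; W = 0, so Delta U = Q + C ~ 0
```

For this non-dissipative channel the printout reflects \(W=0\) and \(\Delta U=0\), so the heat gained equals the coherence extracted, in line with the discussion that follows.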
If we suppose that, in the energy basis \(\{|g\rangle,|e\rangle\}\), the initial state is written in the form \[\hat{\rho}(0)=\begin{pmatrix}\rho_{00}&\rho_{01}\\ \rho_{10}&\rho_{11}\end{pmatrix} \tag{6}\] then the density matrix as a function of time is given by \[\hat{\rho}(t)=\begin{pmatrix}\rho_{00}&e^{-\frac{1}{2}\Gamma t}\rho_{01}\\ e^{-\frac{1}{2}\Gamma t}\rho_{10}&\rho_{11}\end{pmatrix} \tag{7}\] In the following, the system is considered to be a two-level atom, whose ground and excited states, \(|g\rangle\) and \(|e\rangle\), have energies \(E_{g}\) and \(E_{e}\), respectively, so that the Hamiltonian is given by \(\hat{H}_{S}=E_{g}|g\rangle\langle g|+E_{e}|e\rangle\langle e|\). We consider the case that the atom is initially prepared in the pure state \(|\psi(0)\rangle=\cos\theta|g\rangle+\sin\theta\,|e\rangle\). Since the energy eigenvalues are constant (\(E_{n}=E_{g}\) or \(E_{e}\)), no work is done according to Eq. (2), i.e., \(W=0\). Moreover, the phase-damping channel induces a loss of quantum coherence without net energy exchange between the system and environment, so in this process the internal energy of the system remains unchanged, that is, \(\Delta U=0\). In terms of the redefined first law of thermodynamics, we then have \(Q(t)=-C(t)\): the coherence extracted from the atom is entirely converted into heat. The explicit expressions for \(Q\) and \(C\) under the phase damping channel are derived in Appendix A and plotted in Fig. 2. In order to further investigate this physical insight, we also consider the case of a two-level atom under Pauli maps (phase flip, bit flip, and bit-phase flip channels) [34]. These channels are non-dissipative channels, thus there is no energy exchange between the system and environment in these processes. The Kraus operators \(\hat{R}_{i}\) for the phase flip, bit flip, and bit-phase flip channels are given in Table 1. For these channels, the explicit time dependence of the dephasing factor is \(p=1-\exp(-\Gamma t)\) with \(\Gamma\) being the dephasing rate. Next, we consider only the phase flip channel as the noise model, since the evolutions of the state \(\rho(t)\) under the bit flip and bit-phase flip channels are symmetric with that of the phase flip channel. \begin{tabular}{|c|c|c|} \hline Channel & \multicolumn{2}{c|}{Kraus operators} \\ \hline Phase flip & \(K_{1}=\sqrt{1-p}I\) & \(K_{2}=\sqrt{p}\sigma_{z}\) \\ \hline Bit flip & \(K_{1}=\sqrt{1-p}I\) & \(K_{2}=\sqrt{p}\sigma_{x}\) \\ \hline Bit-phase flip & \(K_{1}=\sqrt{1-p}I\) & \(K_{2}=\sqrt{p}\sigma_{y}\) \\ \hline \end{tabular} Table 1. Kraus operators for the phase flip, bit flip, and bit-phase flip channels in terms of \(p\). Here, \(I\) is the unit operator, and \(\sigma_{x},\ \sigma_{y}\) and \(\sigma_{z}\) are the Pauli operators, respectively. Figure 2: (color online). Phase damping process. The heat, work and internal energy as functions of the dimensionless time \(\tau\equiv\Gamma t\) in the case of \(\theta=\pi/6\). The dotted (online red) curve corresponds to the heat exchanged between the atom and the environment, Q, the dashed (online blue) curve corresponds to the quantum coherence of the system, while the solid (online green) line corresponds to the internal energy of the system \(\Delta U\). Following the classical definition, Q is negative when heat leaves the system and positive when heat is added to the system. The process causes the coherence to be extracted and converted into heat of the system. The density operator as a function of time, in the energy basis \(\{|g\rangle,|e\rangle\}\) under the phase flip channel, has the following form 
\[\rho(t)=\begin{pmatrix}\rho_{00}&(2e^{-\Gamma t}-1)\rho_{01}\\ (2e^{-\Gamma t}-1)\rho_{10}&\rho_{11}\end{pmatrix} \tag{8}\] As in the previous case, we assume that the atom is initially prepared in the state \(\left|\psi(0)\right\rangle=\cos\theta|g\rangle+\sin\theta\left|e\right\rangle\). Similarly, if we calculate the eigenvalues and eigenstates of \(\hat{\rho}(t)\) in (8), then we can use Eqs. (3) and (4) again to obtain \(C\) and \(Q\) (see Appendix B for details). Fig. 3 shows the results for \(C\) and \(Q\) for \(\theta=\pi/6\). As can be seen from the figure, under the phase flip channel the heat absorbed by the atom is still always equal to the extracted coherence, although there is a decay over time. It is clear that the results are qualitatively the same as those seen in Fig. 2. These qualitative and numerical analyses lead us to a bold conclusion: the quantum coherence of an atom can be converted into heat transferred into the environment. ## IV Conclusions We have investigated the first law of quantum thermodynamics for some quantum transformations in the framework of two-level atomic systems under non-dissipative channels. We can clearly see that, due to the presence of coherences, the first law of thermodynamics is found to be unsatisfied in the quantum regime and must be corrected by introducing the coherence term. It is well known that heat is not the energy that an object contains, but rather refers to the amount of energy transferred from one object to another. In the classical domain, energy is transferred by heat and work. However, in the quantum regime, coherence may also be involved in energy conversion. Although the quantum heat is different from classical heat, we believe that in some cases the quantum heat must contain some characteristics of classical heat. We therefore speculate from our analysis that under certain circumstances, such as the non-dissipative channel, coherence can be converted into heat, which dissipates into the environment. It is fair to say, however, that the fundamental meaning of quantum coherence and heat for our understanding of the first law of quantum thermodynamics is still awaiting discovery, although there is no doubt that our results give new insight into this problem. Figure 3: (color online). Same as Fig. 2 but for the phase flip channel. To be specific, we set \(E_{g}=0\), \(E_{e}=1\). Finally, we briefly mention that, physically, phase damping can be employed to describe, for example, what happens when a photon scatters randomly as it travels through a waveguide, or how electronic states in an atom are perturbed upon interacting with distant electrical charges. This suggests that our theoretical results could be realized in such experiments. ## Acknowledgements We thank the Tiangong University 2017 degree and graduate education reform project, Project No. Y20170702. ## Appendix A Thermodynamics under phase damping channel Now we calculate in detail the evolution of heat and quantum coherence of a two-level atom under the action of the phase damping channel. As pointed out in the main text, the time evolution of the atom is given by (7). 
In order to evaluate \(Q(t)\) and \(C(t)\), we need calculate the eigenvalues of \(\hat{\rho}(t)\), which can be found as follows \[\rho_{0}(t)=\frac{1}{2}\big{(}\rho_{00}+\rho_{11}+\sqrt{m}\big{)} \tag{10}\] and \[\rho_{1}(t)=\frac{1}{2}\big{(}\rho_{00}+\rho_{11}-\sqrt{m}\big{)} \tag{11}\] as well as the respective eigenvectors \[|k_{0}(t)\rangle=\frac{1}{\sqrt{\big{(}\rho_{11}-\rho_{00}-\sqrt{m}\big{)}^{2 }+4e^{-rt}\rho_{10}^{2}}}\Big{[}\big{(}\rho_{00}-\rho_{11}+\sqrt{m}\big{)}|g \rangle+2e^{-\frac{1}{2}\tau t}\rho_{10}|e\rangle\Big{]} \tag{12}\] and \[|k_{1}(t)\rangle=\frac{1}{\sqrt{\big{(}\rho_{11}-\rho_{00}+\sqrt{m}\big{)}^{2 }+4e^{-rt}\rho_{10}^{2}}}\Big{[}\big{(}\rho_{00}-\rho_{11}-\sqrt{m}\big{)}|g \rangle+2e^{-\frac{1}{2}\tau t}\rho_{10}|e\rangle\Big{]} \tag{13}\] where \(m=\rho_{00}^{2}-2\rho_{00}\rho_{11}+\rho_{11}^{2}+4e^{-rt}\rho_{01}\rho_{10}\). Since the initial state is \(|\psi(0)\rangle=\cos\theta|g\rangle+\sin\theta\,|e\rangle\), using the results of Eqs. (10) to (13), we can calculate the heat exchanged between the atom and the environment as a function of dimensionless scaled time \(\tau\equiv\Gamma t\) by means of Eq. (3) (here we take \(\theta=\pi/6\)), \[Q(\tau) = E_{g}\left[\int_{0}^{\tau}\!\!|\langle g|k_{0}(\tau^{\prime}) \rangle|^{2}\frac{d}{d\tau^{\prime}}\rho_{0}(\tau^{\prime})\,d\tau^{\prime}+ \int_{0}^{\tau}\!\!|\langle g|k_{1}(\tau^{\prime})\rangle|^{2}\frac{d}{d\tau ^{\prime}}\rho_{1}(\tau^{\prime})\,d\tau^{\prime}\right] \tag{14}\] \[+E_{e}\left[\int_{0}^{\tau}\!\!|\langle e|k_{0}(\tau^{\prime}) \rangle|^{2}\frac{d}{d\tau^{\prime}}\rho_{0}(\tau^{\prime})\,d\tau^{\prime}+ \int_{0}^{\tau}\!\!|\langle e|k_{1}(\tau^{\prime})\rangle|^{2}\frac{d}{d\tau ^{\prime}}\rho_{1}(\tau^{\prime})\,d\tau^{\prime}\right]\] \[= \frac{(\text{Ee}-\text{Eg})}{8}\left[\tau+\log 4-\log(3+e^{ \tau})\right]\] The result is shown in Fig. 2. In what follows, we calculate the energetic contribution of the dynamics of coherence in this example. From Eq. (4) we obtain that \[C(\tau) = E_{g}\left[\int_{0}^{\tau}\rho_{0}(\tau^{\prime})\frac{d}{d\tau^{ \prime}}|\langle g|k_{0}(\tau^{\prime})\rangle|^{2}\,d\tau^{\prime}+\int_{0}^{ \tau}\rho_{1}(\tau^{\prime})\frac{d}{d\tau^{\prime}}|\langle g|k_{1}(\tau^{ \prime})\rangle|^{2}\,d\tau^{\prime}\right] \tag{10}\] \[+E_{e}\left[\int_{0}^{\tau}\rho_{0}(\tau^{\prime})\frac{d}{d\tau^ {\prime}}|\langle e|k_{0}(\tau^{\prime})\rangle|^{2}\,d\tau^{\prime}+\int_{0}^{ \tau}\rho_{1}(\tau^{\prime})\frac{d}{d\tau^{\prime}}|\langle e|k_{1}(\tau^{ \prime})\rangle|^{2}\,d\tau^{\prime}\right]\] \[= \frac{(\text{Ee}-\text{Eg})}{8}\left[-\tau-\log 4\,+\log(3+e^{ \tau})\right]\] The result is also shown in Fig. 2 along with that for the internal energy. ## Appendix B Thermodynamics Under Phase Flip Channel Next let us calculate the evolution of heat and quantum coherence of a two-level atom under the action of phase flip channel. 
The eigenvalues of the equation (8), \(\hat{\rho}(t)\), can be found to be \[\rho_{0}(t)=\tfrac{1}{2}\left(\rho_{00}+\rho_{11}+\sqrt{m}\right) \tag{11}\] and \[\rho_{1}(t)=\tfrac{1}{2}\left(\rho_{00}+\rho_{11}-\sqrt{m}\right) \tag{12}\] as well as the respective eigenvectors \[|k_{0}(t)\rangle=\tfrac{1}{\sqrt{\left(\rho_{11}-\rho_{00}-\sqrt{m}\right)^{2} +4\rho_{10}^{2}(2e^{-rt}-1)^{2}}}\left[(\rho_{00}-\rho_{11}+\sqrt{m})|g\rangle +2(2e^{-rt}-1)^{2}\rho_{10}|e\rangle\right] \tag{13}\] and \[|k_{1}(t)\rangle=\tfrac{1}{\sqrt{\left(\rho_{11}-\rho_{00}+\sqrt{m}\right)^{2} +4\rho_{10}^{2}(2e^{-rt}-1)^{2}}}\left[(\rho_{00}-\rho_{11}-\sqrt{m})|g\rangle +2(2e^{-rt}-1)^{2}\rho_{10}|e\rangle\right] \tag{14}\] where \(m=\rho_{00}^{2}-2\rho_{00}\rho_{11}+\rho_{11}^{2}+4(2e^{-r}\,\,-1)^{2}\rho_{ 01}\rho_{10}\). In the followings, we can calculate the heat exchanged between the atom and the environment as a function of dimensionless scaled time \(\tau\equiv\Gamma t\) by means of Eq. (3). The initial state is \(|\psi(0)\rangle=\cos\theta|g\rangle+\sin\theta\,|e\rangle\) (here we take \(\theta=\pi/6\)), then the evolution of heat is given \[Q(\tau) = \frac{Eg}{16}\left[-4+4e^{-\tau}\sqrt{e^{2\tau}-3e^{\tau}+3}-2 \tau+\log(e^{2\tau}-3e^{\tau}+3)\right] \tag{15}\] \[+\tfrac{Eg}{16}\left[4-4e^{-\tau}\sqrt{e^{2\tau}-3e^{\tau}+3}-2 \tau+\log(e^{2\tau}-3e^{\tau}+3)\right]\] \[+\tfrac{Eg}{16}\left[-4+4e^{-\tau}\sqrt{e^{2\tau}-3e^{\tau}+3}- \log(3e^{-2\tau}-3e^{-\tau}+1)\right]\] \[+\tfrac{Eg}{16}\left[4-4e^{-\tau}\sqrt{e^{2\tau}-3e^{\tau}+3}- \log(3e^{-2\tau}-3e^{-\tau}+1)\right]\] The dynamics of coherence can be obtained by \[C(\tau) = \tfrac{Eg}{16}\,\,\left[1+2\tau-\log(3-3e^{\tau}+e^{2\tau})- \tfrac{e^{\tau}}{\sqrt{e^{2\tau}-3e^{\tau}+3}}\right] \tag{16}\] \[+\tfrac{Eg}{16}\,\,\left[-1+2\tau-\log(3-3e^{\tau}+e^{2\tau})+ \tfrac{e^{\tau}}{\sqrt{e^{2\tau}-3e^{\tau}+3}}\right]\] \[+\tfrac{Eg}{16}\,\,\left[-1-2\tau+\log(3-3e^{\tau}+e^{2\tau})+ \tfrac{e^{\tau}}{\sqrt{e^{2\tau}-3e^{\tau}+3}}\right]\] \[+\frac{Ee}{16}\ \left[1-2\tau+\log(3-3e^{\tau}+e^{2\tau})-\frac{e^{\tau}}{ \sqrt{e^{2\tau}-3e^{\tau}+3}}\right] \tag{10}\] For specific, if we set \(\ E_{g}=0,\ E_{e}=1\), then \[Q(\tau)=\tfrac{1}{8}\left[-\log(1+3e^{-2\tau}-3e^{-\tau})\right] \tag{11}\] and \[C(\tau)=\tfrac{1}{8}\left[\log(3+e^{2\tau}-3e^{\tau})-2\tau\right] \tag{12}\] These results are plotted in Fig. 3 along with that for the internal energy.
2303.00089
Radiall symmetry of minimizers to the weighted $p-$Dirichlet energy
Let $\mathbb{A}=\{z: r< |z|<R\}$ and $\A^\ast=\{z: r^\ast<|z|<R^\ast\}$ be annuli in the complex plane. Let $p\in[1,2]$ and assume that $\mathcal{H}^{1,p}(\A,\A^*)$ is the class of Sobolev homeomorphisms between $\A$ and $\A^*$, $h:\A\onto \A^*$. Then we consider the following Dirichlet type energy of $h$: $$\mathcal{F}_p[h]=\int_{\A(1,r)}\frac{\|Dh\|^p}{|h|^p}, \ \ 1\le p\le 2.$$ We prove that this energy integral attains its minimum, and the minimum is a certain radial diffeomorphism $h:\A\onto \A^*$, provided a radial diffeomorphic minimizer exists. If $p>1$ then such diffeomorphism exist always. If $p=1$, then the conformal modulus of $\A^\ast$ must not be greater or equal to $\pi/2$. This curious phenomenon is opposite to the Nitsche type phenomenon known for the standard Dirichlet energy.
David Kalaj
2023-02-28T21:18:32Z
http://arxiv.org/abs/2303.00089v2
# Radially symmetry of minimizers to the weighted \(p-\)Dirichlet energy ###### Abstract. Let \(\mathbb{A}=\{z:r<|z|<R\}\) and \(\mathbb{A}^{*}=\{z:r^{*}<|x|<R^{*}\}\) be annuli in the complex plane. Let \(p\in[1,2]\) and assume that \(\mathcal{H}^{1,p}(\mathbb{A},\mathbb{A}^{*})\) is the class of Sobolev homeomorphisms between \(\mathbb{A}\) and \(\mathbb{A}^{*}\), \(h:\mathbb{A}\xrightarrow{\text{onto}}\mathbb{A}^{*}\). Then we consider the following Dirichlet type energy of \(h\): \[\mathscr{F}_{p}[h]=\int_{\mathbb{A}(1,r)}\frac{\|Dh\|^{p}}{|h|^{p}},\ \ 1\leqslant p \leqslant 2.\] We prove that this energy integral attains its minimum, and the minimum is a certain radial diffeomorphism \(h:\mathbb{A}\xrightarrow{\text{onto}}\mathbb{A}^{*}\), provided a radial diffeomorphic minimizer exists. If \(p>1\) then such diffeomorphism exists always. If \(p=1\), then the conformal modulus of \(\mathbb{A}^{*}\) must not be greater or equal to \(\pi/2\). This curious phenomenon is opposite to the Nitsche type phenomenon known for the standard Dirichlet energy. Key words and phrases:Variational integrals, harmonic mappings, energy-minimal deformations, Dirichlet-type energy 2010 Mathematics Subject Classification: Primary 35J60; Secondary 30C70 ## 1. Introduction The general law of hyperelasticity tells us that there exists an energy integral \(E[h]=\int_{\mathbb{X}}E(x,h,Dh)dx\) where \(E:\mathbb{X}\times\mathbb{Y}\times\mathbb{R}^{n\times n}\to\mathbb{R}\) is a given stored-energy function characterizing mechanical properties of the material. Here \(\mathbb{X}\) and \(\mathbb{Y}\) are nonempty bounded domains in \(\mathbb{R}^{n},n>2.\) The mathematical models of nonlinear elasticity have been first studied by Antman [1], Ball [4, 5], and Ciarlet [8]. One of the interesting and important problems in nonlinear elasticity is whether the radially symmetric minimizers are indeed global minimizers of the given physically reasonable energy. This leads us to study energy minimal homeomorphisms \(h:\mathbb{A}\xrightarrow{\text{onto}}\mathbb{A}^{*}\) of Sobolev class \(\mathscr{W}^{1,2}\) between annuli \(\mathbb{A}=\mathbb{A}(r,R)=\{x\in\mathbb{R}^{n}:r<|x|<R\}\) and \(\mathbb{A}^{*}=\mathbb{A}(r_{*},R_{*})=\{x\in\mathbb{R}^{n}:r_{*}<|x|<R_{*}\}\). Here \(0\leqslant r<R\) and \(0\leqslant r_{*}<R_{*}\) are the inner and outer radii of \(\mathbb{A}\) and \(\mathbb{A}^{*}\). The variational approach to Geometric Function Theory [2, 3] makes this problem more important. Indeed, several papers are devoted to understanding the expected radial symmetric properties see [16] and the references therein. Many times experimentally known answers to practical problems have led us to the deeper study of such mathematically challenging problems. We seek to minimize the \(p\)-harmonic energy of mappings between two annuli in \(\mathbb{R}^{2}\). We consider the modified Dirichlet energy \(\mathscr{F}_{p}[f]=\int_{\mathbb{A}}\frac{\|Df\|^{p}}{|f|^{p}}\), \(1\leqslant p\leqslant 2\) and minimize it. ## 2. \(p\)-harmonic equation and statement of the main results For natural number \(n\), let \(A=(a_{i,j})_{n\times n}\in\mathbb{R}^{n\times n}\). We use \(A^{T}\) to denote the transpose of \(A\). 
The _Hilbert-Schmidt norm_, also called the _Frobenius norm_, of \(A\) is denoted by \(\|A\|\), where \[\|A\|^{2}=\sum_{1\leq i,j\leq n}\left|a_{i,j}\right|^{2}=\operatorname{tr}[A^{T}A].\] For \(p\geq 1\), we say that a mapping \(h\) belongs to the class \(\mathcal{W}^{1,p}(\mathbb{A},\mathbb{A}^{*})\), if \(h\) belongs to the Sobolev space \(\mathcal{W}^{1,p}(\mathbb{A})\) and maps \(\mathbb{A}\) onto \(\mathbb{A}^{*}\). Let \(h=(h^{1},\ldots,h^{n})\) belong to \(\mathcal{W}^{1,p}(\mathbb{A},\mathbb{A}^{*})\). We denote the _Jacobian matrix_ of \(h\) at the point \(x=(x_{1},\ldots,x_{n})\) by \(Dh(x)\), where \(Dh(x)=\left(\frac{\partial h^{i}}{\partial x_{j}}\right)_{n\times n}\in \mathbb{R}^{n\times n}\). Then \[\|Dh\|^{2}=\sum_{1\leq i,j\leq n}\left|\frac{\partial h^{i}}{\partial x_{j}}\right|^{2}.\] Here \(\frac{\partial h^{i}}{\partial x_{j}}\) denotes the weak partial derivative of \(h^{i}\) with respect to \(x_{j}\). If \(h\) is continuous and belongs to \(\mathcal{W}^{1,p}(\mathbb{A},\mathbb{A}^{*})\) (\(p\geq 1\)), then the weak and ordinary partial derivatives coincide a.e. in \(\mathbb{A}\) (cf. [19, Proposition 1.2]). Let \(h=\rho S\), where \(S=\frac{h}{|h|}\) and \(\rho=|h|\). By [13, Equality (3.2)], we obtain that \[Dh(x)=\nabla\rho(x)\otimes S(x)+\rho\cdot DS(x)\] and \[\|Dh(x)\|^{2}=|\nabla\rho(x)|^{2}+\rho^{2}\|DS(x)\|^{2}, \tag{2.1}\] where \(\nabla\rho\) denotes the gradient of \(\rho\). We say that \(h:\mathbb{A}\to\mathbb{A}^{*}\) is a _radial mapping_ if \(h(x)=\rho(|x|)\frac{x}{|x|}\), where \(\rho\) is a real and positive function. We use \(\mathcal{R}(\mathbb{A},\mathbb{A}^{*})\) to denote the class of radial homeomorphisms in \(\mathcal{W}^{1,p}(\mathbb{A},\mathbb{A}^{*})\) and use \(\mathcal{P}(\mathbb{A},\mathbb{A}^{*})\) to denote the class of generalized radial homeomorphisms in \(\mathcal{W}^{1,p}(\mathbb{A},\mathbb{A}^{*})\). We also use \(\mathcal{H}(\mathbb{A},\mathbb{A}^{*})\) to denote the class of homeomorphisms in \(\mathcal{W}^{1,p}(\mathbb{A},\mathbb{A}^{*})\). As mentioned before, an important problem in nonlinear elasticity is whether the radially symmetric minimizers are indeed global minimizers. For example, Iwaniec and Onninen [17] discussed the minimizers of the following two energy integrals: \[\mathfrak{E}[h]=\int_{\mathbb{A}}\|Dh(x)\|^{n}dx\quad\text{and}\quad\mathfrak{F}[h]=\int_{\mathbb{A}}\frac{\|Dh(x)\|^{n}}{|h(x)|^{n}}dx\] among all homeomorphisms in \(\mathcal{W}^{1,n}(\mathbb{A},\mathbb{A}^{*})\), respectively. The energy integral \(\mathfrak{F}\), for \(n=2\), has been considered previously by Astala, Iwaniec, and Martin in [2]. Further, this energy has been generalized to planar annuli by Kalaj in [14, 15] and to spatial annuli in [12]. On the other hand, Koski and Onninen [16] investigated the minimizers of the \(p\)-harmonic energy \[\mathcal{E}_{p}[h]=\int_{\mathbb{A}}\|Dh(x)\|^{p}dx\] among all homeomorphisms in \(\mathcal{W}^{1,p}(\mathbb{A},\mathbb{A}^{*})\), where \(\mathbb{A}\) and \(\mathbb{A}^{*}\) are planar annuli and \(1\leq p<2\), provided the homeomorphisms fix the outer boundary. Recently, Kalaj [13] studied the Dirichlet-type energy \(\mathscr{F}[h]\) among mappings in \(\mathcal{H}(\mathbb{A},\mathbb{A}^{*})\), where \[\mathscr{F}[h]=\int_{\mathbb{A}}\frac{\|Dh(x)\|^{n-1}}{|h(x)|^{n-1}}dx. \tag{2.2}\] For \(n=3\), the author proved that the minimizers of \(\mathscr{F}[h]\) are certain generalized radial diffeomorphisms (cf. [13, Theorem 1.1]). 
Motivated by the case \(n=3\), the following question was posed in [13]. **Question 2.1**.: For \(n\neq 3\), does the Dirichlet integral of \(h\in\mathcal{H}(\mathbb{A},\mathbb{A}^{*})\), i.e. the integral \[\mathscr{F}[h]=\int_{\mathbb{A}}\frac{\|Dh(x)\|^{n-1}}{|h(x)|^{n-1}}dx,\] achieve its minimum for generalized radial diffeomorphisms between annuli? Then, in the subsequent paper by Kalaj and Chen [11], the following answer was given. **Theorem 2.1**.: _For \(n\geq 4\), we have_ \[\inf_{h\in\mathcal{H}(\mathbb{A},\mathbb{A}^{*})}\mathscr{F}[h]=\inf_{h\in\mathcal{P}(\mathbb{A},\mathbb{A}^{*})}\mathscr{F}[h].\] _The last infimum is never attained._ In this paper, we consider the \(p\)-energy of Sobolev \(\mathcal{W}^{1,p}\) homeomorphisms between annuli \(\mathbb{A}\) and \(\mathbb{A}^{*}\) in the complex plane. Let \[\mathscr{F}_{p}[h]=\int_{\mathbb{A}(1,r)}\frac{\|Dh\|^{p}}{|h|^{p}},\ \ 1\leqslant p<2.\] Then we seek the homeomorphisms \(h\) of the class \(\mathcal{W}^{1,p}\) which are furthermore assumed to preserve the order of the boundary components: \(|h(z)|\to r^{*}\) when \(|z|\to r\) and \(|h(z)|\to R^{*}\) when \(|z|\to R\). Such a class of Sobolev homeomorphisms with the above property is denoted by \(\mathcal{H}^{1,p}(\mathbb{A},\mathbb{A}^{*})\) and we say that they are _admissible homeomorphisms_. Since we minimize the \(\mathscr{F}_{p}\) energy in the class of homeomorphisms, we can perform the inner variation of the independent variable \(z_{\epsilon}=z+\epsilon\tau(z)\), which leads to the system (see for example [13]) \[\operatorname{div}\left(\frac{1}{|h|^{p}}\|Dh\|^{p-2}(Dh)^{*}Dh-\frac{1}{p|h|^{p}}\|Dh\|^{p}I\right)=0, \tag{2.3}\] where \[\operatorname{div}\left(\begin{array}{cc}a(x,y)&b(x,y)\\ c(x,y)&d(x,y)\end{array}\right):=\left(\begin{array}{c}a_{x}+b_{y}\\ c_{x}+d_{y}\end{array}\right).\] Here \(z=(x,y)\). Our argument does not make direct use of the inner variational equation (2.3). Some important facts that follow from (2.3) are as follows. 1. If we assume that \(h\) is radial, then (2.3) reduces to the Euler-Lagrange equation (3.1) below. 2. Further, if \(f\) is a solution of (2.3) then so is \(\tilde{f}=\frac{1}{f}\). 3. Let \(f_{1}(z)=\frac{1}{r_{*}}f(rz)\). Then \(f_{1}:\mathbb{A}(1,r_{1})\stackrel{{\text{onto}}}{{\longrightarrow}} \mathbb{A}(1,R_{1})\), provided that \(f:\mathbb{A}(r,R)\stackrel{{\text{onto}}}{{\longrightarrow}} \mathbb{A}(r^{*},R^{*})\), where \(R_{1}=R_{*}/r_{*}\) and \(r_{1}=R/r\). Moreover, \(f\) satisfies (2.3) if and only if \(f_{1}\) satisfies the same equation. This is why we reduce the problem to the annuli \(\mathbb{A}=\mathbb{A}(1,r)\) and \(\mathbb{A}^{*}=\mathbb{A}(1,R)\). Now we formulate the main results. **Theorem 2.2**.: _Let \(\mathbb{A}\) and \(\mathbb{A}^{*}\) be planar annuli and \(1<p\leqslant 2.\) Then there exists a radially symmetric mapping \(h_{\circ}:\mathbb{A}\to\mathbb{A}^{*}\) such that_ \[\min_{\mathcal{H}^{1,p}(\mathbb{A},\mathbb{A}^{*})}\mathscr{F}_{p}[h]=\mathscr{F}_{p}[h_{\circ}]. \tag{2.4}\] _The map \(h_{\circ}\) is the unique minimizer, up to a rotation, in the class \(\mathcal{H}^{1,p}(\mathbb{A},\mathbb{A}^{*})\). Furthermore, the minimizer \(h_{\circ}\) is a homeomorphism._ **Theorem 2.3**.: _Let \(\mathbb{A}\) and \(\mathbb{A}^{*}\) be planar annuli. 
Then there exists a radially symmetric mapping \(h_{\circ}:\mathbb{A}\to\mathbb{A}^{*}\) which is a homeomorphism such that_ \[\min_{\mathcal{H}^{1,1}(\mathbb{A},\mathbb{A}^{*})}\mathscr{F}_{1}[h]=\mathscr{F}_{1}[h_{\circ}], \tag{2.5}\] _if and only if_ \[\frac{\pi}{2}-\tan^{-1}\left[\frac{1}{\sqrt{r^{2}-1}}\right]\geqslant\log R. \tag{2.6}\] _The map \(h_{\circ}\) is the unique minimizer, up to a rotation, in the class \(\mathcal{H}^{1,1}(\mathbb{A},\mathbb{A}^{*})\)._ **Remark 2.4**.: Note that the case \(p=2\) of Theorem 2.2 has already been considered by Astala, Iwaniec, and Martin in [2]. On the other hand, our result can be seen as a variation of the minimization property of radial mappings for the \(p\)-Dirichlet energy among Sobolev mappings from the unit ball \(\mathbb{B}\subset\mathbb{R}^{n}\) onto the unit sphere \(\mathbb{S}^{n-1}\), fixing the boundary. This is an old problem solved by several authors (see for example [7], [6], [18]). Furthermore, as was remarked before, Koski and Onninen [16] have considered the \(\mathcal{E}_{p}\) energy and proved the minimization property under a certain constraint. Indeed, if we denote the outer boundary of \(\mathbb{A}\) by \(\partial_{\circ}\mathbb{A}\) and consider the subfamily of homeomorphisms \(\mathcal{H}_{\circ}=\{f\in\mathcal{H}^{1,p}(\mathbb{A},\mathbb{A}^{*}):f(x)= \frac{R_{\circ}}{R}x,\ \ \text{for}\ x\in\partial_{\circ}\mathbb{A}\}\), then the minimizer of the \(\mathcal{E}_{p}\) energy is a radial mapping \(h(x)=\rho(x)\frac{x}{|x|}\) provided that \(R\) and \(r\) satisfy some inequality that depends on \(p\) ([16, Theorem 1.5]). In the same paper they proved that this constraint is crucial and there exist annuli where the minimizer of \(\mathcal{E}_{p}\) is not a radial mapping. **Remark 2.5**.: By virtue of the density of diffeomorphisms in \(\mathcal{H}^{1,p}(\mathbb{A},\mathbb{A}^{*})\), see [9, 10], we can equivalently replace the admissible homeomorphisms by sense-preserving diffeomorphisms. Indeed, for \(p\geqslant 1\), we have \[\inf_{f\in\mathcal{H}^{1,p}(\mathbb{A},\mathbb{A}^{*})}\mathcal{E}_{p}[h]=\inf_{f\in\operatorname{Diff}(\mathbb{A},\mathbb{A}^{*})}\mathcal{E}_{p}[h]. \tag{2.7}\] Here by \(\operatorname{Diff}(\mathbb{A},\mathbb{A}^{*})\) we denote the class of orientation preserving diffeomorphisms from \(\mathbb{A}\) onto \(\mathbb{A}^{*}\) which also preserve the order of the boundary components. A similar result holds for the \(\mathscr{F}_{p}\) energy. Indeed \[\inf_{f\in\mathcal{H}^{1,p}(\mathbb{A},\mathbb{A}^{*})}\mathscr{F}_{p}[h]=\inf_{f\in\operatorname{Diff}(\mathbb{A},\mathbb{A}^{*})}\mathscr{F}_{p}[h]. \tag{2.8}\] ## 3. Radial minimizer of the energy \(\mathscr{F}_{p}[h]\), \(1<p<2\) The aim of this section is to find the radial minimizer \(h_{\circ}\) of the \(\mathscr{F}_{p}\) energy that maps the annulus \(\mathbb{A}(1,r)\) onto \(\mathbb{A}(1,R)\), keeping the boundary order. Moreover, we will use that solution to prove the minimization property of \(h_{\circ}\) in the class of all Sobolev homeomorphisms. Contrary to the case \(p=1\), which will be considered later, we will not have any restriction on \(r\) and \(R\). Assume that \(h(z)=H(t)e^{i\theta}\), where \(z=te^{i\theta}\), \(H\) is a differentiable function, \(t\in[1,r]\), and \(\theta\in[0,2\pi]\). 
Then \[\|Dh\|^{2}=|h_{t}|^{2}+\frac{|h_{\theta}|^{2}}{t^{2}}=\dot{H}(t)^{2}+\frac{H(t )^{2}}{t^{2}}.\] Furthermore \[t\frac{\|Dh\|^{p}}{|h|^{p}}=t\left(\frac{1}{t^{2}}+\frac{\dot{H}(t)^{2}}{H(t) ^{2}}\right)^{p/2}.\] Let \[L(t,H,\dot{H})=t\left(\frac{1}{t^{2}}+\frac{\dot{H}(t)^{2}}{H(t)^{2}}\right)^{ p/2}.\] Then Euler-Lagrange equation \[L_{H}=\partial_{t}L_{\dot{H}},\] can be written in the following form \[\ddot{H}=\frac{\dot{H}\left((p-3)H^{3}+tH^{2}\dot{H}-t^{2}H\dot{H}^{2}+(p-1)t^ {3}\dot{H}^{3}\right)}{tH^{3}+(p-1)t^{3}H\dot{H}^{2}}, \tag{3.1}\] where \(H=H(t)\), \(\dot{H}=H^{\prime}(t)\) and \(\ddot{H}=H^{\prime\prime}(t)\). Then by straightforward calculation (3.1) can be reduced to the following differential equation \[\frac{t\dot{H}(t)}{H(t)}=\frac{\sqrt{g(t)}}{\sqrt{1-g(t)}}, \tag{3.2}\] where \(g\) is a solution to the following differential equation \[\dot{g}(t)=F[t,g(t)]:=\frac{2(2-p)(g(t)-1)g(t)}{t+(p-2)tg(t)}. \tag{3.3}\] Show that \(F<0\) provided that \(t\geqslant 1\) and \(g(t)\in(0,1)\). Namely \[t+(-2+p)tg(t)\geqslant t+(p-2)t=(p-1)t>0.\] Since \(2(2-p)(g(t)-1)g(t)<0\) we infer that \(g\) is a decreasing function. The general solution of (3.3) is given by \(g=k^{-1}\), where the function \(k\) is defined by \[k(s)=b\exp\left(\frac{(p-1)\log(1-s)-\log s}{2(2-p)}\right), \tag{3.4}\] where \(b\) is a positive constant and \(s\in(0,1)\). By (3.2) we infer that \(H\) is given by \[H(t)=C\exp\left[\int_{1}^{t}\frac{\sqrt{g(x)}}{\sqrt{1-g(x)}x}\,dx\right]. \tag{3.5}\] By using the change \(t=k(s)\) in (3.5) we obtain \[H(t)=C\exp\left[\int_{g(t)}^{g(1)}\frac{\left(\frac{p-1}{1-s}+\frac{1}{s} \right)\sqrt{s}}{2(2-p)\sqrt{1-s}}ds\right]. \tag{3.6}\] Since we seek increasing homeomorphic mappings \(H:[1,r]\xrightarrow{\text{onto}}[1,R]\), we have the initial conditions \(H(1)=1\) and \(H(r)=R\). Then \(C=1\). Let \(0<\tau<1\) and chose \(b=b(\tau)\) so that \[b=\exp\left(\frac{(p-1)\log(1-\tau)-\log\tau}{2(p-2)}\right).\] Denote the corresponding \(g\) by \(g_{\tau}\). Then we have \(g_{\tau}(1)=\tau\). Moreover by (3.4) \[g_{\tau}\left[\exp\left(\frac{(p-1)\log(\frac{1-t}{1-\tau})-\log\frac{t}{\tau} }{2(2-p)}\right)\right]=t.\] Define the function \[\mathcal{R}(\tau)=\exp\left[\int_{g_{\tau}(r)}^{\tau}\frac{\left(\frac{p-1}{1- s}+\frac{1}{s}\right)\sqrt{s}}{2(2-p)\sqrt{1-s}}dx\right].\] Then we also define \[H_{\tau}(t)=\exp\left[\int_{g_{\tau}(t)}^{\tau}\frac{\left(\frac{p-1}{1-s}+ \frac{1}{s}\right)\sqrt{s}}{2(2-p)\sqrt{1-s}}ds\right].\] Then \[H_{\tau}(1)=1\] and \[H_{\tau}(r)=\mathcal{R}(\tau). \tag{3.7}\] Let us show that there is a unique \(s_{\circ}=s(r,\tau)\in(0,\tau)\) such that \(B(s_{\circ})=0\), where \[B(s):=\frac{(p-1)\log(\frac{1-s}{1-\tau})-\log\frac{s}{\tau}}{2(2-p)}-\log r.\] Note that \(B\) is continuous, \(B(\tau)=0\) and \(B(0)=+\infty\). Moreover \[B^{\prime}(s)=\frac{1+(-2+p)s}{2(2-p)(-1+s)s}<0.\] Thus there is a unique \(s_{\circ}\) so that \(B(s_{\circ})=0\). Then \(g_{\tau}(r)=s_{\circ}\). Since for \(0<s<\tau\) and \(p\in(1,2]\), we have \[\frac{\log(\frac{1-s}{1-\tau})-\log\frac{s}{\tau}}{2(2-p)}\geqslant\frac{(p- 1)\log(\frac{1-s}{1-\tau})-\log\frac{s}{\tau}}{2(2-p)},\] it follows that \[\frac{\log(\frac{1-s_{\circ}}{1-\tau})-\log\frac{s_{\circ}}{\tau}}{2(2-p)}- \log r\geqslant B(s_{\circ})=\frac{(p-1)\log(\frac{1-s_{\circ}}{1-\tau})-\log \frac{s_{\circ}}{\tau}}{2(2-p)}-\log r=0.\] Thus \[0<s_{\circ}<\tau_{\circ}=\frac{1}{1+r^{4-2p}\left(-1+\frac{1}{\tau}\right)}. 
\tag{3.8}\] Then \[\mathcal{R}(\tau)=\exp\left[\int_{s_{\circ}}^{\tau}\frac{\left(\frac{p-1}{1-s}+\frac{1}{s}\right)\sqrt{s}}{2(2-p)\sqrt{1-s}}ds\right].\] Let us show now that, if \(p>1\), then for every \(R\in(1,+\infty)\), there is \(\tau\in(0,1)\) so that \(\mathcal{R}(\tau)=R\). It is clear that \(\mathcal{R}\) is continuous and also it is clear that \(\lim_{\tau\to 0}\mathcal{R}(\tau)=1\). Let us show that \(\lim_{\tau\to 1}\mathcal{R}(\tau)=+\infty\). Observe that \(0\leqslant s\leqslant\sqrt{s}\leqslant 1\). Then from (3.8) we have that \[\mathcal{R}(\tau)\geqslant K(\tau),\] where \[K(\tau)=\exp\left[\int_{\tau_{\circ}}^{\tau}\frac{\left(\frac{p-1}{s-1}+\frac{1}{s}\right)s}{2(2-p)\sqrt{1-s}}ds\right].\] Then \(K(\tau)=\exp(k(\tau)-k(\tau_{\circ}))\), where \[k(s)=\frac{3+p(s-2)-2s}{(2-p)\sqrt{1-s}}.\] Then \[\lim_{\tau\to 1^{-}}\sqrt{1-\tau}\log K(\tau)=\frac{(p-1)\left(r^{2}+r^{p}\right)}{(2-p)r^{2}}.\] We note that this is precisely the point where the assumption \(p\in(1,2)\) is essential. In particular \(\lim_{\tau\to 1}\mathcal{R}(\tau)=\infty\). Hence there is \(\tau=\tau(r,R)\) such that \(\mathcal{R}(\tau)=R\). In view of (3.7), we have constructed a smooth increasing mapping \(H_{\circ}=H_{r,R}:[1,r]\to[1,R]\) such that \(H_{\circ}(1)=1\) and \(H_{\circ}(r)=R\). Let us show that \[h_{\circ}(z)=H_{\circ}(t)e^{i\theta},\ \ z=te^{i\theta}, \tag{3.9}\] is the minimizer in the class of radial homeomorphisms between \(\mathbb{A}\) and \(\mathbb{A}^{*}\). Assume now that \(H:[1,r]\to[1,R]\) is any smooth homeomorphism and assume that \(h(z)=H(t)e^{i\theta}\). We now prove that \[\mathscr{F}_{p}[h]\geqslant\mathscr{F}_{p}[h_{\circ}]. \tag{3.10}\] We start from a simple inequality from [16], \[(a+b)^{q/2}\geqslant \left(1-s\right)^{1-q/2}a^{q/2}+s^{1-q/2}b^{q/2},\ \ q\in[1,2],\ \ s\in[0,1]. \tag{3.11}\] By inserting \(q=p\), \(s=g(t)\), \[a=t^{\frac{2}{p}-2},\ \ b=t^{2/p}\frac{\dot{H}^{2}}{H^{2}}\] in (3.11) we have \[t\left(\frac{1}{t^{2}}+\frac{\dot{H}^{2}}{H^{2}}\right)^{p/2} =\left(t^{2/p-2}+t^{2/p}\frac{\dot{H}^{2}}{H^{2}}\right)^{p/2}\] \[\geqslant\left(1-g(t)\right)^{1-p/2}t^{1-p}+g(t)^{1-p/2}t\frac{|\dot{H}|^{p}}{|H|^{p}}. \tag{3.12}\] The equality in (3.11) is attained precisely when \[\frac{b}{a}=\frac{s}{1-s}\] and thus the equality is attained in (3.12) precisely when \[\frac{t\dot{H}}{H}=\frac{\sqrt{g(t)}}{\sqrt{1-g(t)}}. \tag{3.13}\] Then by \[a^{p}\geqslant px^{p-1}a-(p-1)x^{p}, \tag{3.14}\] where \(a=\frac{\dot{H}(t)}{H(t)}\) and \(x=\frac{\sqrt{g(t)}}{t\sqrt{1-g(t)}}\), we get \[t\left(\frac{1}{t^{2}}+\frac{\dot{H}(t)^{2}}{H(t)^{2}}\right)^{p/2}\geqslant t^{1-p}\frac{1-pg(t)}{(1-g(t))^{p/2}}+\frac{t^{2-p}\sqrt{g(t)}}{(1-g(t))^{(p-1)/2}}p\frac{\dot{H}}{H}. \tag{3.15}\] Notice that the condition (3.13) is precisely satisfied when we have equality in (3.15). Define \[P(t)=t^{2-p}\left(1-g(t)\right)^{\frac{1}{2}(1-p)}\sqrt{g(t)},\] and let us show that it is constant. _This fact is crucial for our approach._ By (3.3) we obtain that \[\frac{P^{\prime}(t)}{P(t)}=\frac{2-p}{t}+\frac{(1+(p-2)g(t))\dot{g}(t)}{2(1-g(t))g(t)}=0.\] Thus \[P(t)\equiv c=P(r)=r^{2-p}\left(1-g(r)\right)^{\frac{1}{2}(1-p)}\sqrt{g(r)}. \tag{3.16}\] Observe that \[g(r)=g_{\tau}(r)=c_{\circ}(r,\tau)=c_{\circ}(r,\tau(r,R)).\] Thus \(c=c(r,R)\). 
Now we have \[\mathscr{F}_{p}[h] =2\pi\int_{1}^{r}t\left(\frac{1}{t^{2}}+\frac{\dot{H}(t)^{2}}{H^{2}(t)}\right)^{p/2}dt\] \[\geqslant 2\pi\int_{1}^{r}\left(t^{1-p}\frac{1-pg(t)}{(1-g(t))^{p/2}}+c(r,R)\frac{\dot{H}}{H}\right)dt\] \[=2\pi\int_{1}^{r}\left(t^{1-p}\frac{1-pg(t)}{(1-g(t))^{p/2}}\right)dt+2\pi\int_{1}^{r}c(r,R)\frac{\dot{H}(t)}{H(t)}dt\] \[=2\pi\int_{1}^{r}\left(t^{1-p}\frac{1-pg(t)}{(1-g(t))^{p/2}}\right)dt+2\pi c(r,R)\log R\] \[=\mathscr{F}_{p}[h_{\circ}].\] ## 4. Radial minimizers for the case \(p=1\) The corresponding subintegral expression for the functional \(\mathscr{F}_{1}[h]=\int_{\mathbb{A}(1,r)}\frac{\|Dh(z)\|}{|h(z)|}\), for a radial function \(h(z)=H(t)e^{i\theta}\), \(z=te^{i\theta}\), is given by \[L(t,H,\dot{H})=\left(1+\frac{t^{2}\dot{H}(t)^{2}}{H(t)^{2}}\right)^{1/2}.\] The corresponding differential equation (3.1) for \(p=1\) reduces to \[-tH(t)\dot{H}(t)^{2}+t^{2}\dot{H}(t)^{3}+H(t)^{2}\left(2\dot{H}(t)+t\ddot{H}(t)\right)=0, \tag{4.1}\] which can be written in the following form \[\frac{t\dot{H}(t)}{H(t)}=\frac{\sqrt{g(t)}}{\sqrt{1-g(t)}},\] where \(g\) is a solution of the differential equation (see (3.3) for \(p=1\)): \[2g(t)+t\dot{g}(t)=0. \tag{4.2}\] The general solution of (4.2) is given by \(g(t)=b^{-2}t^{-2}\), where \(b>0\) is a constant. Hence the solution of (4.1) solves the equation \[\frac{t\dot{H}(t)}{H(t)}=\frac{1}{\sqrt{b^{2}t^{2}-1}}\] and it is given by \[H(t)=c\exp\left(-\cot^{-1}\left[\sqrt{b^{2}t^{2}-1}\right]\right).\] If we require that \(H(1)=1\), then \[H(t)=\exp\left(\cot^{-1}\left[\sqrt{b^{2}-1}\right]-\cot^{-1}\left[\sqrt{b^{2}t^{2}-1}\right]\right). \tag{4.3}\] Here \(b\geqslant 1\); a short symbolic check that (4.3) indeed solves (4.1) is sketched at the end of the paper. Moreover, if we assume that \(H(r)=R\), then after straightforward computations we get \[b=\frac{\sqrt{(1+r^{2}-2r\cos\log R)}\csc[\log R]}{r}.\] The corresponding minimizer is denoted by \(h_{\circ}(z)=H(t)e^{i\theta}\), \(z=te^{i\theta}\). Hence \[\mathscr{F}[h]=2\pi\int_{1}^{r}\left(1+\frac{t^{2}(\dot{H}(t))^{2}}{H^{2}(t)}\right)^{1/2}dt\geqslant 2\pi\int_{1}^{r}\left(\sqrt{1-\frac{1}{b^{2}t^{2}}}+\frac{\dot{H}(t)}{bH(t)}\right)dt.\] Thus \[\mathscr{F}[h]\geqslant\mathscr{F}[h_{\circ}],\] where \[\mathscr{F}[h_{\circ}]=2\pi\frac{-\sqrt{b^{2}-1}+\sqrt{b^{2}r^{2}-1}-\csc^{-1}\left[b\right]+\csc^{-1}\left[br\right]}{b}+\frac{2\pi\log R}{b}.\] **Lemma 4.1**.: _There exists a radial homeomorphism \(h:\mathbb{A}(1,r)\rightarrow\mathbb{A}(1,R)\) whose radial part is of the form (4.3) if and only if_ \[\frac{\pi}{2}-\tan^{-1}\left[\frac{1}{\sqrt{r^{2}-1}}\right]\geqslant\log R.\] Proof.: By differentiating (4.3) w.r.t. \(b\) we get \[\partial_{b}H(t)=\frac{\exp\left(\cot^{-1}\left[\sqrt{-1+b^{2}}\right]-\cot^{-1}\left[\sqrt{-1+b^{2}t^{2}}\right]\right)\left(-\frac{1}{\sqrt{-1+b^{2}}}+\frac{1}{\sqrt{-1+b^{2}t^{2}}}\right)}{b}.\] Hence \(H\) is decreasing in \(b\). The largest value is attained for \(b=1\) and, at \(t=r\), it is equal to \[R_{\circ}(r):=\exp\left(\frac{\pi}{2}-\tan^{-1}\left[\frac{1}{\sqrt{r^{2}-1}}\right]\right).\] In other words, there is an increasing diffeomorphism of \([1,r]\) onto \([1,R]\) if and only if \(R\leqslant R_{\circ}(r)\). **Remark 4.2**.: Observe that \(\lim_{r\to\infty}R_{\circ}(r)=e^{\pi/2}\), so there is no homeomorphic minimizer of \(\mathscr{F}\) between the annuli \(\mathbb{A}(1,r)\) and \(\mathbb{A}(1,e^{\pi/2})\). Note that the conformal modulus of \(\mathbb{A}(1,e^{\pi/2})\) is \(\log e^{\pi/2}=\pi/2\). So the case \(p=1\) differs from the case \(p>1\). Moreover, this case is also opposite to the Nitsche-type phenomenon for the Dirichlet energy \(\mathcal{E}\). 
Namely, the Nitsche-type phenomenon asserts that \(R\) can be arbitrarily large, but not arbitrarily small. ## 5. Proof of Theorem 2.2 and Theorem 2.3 We begin with the following proposition. **Proposition 5.1**.: _Assume that \(h=\rho(z)e^{i\Theta(z)}\) is a diffeomorphism between annuli \(\mathbb{A}(1,r)\) and \(\mathbb{A}(1,R)\). Then for every \(t\in[1,r]\) and \(\theta\in[0,2\pi]\) we have_ \[\int_{t\mathbb{T}}|\nabla\Theta(z)||dz|\geqslant 2\pi. \tag{5.1}\] _If equality holds in (5.1) for every \(t\in[1,r]\), then \(e^{i\Theta(z)}=e^{i\varphi(\theta)},\,z=te^{i\theta}\), for a diffeomorphism \(\varphi:[0,2\pi]\xrightarrow{\text{onto}}[\alpha,2\pi+\alpha]\). Further, we have_ \[\int_{1}^{r}\frac{|\nabla\rho(te^{i\theta})|}{\rho(te^{i\theta})}dt\geqslant \log R. \tag{5.2}\] _If equality holds in (5.2) for every \(\theta\in[0,2\pi]\), then \(\rho(te^{i\theta})=\rho(t)\)._ Proof of Proposition 5.1.: First of all, for fixed \(t\), \(\gamma(\theta)=e^{i\Theta(te^{i\theta})}\) is a surjection of \([0,2\pi]\) onto \(\mathbb{T}=\{z:|z|=1\}\). Further \[|\nabla\Theta(te^{i\theta})|^{2}=|\Theta_{t}|^{2}+\frac{|\Theta_{\theta}|^{2}}{t^{2}}.\] So \[|\gamma^{\prime}(\theta)|=|\Theta_{\theta}|\leqslant t|\nabla\Theta(te^{i\theta})|. \tag{5.3}\]

Figure 1. The graph of \(H_{\circ}\) satisfying the initial conditions \(H_{\circ}(1)=1\), \(H_{\circ}(2)=2\); it is far from being the identity.

The equality is attained in (5.3) if and only if \(\Theta_{t}\equiv 0\). In this case \(\gamma(\theta)=e^{i\varphi(\theta)}\), for a smooth function \(\varphi:[0,2\pi]\xrightarrow{\text{onto}}[\alpha,2\pi+\alpha]\). We obtain that \[|\mathbb{T}|=2\pi\leqslant\int_{0}^{2\pi}|\gamma^{\prime}(\theta)|d\theta\leqslant\int_{t\mathbb{T}}|\nabla\Theta(z)||dz|,\] with an equality if and only if \(\Theta(te^{i\theta})\) does not depend on \(t\). Thus the first statement of the proposition is proved. Similarly, the function \(\alpha(t)=\log\rho(te^{i\theta})\) is a surjection of \([1,r]\) onto \([0,\log R]\) and hence \[\log R=\int_{1}^{r}\alpha^{\prime}(t)dt\leqslant\int_{1}^{r}\frac{|\nabla\rho(te^{i\theta})|}{\rho(te^{i\theta})}dt.\] The equality statement can be proved in the same way as the former part. We only need to use the formula \[|\nabla\rho(te^{i\theta})|^{2}=|\rho_{t}|^{2}+\frac{|\rho_{\theta}|^{2}}{t^{2}}\geqslant|\rho_{t}|^{2}.\] Proof of Theorem 2.2.: Assume as before that \(h(z)=\rho(z)e^{i\Theta(z)}\) is a mapping from the annulus \(\mathbb{A}\) onto the annulus \(\mathbb{A}^{*}\). We start from the following inequality, which follows from Hölder's inequality: \[\mathscr{F}_{p}[h]=\int_{\mathbb{A}(1,r)}\frac{\|Dh\|^{p}}{|h|^{p}}\geqslant \frac{\left(\int_{\mathbb{A}(1,r)}\frac{\|Dh\|}{|h|}\cdot\frac{\|Dh_{\circ}\|^{p-1}}{|h_{\circ}|^{p-1}}\right)^{p}}{\left(\int_{\mathbb{A}(1,r)}\frac{\|Dh_{\circ}\|^{p}}{|h_{\circ}|^{p}}\right)^{p-1}}.\] In view of (2.1), \[\|Dh\|^{2}=|\nabla\rho|^{2}+\rho^{2}|\nabla\Theta|^{2},\] where \(\rho(z)=|h(z)|\). And thus \[\frac{\|Dh\|}{|h|}=\left(|\nabla\Theta|^{2}+\frac{|\nabla\rho|^{2}}{\rho^{2}}\right)^{1/2}.\] Then by (3.11), for \(q=1\) we have \[\frac{\|Dh\|}{|h|}\geqslant\left(\sqrt{1-g(t)}|\nabla\Theta|+\sqrt{g(t)}\frac{|\nabla\rho|}{\rho}\right). 
\tag{5.4}\] From (5.4) we get \[\int_{\mathbb{A}(1,r)}\frac{\|Dh\|}{|h|} \cdot\frac{\|Dh_{\circ}\|^{p-1}}{|h_{\circ}|^{p-1}}\] \[=\int_{\mathbb{A}(1,r)}\left(|\nabla\Theta|^{2}+\frac{|\nabla \rho|^{2}}{\rho^{2}}\right)^{1/2}\cdot\frac{\|Dh_{\circ}\|^{p-1}}{|h_{\circ}| ^{p-1}}\] \[\geqslant\int_{0}^{2\pi}\int_{1}^{r}t\frac{\|Dh_{\circ}\|^{p-1}}{ |h_{\circ}|^{p-1}}\left(\sqrt{1-g(t)}|\nabla\Theta|+\sqrt{g(t)}\frac{|\nabla \rho|}{\rho}\right)dtd\theta.\] Let \[K(t)=t\sqrt{g(t)}\frac{\|Dh_{\circ}\|^{p-1}}{|h_{\circ}|^{p-1}}.\] Then \[K(t)=t\sqrt{g(t)}\left(\frac{1}{t^{2}}+\frac{g(t)}{t^{2}(1-g(t))}\right)^{\frac{1}{ 2}(p-1)}=P(t).\] Thus we again use (3.16) to conclude that \(K(t)=c(r,R)\). Furthermore \[t\frac{\|Dh\|}{|h|}\cdot\frac{\|Dh_{\circ}\|^{p-1}}{|h_{\circ}|^{ p-1}} \geqslant t\left(t^{2}(1-g(t))\right)^{\frac{1}{2}(1-p)}\left[ \sqrt{1-g(t)}|\nabla\theta|+\sqrt{g(t)}\frac{|\nabla\rho|}{\rho}\right]\] \[=t^{2-p}\left(1-g(t)\right))^{1-p/2}|\nabla\Theta|+c(r,R)\frac{| \nabla\rho|}{\rho}.\] Now by Proposition 5.1 we have \[\int_{\mathbb{A}}\frac{|\nabla\rho|}{|\rho|}\geqslant 2\pi\log R\] and \[t\int_{0}^{2\pi}|\nabla\Theta(te^{i\theta})|d\theta\geqslant 2\pi.\] So we have \[\int_{\mathbb{A}(1,r)}\frac{\|Dh\|}{|h|}\cdot\frac{\|Dh_{\circ}\|^{p-1}}{|h_{ \circ}|^{p-1}}\geqslant 2\pi\left(c\log R+\int_{1}^{r}t^{2-p}(1-g(t))^{1-p/2} dt\right)=\mathscr{F}_{p}[h_{\circ}].\] Thus \[\mathscr{F}_{p}[h]\geqslant\frac{\mathscr{F}_{p}^{p}[h_{\circ}]}{\mathscr{F} _{p}^{p-1}[h_{\circ}]}=\mathscr{F}_{p}[h_{\circ}].\] The uniqueness part of this theorem follows from Proposition 5.1. The equation in (5.4) is satisfied if and only if \[\frac{\rho(te^{i\theta})|\nabla\Theta(te^{i\theta})|}{|\nabla\rho(te^{i\theta} )|}\] is a function that depends only on \(t\). Since \(\Theta(\theta)=e^{i\varphi(\theta)}\), we get \(|\nabla\Theta(\theta)|=\varphi^{\prime}(\theta)=\text{const}\). Because \(\varphi:[0,2\pi]\stackrel{{\text{onto}}}{{\longrightarrow}}[ \alpha,2\pi+\alpha]\), it follows that \(\varphi(\theta)=\theta+\alpha\). In other words \(h(z)\) is a minimizer if and only if \(h(z)=H_{\circ}(t)e^{i(\theta+\alpha)}=e^{i\alpha}h_{\circ}(z)\). This finishes the proof. Proof of Theorem 2.3.: The proof of Theorem 2.3 is the same as the proof of Theorem 2.2 up to the part concerning the existence of the radial solutions given in Section 4 (See Lemma 4.1).
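For the radial profile constructed in Section 4, here is the symbolic sanity check announced there: a sketch (assuming sympy, and not part of the original argument) confirming that (4.3) satisfies the Euler-Lagrange equation (4.1) for every \(b\geqslant 1\).

```python
# Sketch: symbolic check that H(t) from (4.3) solves the Euler-Lagrange equation (4.1).
import sympy as sp

t, b = sp.symbols("t b", positive=True)
# Radial profile (4.3), normalized so that H(1) = 1.
H = sp.exp(sp.acot(sp.sqrt(b**2 - 1)) - sp.acot(sp.sqrt(b**2 * t**2 - 1)))
Hp, Hpp = sp.diff(H, t), sp.diff(H, t, 2)

# Left-hand side of (4.1); it vanishes identically in t and b.
lhs = -t * H * Hp**2 + t**2 * Hp**3 + H**2 * (2 * Hp + t * Hpp)
print(sp.simplify(lhs / H**3))  # expected output: 0
```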
2308.16599
Using machine learning to understand causal relationships between urban form and travel CO2 emissions across continents
Climate change mitigation in urban mobility requires policies reconfiguring urban form to increase accessibility and facilitate low-carbon modes of transport. However, current policy research has insufficiently assessed urban form effects on car travel at three levels: (1) Causality -- Can causality be established beyond theoretical and correlation-based analyses? (2) Generalizability -- Do relationships hold across different cities and world regions? (3) Context specificity -- How do relationships vary across neighborhoods of a city? Here, we address all three gaps via causal graph discovery and explainable machine learning to detect urban form effects on intra-city car travel, based on mobility data of six cities across three continents. We find significant causal effects of urban form on trip emissions and inter-feature effects, which had been neglected in previous work. Our results demonstrate that destination accessibility matters most overall, while low density and low connectivity also sharply increase CO$_2$ emissions. These general trends are similar across cities but we find idiosyncratic effects that can lead to substantially different recommendations. In more monocentric cities, we identify spatial corridors -- about 10--50 km from the city center -- where subcenter-oriented development is more relevant than increased access to the main center. Our work demonstrates a novel application of machine learning that enables new research addressing the needs of causality, generalizability, and contextual specificity for scaling evidence-based urban climate solutions.
Felix Wagner, Florian Nachtigall, Lukas Franken, Nikola Milojevic-Dupont, Rafael H. M. Pereira, Nicolas Koch, Jakob Runge, Marta Gonzalez, Felix Creutzig
2023-08-31T09:57:52Z
http://arxiv.org/abs/2308.16599v2
# A Causal Discovery Approach to Learn How Urban Form Shapes Sustainable Mobility Across Continents ###### Abstract Global sustainability requires low-carbon urban transport systems, shaped by adequate infrastructure, deployment of low-carbon transport modes and shifts in travel behavior. To adequately implement alterations in infrastructure, it is essential to understand the location-specific cause-and-effect mechanisms that the built environment exerts on travel. Yet, current research falls short in representing causal relationships between the "6D" urban form variables and travel, generalizing across different regions, and modeling urban form effects at high spatial resolution. Here, we address all three gaps by utilizing a causal discovery and an explainable machine learning framework to detect urban form effects on intra-city travel based on high-resolution mobility data of six cities across three continents. We show that distance to the city center, demographics and density indirectly affect other urban form features. By considering the causal relationships, we find that location-specific influences align across cities, yet vary in magnitude. In addition, the spread of the city and the coverage of jobs across the city are the strongest determinants of travel-related emissions, highlighting the benefits of compact development. Differences in urban form effects across the cities call for a more holistic definition of 6D measures. Our work is a starting point for location-specific analysis of urban form effects on mobility behavior using causal discovery approaches, which is highly relevant for city planners and municipalities across continents. ## 1 Introduction and Background Cities are currently responsible for 70% of the world's carbon emissions [1] and by 2050 nearly 70% of the earth's population will live in urban areas [2]. Thus, they have a pivotal role in defining how humanity will respond to the climate crisis. The largest CO2 emitter in cities next to the building sector is urban transport, being responsible for 3 Gt CO2-eq per year [3]. For sustainable urban transport, physical infrastructure is widely accepted to be a key leverage point, with stronger influence than personal or social factors [4]. To combat these grand challenges, governments around the globe are heavily investing in redesigning urban infrastructure, as demonstrated, for example, by the $1.2 trillion Bipartisan Infrastructure Bill, signed by the president of the United States Joe Biden in 2021 [5]. However, to effectively implement necessary changes, it is important to determine which infrastructure investments will be most effective where. Deriving these insights is challenging, as randomized controlled trials are hardly feasible in complex socio-environmental contexts, such as cities. Fortunately, novel causal discovery and inference methods make it possible to approximate these cause-and-effect relationships from purely observational data [6]. For planning infrastructure interventions this is particularly relevant, due to the longevity and resulting lock-in effects of urban planning decisions. Previous studies have shown that compact urban development is associated with lower vehicle kilometers traveled (VKT), which significantly contributes to lower travel-related emissions [7; 8; 9]. Compact development is defined via a set of features called the "6Ds of compact development", describing a location's destination access, density, distance to transit, diversity, design and demographic properties [10]. 
Most prior studies concluded that the two variables that contribute the most to reducing VKT are accessibility, described by the proximity to a city center (or a certain number of jobs) [11; 12; 10; 13; 14; 15], followed by urban density measures, such as population density [15; 12; 11; 16; 17; 10; 18; 19; 20; 21]. Socio-demographic variables, such as income, age or gender, were also found to have an effect. Income was found to have both negative [12; 15; 20; 16] and positive [13; 22] correlations with VKT. Smaller effects on VKT have been found with the design of locations, such as intersection density or number of four-way intersections [17; 10; 18; 14; 22], as well as with land use diversity, such as high land use mix ratios [16; 17; 20; 10; 18; 22]. Higher distances to transit have also been found to have a smaller increasing effect on travel distances [12; 10; 18; 16; 22], yet, similar to land use diversity, they tend to be more relevant for mode choice decisions in the context of low-carbon transport. Rather little work has paid attention to the cause-and-effect relationships between the individual 6D urban form features [23]. Naess et al. [24] have argued for methods that incorporate causal relationships, as urban form features such as distance to subcenters, density and street design are mutually dependent on distance to center and affect transit provision. Similar work using causal methods has analyzed the effects of urban infrastructure change on induced cycling [25] and of new infrastructure on urbanization [26], or used graph discovery methods to understand the causal effects of urban form on mode choice [27; 28; 29] or subway ridership [30]. The analysis of urban form effects on VKT aims to inform decision makers about which infrastructure changes can have the largest effect on emission reductions. Yet, first, many of the prior findings focus only on a few cities or on several cities of the same country (with a strong focus on US-based cities). This results in highly case-specific findings, making it difficult to disentangle whether effects are specific to a city or may have a more general applicability. Second, very few studies address the causal questions of "what and how" with methods designed for causal reasoning [23; 27]. Consequently, criticism has been raised concerning the general impact of urban form [18] and the lack of understanding of interaction effects between D-variables [31; 24]. Third, little research has focused on analyzing cause-effect mechanisms in a spatially-explicit manner [13], thereby ignoring inner-city differences, which are highly relevant for real-world implementation of low-carbon planning. The increasing availability of big data on mobility and urban form, combined with causal graph discovery and explainable machine learning methods, makes it possible to address all three shortcomings. Here, we propose a novel cross-city analysis of the spatially-explicit causal effects of urban form on travel-related carbon emissions. Our contributions are threefold: (1) We first utilize observational data across six cities (Berlin, Boston, Los Angeles, the Bay Area, Rio de Janeiro and Bogota) to establish a directed acyclic graph (DAG) representing the causal paths between a subset of the 6D features and car travel distances for commuting (VKT), providing a nuanced understanding of how urban form interacts and causes VKT. 
(2) We then utilize the information of the DAG to inform a supervised machine learning model to reveal how much of the variation in daily VKT can be explained by urban form. (3) Finally, we use the DAG to demonstrate how the causal effects of individual urban form features unfold by applying causal shapley values. ## 2 Results ### 6D urban form effects on car travel follow direct and indirect paths. Via causal graph discovery, we find for the given set of variables that across all cities destination accessibility, density and design have a direct effect on VKT, while no significant direct effect could be measured for income. In addition, we observe inter-dependencies between several of the 6D urban form features, enabling more effective urban form interventions based on a clear distinction of causes and effects. To examine the relationship between the considered 6D urban form variables and VKT, we apply the conditional-independence-test-based causal discovery framework PCMCI+ [32]. The framework identifies spurious correlations (e.g. resulting from indirect links or common drivers) by detecting and orienting edges in a DAG using conditional independence tests among all observed variables and VKT. The discovered DAG can be interpreted as possibly causal for the set of considered 6D variables and under certain assumptions, thus allowing for stronger statements about causality than previous correlation-based studies. It also provides an intuition of causal strength via the test statistic of the applied conditional independence test in the form of the Momentary Conditional Independence (MCI) partial correlation value. To calculate a DAG, we use a balanced sample of 1314 traffic analysis zones (TAZ). As some cities have more TAZ than others, the samples are pooled equally from all cities to remove city-specific sample bias. We subtract the mean and scale to unit variance per city to remove city-specific hidden confounding, such as differences in gas prices or network lengths. We assume variable relationships to be continuous and linear with different marginal distributions and, hence, select the robust partial correlation independence test in the PCMCI+ framework (for further assumptions and model specifications, see 5. Methods, Modelling). Figure 1: **Comparison of causal graphs of 6D urban form features on VKT.** Causal DAG displaying the relationships between Destination Accessibility, Density, Demographics, Design and VKT based on a literature review (left) and based on the causal discovery framework PCMCI+ using the robust partial correlation (Robust ParCorr) conditional independence test at \(\alpha=0.025\) (right). The arrows point in the direction of the causal effect, while the coloring in the right panel denotes the cross-Momentary Conditional Independence (MCI) partial correlation value, indicating the causal strength of the effect between two nodes. Via PCMCI+ we analyze the causal relationships between the distance to the main center of a city, distance to employment, population density, household income, street connectivity and VKT. The resulting DAG (right side in Fig. 1) is compared with the DAG derived from a literature review (left side in Fig. 1, compare Appendix, Table 3). We observe that lower distances to the main center and to employment, as well as higher population density and street connectivity, have decreasing direct effects on VKT. 
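A minimal sketch of the discovery setup just described (per-city standardization followed by PCMCI+ with a robust partial correlation test and contemporaneous links only) is given below. It assumes the tigramite package; the module paths may differ between tigramite versions, and the column names are illustrative rather than taken from the actual pipeline.

```python
# Sketch: per-city standardization + PCMCI+ with a robust partial correlation test.
import pandas as pd
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests.robust_parcorr import RobustParCorr  # path may vary by version

FEATURES = ["dist_center", "dist_employment", "pop_density",
            "income", "street_connectivity", "vkt"]  # illustrative column names

def standardize_per_city(df: pd.DataFrame) -> pd.DataFrame:
    """Subtract the city mean and scale to unit variance per city to reduce hidden confounding."""
    out = df.copy()
    out[FEATURES] = df.groupby("city")[FEATURES].transform(lambda s: (s - s.mean()) / s.std())
    return out

def discover_dag(taz_df: pd.DataFrame, alpha: float = 0.025):
    """Run PCMCI+ on pooled, standardized TAZ records; tau_max=0 keeps only contemporaneous links."""
    data = standardize_per_city(taz_df)[FEATURES].to_numpy()
    frame = pp.DataFrame(data, var_names=FEATURES)
    pcmci = PCMCI(dataframe=frame, cond_ind_test=RobustParCorr())
    results = pcmci.run_pcmciplus(tau_min=0, tau_max=0, pc_alpha=alpha)
    return results["graph"], results["val_matrix"]  # edge orientations and MCI values
```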
In comparison to the graph from the literature, we cannot find a significant effect of demographics, measured by mean household income, on VKT. In addition, we observe further causal relationships between some of the 6D urban form features. We find that distance to the main center affects distance to employment and density, while we cannot observe any further indirect effects on other variables. We also find that population density is positively linked to street connectivity (reflecting a more connected street network in denser areas) and negatively to distance to employment. Additionally, income negatively affects density (reflecting that higher-income areas are less dense across the cities). Analyzing the MCI value of each link causing the target, the direct effects of distance to employment (\(\textit{MCI}_{\textit{distance to employment}\to\textit{VKT}}=0.324\)) and distance to center (\(\textit{MCI}_{\textit{distance to center}\to\textit{VKT}}=0.138\)) on VKT appear to be strongest, while population density (\(\textit{MCI}_{\textit{population density}\to\textit{VKT}}=-0.069\)) and street connectivity (\(\textit{MCI}_{\textit{street connectivity}\to\textit{VKT}}=-0.112\)) appear weaker. ### Urban form can explain between 26% and 81% of the variation in car commuting distances. We find that all 6D urban form features with a direct causal effect on VKT hold predictive information on car commute distances when generalizing towards an unseen city. When training a model in five cities on all features with a direct causal effect on the target and predicting in an unseen 6th city, we find that the variation in VKT can be explained by up to 81% in Berlin, 61% in Boston, 51% in Bogota, 26% in Los Angeles, 27% in Rio de Janeiro, and 28% in the Bay Area. When training in five cities and predicting in a 6th city (for more details see 5. Methods, Model Generalization), the mean average error (MAE) ranges from 1.4 _km_ in Boston to 3.0 _km_ in Berlin, while the root mean square error (RMSE) varies between 2.2 _km_ in Boston and Bogota and 4.6 _km_ in Rio de Janeiro. While the high R2 score in Berlin is partly due to the very high standard deviation of VKT (compare SI, Table 4), it also shows that, given our model and data, the variation in VKT can be explained better in some cities than in others. Possibly, the more monocentric urban form of Berlin or Boston is easier to predict than the distribution of VKT in more dispersed regions, such as the Bay Area, Los Angeles or Rio de Janeiro. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **City** & **R2** & **MAE [km]** & **RMSE [km]** & \(\overline{VKT}\) [km] \\ \hline Berlin & 0.81 & 3.0 & 3.8 & 14.1 \\ \hline Boston & 0.61 & 1.4 & 2.2 & 8.0 \\ \hline Los Angeles & 0.26 & 2.3 & 3.7 & 15.4 \\ \hline Bay Area & 0.28 & 2.1 & 2.9 & 14.5 \\ \hline Rio de Janeiro & 0.27 & 2.8 & 4.6 & 12.5 \\ \hline Bogota & 0.51 & 1.8 & 2.2 & 11.9 \\ \hline \end{tabular} \end{table} Table 1: **Results of a 6-fold, city-wise cross-validation procedure to analyze how much of the variation in VKT can be explained by urban form.** The generalization performance is provided for each city representing the test set, using the R2 score, the mean average error (MAE) and the root mean square error (RMSE). Additionally, the average VKT of the city is provided for reference. 
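A sketch of this leave-one-city-out procedure is shown below; the choice of regressor and the column names are illustrative assumptions rather than the exact configuration behind Table 1.

```python
# Sketch: leave-one-city-out cross-validation over the direct causal parents of VKT.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

DIRECT_PARENTS = ["dist_center", "dist_employment", "pop_density", "street_connectivity"]

def leave_one_city_out(taz_df: pd.DataFrame) -> pd.DataFrame:
    """Train on five cities, evaluate on the held-out sixth city; report R2, MAE and RMSE."""
    rows = []
    for city in taz_df["city"].unique():
        train, test = taz_df[taz_df["city"] != city], taz_df[taz_df["city"] == city]
        model = GradientBoostingRegressor(random_state=0)
        model.fit(train[DIRECT_PARENTS], train["vkt"])
        pred = model.predict(test[DIRECT_PARENTS])
        rows.append({"city": city,
                     "R2": r2_score(test["vkt"], pred),
                     "MAE_km": mean_absolute_error(test["vkt"], pred),
                     "RMSE_km": np.sqrt(mean_squared_error(test["vkt"], pred))})
    return pd.DataFrame(rows)
```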
### 2.3 Individual urban form effects follow similar trends, but differ in magnitude. When analyzing individual features, we find that urban form effects on VKT tend to follow similar trends across cities. Yet, there are also city-specific differences due to varying city sizes, urban network layouts and densities. Across all cities, we find that distances to the center and to employment have an increasing effect on VKT in comparison to the city-specific average. Lower population density and street connectivity have an increasing effect, while higher densities or network connectivities have varying impacts on VKT. The results are presented in Fig. 2. All graphs are also plotted for each city individually and added to the Appendix in Fig. 7 to 12. In addition, causal shapley values per city are also visualized via beeswarm density plots, which can also be found in the Appendix, Fig. 5. We measure urban form effects using causal shapley values, calculated during the cross-validation for each city. As none of the urban form features are independent (see the DAG in Fig. 1), causal shapley values allow us to represent causal relationships more accurately than previous approaches for calculating feature importance (such as marginal shapley values; for a direct comparison of marginal and causal shapley values, see Appendix, Fig. 6). To calculate causal shapley values, we translate the obtained DAG into a causal chain, with the causal order _distance to center \(\rightarrow\) population density \(\rightarrow\) distance to employment \(\rightarrow\) street connectivity_. This sets the constraints that distance to center, as the first chain component, causes the subsequent components and the target, while, for example, population density, as the second component, is caused by distance to center and causes the following components and the target. Missing edges from the graph in Fig. 1 are inferred from conditional independencies in the data. Figure 2: **Comparison of causal shapley effects on VKT for each urban form feature across cities.** Causal shapley scatter plots are visualized for the features (A) distance to center (top, left), (B) distance to employment (top, right), (C) population density (bottom, left) and (D) street connectivity (bottom, right). Every dot represents a TAZ of a city, with its feature value displayed on the x-axis and the corresponding causal shapley value on the y-axis. For each city a sample of 200 TAZ is displayed and color-coded as shown in the legend. For the feature distance to the city center (Fig. 2 A) all cities follow a similar ascending trend, while effect magnitudes and slope vary. In Berlin, Boston and Los Angeles we observe VKT-decreasing effects down to \(-3\) \(km\) within a 25 \(km\) radius from the main center. Beyond this, Berlin follows the strongest incline, with an increasing effect on VKT of up to \(+7.8\) \(km\) beyond a 45 \(km\) radius, followed by Los Angeles with \(+4.5\) \(km\) beyond a 50 \(km\) radius, and Boston with \(+3.5\) \(km\) at 5 \(km\). In Rio de Janeiro we find decreasing effects down to \(-4.5\) \(km\) within a 60 \(km\) radius. In addition, Rio follows the slowest incline with \(+4\) \(km\) at a 75 \(km\) radius. In Bogota we observe a decreasing effect magnitude of \(-1.4\) \(km\) within a 22 \(km\) radius and a short but strong incline up to \(+2.7\) \(km\) between 25 \(km\) and 35 \(km\). In the Bay Area we also observe areas with reducing effects closer to the center and an increase of up to \(+4.5\) \(km\) within a 55 \(km\) radius. 
Yet, we also find areas in closer distance to a center, within a 30\(km\) radius, that have an increasing effect up to \(+1\)\(km\) on VKT. For distance to employment (Fig. 2 B), we observe ascending trends across all cities. Yet, the distance to employment varies strongly across cities, causing offsets and differences in magnitudes of urban form effects on VKT. The shortest distances to jobs can be observed in Bogota and Berlin. In Bogota 10% of all jobs can be reached within 6 \(km\) distance but no clear positive or negative effect on VKT can be detected. From 12 \(km\) to 20 \(km\) distance to employment a rapid increase up to \(+2.75\)\(km\) on VKT is measured. In Berlin a decreasing effect of up to \(-2\)\(km\) can be measured for distances to employment below 20 \(km\). For longer distances an increasing effect of up to \(+8.2\)\(km\) at around 45 \(km\) is measured. In Boston shortest distances to employment are between 14 \(km\) and 30 \(km\) with both positive and negative effects on VKT. After 30 \(km\) the effect on VKT increases up to \(+5\)\(km\) between 45 \(km\) and 55 \(km\) distance to employment. In Rio de Janeiro a mostly decreasing effect similar to Berlin can be measured between 16 \(km\) and 30 \(km\). Beyond 30 \(km\) the effect increases up to \(+8.5\)\(km\) at 6 \(km\). In Los Angeles we observe a peak of \(-2.5\)\(km\) decreasing effect on VKT for shortest distances to employment at 20 \(km\). Afterwards, an increase up to \(+5\)\(km\) at 55 \(km\) distance to employment is measured. In the Bay Area, we find an effect increase from \(-1\)\(km\) up to \(+3\)\(km\) between 30 \(km\) and 60 \(km\) distance to employment. The strongest urban form effect differences can be found for population density (Fig. 2 C). In Berlin we observe a downward slope with increasing effects on VKT of up to \(+6.9\)\(km\) for very low densities and a decreasing effect of up to \(-1.2\)\(km\) beyond 4 thousand residents per square kilometer (\(k\)\(p\)/\(km^{2}\)). Similarly, we measure increasing effects up to \(+2.25\)\(km\) for low densities in the Bay Area and decreasing effects down to \(-0.9\)\(km\) beyond 2\(k\)\(p\)/\(km^{2}\). In Boston and Los Angeles we measure increasing effects of up to \(+2.1\)\(km\) and \(+2km\) for low densities, while for higher densities there are mixed effects. In Boston we observe decreasing effects down to \(-0.4\)\(km\) at 4\(k\)\(p\)/\(km^{2}\), but also increasing effects up to \(+0.75km\) at 4.9\(k\)\(p\)/\(km^{2}\). In Los Angeles we find effects down to \(-1.4km\) at 3.59 \(p\)/\(km^{2}\) and increasing values up to \(+0.5\)\(km\) at 6\(k\)\(p\)/\(km^{2}\). In Rio de Janeiro we find population density to have the strongest variation in effects. While for very low densities an increasing effect of up to 4.6 \(p\)/\(km^{2}\)can be found, between 2\(k\)\(p\)/\(km^{2}\)and 25\(k\)\(p\)/\(km^{2}\) both increasing and decreasing effects between \(+3\)\(km\) and \(-1.5\)\(km\), following a slight downward slope towards higher densities are measured. In Bogota, a different trend is measured where we find a plateau of increasing effects up to \(+2\)\(km\) between 0 and 9 \(k\)/\(km^{2}\), followed by downward slope and another plateau between 10\(k\)\(p\)/\(km^{2}\) and 16\(k\)\(p\)/\(km^{2}\)at \(+1\)\(km\) and \(-0.6\)\(km\) effects on VKT. For street connectivity (Fig. 2 D), we observe a VKT increasing effect for lower and a slightly VKT reducing effect for higher street connectivities in most cities. 
In Berlin, we measure a downwards slope with increasing effects up to \(+1.8\)\(km\) for 0 to 10 intersections per square kilometer (\(n\)/\(km^{2}\)) and mostly reducing effects up to \(-0.8\)\(km\) at more than 40 \(n\)/\(km^{2}\). In Los Angeles we find some outliers for low connectivities between 0 \(n\)/\(km^{2}\)and 10 \(n\)/\(km^{2}\) with increasing effects up to \(+6.8\)\(km\), while for higher connectivities above 30 \(n\)/\(km^{2}\) we observe slightly reducing effects of up to \(-1\)\(km\). While in Bogota a downwards trend ranging from increasing effects of \(+0.7\)\(km\) to decreasing effects of \(-1.1\)\(km\) over the large span of connectivity values (up to 580 \(n\)/\(km^{2}\)) can be observed, no clear trend can be found in the Bay Area and Rio de Janeiro. ## 3 Discussion ### Cross-city insights for urban planning. Considering the above findings, we conclude that while city specific urban form effects exist, somewhat similar trends across cities can be observed. We find that the distance to the center and to employment exert the strongest effects on VKT, while density and street connectivity are slightly less relevant. This implies that strategies to reduce VKT and related emissions should first and foremost focus on creating transport alternatives at the outskirts of cities to avoid the disproportionately high VKT of long distance commuters in those areas. Similarly, new residents should be allocated towards the inner city to avoid additional long distance trips. Here, high resolution building data sets, like [33], might be highly relevant to detect suitable living spaces in already dense areas. While density and network connectivity are less relevant, they can still have strong VKT increasing effects that should be avoided in areas with many commuters. The results also show that the chosen urban form features better explain VKT variation in more monocentric cities like Berlin (R2 = 0.81) or Boston (R2 = 0.61) than in more polycentric cities like Los Angeles (R2 = 0.26) or the Bay Area (R2 = 0.28). This implies that other non-measured factors must exist, which are causing the city specific variation in VKT. Such factors could include that job locations are spread across larger areas of the city, causing more inner-city travel and less short car trips (compare high average VKT and comparatively small standard deviations of VKT in LA, the Bay Area or Rio de Janeiro in Appendix Table 4). Such differences are not yet fully captured with the used urban form features and call for additional accessibility measures. Furthermore, currently unobserved 6D urban form features include measures of land use diversity, differences in demographics and distance to transit. Including such features might help to explain VKT variation even further. Our work based on the considered set of urban form features and used algorithms also demonstrates that income does interact with urban form variables but contributes only indirectly to explaining variations in VKT. This is in contrast to traditional economic models of urban form and of transport demand in general. However, it corroborates results from a recent study that finds that the built environment explains country level GHG emissions as well as income metrics [34]. ### City specific differences call for further research. Across the six cities we find that the impact of urban form on VKT follows different shapes and magnitudes. Such distinct differences could be leverage points for future urban planning strategies. 
A strong difference between the cities can be observed for the effect of distance to center. Between 30 \(km\) and 60 \(km\) distance to the center, the feature effect increases in all cities of corresponding size. Yet, in Berlin a stronger slope and a maximum increasing effect of up to \(+7.8\) \(km\) is reached, while in the other cities less strong effects below \(+4.5\) \(km\) are measured. Here, an initial analysis of the destinations of those specific trips shows that in Berlin the majority of trips end in the same, very central area. In contrast, in Los Angeles, the Bay Area and Rio de Janeiro, but also in Boston along the coast upwards to Lynn, Salem and Gloucester, the destinations are more diverse (see yellow and red areas in SI Fig 13), highlighting the potential of additional centers of attraction across the metropolitan region. While this requires further investigation, including an assessment of the differences in the remaining features, it shows the high potential of cause-effect analysis across different city archetypes. Furthermore, it demonstrates novel opportunities for how a city like Berlin can learn from cities such as Boston, Los Angeles and Rio de Janeiro. ### Applying Causal Discovery to 6D urban form analysis. The causal graph discovery demonstrates that the 6D urban form features are not independent, but rather interconnected. This helps to reveal the most relevant leverage points to introduce changes to the urban system. For urban planning this is highly relevant, as it implies that changes to one 6D dimension can also affect others and that not reflecting latent effects might lead to overestimating others. Obtaining a causal graph opens new opportunities for further research to apply more advanced methods to analyze cause-effect mechanisms in urban settings, including causal shapley values and causal inference methods. The combination of a graph discovery with causal shapley values reveals the spatial heterogeneity of urban form effects while reflecting causal feature relationships (for a comparison of traditional, marginal shapley values and causal shapley values, refer to Appendix, Figure 6). While the graph discovery minimizes required assumptions about if and how different urban form features are interlinked, it comes at the cost of other assumptions, e.g., the choice of independence test. While these are made very explicit (see Methods, Causal Graph Discovery), we acknowledge that the graph and the derived results are to some extent influenced by them. One important assumption (which has also been present in previous studies) is the choice and definition of the included 6D urban form features. The feature choice was constrained by data availability and modeling dimensionality. Our features are proxies for measurable and non-measurable urban form and demographic differences across cities. As dimensionality affects link detection power [35], we aimed at finding a minimal set of representative features, yet recognize that this comes at the cost of not accurately describing urban form. Another key assumption is the choice of conditional independence (CI) test. We acknowledge that we observe both linear and non-linear dependencies between the analyzed variables (see SI Fig 3). This implies that using a CI test that supports both linear and non-linear dependencies (e.g. GPDC, CMIknn [35; 36]) might be best suited for the analysis. 
Yet, they come at the cost of a lower detection power for linear links, and hence true but smaller linear effects might be overlooked. We acknowledge that this is a current shortcoming of the method and hence provide the DAG of Fig. 1 also calculated with the CMIknn CI test, displayed in SI Fig 4. Future work can compare different graph discovery algorithms to get a more robust estimate of the true DAG. ## 4 Conclusion Our findings demonstrate the relevance of analyzing urban form effects on mobility with causal methods, providing rich insights that can be highly relevant for sustainable urban development. We find that the 6D urban form effects exhibit strong causal dependencies that have previously not been considered. By recognizing them, we find that distance to center and to employment play a vital role across all cities, supporting inner-city densification strategies and the development of access to alternative modes for commuting at the outskirts. Differences in urban form effect magnitudes require further investigation to enable a learning process across cities. We anticipate our work to be a starting point for context-specific analysis of urban form effects on mobility behavior using causal discovery approaches. Future work should further examine the observed differences, analyze how urban form effects are related to travel demand (measured by number of trips), and incorporate a temporal dimension to allow for a differentiated analysis of short- and long-term urban form interventions. ## 5 Methods We apply constraint-based causal graph discovery to built environment and mobility data of six cities across three continents to analyze the causal relationships between the 6D urban form features and VKT. We use the obtained DAG to inform a supervised machine learning model that is repeatedly trained on five cities and generalizes to a sixth, unseen city using only features with a direct causal effect on VKT. By doing so, we demonstrate the explanatory power of the developed urban form features. Furthermore, we use the DAG to calculate causal shapley values with which we can derive a location-specific understanding of urban form effects across the six cities. ### Data **Datasets.** For mobility data we use various data sources from previous peer-reviewed studies. In Boston, Los Angeles, the Bay Area, Rio de Janeiro, and Bogota we use origin-destination tables for traffic demand based on call detail records aggregated to traffic analysis zones (TAZ), as used in [37; 38; 39; 40; 41]. In Berlin we utilize origin-destination tables based on GPS signals of navigation systems from various sources, including connected cars, floating cars and commercial car fleets, cleaned and calibrated as in a previous study [13]. For all cities, urban form indicators were calculated using OpenStreetMap (OSM) data [42], while population density data is derived from Meta's High Resolution Population Density Maps [43]. Income and employment data are collected from various sources, as no harmonized data source could be obtained. For the US cities, income data from the United States Census Bureau are used, in particular "aggregated household income" data by block group for all blocks in Massachusetts and California from 2013 [44]. For Rio de Janeiro, mean household income per capita data is obtained from the Access to Opportunities (AOP) project from the year 2010 [45]. In Bogota, socioeconomic stratification on the block level, as developed in [41], is used as a proxy for income. 
In Berlin, income data on the household level is obtained from the commercial provider AXCIOM as used in [13]. Employment data, defined as the number of jobs per TAZ, are derived from the Smart Location Database [46] for the US cities and, in Rio de Janeiro, from the AOP project [45]. In Berlin and Bogota no open employment data could be obtained, and employment locations are inferred from all work destinations derived from the SRV mobility survey 2017 [47] and the Bogota mobility survey 2015 [48], respectively. **Data Preparation.** From all mobility datasets, only trips in the morning hours between 6 am and 10 am that start and end within the city's boundary are considered. For the city boundary definition we adopt the OECD's functional urban areas data [49], as they include the inner-city area and the surrounding commuting zone. For Rio de Janeiro, we adopt the functional urban area defined by the Brazilian official statistics office. The mobility data from the US cities, Rio de Janeiro and Bogota are further cleaned and calibrated as in previous studies, which includes assigning the mean number of trips from CDR records to each TAZ. For Berlin, the same cleaning procedure as in [13] is applied and individual trip origins and destinations are aggregated to TAZ to match the other data sources. **Target Variable - Vehicle Kilometer Traveled (VKT).** For the US cities, Rio de Janeiro and Bogota the mean travel distances per TAZ are calculated by creating uniformly random samples of trip origins and destinations on the road network. For each zip code we sample the number of points starting and ending in that zip code. To reduce errors we only sample onto residential or tertiary roads, as we assume that no trips start on highways, primary or secondary roads. To determine travel distances, we calculate the shortest path along the street network from origin to destination and average travel distances per TAZ. In Berlin, we similarly average the travel distances of all trips starting within each TAZ. We acknowledge that individual routing choices might deviate from the optimal, shortest path. Yet, previous work [50] suggests that distance errors are minimal and equally distributed across cities, as large deviations from the optimal route between origin and destination are uncommon, bounded within an ellipse of high eccentricity, and independent of the urban layout and its street network. **Feature Engineering.** We develop five features across all cities to represent four of the 6D dimensions. We define the features based on what has been used most frequently across previous work and for which data is available across all cities (see Table 3 in the Appendix). Our features represent destination accessibility, measured by road network distance to center and distance to employment. The latter is computed for each TAZ by taking the weighted average distance to 10% of all jobs. The distances are calculated between TAZ centroids and weighted by the number of jobs a TAZ contains. Density is measured as the total number of inhabitants per area of TAZ, while demographics is measured by income, provided in 7 classes ranging from low to high. Design is represented via street network connectivity and measured via the number of intersections per TAZ area. We choose to exclude the diversity dimension, as no coherent datasets across the six cities could be obtained and as previous studies associated it (if at all) with only very small effects in the context of car travel. 
We also exclude distance to transit, as our data sample only represents car travel, which implies that every person has already decided to use a car. Table 2 summarizes the features. We acknowledge the significance of addressing residential self-selection when analyzing urban form effects. Control variables for residential self-selection include both socio-demographic and attitude-based factors, yet there is also work that suggests that errors are minimal when only using socio-demographic variables [51]. In line with this, we only consider the effects of residential self-selection that can be proxied by socio-demographic variables, such as income, similar to [12]. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **D-Variable** & **Feature** & **Description** \\ \hline Destination Accessibility & Distance to city center & Distance from TAZ center to main center. \\ \hline Destination Accessibility & Distance to employment & Weighted average distance to 10\% of all jobs. \\ \hline Density & Population density & Number of inhabitants per area of TAZ. \\ \hline Demographics & Income & Average income within TAZ. \\ \hline Design & Street connectivity & Number of intersections per area of TAZ. \\ \hline \end{tabular} \end{table} Table 2: Predictive features of urban form based on the 6D’s of compact development ### 5.2 Modelling **Causal Graph Discovery.** To test the hypothesis that urban form affects mobility behavior and to examine the causal relationships between the considered variables in more detail, we apply causal graph discovery. Causal graph discovery is usually based on either constraint-based or score-based algorithms. While the former uses conditional independence tests to obtain a graph, the latter defines graph discovery as a structure learning problem, where the graph that maximizes some score is selected out of the space of possible graphs. In addition, hybrid approaches that combine the above-mentioned concepts also exist (for an overview and a comparison of different algorithms, refer to [52]). For our problem setting, we adopt a conditional independence-based causal discovery framework, called PCMCI+, which aims at determining the presence and absence as well as the directionality of edges between nodes in a DAG. The nodes of the DAG represent all features and the target of our problem setting. PCMCI+ is an extension of the original PC algorithm [53], as it can handle time series data and contemporaneous causal effects as well as linear and nonlinear variable dependencies with varying noise distributions. In comparison to alternative causal discovery algorithms, PCMCI+ is optimized to reduce problem dimensionality (PC stage) and robustly control false-positive rates (MCI stage), resulting in very high detection power. A detailed introduction to the method can be found in [32]. Beyond discovering a DAG, we use the framework to examine the strength of the observed causal dependencies by investigating the MCI partial correlation value derived from the selected conditional independence test at the maximal p-value. The MCI value between two nodes can be seen as a qualitative estimator of causal strength [35]. The method assumes the Causal Markov Condition, Faithfulness and Causal Sufficiency. The first indicates that a statistical dependence between two variables is the result of an underlying causal connection. 
Faithfulness states that statistical independence suggests a lack of causal connection, and causal sufficiency implies that the analyzed variables include all common causes. However, similar to [54], we have to relax the causal sufficiency assumption, as not all causes of travel distance can be measured or are freely available as open data. In addition, we assume the following about our variable relationships: first, we expect that none of the urban form features can be caused by the target VKT. Second, we assume that income differences (as a proxy for socio-demographics and behavioral attitudes) can affect both urban form (via residential self-selection) and travel behavior, but cannot be caused by urban form. Third, as previous studies have frequently mentioned the indirect effect of distance to the city center on other variables, we assume that distance to city center cannot be caused by other variables. The causal graph discovery method is applied to an equally pooled sample of all six cities covered in the study, with all features and the target scaled to unit variance and zero mean. We utilize the causal graph learned from observational data to analyze the relationships between urban form and VKT and to inform the calculation of causal shapley values. **Model Generalization.** We adopt a 6-fold, city-wise cross-validation procedure to analyze how much of the variation in VKT can be explained by urban form. We train a Gradient Boosting Decision Tree (GBDT) regression model, using the XGBoost Python library [55], on five cities and predict VKT in the unseen sixth city. We only use the urban form features with a direct causal effect on VKT to better generalize to unseen data. We choose a GBDT model, as it can represent non-linear variable dependencies, is robust against feature multicollinearity [12] and has been shown to perform best with tabular input data in a similar problem setting [56]. We evaluate the generalization performance using the coefficient of determination (R2), the mean absolute error (MAE), and the root mean squared error (RMSE). **Model Interpretation.** To analyze heterogeneity in urban form effects, we calculate causal shapley values on the prediction test set, as proposed by Heskes et al. [57]. Causal shapley values are based on shapley values [58], which solve an attribution problem where the prediction score of a model is distributed among its individual features. This is calculated by taking the weighted average, across all possible feature combinations (also called coalitions), of how much a prediction changes when a feature is part of a coalition \(S\) versus when it is not. Here, estimating the prediction in the absence of a feature has sparked a debate among scholars about which probability distribution is the right one to draw from (for further reference, see for example [59]), questioning the suitability of shapley values, especially when correlated features are present. While different solutions were proposed [60; 61; 62; 63], we adopt causal shapley values, which estimate the (counterfactual) prediction by drawing samples from an interventional conditional distribution instead of the previously used observational conditional distributions. For this, causal shapley values incorporate the information of a causal graph as an input (currently available implementations only consider a causal chain graph, which decomposes a DAG into a partial causal ordering; refer to Figure 2 in [57]). 
By doing so, causal shapley values break the dependence between the features in the coalition \(S\) and the remaining features, allowing meaningful feature attributions to be obtained that reflect the underlying causal structure when explaining a model. For analyzing spatially explicit urban form effects, the properties of causal shapley values are highly useful, as they allow us (1) to keep the desirable property of shapley values of calculating effects for individual samples (which translate to individual locations) and (2) to reflect causal relationships between our features and the target via causal chain graphs.
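To make the modelling pipeline above more concrete, the sketch below outlines the two core steps, PCMCI+ graph discovery (via the tigramite package described in [32]) and the city-wise leave-one-out generalization test with XGBoost [55]. It is a minimal illustration and not the exact configuration used in the study: the variable names, data containers and hyperparameters are assumptions, and the tigramite import path for the ParCorr test differs between package versions. The background-knowledge constraints discussed above (e.g., that VKT cannot cause any urban form feature) can additionally be passed to PCMCI+ as link restrictions in recent tigramite versions; they are omitted here for brevity.

```python
import numpy as np
import xgboost as xgb
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests.parcorr import ParCorr  # older versions: tigramite.independence_tests

# Hypothetical column order; the last column is the target (VKT).
VAR_NAMES = ["dist_center", "dist_employment", "pop_density",
             "income", "street_connectivity", "vkt"]

def discover_graph(pooled):
    """Run PCMCI+ on the pooled, standardized TAZ sample of all six cities.

    `pooled` is an (n_samples, n_vars) array ordered as in VAR_NAMES,
    already scaled to zero mean and unit variance.
    """
    dataframe = pp.DataFrame(pooled, var_names=VAR_NAMES)
    pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
    # tau_max=0 restricts the search to contemporaneous links, matching the
    # purely cross-sectional setting of this study.
    return pcmci.run_pcmciplus(tau_min=0, tau_max=0, pc_alpha=0.05)

def leave_one_city_out(city_arrays, direct_cause_cols, target_col=-1):
    """6-fold city-wise cross-validation of an XGBoost regressor on VKT."""
    scores = {}
    for held_out, test in city_arrays.items():
        train = np.vstack([a for name, a in city_arrays.items() if name != held_out])
        model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
        model.fit(train[:, direct_cause_cols], train[:, target_col])
        pred = model.predict(test[:, direct_cause_cols])
        y_true = test[:, target_col]
        scores[held_out] = {"R2": r2_score(y_true, pred),
                            "MAE": mean_absolute_error(y_true, pred),
                            "RMSE": float(np.sqrt(mean_squared_error(y_true, pred)))}
    return scores
```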
2309.09790
A Product on Lorenz Hulls, Zonoids, and Vector Measure Ranges
A Lorenz hull is the convex hull of the range of an $n$-dimensional vector of finite signed measures defined on a common measurable space. We show that the set of $n$-dimensional Lorenz hulls is endowed with a natural product that is commutative, associative, and distributive over Minkowski sums. The same holds with "zonoid" in place of "Lorenz hull" as the two concepts give rise to the same set of subsets of $\mathbb{R}^n$. The product is defined via the common notion of a product measure.
John Steinberger, Zhe Zhang
2023-09-07T14:13:25Z
http://arxiv.org/abs/2309.09790v1
# A Product on Lorenz Hulls, Zonoids, and Vector Measure Ranges ###### Abstract A _Lorenz hull_ is the convex hull of the range of an \(n\)-dimensional vector of finite signed measures defined on a common measurable space. We show that the set of \(n\)-dimensional Lorenz hulls is endowed with a natural product that is commutative, associative, and distributive over Minkowski sums. The same holds with "zonoid" in place of "Lorenz hull" as the two concepts give rise to the same set of subsets of \(\mathbb{R}^{n}\). The product is defined via the common notion of a product measure. ## 1 Introduction Let \(\boldsymbol{\mu}=(\mu_{1},\ldots,\mu_{n})\) be an \(n\)-tuple of signed finite (and hence bounded) measures on a common space \((S,\mathcal{F})\), where \(S\) is a ground set and \(\mathcal{F}\) is a \(\sigma\)-algebra of subsets of \(S\). We refer to \(\boldsymbol{\mu}\) as a _finite signed vector measure_. The _Lorenz hull_ of \(\boldsymbol{\mu}\), denoted \(\mathsf{LH}(\boldsymbol{\mu})\), is the convex hull of the range of \(\boldsymbol{\mu}\); i.e., \[\mathsf{LH}(\boldsymbol{\mu}):=\mathsf{conv.hull}(\{\boldsymbol{\mu}(A):A\in\mathcal{F}\})\] where \(\mathsf{conv.hull}(F)\) is the convex hull of a bounded set \(F\subseteq\mathbb{R}^{n}\). We also refer to the range of \(\boldsymbol{\mu}\) itself as the _Lorenz skeleton_ of \(\boldsymbol{\mu}\), denoted \(\mathsf{LS}(\boldsymbol{\mu})\); i.e., \(\mathsf{LS}(\boldsymbol{\mu}):=\{\boldsymbol{\mu}(A):A\in\mathcal{F}\}\) and \(\mathsf{LH}(\boldsymbol{\mu})=\mathsf{conv.hull}(\mathsf{LS}(\boldsymbol{\mu}))\). In relation to previous terminology, Harremoes [14] refers to \(2\)-dimensional Lorenz hulls as _Lorenz diagrams_. Moreover, the Lorenz skeleton of a tuple of continuous measures is well-known as a _zonoid_ [2, 33]. In the latter case, by a classical theorem of Lyapunov [21], the Lorenz skeleton is convex, and, therefore, the Lorenz hull and the Lorenz skeleton coincide. Moreover, while a Lorenz hull is a nominally broader concept than a zonoid, one can show that for every finite signed vector measure \(\boldsymbol{\mu}\) there is some finite signed vector measure \(\boldsymbol{\mu}^{\prime}\) with continuous elements such that \(\mathsf{LH}(\boldsymbol{\mu})=\mathsf{LS}(\boldsymbol{\mu}^{\prime})\), i.e., every Lorenz hull also happens to be a zonoid [2]. For the type of theorem that we present below, however, the strongest statement is obtained by allowing the largest possible class of vector measures to be counted as part of the "preimage" of the hull/zonoid, so we enlarge the scope of the discussion to Lorenz hulls. We note that the "Lorenz" moniker enters into the scene via the _Lorenz curve_, popular in the social sciences, which can be characterized as the lower boundary of a two-dimensional Lorenz hull, more specifically the lower boundary of the Lorenz hull of a pair of nonnegative (i.e., unsigned) measures on a common space.1 Koshevoy [17] seems to be the first author to draw a connection between zonoids and Lorenz curves. Footnote 1: The _Gini coefficient_ of a Lorenz curve can also be given an elegant description in terms of the Lorenz hull, being the area of the hull. Our main result is to show the existence of a natural product, based on the common notion of a product measure, on the set of \(n\)-dimensional Lorenz hulls (zonoids). The construction is implicit in the following statement: **Theorem 1**.: _Let \(H_{1}\),..., \(H_{k}\) be Lorenz hulls. 
Let \(\boldsymbol{\mu}^{1}\),..., \(\boldsymbol{\mu}^{k}\) where \(\boldsymbol{\mu}^{j}=(\mu_{1}^{j},\ldots,\mu_{n}^{j})\) be such that \(H_{j}=\mathsf{LH}(\boldsymbol{\mu}^{j})\) for \(j=1,\ldots,k\). Then \(\mathsf{LH}(\boldsymbol{\mu}^{\times})\) depends only on \(H_{1}\),..., \(H_{k}\) and not on the choice of \(\boldsymbol{\mu}^{1}\),..., \(\boldsymbol{\mu}^{k}\), where \(\boldsymbol{\mu}^{\times}:=(\mu_{1}^{\times},\ldots,\mu_{n}^{\times})\) and \(\mu_{i}^{\times}:=\mu_{i}^{1}\times\cdots\times\mu_{i}^{k}\) for \(i=1,\ldots,n\)._ To paraphrase, the hull of a tuple of product measures (all of the same arity, named \(k\) above) is determined by the \(k\) individual "index slice" hulls, i.e., the hulls that arise by keeping only the \(j\)-th term inside each product, \(j=1,\ldots,k\). Whether a similar theorem holds for Lorenz skeletons is left as a main open problem. It is easy to check that the case \(k=2\) of Theorem 1 implies the general case and that the binary product defined by the case \(k=2\) is associative and yields the same \(k\)-ary product via said associativity as when the \(k\)-ary product is defined directly as it is in Theorem 1. We refer to the product defined by Theorem 1 as the _Lorenz product_, writing \(H_{1}H_{2}\) for the product of hulls \(H_{1}\) and \(H_{2}\). One can easily check from the definition that the Lorenz product is commutative. Moreover, writing \(H_{1}+H_{2}\) for the Minkowski sum of Lorenz hulls \(H_{1}\) and \(H_{2}\), and noting that the set of Lorenz hulls is closed under such sums, we also show: **Theorem 2**.: \(H_{1}(H_{2}+H_{3})=H_{1}H_{2}+H_{1}H_{3}\) _for all Lorenz hulls \(H_{1}\), \(H_{2}\), \(H_{3}\subseteq\mathbb{R}^{n}\)._ **Theorem 3**.: \(H_{1}\subseteq H_{3}\)_, \(H_{2}\subseteq H_{4}\implies H_{1}H_{2}\subseteq H_{3}H_{4}\) for all Lorenz hulls \(H_{1}\), \(H_{2}\), \(H_{3}\), \(H_{4}\subseteq\mathbb{R}^{n}\)._ I.e., the Lorenz product is distributive and "inclusion-preserving". One can also easily check that the Lorenz product has a (necessarily unique) multiplicative identity \(\mathsf{conv.hull}(\{\mathbf{0},\mathbf{1}\})\) where \(\mathbf{0}=(0,\ldots,0)\), \(\mathbf{1}=(1,\ldots,1)\in\mathbb{R}^{n}\) and that the only Lorenz hulls with a multiplicative inverse are those of the form \(\mathsf{conv.hull}(\{\mathbf{0},(x_{1},\ldots,x_{n})\})\) where \(x_{i}\neq 0\) for \(i=1,\ldots,n\). (More precisely, one can easily argue that the presence of non-codirectional points in a hull implies the presence of non-codirectional points in any hull of the form \(H_{1}H_{2}\) such that \(H_{1}H_{2}\ni\mathbf{1}\), for example.) Interestingly, our results also extend to complex-valued measures, as sketched in Section 8, but even further generalizations are not pursued here. Historical work on zonoids. In a famous theorem, Lyapunov [21] (1940) proved that Lorenz skeletons are closed and that Lorenz skeletons of tuples of continuous (a.k.a. "non-atomic") vector measures are convex. A simplified proof of Lyapunov's theorem was presented in English by Halmos [11], published in 1948. Lindenstrauss [22] also gave an elegant short proof of the closedness and convexity of Lorenz skeletons of continuous vector measures in 1966 from a functional analysis perspective, cited by Rudin [30]. Rickert [28] showed a bijection between zonoids and measures on certain "standard" spaces, specifically the projective space. 
In 1969, Bolker [2] wrote a first survey of results on zonoids (also introducing the term), where in particular it is shown that every Lorenz hull2 is a zonoid. A survey including work up to the early 1980s is given by Schneider and Weil [33]. Footnote 2: There is, however, no dedicated term for the concept of a Lorenz hull prior to the afore-mentioned paper by Harremoes [14]. After the late 1970s one research theme concerned the approximability of zonoids by zonotopes, their discrete counterparts, culminating in a result by Talagrand [8, 32, 3, 34]. Two-dimensional zonoids and zonotopes, and in particular inclusion relationships between these, can also be related to the important topic of _majorization_ [12, 13, 24]. See, for example, the works by Foster [9], by Harremoes and by Harremoes and van Erden [14, 7], and by Koshevoy and by Koshevoy and Mosler [17, 19, 18]. More recent work. Work on zonoids has continued apace in recent decades. See [1, 15, 23, 20] for some recent references. We note that one recurring question has been the issue of proving that a given set is not a zonoid [10, 27, 23]. Proof Summary. We establish Theorem 1 by way of the following more general inclusion-preservation result, which also establishes Theorem 3: **Theorem 4**.: _Let \(\boldsymbol{\alpha}\), \(\boldsymbol{\beta}\), \(\boldsymbol{\alpha}^{\prime}\), \(\boldsymbol{\beta}^{\prime}\) be four \(n\)-dimensional signed vector measures such that \(\mathsf{LH}(\boldsymbol{\alpha})\subseteq\mathsf{LH}(\boldsymbol{\alpha}^{\prime})\), \(\mathsf{LH}(\boldsymbol{\beta})\subseteq\mathsf{LH}(\boldsymbol{\beta}^{\prime})\). Then \(\mathsf{LH}(\boldsymbol{\alpha}\times\boldsymbol{\beta})\subseteq\mathsf{LH}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\) where \(\boldsymbol{\alpha}\times\boldsymbol{\beta}\) is the \(n\)-dimensional signed vector measure whose \(i\)-th coordinate is the direct product of the \(i\)-th coordinates of \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\), and likewise for \(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime}\)._ Theorem 1 is easily seen to be a corollary of Theorem 4 by application of the principle that sets \(S\) and \(T\) are equal if and only if \(S\subseteq T\) and \(T\subseteq S\). To establish the conclusion of Theorem 4 we use a separating hyperplane argument. Because the Lorenz hulls \(\mathsf{LH}(\boldsymbol{\alpha}\times\boldsymbol{\beta})\), \(\mathsf{LH}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\) are closed convex sets, specifically, it suffices to show that \[\sup_{\boldsymbol{z}\in\mathsf{LH}(\boldsymbol{\alpha}\times\boldsymbol{\beta})}\boldsymbol{x}^{*T}\boldsymbol{z}\leq\sup_{\boldsymbol{z}\in\mathsf{LH}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})}\boldsymbol{x}^{*T}\boldsymbol{z} \tag{1}\] for every \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) in order to show \(\mathsf{LH}(\mathbf{\alpha}\times\mathbf{\beta})\subseteq\mathsf{LH}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime})\). In turn, (1) is established by characterizing, for each \(\mathbf{x}^{*}\in\mathbb{R}^{n}\), the optimal \(\mathbf{z}\) for which the left-hand side achieves its supremum. Knowing this characterization allows us to write the left-hand side as an integral that can be rewritten as an iterated integral, given the product structure of the measure space. 
The assumption that \(\mathsf{LH}(\mathbf{\alpha})\subseteq\mathsf{LH}(\mathbf{\alpha}^{\prime})\) gives an inequality (in fact, the direct analog of (1)) that can be applied to the inner integral, effectively replacing \(\mathbf{\alpha}\) by \(\mathbf{\alpha}^{\prime}\) at that stage in the computation, while introducing an inequality; proceeding symmetrically, one can rewind, reverse the order of integration, and replace \(\mathbf{\beta}\) by \(\mathbf{\beta}^{\prime}\). One can also rely on symmetry to reduce the theorem to the case \(\mathbf{\beta}=\mathbf{\beta}^{\prime}\) as a preamble. (As we actually choose to do.) Follow-up work. We will show in a separate paper that \(m\)-th roots of nonnegative Lorenz hulls under the product introduced by this paper are unique when such roots exist, i.e., "uniqueness of roots". Organization. We have tried to keep the paper friendly to non-mathematicians (in the hopes of accommodating computer scientists in particular3), resulting in preliminary background sections on measure theory and convex analysis. We also include a separate introduction to signed measures in Section 4. Familiar readers should be able to start in Section 5 and skim backwards as necessary to find the definitions of nonstandard notations. Footnote 3: Note that products of two-dimensional Lorenz diagrams, in the sense defined by this paper, crop up whenever the distinguishability or “divergence” (according to any standard metric) of two vectors \((X_{1},\ldots,X_{k})\), \((X_{1}^{\prime},\ldots,X_{k}^{\prime})\), each consisting of independently sampled random variables, comes under discussion: The two-dimensional Lorenz hull for the pair of measures induced by the pair \(((X_{1},\ldots,X_{k}),(X_{1}^{\prime},\ldots,X_{k}^{\prime}))\) is the product of the \(k\) two-dimensional Lorenz hulls associated to the respective pairs \((X_{1},X_{1}^{\prime})\), \(\ldots\), \((X_{k},X_{k}^{\prime})\). ## 2 Measure Theory We recall the standard elements of measure theory. More details may be found in Durrett [6] and Rudin [31]. **Definition 1**.: _A measurable space is a pair \((S,\mathcal{F})\) where \(S\) is a set and \(\mathcal{F}\) is a \(\sigma\)-algebra on \(S\), i.e., \(\mathcal{F}\) is a set of subsets of \(S\) such that_ 1. \(S\in\mathcal{F}\)_,_ 2. _if_ \(A\in\mathcal{F}\) _then_ \(S\backslash A\in\mathcal{F}\)_, and_ 3. \(\bigcup_{i=1}^{\infty}A_{i}\in\mathcal{F}\) _for any countable collection_ \(\{A_{i}\}_{i\in\mathbb{N}}\) _of elements of_ \(\mathcal{F}\)_._ **Definition 2**.: _A measure on a measurable space \((S,\mathcal{F})\) is a function \(\mu:\mathcal{F}\to\mathbb{R}\cup\{\infty\}\) such that_ 1. \(\mu(A)\geq 0\) _for all_ \(A\in\mathcal{F}\)_, and_ 2. \(\mu\big{(}\bigcup_{i=1}^{\infty}A_{i}\big{)}=\sum_{i=1}^{\infty}\mu(A_{i})\) _for any collection_ \(\{A_{i}\}_{i\in\mathbb{N}}\) _of pairwise disjoint elements of_ \(\mathcal{F}\)_._ _Moreover, \(\mu\) is finite if \(\mu(S)<\infty\) and is \(\sigma\)-finite if there exists a sequence \(A_{1}\), \(A_{2}\), \(\ldots\) of elements of \(\mathcal{F}\) such that \(\mu(A_{n})<\infty\) and \(\bigcup_{n}A_{n}=S\)._ **Definition 3**.: _A measure space is a triple \((S,\mathcal{F},\mu)\) where \((S,\mathcal{F})\) is a measurable space and \(\mu\) is a measure on \((S,\mathcal{F})\)._ **Definition 4**.: _Let \((S,\mathcal{F})\) and \((S^{\prime},\mathcal{F}^{\prime})\) be two measurable spaces. 
A function \(g:S\to S^{\prime}\) is measurable (with respect to the \(\sigma\)-algebras \(\mathcal{F}\) and \(\mathcal{F}^{\prime}\)) if for any \(A\in\mathcal{F}^{\prime}\), \(g^{-1}(A)\coloneqq\{s\in S\,:\,g(s)\in A\}\in\mathcal{F}\)._ **Definition 5**.: _A set system \(\mathcal{G}\) of subsets of \(S\) generates a \(\sigma\)-algebra \(\mathcal{F}\) of \(S\) if \(\mathcal{F}\) is the smallest \(\sigma\)-algebra of \(S\) containing \(\mathcal{G}\)._ As commonly pointed out, the notion of a "smallest" \(\sigma\)-algebra appearing in Definition5 is well-defined since the intersection of an arbitrary collection of \(\sigma\)-algebras of \(S\) is a \(\sigma\)-algebra of \(S\). **Definition 6**.: _The product \((S,\mathcal{F})\times(T,\mathcal{G})\) of two measurable spaces \((S,\mathcal{F})\), \((T,\mathcal{G})\) is the measurable space \((S\times T,\mathcal{H})\) where \(\mathcal{H}\) is the \(\sigma\)-algebra on \(S\times T\) generated by sets of the form \(F\times G\), \(F\in\mathcal{F}\), \(G\in\mathcal{G}\)._ Given a set \(S\), a measurable space \((T,\mathcal{G})\), and a function \(f:S\to T\), one can check that \(f^{-1}(\mathcal{G})\coloneqq\{f^{-1}(B)\,:\,B\in\mathcal{G}\}\) is a \(\sigma\)-algebra on \(S\). Moreover, if a set system \(\mathcal{A}\) of subsets of \(T\) generates \(\mathcal{G}\), then \(f^{-1}(\mathcal{A})\) generates \(f^{-1}(\mathcal{G})\). The next lemma follows from this observation. **Lemma 1**.: _Let \((S,\mathcal{F})\) and \((T,\mathcal{G})\) be two measurable spaces. Let \(\mathcal{A}\) be a subset of \(\mathcal{G}\) which generates \(\mathcal{G}\). If a function \(f:S\to T\) is such that \(f^{-1}(A)\in\mathcal{F}\) for any \(A\in\mathcal{A}\), then \(f\) is measurable with respect to \(\mathcal{F}\) and \(\mathcal{G}\)._ As an application of Lemma1, continuous functions are measurable with respect to Borel (see below) sets. For another example that will be used later, if functions \(g_{i}\) from \((S,\mathcal{F})\) to \((T_{i},\mathcal{G}_{i})\), \(1\leq i\leq n\), are measurable, then \(f:S\to\bigtimes_{i=1}^{n}T_{i}\) defined by \(f(s)=(g_{1}(s),g_{2}(s),\ldots,g_{n}(s))\) for \(s\in S\) is measurable with respect to the product \(\sigma\)-algebra in the sense of Definition6 on \(\bigtimes_{i=1}^{n}T_{i}\), since for any product set \(A=\bigtimes_{i=1}^{n}A_{i}\) where \(A_{i}\in\mathcal{G}_{i}\), \(1\leq i\leq n\), one has \(f^{-1}(A)=\bigcap_{i=1}^{n}g_{i}^{-1}(A_{i})\in\mathcal{F}\). The next lemma establishes the existence and uniqueness of a measure on a product space that is naively compatible with the measures on the component spaces. **Lemma 2**.: _Let \((S\times T,\mathcal{H})\) be defined as in Definition6 and let \(\mu\), \(\nu\) be \(\sigma\)-finite measures on \((S,\mathcal{F})\) and \((T,\mathcal{G})\), respectively. Then there is a unique measure \(\lambda\) on \((S\times T,\mathcal{H})\) such that \(\lambda(F\times G)=\mu(F)\nu(G)\) for all \(F\in\mathcal{F},G\in\mathcal{G}\). Moreover, \(\lambda\) is \(\sigma\)-finite._ We write \(\mu\times\nu\) for the measure \(\lambda\) of Lemma2. We note that the proof of Lemma2 uses the following lemma, that we will reuse, together with Lemma2, when it comes time to extend the definition of product measures to signed measures (cf. Proposition10 in Section6); this next lemma can be proved via the \(\pi\)-\(\lambda\) theorem (cf. 
Durrett [6]): **Lemma 3**.: _If finite signed measures \(\alpha\), \(\alpha^{\prime}\) on \((S,\mathcal{F})\) agree on \(\mathcal{G}\subseteq\mathcal{F}\) where \(\mathcal{G}\) generates \(\mathcal{F}\), where \(\mathcal{G}\) is closed under intersection, and where there exists a sequence of \(G_{i}\in\mathcal{G}\) such that \(S=\bigcup_{i=1}^{\infty}G_{i}\), then \(\alpha\), \(\alpha^{\prime}\) agree on \(\mathcal{F}\)._ The set of _extended real numbers_ is the set \(\overline{\mathbb{R}}=\mathbb{R}\,\cup\,\{\infty,-\infty\}\) where \(\infty\), (also written "\(+\infty\)"), \(-\infty\) are designated symbols. \(\overline{\mathbb{R}}\) is endowed with a standard topology and arithmetic [30]. In particular, the topology of \(\overline{\mathbb{R}}\) is isomorphic to the topology of the closed interval \([-1,1]\) via the 1-to-1 mapping from \(\overline{\mathbb{R}}\) to \([-1,1]\) given by \[x\to\begin{cases}x/(1+|x|)&\text{if }x\in\mathbb{R},\\ 1&\text{if }x=\infty,\\ -1&\text{if }x=-\infty.\end{cases}\] Arithmetic-wise, one defines \(0\cdot\pm\infty=0\), whereas \(\infty-\infty\), \(\pm\infty/\pm\infty\) as well as \(a/0\) are undefined for all \(a\in\overline{\mathbb{R}}\). One can also note that \(\overline{\mathbb{R}}\) is totally ordered. The _Borel \(\sigma\)-algebra_ of \(\overline{\mathbb{R}}\) (resp. \(\mathbb{R}\)) is the \(\sigma\)-algebra generated by all open subsets of \(\overline{\mathbb{R}}\) (resp. \(\mathbb{R}\)). Without ambiguity, the symbol \(\mathcal{B}\) will denote the Borel \(\sigma\)-algebra of either \(\overline{\mathbb{R}}\) or \(\mathbb{R}\). Given a measurable space \((S,\mathcal{F})\), a function \(f:S\to\overline{\mathbb{R}}\) is _measurable_ if it is a measurable function from \((S,\mathcal{F})\) to \((\overline{\mathbb{R}},\mathcal{B})\) in the sense of Definition 4. The Borel \(\sigma\)-algebra on \(\mathbb{R}^{n}\) is similarly defined as the \(\sigma\)-algebra generated by all open subset of \(\mathbb{R}^{n}\). It also coincides with the \(\sigma\)-algebra on \(\mathbb{R}^{n}\) generated by "rectangles", i.e., sets of the form \(\bigtimes_{1\leq i\leq n}A_{i}\) where each \(A_{i}\) is a Borel subset of \(\mathbb{R}\). Given a measure space \((S,\mathcal{F},\mu)\), a measurable function \(f:S\to\overline{\mathbb{R}}\), and \(A\in\mathcal{F}\), the expression \[\int_{A}f\,\mathrm{d}\mu\] denotes the Lebesgue integral of \(f\) on \(A\) with respect to the measure \(\mu\). One may also write this integral as \[\int_{A}f(s)\,\mu(\mathrm{d}s)\] which offers the possibility of specifying the function \(f\) on the fly in terms of an algebraic expression of \(s\). The value of the Lebesgue integral is an element of \(\overline{\mathbb{R}}\), or else is undefined. We recall that the Lebesgue integral is defined as the difference of its positive and negative parts, being undefined if and only if the positive and negative parts are both infinite. In particular, the integral is guaranteed to exist if \(f\) is nonnegative, and is guaranteed to exist and to be finite if \(f\) is bounded4 and \(\mu(S)<\infty\). Furthermore, the integral is linear (provided the linear-combination-of-functions evaluates to a well-defined function from \(S\) to \(\overline{\mathbb{R}}\) and the linear combination-of-integrals evaluates to a well-defined element of \(\overline{\mathbb{R}}\)), and \[\mu(A)=\int_{S}\mathbf{1}_{A}\,\mathrm{d}\mu=\int_{A}\mathrm{d}\mu\] for any \(A\in\mathcal{F}\), where \(\mathbf{1}_{A}\) is the indicator function of \(A\) on \(S\). 
**Definition 7**.: _Given a measure \(\mu\) on \((S,\mathcal{F})\) and a boolean property \(P\) of elements of \(S\) we say \(P\) holds \(\mu\)-almost everywhere if the set of \(s\in S\) for which \(P(s)\) is false is contained in a set \(N\in\mathcal{F}\) such that \(\mu(N)=0\)._ **Lemma 4**.: (Theorem 1.40, Rudin [31]) _Let \(\mu\) be a finite measure on \((S,\mathcal{F})\) and let \(f:S\to\mathbb{R}\) be such that \(\int_{S}|f|\,\mathrm{d}\mu<\infty\). Let \(E\) be a closed subset of \(\mathbb{R}\). If_ \[\frac{1}{\mu(A)}\int_{A}f\,\mathrm{d}\mu\in E\] _for every \(A\in\mathcal{F}\) such that \(\mu(A)>0\), then \(f\in E\) \(\mu\)-almost everywhere on \(S\)._ The following lemma, commonly known as "Fubini's theorem", singles out sufficient conditions under which an integral over a product space can be evaluated via iterated integration. This will be central to our work: **Lemma 5**.: (Fubini's theorem) _Let \((S,\mathcal{F},\mu)\) and \((T,\mathcal{G},\nu)\) be two \(\sigma\)-finite measure spaces and let \(f:S\times T\to\overline{\mathbb{R}}\) be measurable. If \(f\geq 0\) or \(\int_{S\times T}|f|\,\mathrm{d}(\mu\times\nu)<\infty\) then_ \[\int_{S}\int_{T}f(s,t)\,\nu(\mathrm{d}t)\,\mu(\mathrm{d}s)=\int_{S\times T}f\,\mathrm{d}(\mu\times\nu)=\int_{T}\int_{S}f(s,t)\,\mu(\mathrm{d}s)\,\nu(\mathrm{d}t).\] We note that the condition \(\int_{S\times T}|f|\,\mathrm{d}(\mu\times\nu)<\infty\) of Lemma 5 is automatically fulfilled if \(\mu\), \(\nu\) are finite and \(f\) is bounded. **Lemma 6**.: (dominated convergence theorem) _Let \(\mu\) be a \(\sigma\)-finite measure on \((S,\mathcal{F})\). Let \(f\) and \(f_{n}\), \(n\geq 1\), be measurable functions from \(S\) to \(\mathbb{R}\) such that \(f_{n}\to f\) \(\mu\)-almost everywhere. If there exists a measurable function \(g\) from \(S\) to \(\mathbb{R}\) such that \(|f_{n}|\leq g\) for all \(n\) and such that \(\int_{S}g\,\mathrm{d}\mu<\infty\), then \(\int_{S}f_{n}\,\mathrm{d}\mu\to\int_{S}f\,\mathrm{d}\mu\)._ A measure \(\nu\) on \((S,\mathcal{F})\) is said to be _absolutely continuous_ with respect to a measure \(\mu\) on the same measurable space if \(\nu(A)=0\) for any \(A\) such that \(\mu(A)=0\). We also say that \(\mu\) _dominates_ \(\nu\). The following classical theorem (cf. [6], Theorem A.4.8 on Page 417) shows that a measure \(\nu\) that is absolutely continuous with respect to \(\mu\) can be expressed in terms of integration with respect to \(\mu\). **Lemma 7**.: (Radon-Nikodym theorem) _Let \(\mu\), \(\nu\) be \(\sigma\)-finite measures on \((S,\mathcal{F})\). If \(\nu\) is absolutely continuous with respect to \(\mu\), then there exists a measurable \(g:S\to[0,\infty)\) such that_ \[\nu(A)=\int_{A}g\,\mathrm{d}\mu\] _for all \(A\in\mathcal{F}\). Moreover, if \(h\) is another such function, then \(h=g\) \(\mu\)-almost everywhere._ The notation \[\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\] is used to denote an arbitrary choice of the function \(g\) described in Lemma 7 for \(\mu\), \(\nu\), and is called _the Radon-Nikodym derivative of \(\nu\) with respect to \(\mu\)_. We will introduce a similar notation after Lemma 10 in Section 4. 
The Radon-Nikodym derivative satisfies the following key property, strengthening the equation in Lemma 7: **Lemma 8**.: _If \(\mu\), \(\nu\) are \(\sigma\)-finite measures on a measurable space \((S,\mathcal{F})\) such that \(\nu\) is absolutely continuous with respect to \(\mu\), then_ \[\int_{S}f\,\mathrm{d}\nu=\int_{S}f\cdot\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\,\mathrm{d}\mu\] _for any measurable \(f:S\to\mathbb{R}\) such that \(f>0\), or such that either one of \(\int_{S}|f|\,\mathrm{d}\nu<\infty\) or \(\int_{S}\left|f\cdot\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right|\mathrm{d}\mu<\infty\) holds._ ## 3 Convex Analysis Elements of \(\mathbb{R}^{n}\) are written in bold font and are interpreted as column vectors. We write \(\mathbf{x}^{T}\) for the row vector transpose of the column vector \(\mathbf{x}\in\mathbb{R}^{n}\). In particular, \(\mathbf{x}^{T}\mathbf{y}\) becomes the inner product of vectors \(\mathbf{x}\), \(\mathbf{y}\in\mathbb{R}^{n}\), written as a matrix product. We write \(\|\mathbf{x}\|\) for the Euclidean norm \(\sqrt{\mathbf{x}^{T}\mathbf{x}}\) of \(\mathbf{x}\in\mathbb{R}^{n}\). We take for granted basic notions of point-set topology in \(\mathbb{R}^{n}\), including open and closed sets, as well as closures, interiors, and boundary points. The closure and interior of a set \(C\subseteq\mathbb{R}^{n}\) are written \(\overline{C}\) and \(C^{\circ}\), respectively. **Definition 8**.: _Let \(\mathbf{x}_{1},\dots,\mathbf{x}_{k}\in\mathbb{R}^{n}\). A convex combination of \(\mathbf{x}_{1},\dots,\mathbf{x}_{k}\) is a vector of the form_ \[\lambda_{1}\mathbf{x}_{1}+\dots+\lambda_{k}\mathbf{x}_{k}\] _where \(\lambda_{1},\dots,\lambda_{k}\) are nonnegative real numbers such that \(\lambda_{1}+\dots+\lambda_{k}=1\)._ **Definition 9**.: _A set \(C\subseteq\mathbb{R}^{n}\) is convex if every convex combination of vectors in \(C\) is in \(C\)._ One can check that a set \(C\subseteq\mathbb{R}^{n}\) is convex if and only if \(\lambda\mathbf{x}+(1-\lambda)\mathbf{y}\in C\) for all \(\mathbf{x}\), \(\mathbf{y}\in C\) and all \(0\leq\lambda\leq 1\). (I.e., closure under convex combinations of size two suffices.) Moreover, the closure and interior of a convex set are convex and an arbitrary intersection of convex sets is convex. **Definition 10**.: _Let \(A\subseteq\mathbb{R}^{n}\). The convex hull of \(A\), written \(\mathsf{conv.hull}(A)\), is the intersection of all convex sets containing \(A\)._ It is often more practical to characterize \(\mathsf{conv.hull}(A)\) as the set of all convex combinations of elements of \(A\). (Since this set is convex, contains \(A\), and is contained in every convex set containing \(A\).) In fact, convex combinations of a definite size suffice by the following famous result of Caratheodory (that can be used, e.g., to simplify the proof of (2) or Proposition 5 below, though, in truth, the previous observation suffices just as well): **Lemma 9**.: (Caratheodory) _Let \(A\subseteq\mathbb{R}^{n}\). Then \(\mathbf{x}\in\mathsf{conv.hull}(A)\) if and only if \(\mathbf{x}\) is a convex combination of \(n+1\) points in \(A\)._ We note that since the closure of a convex set is convex, the closure-of-a-convex-hull is convex; on the other hand, it is not true in general that the convex hull of a closed set is closed, nor, perforce, that the convex-hull-of-a-closure is closed. (Take the closed set \(\{(x,0):x\in\mathbb{R}\}\cup\{(0,1)\}\) in \(\mathbb{R}^{2}\).) 
However: **Proposition 1**.: \(\mathsf{conv.hull}(\overline{C})\subseteq\overline{\mathsf{conv.hull}(C)}\) _for all \(C\subseteq\mathbb{R}^{n}\), with equality if \(C\) is bounded._ Containment relations between closed, convex sets (bounded or unbounded, though we shall only be concerned with the bounded case) may be obtained in a "divide and conquer" approach, comparing how far two given sets reach on a direction-by-direction basis, as per the supremum appearing in (1). We find it convenient to develop a notational shorthand for the related supremum: **Definition 11**.: _Let \(C\subseteq\mathbb{R}^{n}\). The reach function of \(C\), written \(\|\cdot\|_{C}\), is the function from \(\mathbb{R}^{n}\) to \(\overline{\mathbb{R}}\) defined by \(\|\mathbf{x}^{*}\|_{C}:=\sup\{\mathbf{x}^{*T}\mathbf{x}\,:\,\mathbf{x}\in C\}\)._ It is possible to check that \(\|\cdot\|_{C}\) is continuous and convex5 for every nonempty \(C\subseteq\mathbb{R}^{n}\). We also note that despite the suggestive notation, \(\|\cdot\|_{C}\) is not in general6 a norm. Footnote 5: In the sense of a _function_ from \(\mathbb{R}^{n}\) to \(\overline{\mathbb{R}}\) being convex, not needed for this work. Footnote 6: As the incantation goes, \(\|\cdot\|_{C}\) is a norm if and only if the convex hull of \(C\) is bounded, has nonempty interior, and is centrally symmetric around \(\mathbf{0}\in\mathbb{R}^{n}\). We write \[\|\cdot\|_{C}\leq\|\cdot\|_{D},\qquad\qquad\|\cdot\|_{C}=\|\cdot\|_{D}\] if \(\|\mathbf{x}^{*}\|_{C}\leq\|\mathbf{x}^{*}\|_{D}\), respectively \(\|\mathbf{x}^{*}\|_{C}=\|\mathbf{x}^{*}\|_{D}\), for all \(\mathbf{x}^{*}\in\mathbb{R}^{n}\). It is not hard to see that \[\|\cdot\|_{C}=\|\cdot\|_{\mathsf{conv.hull}(C)},\qquad\|\cdot\|_{C}=\|\cdot\|_ {\overline{C}} \tag{2}\] for all \(C\subseteq\mathbb{R}^{n}\), where the second identity follows by continuity of the inner product \(\mathbf{x}^{*T}\mathbf{x}\) as a function of \(\mathbf{x}\in\mathbb{R}^{n}\). Moreover, taking closures and taking the convex hull are the only operations that do not enlarge the reach, in the sense of the following proposition: **Proposition 2**.: \(\|\cdot\|_{D}\leq\|\cdot\|_{C}\) _if and only if \(D\subseteq\overline{\mathsf{conv.hull}(C)}\) for all \(C,D\subseteq\mathbb{R}^{n}\)._ If \(C\) is convex and closed then \(\overline{\mathsf{conv.hull}(C)}=C\), naturally, so: **Proposition 3**.: _If \(C\subseteq\mathbb{R}^{n}\) convex and closed then \(D\subseteq C\) if and only if \(\|\cdot\|_{D}\leq\|\cdot\|_{C}\)._ It should be noted that Proposition2 relies on--indeed, is equivalent to--a "separating hyperplane theorem", one of the deeper tools in convex analysis. (See, e.g., Rockafellar [29], Theorem 11.3.) Our containment results will be obtained by way of Proposition3. In so doing, it is often convenient to restrict the comparison between two reach functions to \[\mathsf{Sph}_{\mathbb{R}^{n}}\coloneqq\{\mathbf{x}^{*}\in\mathbb{R}^{n}\,:\,\|\mathbf{x }^{*}\|=1\}\] the unit sphere in \(\mathbb{R}^{n}\), since a reach function is positively homogeneous. 
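As a quick numeric illustration of the reach function and of the left-hand identity in (2), the following sketch (a minimal Python/NumPy example added for illustration, not part of the original text) computes \(\|\mathbf{x}^{*}\|_{C}\) for a finite set \(C\subseteq\mathbb{R}^{2}\) and checks that passing to the convex hull leaves the reach unchanged; for a finite set the supremum over the hull is attained at a vertex, hence at a point of \(C\), in line with Proposition 2.

```python
import numpy as np
from scipy.spatial import ConvexHull

def reach(points, x_star):
    """Reach function ||x*||_C = sup_{x in C} <x*, x> for a finite point set C (rows)."""
    return float(np.max(points @ x_star))

rng = np.random.default_rng(0)
C = rng.normal(size=(30, 2))               # a finite set C in R^2
hull_vertices = C[ConvexHull(C).vertices]  # vertices of conv.hull(C)

for _ in range(5):
    x_star = rng.normal(size=2)
    x_star /= np.linalg.norm(x_star)       # restrict the comparison to the unit sphere
    # ||x*||_C agrees with ||x*||_{conv.hull(C)} (left-hand identity of (2)).
    assert np.isclose(reach(C, x_star), reach(hull_vertices, x_star))
```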
The following elementary propositions are also recorded for completeness: **Proposition 4**.: \(\mathsf{conv.hull}(\mathsf{clf}(B,\mathbf{x}^{*}))\subseteq\mathsf{clf}(\mathsf{conv.hull}(B),\mathbf{x}^{*})\) _for all \(B\subseteq\mathbb{R}^{n}\), \(\mathbf{x}^{*}\in\mathsf{Sph}_{\mathbb{R}^{n}}\), with equality if \(B\) is bounded._ Let \(A+B\) denote the Minkowski sum of sets \(A\), \(B\subseteq\mathbb{R}^{n}\), i.e., \[A+B\coloneqq\{\mathbf{x}+\mathbf{y}\,:\,\mathbf{x}\in A,\mathbf{y}\in B\}.\] We write \(\mathbf{x}+A\) to denote \(\{\mathbf{x}\}+A\) for \(\mathbf{x}\in\mathbb{R}^{n}\), \(A\subseteq\mathbb{R}^{n}\) for convenience. Moreover, let \[-A\coloneqq\{-\mathbf{x}\,:\,\mathbf{x}\in A\}\] for \(A\subseteq\mathbb{R}^{n}\) by convention. **Proposition 5**.: _Let \(A\), \(B\subseteq\mathbb{R}^{n}\). Then \(\mathsf{conv.hull}(A+B)=\mathsf{conv.hull}(A)+\mathsf{conv.hull}(B)\), \(\mathsf{conv.hull}(-A)=-\mathsf{conv.hull}(A)\)._ ## 4 Signed Vector Measures **Definition 12**.: _A signed measure on a measurable space \((S,\mathcal{F})\) is a function \(\alpha:\mathcal{F}\to\overline{\mathbb{R}}\) such that_ \[\alpha\Big{(}\bigcup_{i=1}^{\infty}A_{i}\Big{)}=\sum_{i=1}^{\infty}\alpha(A_{i}) \tag{3}\] _for any collection \(\{A_{i}\}_{i\in\mathbb{N}}\) of pairwise disjoint elements of \(\mathcal{F}\)._ By contrast, a measure defined as in Definition 2 is sometimes called a _positive measure_ to emphasize it being a special kind of signed measure. Just as in the case of (positive) measures, (3) in the definition implies \(\alpha(\emptyset)=0\). It is also noted that the sum in (3) converges absolutely, since any rearrangement of the series converges to the measure of the same union. If the range of \(\alpha\) does not include \(\infty\) or \(-\infty\) then \(\alpha\) is called _finite_. Our work will only concern finite signed measures. **Definition 13**.: _The total variation of a finite signed measure \(\alpha\) on \((S,\mathcal{F})\) is the function \(|\alpha|:\mathcal{F}\to\overline{\mathbb{R}}\) defined by_ \[|\alpha|(A)=\sup\sum_{i=1}^{\infty}|\alpha(A_{i})|\] _for all \(A\in\mathcal{F}\), where the supremum is taken over all collections \(\{A_{i}\}_{i\in\mathbb{N}}\) of pairwise disjoint elements of \(\mathcal{F}\) of union \(A\)._ It is easy to check that \(|\alpha|\) is a positive measure on \((S,\mathcal{F})\) for every signed measure \(\alpha\) on \((S,\mathcal{F})\). Similarly to the case when \(\alpha\) is positive and \(\sigma\)-finite, a signed measure \(\alpha\) on \((S,\mathcal{F})\) is said to be _absolutely continuous_ with respect to a positive measure \(\mu\) on the same measurable space if \(\alpha(A)=0\) for any \(A\) such that \(\mu(A)=0\), and we also say that \(\mu\) _dominates_ \(\alpha\). Obviously, a finite \(\alpha\) is absolutely continuous with respect to \(|\alpha|\), and it is easy to show that \(\mu\) dominates \(|\alpha|\) if and only if \(\mu\) dominates \(\alpha\). As the analogue of Lemma 7, one has the following lemma (cf. the traditional theorem of Lebesgue-Radon-Nikodym for complex measures, 6.10 of Rudin [31]): **Lemma 10**.: _Let \(\alpha\) be a finite signed measure on \((S,\mathcal{F})\) and let \(\mu\) be a \(\sigma\)-finite positive measure on \((S,\mathcal{F})\). If \(\mu\) dominates \(\alpha\), then there exists a measurable function \(g:S\to\mathbb{R}\) such that_ \[\alpha(A)=\int_{A}g\,\mathrm{d}\mu\] _for all \(A\in\mathcal{F}\). 
Moreover, if \(h\) is another such function, then \(h=g\)\(\mu\)-almost everywhere._ Accordingly, \[\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}\] denotes an arbitrary choice of \(g\) for \(\alpha\), \(\mu\) as in Lemma10, and is referred to as _the Radon-Nikodym derivative of \(\alpha\) with respect to \(\mu\)_. When \(\alpha\) is positive in addition to finite, this definition coincides with the definition made after Lemma7 and won't cause any ambiguity. The following lemma is a summary of some useful facts that can be found in Chapter 6 of Rudin [31]: **Lemma 11**.: _The total variation \(|\alpha|\) of a finite signed measure \(\alpha\) on \((S,\mathcal{F})\) is a finite positive measure on \((S,\mathcal{F})\). There exists measurable \(h:S\to\{-1,1\}\) such that_ \[\alpha(A)=\int_{A}h\,\mathrm{d}|\alpha|\] _for all \(A\in\mathcal{F}\). Moreover, if_ \[\alpha(A)=\int_{A}g\,\mathrm{d}\mu\] _for all \(A\in\mathcal{F}\) for some measurable \(g:S\to\mathbb{R}\) and positive measure \(\mu\) on \((S,\mathcal{F})\), then_ \[|\alpha|(A)=\int_{A}|g|\,\mathrm{d}\mu\] _for all \(A\in\mathcal{F}\)._ In particular, the range of the total variation of a finite signed measure, thus also the range of the finite signed measure, is actually bounded. **Proposition 6**.: _Let \(\alpha\) be a finite signed measure on \((S,\mathcal{F})\). Let \(\mu\), \(\mu^{\prime}\) be \(\sigma\)-finite positive measures on \((S,\mathcal{F})\) such that \(\mu^{\prime}\) dominates \(\alpha\), \(\mu\) dominates \(\mu^{\prime}\). Then \(\mu\) dominates \(\alpha\) and_ \[\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}=\frac{\mathrm{d}\alpha}{\mathrm{d}\mu^ {\prime}}\frac{\mathrm{d}\mu^{\prime}}{\mathrm{d}\mu}\] \(\mu\)_-almost everywhere._ Proof.: The fact that \(\mu\) dominates \(\alpha\) is obvious. For the rest, one just note that \[\alpha(A) =\int_{A}\frac{\mathrm{d}\alpha}{\mathrm{d}\mu^{\prime}}\, \mathrm{d}\mu^{\prime}\] \[=\int_{A}\frac{\mathrm{d}\alpha}{\mathrm{d}\mu^{\prime}}\frac{ \mathrm{d}\mu^{\prime}}{\mathrm{d}\mu}\,\mathrm{d}\mu\] for all \(A\in\mathcal{F}\), where the second equality follows by Lemma 8, for which the fact that \[\int_{A}\left|\frac{\mathrm{d}\alpha}{\mathrm{d}\mu^{\prime}}\right|\mathrm{ d}\mu^{\prime}=|\alpha|(A)<\infty\] follows by Lemma 11. Given finite signed measure \(\alpha\) on \((S,\mathcal{F})\), \(\sigma\)-finite positive measure \(\mu\) dominating \(\alpha\) on \((S,\mathcal{F})\), and \(f:S\to\mathbb{R}\) such that \(\int_{S}|f|\,\mathrm{d}|\alpha|<\infty\), one has \[\int_{A}\left|f\cdot\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}\right|\mathrm{d}\mu =\int_{A}|f|\cdot\left|\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}\right|\mathrm{d }\mu=\int_{A}|f|\,\mathrm{d}|\alpha|<\infty\] for any \(A\in\mathcal{F}\), where the last equality follows by Lemma 11 and Lemma 8, so that (note that \(\mu\) dominates \(|\alpha|\)) \[\int_{A}f\cdot\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}\,\mathrm{d}\mu=\int_{A} f\cdot\frac{\mathrm{d}\alpha}{\mathrm{d}|\alpha|}\cdot\frac{\mathrm{d}|\alpha|}{ \mathrm{d}\mu}\,\mathrm{d}\mu=\int_{A}f\cdot\frac{\mathrm{d}\alpha}{\mathrm{d} |\alpha|}\,\mathrm{d}|\alpha|\] for any \(A\in\mathcal{F}\), where the first equality follows by Proposition 6 and where the second equality follows by Lemma 8. Thus the Lebesgue integrals with respect to finite signed measures in the following are well-defined: **Definition 14**.: _Let \(\alpha\) be a finite signed measure on \((S,\mathcal{F})\) and let \(f:S\to\mathbb{R}\) be a measurable function such that \(\int_{S}|f|\,\mathrm{d}|\alpha|<\infty\). 
The Lebesgue integral of \(f\) with respect to \(\alpha\), written as \(\int_{S}f\,\mathrm{d}\alpha\), is defined by_ \[\int_{S}f\,\mathrm{d}\alpha=\int_{S}f\cdot\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}\,\mathrm{d}\mu\] _where \(\mu\) is any \(\sigma\)-finite positive measure dominating \(\alpha\) on \((S,\mathcal{F})\)._ The integral defined by Definition 14 possesses properties similar to those of ordinary Lebesgue integrals, such as linearity. However, we will not list those properties here, because we will carry out all computations involving such integrals by applying the definition and manipulating ordinary Lebesgue integrals with respect to a \(\sigma\)-finite positive measure. **Definition 15**.: _An \(n\)-dimensional signed measure on a measurable space \((S,\mathcal{F})\) is a function \(\boldsymbol{\alpha}:\mathcal{F}\to\overline{\mathbb{R}}^{n}\) so that_ \[\boldsymbol{\alpha}(A)=(\alpha_{1}(A),\alpha_{2}(A),\ldots,\alpha_{n}(A))\] _for all \(A\in\mathcal{F}\), where each \(\alpha_{i}\), \(1\leq i\leq n\), is a signed measure on \((S,\mathcal{F})\)._ We write \(\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})\) to denote that \(\boldsymbol{\alpha}\) is defined as in Definition 15, and call \(\alpha_{i}\), \(1\leq i\leq n\), the \(i\)-th component of \(\boldsymbol{\alpha}\). An \(n\)-dimensional signed measure \(\boldsymbol{\alpha}\) is _finite_ if each component of \(\boldsymbol{\alpha}\) is finite. If each component of an \(n\)-dimensional signed measure \(\boldsymbol{\mu}\) is positive, then \(\boldsymbol{\mu}\) is simply called an \(n\)_-dimensional measure_, or an \(n\)_-dimensional positive measure_ for emphasis. A \(1\)-dimensional signed or positive measure reduces to a signed or positive measure, respectively. We will also call an \(n\)-dimensional (finite, signed or positive) measure a (finite, signed or positive) vector measure when the dimension is not emphasized. For an \(n\)-dimensional finite signed measure \(\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})\) on \((S,\mathcal{F})\), we define \[|\boldsymbol{\alpha}|\coloneqq\sum_{i=1}^{n}|\alpha_{i}|\] to be the total variation of \(\boldsymbol{\alpha}\) (with respect to the \(1\)-norm). It is easy to check that \(|\boldsymbol{\alpha}|\) is a measure on \((S,\mathcal{F})\), and that \(\alpha_{i}\) is absolutely continuous with respect to \(|\boldsymbol{\alpha}|\), \(1\leq i\leq n\). Moreover, just as when \(\boldsymbol{\alpha}\) is \(1\)-dimensional, a positive measure \(\mu\) on \((S,\mathcal{F})\) dominates \(|\boldsymbol{\alpha}|\) if and only if \(\mu\) dominates \(\boldsymbol{\alpha}\) (i.e., dominates each \(\alpha_{i}\)), and there exists \(\frac{\mathrm{d}\boldsymbol{\alpha}}{\mathrm{d}\mu}:S\to\mathbb{R}^{n}\) where \[\frac{\mathrm{d}\boldsymbol{\alpha}}{\mathrm{d}\mu}=\Big{(}\frac{\mathrm{d}\alpha_{1}}{\mathrm{d}\mu},\frac{\mathrm{d}\alpha_{2}}{\mathrm{d}\mu},\ldots,\frac{\mathrm{d}\alpha_{n}}{\mathrm{d}\mu}\Big{)}\] if \(\mu\) is in addition \(\sigma\)-finite. We note that \(\frac{\mathrm{d}\boldsymbol{\alpha}}{\mathrm{d}\mu}\) is measurable by the discussion following Lemma 1. For a shorthand, and to more clearly signify the presence of a vector, we use \[\mathbf{D}_{\mu}^{\mathbf{\alpha}}\] to denote \(\frac{\mathrm{d}\mathbf{\alpha}}{\mathrm{d}\mu}\). 
We also apply the notation of integrals of vector-valued functions: given a \(\sigma\)-finite positive measure \(\mu\) on \((S,\mathcal{F})\), let \[\int_{S}\mathbf{f}\,\mathrm{d}\mu\coloneqq\Big{(}\int_{S}f_{1}\,\mathrm{d}\mu,\int_{S}f_{2}\,\mathrm{d}\mu,\ldots,\int_{S}f_{n}\,\mathrm{d}\mu\Big{)} \tag{4}\] for \(\mathbf{f}=(f_{1},f_{2},\ldots,f_{n}):S\to\mathbb{R}^{n}\) where each \(f_{i}:S\to\mathbb{R}\) is either non-negative or satisfies \(\int_{S}|f_{i}|\,\mathrm{d}\mu<\infty\). By definition of \(\mathbf{D}_{\mu}^{\mathbf{\alpha}}\), one has \[\mathbf{\alpha}(A)=\int_{A}\mathbf{D}_{\mu}^{\mathbf{\alpha}}\,\mathrm{d}\mu\] for all \(A\in\mathcal{F}\), if \(\mu\) dominates \(\mathbf{\alpha}\). Moreover, for an \(n\)-dimensional finite signed measure \(\mathbf{\alpha}\) on \((S,\mathcal{F})\), we define \[\int_{S}f\,\mathrm{d}\mathbf{\alpha}\coloneqq\int_{S}f\cdot\mathbf{D}_{\mu}^{\mathbf{\alpha}}\,\mathrm{d}\mu \tag{5}\] for every measurable function \(f:S\to\mathbb{R}\) such that \(\int_{S}|f|\,\mathrm{d}|\mathbf{\alpha}|<\infty\) (which is especially true when \(f\) is bounded), where \(\mu\) is any \(\sigma\)-finite positive measure dominating \(\mathbf{\alpha}\) on \((S,\mathcal{F})\). This definition is valid for the same reason that Definition 14 is, and from it one also has \[\int_{S}f\,\mathrm{d}\mathbf{\alpha}=\Big{(}\int_{S}f\,\mathrm{d}\alpha_{1},\ldots,\int_{S}f\,\mathrm{d}\alpha_{n}\Big{)}\] for \(\mathbf{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\). **Definition 16**.: _Given a signed measure \(\alpha\) on \((S,\mathcal{F})\), a set \(A\in\mathcal{F}\) is an atom of \(\alpha\) if \(\alpha(A)\neq 0\) and if for any \(B\in\mathcal{F}\) either \(\alpha(A\cap B)=\alpha(A)\) or \(\alpha(A\cap B)=0\). An \(n\)-dimensional signed measure \(\mathbf{\alpha}\) is non-atomic (or continuous) if none of its components has atoms._ We note that in particular the Lebesgue measure on \(\mathbb{R}^{n}\) is non-atomic. Moreover, we claim the following proposition without its elementary proof: **Proposition 7**.: _For finite signed measure \(\alpha\) and \(n\)-dimensional finite signed measure \(\mathbf{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\) on \((S,\mathcal{F})\), the following properties hold:_ 1. _Atoms of_ \(\alpha\) _are atoms of_ \(|\alpha|\)_, and vice versa;_ 2. _For_ \(i\in[n]\)_, each atom_ \(A\) _of_ \(|\mathbf{\alpha}|\) _is an atom of_ \(\alpha_{i}\) _if_ \(|\alpha_{i}|(A)>0\)_;_ 3. _For_ \(i\in[n]\)_, each atom of_ \(\alpha_{i}\) _contains an atom of_ \(|\mathbf{\alpha}|\)_;_ 4. _If_ \(f_{k}:S\to\mathbb{R}\) _is measurable for each_ \(k\)_,_ \(1\leq k\leq m\)_, and if_ \(A\) _is an atom of_ \(\alpha\)_, then there exists_ \(\mathbf{c}\in\mathbb{R}^{m}\) _such that_ \(\mathbf{f}(s)=\mathbf{c}\) _for_ \(s\in A\) \(|\alpha|\)_-almost everywhere, where_ \(\mathbf{f}=(f_{1},\ldots,f_{m})\)_. Given an \(n\)-dimensional finite signed measure \(\mathbf{\alpha}\), define two atoms \(A_{1}\), \(A_{2}\) of \(|\mathbf{\alpha}|\) to be in the same class if \(|\mathbf{\alpha}|(A_{1}\cap A_{2})>0\). There are at most countably many different classes since \(|\mathbf{\alpha}|\) is bounded. Let \(\{A_{i}\}_{i\in\mathbb{N}}\) be a collection of atoms such that there is an \(A_{i}\) in each class and such that \(A_{i}\), \(A_{j}\) belong to different classes for \(i\neq j\). We can moreover assume that the sets \(A_{i}\) are mutually disjoint. 
Let \[A=\bigcup_{i=1}^{\infty}A_{i},\quad B=S\backslash A\] The restriction of \(\mathbf{\alpha}\) to \(\{E\cap B\,:\,E\in\mathcal{F}\}\) is therefore non-atomic by property (iii) in Proposition7. We call this restriction of \(\mathbf{\alpha}\)_the non-atomic part of \(\mathbf{\alpha}\)_ (with respect to the particular set \(\{A_{i}\}_{i\in\mathbb{N}}\) of chosen atoms), and call the restriction of \(\mathbf{\alpha}\) to \(\{E\cap A\,:\,E\in\mathcal{F}\}\)_the purely atomic part of \(\mathbf{\alpha}\)_. If \(|\mathbf{\alpha}|(B)=0\), then changing the choices of \(A_{i}\) will not alter this fact, and we just call \(\mathbf{\alpha}\)_purely atomic_ in this case. The following lemma can be easily derived from results by Halmos [11]: **Lemma 12**.: _Let \(\mathbf{\alpha}\) be a non-atomic finite signed vector measure on \((S,\mathcal{F})\). Then for any \(A\in\mathcal{F}\) there exists a map \(\varphi\) from \([0,1]\) to subsets of \(A\) in \(\mathcal{F}\) such that \(\varphi(0)=\emptyset\), \(\varphi(1)=A\), \(\varphi(a)\subseteq\varphi(b)\) if \(a<b\), and such that \(\mathbf{\alpha}(\varphi(\lambda))=\lambda\mathbf{\alpha}(A)\) for all \(\lambda\in[0,1]\)._ ## 5 The Reach of Lorenz Hulls **Definition 17**.: _The Lorenz hull \(\mathsf{LH}(\mathbf{\alpha})\) of an \(n\)-dimensional finite signed measure \(\mathbf{\alpha}\) is the convex hull of the range of \(\mathbf{\alpha}\)._ We also call the range of \(\mathbf{\alpha}\) the _Lorenz skeleton_ of \(\mathbf{\alpha}\), notated \(\mathsf{LS}(\mathbf{\alpha})\), wherefrom \(\mathsf{LH}(\mathbf{\alpha})=\mathsf{conv.hull}(\mathsf{LS}(\mathbf{\alpha}))\). Classical results (ref. Halmos [11]) show that \(\mathsf{LS}(\mathbf{\alpha})\) is a centrally symmetric, closed and bounded set containing \(\mathbf{0}\), and that if \(\mathbf{\alpha}\) is non-atomic then \(\mathsf{LS}(\mathbf{\alpha})\) is moreover convex, i.e., \(\mathsf{LH}(\mathbf{\alpha})=\mathsf{LS}(\mathbf{\alpha})\). Consequently, \(\mathsf{LH}(\mathbf{\alpha})\) is a centrally symmetric, closed, bounded and convex set containing \(\mathbf{0}\), where in particular the closedness follows from Proposition1. For an \(n\)-dimensional finite signed measure \(\mathbf{\alpha}\) on \((S,\mathcal{F})\) and \(\sigma\)-finite positive measure \(\mu\) dominating \(\mathbf{\alpha}\) on \((S,\mathcal{F})\), let \[\llbracket\mathbf{x}^{*\,T}\mathbf{D}_{\mu}^{\mathbf{\alpha}}\star 0\rrbracket\coloneqq\{s \in S\,:\,\mathbf{x}^{*\,T}\mathbf{D}_{\mu}^{\mathbf{\alpha}}(s)\star 0\}\] for \(\mathbf{x}^{*}\in\mathbb{R}^{n}\), \(\star\in\{>,\geq,=,\leq,<\}\). We note that \(\llbracket\mathbf{x}^{*\,T}\mathbf{D}_{\mu}^{\mathbf{\alpha}}\star 0\rrbracket\in\mathcal{F}\) for any \(\mathbf{x}^{*}\) and \(\star\) since \(\{\mathbf{z}\in\mathbb{R}^{n}\,:\,\mathbf{x}^{*\,T}\mathbf{z}\star 0\}\) is either a hyperplane or a (closed or open) half-space in \(\mathbb{R}^{n}\) and since \(\mathbf{D}_{\mu}^{\mathbf{\alpha}}\) is measurable. **Proposition 8**.: _Let \(\mathbf{\alpha}\) be an \(n\)-dimensional finite signed measure on \((S,\mathcal{F})\), let \(\mu\) be a \(\sigma\)-finite positive measure dominating \(\mathbf{\alpha}\) on \((S,\mathcal{F})\), and let \(\mathbf{x}^{*}\in\mathbb{R}^{n}\), \(A\in\mathcal{F}\). 
If \(\mu(A)>0\), then \(\mathbf{x}^{*\,T}\mathbf{\alpha}(A)>0\), \(\mathbf{x}^{*\,T}\mathbf{\alpha}(A)=0\), \(\mathbf{x}^{*\,T}\mathbf{\alpha}(A)<0\) if \(A\subseteq[\![\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}>0]\!]\), \(A\subseteq[\![\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}=0]\!]\), \(A\subseteq[\![\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}} <0]\!]\), respectively. If \(\mu(A)=0\) then \(\boldsymbol{x}^{*}{}^{T}\boldsymbol{\alpha}(A)=0\)._ Proof.: If \(\mu(A)=0\) then \(\boldsymbol{\alpha}(A)=0\) since \(\mu\) dominates \(\boldsymbol{\alpha}\), and then \(\boldsymbol{x}^{*}{}^{T}\boldsymbol{\alpha}(A)=0\). For when \(\mu(A)>0\), we will only discuss the situation where \(A\subseteq[\![\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha} }>0]\!]\) and omit the proof for the other situations. Let \[A_{\varepsilon}=\{s\in A\,:\,\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{ \boldsymbol{\alpha}}(s)>\varepsilon\}\] for \(\varepsilon>0\). Then \(\mu(A_{\varepsilon})=\delta>0\) for some \(\varepsilon>0\) since \(A=\bigcup_{k=1}^{\infty}A_{1/k}\). Thus \[\boldsymbol{x}^{*}{}^{T}\boldsymbol{\alpha}(A) =\boldsymbol{x}^{*}{}^{T}\boldsymbol{\alpha}(A_{\varepsilon})+ \boldsymbol{x}^{*}{}^{T}\boldsymbol{\alpha}(A\backslash A_{\varepsilon})\] \[=\int_{A_{\varepsilon}}\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{ \mu}^{\boldsymbol{\alpha}}(s)\,\mu(\mathrm{d}s)+\int_{A\backslash A_{ \varepsilon}}\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha} }(s)\,\mu(\mathrm{d}s)\] \[\geq\varepsilon\delta+0>0,\] which is the desired result. **Proposition 9**.: _Let \(\boldsymbol{\alpha}\) be an \(n\)-dimensional finite signed measure on \((S,\mathcal{F})\) and let \(\mu\) be a \(\sigma\)-finite positive measure dominating \(\boldsymbol{\alpha}\) on \((S,\mathcal{F})\). Then \(\|\boldsymbol{x}^{*}\|_{\mathsf{LH}(\boldsymbol{\alpha})}=\boldsymbol{x}^{*}{ }^{T}\boldsymbol{\alpha}([\![\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{ \boldsymbol{\alpha}}>0]\!])=\boldsymbol{x}^{*}{}^{T}\boldsymbol{\alpha}([\![ \boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}\geq 0]\!])\)._ Proof.: Proposition 8 implies that \(\boldsymbol{x}^{*}{}^{T}\boldsymbol{\alpha}([\![\boldsymbol{x}^{*}{}^{T} \boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}>0]\!])\geq\boldsymbol{x}^{*}{}^{T} \boldsymbol{\alpha}(B)\) for all \(B\in\mathcal{F}\). Thus \(\|\boldsymbol{x}^{*}\|_{\mathsf{LS}(\boldsymbol{\alpha})}=\boldsymbol{x}^{*}{ }^{T}\boldsymbol{\alpha}([\![\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{ \boldsymbol{\alpha}}>0]\!])\) since \(\boldsymbol{\alpha}([\![\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{ \boldsymbol{\alpha}}>0]\!])\in\mathsf{LS}(\boldsymbol{\alpha})\). But \(\|\cdot\|_{\mathsf{LS}(\boldsymbol{\alpha})}=\|\cdot\|_{\mathsf{LH}( \boldsymbol{\alpha})}\) by the left-hand of (2). The same argument holds replacing \([\![\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}>0]\!]\) by \([\![\boldsymbol{x}^{*}{}^{T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}\geq 0]\!]\). 
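As a concrete illustration of Proposition 9 (this sketch and all names in it are ours, not part of the original development), consider a purely atomic measure on a finite set, dominated by the counting measure, so that \(\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}(s)\) is simply the point mass at \(s\); the quantity \(\|\boldsymbol{x}^{*}\|_{\mathsf{LH}(\boldsymbol{\alpha})}\) is computed below by brute force as the supremum of \(\boldsymbol{x}^{*T}\boldsymbol{\alpha}(A)\) over all \(A\), and compared with the right-hand side of the proposition.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 3
a = rng.normal(size=(m, n))          # point masses alpha({s}) for s = 0,...,m-1

def support_LS(x):
    # sup over all A of x . alpha(A), by brute force over the 2^m subsets of S
    return max((x @ a[list(A)].sum(axis=0)) if A else 0.0
               for r in range(m + 1)
               for A in itertools.combinations(range(m), r))

def support_via_prop9(x):
    # x . alpha([[x . D > 0]]): keep exactly the points with positive inner product
    keep = a[(a @ x) > 0]
    return float(x @ keep.sum(axis=0)) if len(keep) else 0.0

for _ in range(5):
    x = rng.normal(size=n)
    assert np.isclose(support_LS(x), support_via_prop9(x))
print("Proposition 9 checked on random directions for this discrete example.")
```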
## 6 The Lorenz Product Recalling the definition of product of \(\sigma\)-finite measures from Lemma 2, we have the following proposition as an analogue of Lemma 2 for finite signed measures: **Proposition 10**.: _Let \(\alpha\), \(\beta\) be finite signed measures on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively, let \(\mu\), \(\nu\) be \(\sigma\)-finite positive measures dominating \(\alpha\), \(\beta\) on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively, and let \(\mathcal{H}\) be the \(\sigma\)-algebra such that \((S\times T,\mathcal{H})=(S,\mathcal{F})\times(T,\mathcal{G})\). The function \(\omega:\mathcal{H}\to\mathbb{R}\) defined by_ \[\omega(E)=\int_{E}\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}(s)\frac{\mathrm{d} \beta}{\mathrm{d}\nu}(t)\,(\mu\times\nu)(\mathrm{d}(s,t))\] _for all \(E\in\mathcal{H}\) is then a finite signed measure on \((S\times T,\mathcal{H})\) that satisfies_ \[\omega(A\times B)=\alpha(A)\beta(B) \tag{6}\] _for all \(A\in\mathcal{F}\), \(B\in\mathcal{G}\)._ Proof.: Referring back to Definition 12 let \(\{E_{i}\}_{i\in\mathbb{N}}\) be a collection of pairwise disjoint elements of \(\mathcal{H}\) and let \(E=\bigcup_{i\in\mathbb{N}}E_{i}\). Let \(f_{i}:S\times T\to\mathbb{R}\) be defined by \[f_{i}(s,t)=\sum_{k=1}^{i}\mathbf{1}_{E_{k}}(s,t)\frac{\mathrm{d}\alpha}{\mathrm{ d}\mu}(s)\frac{\mathrm{d}\beta}{\mathrm{d}\nu}(t)\] for all \((s,t)\in S\times T\), \(i\in\mathbb{N}\), and let \(f:S\times T\to\mathbb{R}\) be defined by \[f(s,t)=\mathbf{1}_{E}(s,t)\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}(s)\frac{ \mathrm{d}\beta}{\mathrm{d}\nu}(t)\] for all \((s,t)\in S\times T\). Then \[\sum_{k=1}^{i}\omega(E_{k})=\int_{S\times T}f_{i}\,\mathrm{d}(\mu\times\nu), \quad\omega(E)=\int_{S\times T}f\,\mathrm{d}(\mu\times\nu)\] and \(f_{i}\) approaches \(f\) everywhere on \(S\times T\) as \(i\to\infty\). Since \[\int_{S\times T}|f_{i}|\,\mathrm{d}(\mu\times\nu) \leq\int_{S\times T}\left|\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}( s)\middle|\left|\frac{\mathrm{d}\beta}{\mathrm{d}\nu}(t)\right|(\mu\times\nu)( \mathrm{d}(s,t))\right.\] \[=\int_{S}\left|\frac{\mathrm{d}\alpha}{\mathrm{d}\mu}(s)\right| \int_{T}\left|\frac{\mathrm{d}\beta}{\mathrm{d}\nu}(t)\right|\nu(\mathrm{d}t) \mu(\mathrm{d}s)\] \[=|\alpha|(S)|\beta|(T)<\infty \tag{7}\] where the first equality follows by Fubini's theorem (Lemma 5) and where the second equality follows by Lemma 11, \[\omega(E)=\lim_{i\to\infty}\sum_{k=1}^{i}\omega(E_{k})\] by the dominated convergence theorem (Lemma 6). Thus \(\omega\) is a signed measure. The fact that \(|\omega(E)|<\infty\) for all \(E\in\mathcal{H}\) follows by (7) with \(f_{i}\) replaced by \(f\). Lastly, (6) follows by a direct computation by definition of \(\omega\) and by Fubini's theorem Lemma 5, again using the finiteness of the second integral in (7). By (6) and Lemma 3, the measure \(\omega\) defined in Proposition 10 is the unique finite signed measure on \((S\times T,\mathcal{H})\) for which (6) holds for all \(A\in\mathcal{F}\), \(B\in\mathcal{G}\), so that the choices of \(\mu\) and \(\nu\) in the definition of \(\omega\) do not matter. Extending the notation already in place for \(\sigma\)-finite positive measures we let \[\alpha\times\beta\] denote the measure defined by Proposition 10. 
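The defining property (6) of the product measure can be seen very concretely in the purely atomic case. The following small sketch (ours; the variable names are illustrative) takes two signed measures with finitely many point masses, forms the product measure as the outer product of the mass vectors, and checks \(\omega(A\times B)=\alpha(A)\beta(B)\) on an arbitrary rectangle.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.normal(size=5)            # point masses of alpha on S = {0,...,4}
beta = rng.normal(size=4)             # point masses of beta on T = {0,...,3}
omega = np.outer(alpha, beta)         # omega({(s, t)}) = alpha({s}) * beta({t})

A = [0, 2, 3]                         # an arbitrary A in F
B = [1, 3]                            # an arbitrary B in G
lhs = omega[np.ix_(A, B)].sum()       # omega(A x B)
rhs = alpha[A].sum() * beta[B].sum()  # alpha(A) * beta(B)
assert np.isclose(lhs, rhs)
print("omega(A x B) = alpha(A) beta(B) holds in this discrete example.")
```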
It should be noted that Proposition 10, when taken as an observation on the structure of \(\alpha\times\beta\) and not as the grounds for definition thereof, affords the following rephrasing: **Proposition 11**.: _Let \(\alpha\), \(\beta\) be finite signed measures and let \(\mu\), \(\nu\) be \(\sigma\)-finite positive measures dominating \(\alpha\), \(\beta\) respectively. Then \(\alpha\times\beta\) is finite, \(\mu\times\nu\) dominates \(\alpha\times\beta\), and \(\mathbf{D}^{\alpha\times\beta}_{\mu\times\nu}=\mathbf{D}^{\alpha}_{\mu}\mathbf{D}^{\beta}_{\nu}\)._ Proof.: The finiteness of \(\alpha\times\beta\) has already been established in Proposition10, under different name. The fact that \(\mu\times\nu\) dominates \(\alpha\times\beta\) also follows from the first equation of Proposition10. The last claim follows since \(\mathbf{D}^{\alpha\times\beta}_{\mu\times\nu}=\frac{\mathrm{d}(\alpha\times\beta )}{\mathrm{d}(\mu\times\nu)}\) can be taken to be any function that satisfies the selfsame equation as the coefficient of \((\mu\times\nu)(\mathrm{d}(s,t))\) for all \(E\in\mathcal{H}\), per the definition following Lemma10. **Proposition 12**.: _The product operation on finite signed measures is associative._ Proof.: Let \(\alpha\), \(\beta\), \(\gamma\) be finite signed measures with respective dominating measures \(\mu\), \(\nu\), \(\eta\). Then \((\mu\times\nu)\times\eta=\mu\times(\nu\times\eta)\) dominates both \((\alpha\times\beta)\times\gamma\) and \(\alpha\times(\beta\times\gamma)\) by Proposition11, and \[\mathbf{D}^{(\alpha\times\beta)\times\gamma}_{\mu\times\nu\times\eta}=\mathbf{D}^{ \alpha\times\beta}_{\mu\times\nu}\mathbf{D}^{\gamma}_{\eta}=\mathbf{D}^{\alpha}_{\mu} \mathbf{D}^{\beta}_{\nu}\mathbf{D}^{\gamma}_{\eta}=\mathbf{D}^{\alpha}_{\mu}\mathbf{D}^{\beta \times\gamma}_{\nu\times\eta}=\mathbf{D}^{\alpha\times(\beta\times\gamma)}_{\mu \times\nu\times\eta}\] by repeated applications of Proposition11. The statement then follows by Lemma10. For \(n\)-dimensional finite signed measures \(\mathbf{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})\), \(\mathbf{\beta}=(\beta_{1},\beta_{2}\), \(\ldots,\beta_{n})\) on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively, we let \[\mathbf{\alpha}\times\mathbf{\beta}\coloneqq(\alpha_{1}\times\beta_{1},\alpha_{2} \times\beta_{2},\ldots,\alpha_{n}\times\beta_{n})\] be the _coordinate-wise product_ of \(\mathbf{\alpha}\) and \(\mathbf{\beta}\), which is an \(n\)-dimensional finite signed measure on \((S,\mathcal{F})\times(T,\mathcal{G})\), as underscored by the first claim of Proposition11. No confusion will arise from this generalization of the symbol "\(\times\)" since \(\mathbf{\alpha}\times\mathbf{\beta}\) reduces to \(\alpha\times\beta\) for \(1\)-dimensional \(\mathbf{\alpha}=\alpha\), \(\mathbf{\beta}=\beta\). We also let \(\odot^{n}\) denote the coordinate-wise product function on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\), i.e., \[\odot^{n}(\mathbf{x},\mathbf{y})=(x_{1}y_{1},\ldots,x_{n}y_{n})\] for \(\mathbf{x}=(x_{1},\ldots,x_{n})\), \(\mathbf{y}=(y_{1},\ldots,y_{n})\in\mathbb{R}^{n}\). We also extend this notation to the case where the coordinates are function of range \(\mathbb{R}\), in the natural way. This notation is useful for extending the previous result to vector measures: **Proposition 13**.: _Let \(\mathbf{\alpha}\), \(\mathbf{\beta}\) be \(n\)-dimensional finite signed measures and let \(\mu\), \(\nu\) be \(\sigma\)-finite positive measures dominating \(\mathbf{\alpha}\), \(\mathbf{\beta}\), respectively. 
Then \(\mu\times\nu\) dominates \(\mathbf{\alpha}\times\mathbf{\beta}\) and \(\mathbf{D}^{\alpha\times\mathbf{\beta}}_{\mu\times\nu}=\odot^{n}(\mathbf{D}^{\alpha}_{\mu },\mathbf{D}^{\beta}_{\nu})\)._ Proof.: By definition \(\mu\) dominates \(\mathbf{\alpha}\) if and only if \(\mu\) dominates each coordinate of \(\mathbf{\alpha}\) and likewise for \(\nu\) and \(\mathbf{\beta}\) and for \(\mu\times\nu\) and \(\mathbf{\alpha}\times\mathbf{\beta}\). The proposition thus directly follows from the coordinate-wise application of Proposition11. The following proposition is the technical heart of the paper, and the culmination of the "trivial" machinery established thus far: **Proposition 14**.: (Theorem 4) _Let \(\mathbf{\alpha}\), \(\mathbf{\alpha}^{\prime}\), \(\mathbf{\beta}\), \(\mathbf{\beta}^{\prime}\) be \(n\)-dimensional finite signed measures such that \(\mathsf{LH}(\mathbf{\alpha})\subseteq\mathsf{LH}(\mathbf{\alpha}^{\prime})\), \(\mathsf{LH}(\mathbf{\beta})\subseteq\mathsf{LH}(\mathbf{\beta}^{\prime})\). Then \(\mathsf{LH}(\mathbf{\alpha}\times\mathbf{\beta})\subseteq\mathsf{LH}(\mathbf{\alpha}^{ \prime}\times\mathbf{\beta}^{\prime})\)._ Proof.: We can restrict our attention to the case \(\mathbf{\beta}=\mathbf{\beta}^{\prime}\) as the general case will then follow by a symmetric argument. Let \(\mu\), \(\mu^{\prime}\) and \(\nu\) be \(\sigma\)-finite positive measures dominating \(\mathbf{\alpha}\), \(\mathbf{\alpha}^{\prime}\) and \(\mathbf{\beta}\), respectively, on their respective spaces. (E.g., \(\mu=|\mathbf{\alpha}|\), etc.) Then \(\mu\times\nu\) dominates \(\mathbf{\alpha}\times\mathbf{\beta}\) and \[\mathbf{D}_{\mu\times\nu}^{\mathbf{\alpha}\times\mathbf{\beta}}=\odot^{n}(\mathbf{D}_{\mu}^{ \mathbf{\alpha}},\mathbf{D}_{\nu}^{\mathbf{\beta}}) \tag{8}\] by Proposition 13. Noting that \[\mathbf{x}^{T}(\odot^{n}(\mathbf{y},\mathbf{z}))=\sum_{i=1}^{n}x_{i}y_{i}z_{i}=(\odot^{n} (\mathbf{x},\mathbf{z}))^{T}\mathbf{y} \tag{9}\] for all \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n})\), \(\mathbf{z}=(z_{1},z_{2},\ldots,z_{n})\in\mathbb{R}^{n}\), we define \[\mathbf{x}_{t}^{*}:=\odot^{n}(\mathbf{x}^{*},\mathbf{D}_{\nu}^{\mathbf{\beta}}(t))\] for all \(t\in T\), \(\mathbf{x}^{*}\in\mathbb{R}^{n}\), so that \[\mathbf{x}^{*T}\mathbf{D}_{\mu\times\nu}^{\mathbf{\alpha}\times\mathbf{\beta}}(s,t)=\mathbf{x}_{t }^{*T}\mathbf{D}_{\mu}^{\mathbf{\alpha}}(s) \tag{10}\] for all \((s,t)\in S\times T\), \(\mathbf{x}^{*}\in\mathbb{R}^{n}\), by (8), (9). 
Then \[\|\mathbf{x}^{*}\|_{\mathsf{LH}(\mathbf{\alpha}\times\mathbf{\beta})} =\mathbf{x}^{*T}(\mathbf{\alpha}\times\mathbf{\beta})([\mathbf{[x}^{*T}\mathbf{D}_{ \mu\times\nu}^{\mathbf{\alpha}\times\mathbf{\beta}}>0])\] \[=\int_{[\mathbf{x}^{*T}\mathbf{D}_{\mu\times\nu}^{\mathbf{\alpha}\times\mathbf{ \beta}}>0]}\mathbf{x}_{t}^{*T}\mathbf{D}_{\mu}^{\mathbf{\alpha}}(s)\,\mu(\mathrm{d}s)\,\nu (\mathrm{d}t)\] \[=\int_{T}\mathbf{x}_{t}^{*T}\mathbf{\alpha}([\mathbf{[x}_{t}^{*T}\mathbf{D}_{\mu} ^{\mathbf{\alpha}}>0])\,\nu(\mathrm{d}t)\] \[=\int_{T}\|\mathbf{x}_{t}^{*}\|_{\mathsf{LH}(\mathbf{\alpha})}\,\nu( \mathrm{d}t)\] \[\leq\int_{T}\|\mathbf{x}_{t}^{*}\|_{\mathsf{LH}(\mathbf{\alpha}^{\prime} )}\,\nu(\mathrm{d}t)\] \[=\|\mathbf{x}^{*}\|_{\mathsf{LH}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta })}\] for all \(\mathbf{x}^{*}\in\mathbb{R}^{n}\), where the first and fifth equalities follow by Proposition 9, where the second and fourth equalities follow by linearity of the integral, where the third equality follows by (10) and by Fubini's theorem (Lemma 5), and where the inequality follows by Proposition3. Thus, by Proposition3 again, \(\mathsf{LH}(\mathbf{\alpha}\times\mathbf{\beta})\subseteq\mathsf{LH}(\mathbf{\alpha}^{\prime} \times\mathbf{\beta})\). As a consequence of Proposition14, \(\mathsf{LH}(\mathbf{\alpha}\times\mathbf{\beta})\) is uniquely determined by \(\mathsf{LH}(\mathbf{\alpha})\) and \(\mathsf{LH}(\mathbf{\beta})\), in the sense of the following corollary: **Corollary 1**.: (Theorem1) _Let \(\mathbf{\alpha}\), \(\mathbf{\alpha}^{\prime}\), \(\mathbf{\beta}\), \(\mathbf{\beta}^{\prime}\) be \(n\)-dimensional finite signed measures. If \(\mathsf{LH}(\mathbf{\alpha})=\mathsf{LH}(\mathbf{\alpha}^{\prime})\), \(\mathsf{LH}(\mathbf{\beta})=\mathsf{LH}(\mathbf{\beta}^{\prime})\), then \(\mathsf{LH}(\mathbf{\alpha}\times\mathbf{\beta})=\mathsf{LH}(\mathbf{\alpha}^{\prime} \times\mathbf{\beta}^{\prime})\)._ Let \(H_{1}=\mathsf{LH}(\mathbf{\alpha})\), \(H_{2}=\mathsf{LH}(\mathbf{\beta})\) for some \(n\)-dimensional finite signed measures \(\mathbf{\alpha}\), \(\mathbf{\beta}\). We define the _Lorenz product_ of \(H_{1}\) and \(H_{2}\), denoted by \(H_{1}H_{2}\), to be the Lorenz hull \(H=\mathsf{LH}(\mathbf{\alpha}\times\mathbf{\beta})\). This product is well-defined by Corollary1. We note that Theorem3 of the introduction is a direct corollary of Proposition14. The Lorenz product is commutative since \(\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta})=\mathsf{LS}(\mathbf{\beta}\times\mathbf{ \alpha})\) for all finite signed vector measures \(\mathbf{\alpha}\), \(\mathbf{\beta}\), even while \(\mathbf{\alpha}\times\mathbf{\beta}\) and \(\mathbf{\beta}\times\mathbf{\alpha}\) have underlying ground sets that are reversed Cartesian products. The associativity of the Lorenz product follows by associativity of the product of finite signed vector measures, itself obtained by coordinate-wise application of Proposition12. ## 7 Sums and Distributivity In this section we note that the Minkowski sum of Lorenz hulls is a Lorenz hull (which is not a new observation, since the set of all zonoids and all Lorenz hulls coincides), and show that the Lorenz product is distributive over such sums. The distributivity follows by an analogous property of products of "disjoint sums" of measures. 
We note that since the Lorenz product is commutative the distributivity is naturally both-sided, though it might be better to view the left- and right-distributivity as independent corollaries of the analogous identities for measures, since the measure product may not be commutative in more general cases, e.g., for quaternion-valued measures, as suggested by the material in Section 8. Let \((S,\mathcal{F})\), \((T,\mathcal{G})\) be measurable spaces with \(S\), \(T\) disjoint. We let \(\mathcal{F}\oplus\mathcal{G}\) denote the set \[\{A\cup B\,:\,A\in\mathcal{F},B\in\mathcal{G}\},\] which one can easily check to be a \(\sigma\)-algebra on \(S\cup T\), and call it _the union-sum of \(\mathcal{F}\) and \(\mathcal{G}\)_. For \(n\)-dimensional finite signed measures \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) on \((S,\mathcal{F})\) and \((T,\mathcal{G})\), respectively, we also let \(\mathbf{\alpha}\oplus\mathbf{\beta}\) denote the \(n\)-dimensional finite signed measure on \((S\cup T,\mathcal{F}\oplus\mathcal{G})\) defined by \[(\mathbf{\alpha}\oplus\mathbf{\beta})(A\cup B)=\mathbf{\alpha}(A)+\mathbf{\beta}(B)\] for all \(A\in\mathcal{F}\), \(B\in\mathcal{G}\). **Proposition 15**.: _Let \(\mathbf{\alpha}\), \(\mathbf{\beta}\) be \(n\)-dimensional finite signed measures on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively, with \(S\), \(T\) disjoint. Then \(\mathsf{LH}(\mathbf{\alpha}\oplus\mathbf{\beta})=\mathsf{LH}(\mathbf{\alpha})+\mathsf{LH}(\mathbf{\beta})\)._ Proof.: It is easy to check that \(\mathsf{LS}(\mathbf{\alpha}\oplus\mathbf{\beta})=\mathsf{LS}(\mathbf{\alpha})+\mathsf{LS}(\mathbf{\beta})\). The statement thus follows by the first part of Proposition 5. The product of finite signed vector measures is distributive over the direct sum operation "\(\oplus\)": **Proposition 16**.: _Let \(\mathbf{\alpha}\), \(\mathbf{\beta}\) be \(n\)-dimensional finite signed measures on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively, with \(S\), \(T\) disjoint. Let \(\mathbf{\tau}\) be an \(n\)-dimensional finite signed measure on \((\Omega,\mathcal{H})\). Then \((\mathbf{\alpha}\oplus\mathbf{\beta})\times\mathbf{\tau}=(\mathbf{\alpha}\times\mathbf{\tau})\oplus(\mathbf{\beta}\times\mathbf{\tau})\)._ Proof.: It is easy to check that the two measures in question have the same domain, i.e., that the union-sum of the \(\sigma\)-algebras of \((S,\mathcal{F})\times(\Omega,\mathcal{H})\) and \((T,\mathcal{G})\times(\Omega,\mathcal{H})\) coincides with the \(\sigma\)-algebra of \((S\cup T,\mathcal{F}\oplus\mathcal{G})\times(\Omega,\mathcal{H})\). Moreover, it is also easy to check that the two measures agree on sets of the form \(A\times B\) where \(A\in\mathcal{F}\oplus\mathcal{G}\), \(B\in\mathcal{H}\). The statement thus follows by Lemma 3. 
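For purely atomic measures with finitely many point masses the identity of Proposition 15 can be verified directly, since both Lorenz skeletons are then finite sets of subset sums and the equality \(\mathsf{LS}(\boldsymbol{\alpha}\oplus\boldsymbol{\beta})=\mathsf{LS}(\boldsymbol{\alpha})+\mathsf{LS}(\boldsymbol{\beta})\) underlying the proof becomes an equality of finite sets. The brute-force sketch below (ours; `lorenz_skeleton` and the other names are illustrative) checks this for two small planar examples on disjoint ground sets.

```python
import itertools
import numpy as np

def lorenz_skeleton(points):
    # range of a purely atomic n-dimensional measure: the set of all subset sums
    sums = set()
    for r in range(len(points) + 1):
        for A in itertools.combinations(range(len(points)), r):
            v = points[list(A)].sum(axis=0) if A else np.zeros(points.shape[1])
            sums.add(tuple(np.round(v, 9)))
    return sums

rng = np.random.default_rng(2)
a = rng.integers(-3, 4, size=(3, 2)).astype(float)   # point masses of alpha on S
b = rng.integers(-3, 4, size=(3, 2)).astype(float)   # point masses of beta on T

LS_sum = lorenz_skeleton(np.vstack([a, b]))          # LS(alpha (+) beta), S and T disjoint
minkowski = {tuple(np.round(np.add(x, y), 9))
             for x in lorenz_skeleton(a) for y in lorenz_skeleton(b)}
assert LS_sum == minkowski
print("LS(alpha (+) beta) = LS(alpha) + LS(beta) in this example.")
```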
**Proposition 17**.: (Theorem 2)_\((H_{1}+H_{2})H_{3}=H_{1}H_{3}+H_{2}H_{3}\) for any Lorenz hulls \(H_{1}\), \(H_{2}\), \(H_{3}\subseteq\mathbb{R}^{n}\)._ Proof.: This is a direct consequence of the last two propositions since for every \(n\)-dimensional Lorenz hulls \(H_{1}\), \(H_{2}\) there exist finite signed vector measures \(\mathbf{\alpha}\), \(\mathbf{\beta}\) such that \(\mathsf{LH}(\mathbf{\alpha})=H_{1}\), \(\mathsf{LH}(\mathbf{\beta})=H_{2}\) and such that \(\mathbf{\alpha}\oplus\mathbf{\beta}\) is defined and since \(\mathsf{LH}(\mathbf{\gamma})\mathsf{LH}(\mathbf{\tau})=\mathsf{LH}(\mathbf{\gamma}\times \mathbf{\tau})\) by definition of the Lorenz product for all \(n\)-dimensional finite signed vector measures \(\mathbf{\gamma}\), \(\mathbf{\tau}\). ## 8 The Lorenz Product for Complex Measures We generalize the above results to complex measures. In comparison with Definition 12 and Definition 15, we have the following definitions: **Definition 18**.: _A complex measure on a measurable space \((S,\mathcal{F})\) is a function \(\alpha:\mathcal{F}\to\mathbb{C}\) such that_ \[\alpha\Big{(}\bigcup_{i=1}^{\infty}A_{i}\Big{)}=\sum_{i=1}^{\infty}\alpha(A_ {i}) \tag{11}\] _for any collection \(\{A_{i}\}_{i\in\mathbb{N}}\) of pairwise disjoint elements of \(\mathcal{F}\)._ **Definition 19**.: _An \(n\)-dimensional complex measure on a measurable space \((S,\mathcal{F})\) is a function \(\mathbf{\alpha}:\mathcal{F}\to\mathbb{C}^{n}\) so that_ \[\mathbf{\alpha}(A)=(\alpha_{1}(A),\alpha_{2}(A),\ldots,\alpha_{n}(A))\] _for all \(A\in\mathcal{F}\), where each \(\alpha_{i}(A)\), \(1\leq i\leq n\), is a complex measure on \((S,\mathcal{F})\)._ We extend the coordinate-wise product function \(\odot^{n}\) to be on \(\mathbb{C}^{n}\times\mathbb{C}^{n}\), i.e., \[\odot^{n}(\mathbf{x},\mathbf{y})=(x_{1}y_{1},\ldots,x_{n}y_{n})\] for \(\mathbf{x}=(x_{1},\ldots,x_{n})\), \(\mathbf{y}=(y_{1},\ldots,y_{n})\in\mathbb{C}^{n}\). By replacing \(\mathbb{R}^{n}\) with \(\mathbb{C}^{n}\), replacing finite signed measures with complex measures, and replacing absolute values of real numbers with moduli of complex numbers, Definition13, the definition of being absolutely continuous with respect to a positive measure, Definition17, the definition of the operation "\(\oplus\)", and the definition of integrals of vector valued functions (4) can be generalized, while Lemma3, Lemma5, Lemma6, Lemma10, Lemma11 (the range of \(h\) becoming the unit circle of the complex plane), Proposition10, Proposition11, Proposition13, Proposition15, Proposition16 still hold. In particular the range of (the total variation of) an \(n\)-dimensional complex measure is bounded, and \(\boldsymbol{\alpha}\times\boldsymbol{\beta}\) for \(n\)-dimensional complex measures \(\boldsymbol{\alpha}\), \(\boldsymbol{\beta}\) is well-defined. For any \(n\) without ambiguity, let \(\psi:\mathbb{C}^{n}\to\mathbb{R}^{2n}\) be defined by \[\psi(\boldsymbol{z})=(\operatorname{Re}(z_{1}),\operatorname{Im}(z_{1}), \ldots,\operatorname{Re}(z_{n}),\operatorname{Im}(z_{n}))\] for all \(\boldsymbol{z}=(z_{1},\ldots,z_{n})\in\mathbb{C}^{n}\), where \(\operatorname{Re}(z)\) and \(\operatorname{Im}(z)\) are the real and imaginary part of \(z\), respectively, for \(z\in\mathbb{C}\). It is easy to check that \(\psi\) is a bijection, and that both \(\psi\) and \(\psi^{-1}\) are linear and continuous (thus measurable). 
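The map \(\psi\) is elementary, but since everything in the complex case is routed through it, the following short sketch (ours; the names `psi` and `psi_inv` are illustrative) records the interleaving convention explicitly and checks the bijection and real-linearity claims on a small example.

```python
import numpy as np

def psi(z):
    # (Re z_1, Im z_1, ..., Re z_n, Im z_n)
    z = np.asarray(z, dtype=complex)
    return np.column_stack([z.real, z.imag]).ravel()

def psi_inv(x):
    x = np.asarray(x, dtype=float).reshape(-1, 2)
    return x[:, 0] + 1j * x[:, 1]

z = np.array([1 + 2j, -3 + 0.5j])
w = np.array([0.5 - 1j, 2 + 2j])
assert np.allclose(psi_inv(psi(z)), z)            # psi is a bijection
assert np.allclose(psi(z + w), psi(z) + psi(w))   # and (real-)linear
print("psi interleaves real and imaginary parts; psi_inv inverts it.")
```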
For any \(n\)-dimensional complex measure \(\boldsymbol{\alpha}\) on \((S,\mathcal{F})\), we let \(\langle\boldsymbol{\alpha}\rangle\) be the \(2n\)-dimensional finite signed measure on \((S,\mathcal{F})\) defined by \[\langle\boldsymbol{\alpha}\rangle(A)=\psi(\boldsymbol{\alpha}(A))\] for all \(A\in\mathcal{F}\). Obviously, a positive measure \(\mu\) dominates \(\boldsymbol{\alpha}\) if and only if \(\mu\) dominates \(\langle\boldsymbol{\alpha}\rangle\). Moreover, it is easy to check that \[\langle\boldsymbol{\alpha}\rangle(A)=\int_{A}\psi\circ\boldsymbol{D}_{\mu}^{ \boldsymbol{\alpha}}\,\mathrm{d}\mu \tag{12}\] for all \(A\in\mathcal{F}\), for \(\sigma\)-finite \(\mu\) that dominates \(\boldsymbol{\alpha}\). By the fact that the convex hull of \(A\subseteq\mathbb{C}^{n}\) is the set of convex combinations of elements of \(A\) and that \(\psi\) is a linear bijection, one has \[\mathsf{LH}(\boldsymbol{\alpha})=\psi^{-1}(\mathsf{LH}(\langle\boldsymbol{ \alpha}\rangle)),\] which implies that \(\mathsf{LH}(\boldsymbol{\alpha})\) is compact since \(\psi^{-1}\) is continuous as a function. It is also easy to check that \(\mathsf{LH}(\boldsymbol{\alpha})\) is centrally symmetric and convex while containing \(\boldsymbol{0}\in\mathbb{C}^{n}\). Moreover, the following proposition holds: **Proposition 18**.: _Let \(\boldsymbol{\alpha}\), \(\boldsymbol{\alpha}^{\prime}\) be \(n\)-dimensional complex measures. Then \(\mathsf{LH}(\boldsymbol{\alpha})\subseteq\mathsf{LH}(\boldsymbol{\alpha}^{ \prime})\) if and only if \(\mathsf{LH}(\langle\boldsymbol{\alpha}\rangle)\subseteq\mathsf{LH}(\langle \boldsymbol{\alpha}^{\prime}\rangle)\)._ The isomorphic product function \[\boxtimes^{2n}:\mathbb{R}^{2n}\times\mathbb{R}^{2n}\to\mathbb{R}^{2n}\] with respect to \(\odot^{n}:\mathbb{C}^{n}\times\mathbb{C}^{n}\to\mathbb{C}^{n}\) is defined by \[\boxtimes^{2n}(\mathbf{x},\mathbf{y}) =\psi\big{(}\odot^{n}(\psi^{-1}(\mathbf{x}),\psi^{-1}(\mathbf{y}))\big{)}\] \[=(x_{1}y_{1}-x_{2}y_{2},x_{1}y_{2}+x_{2}y_{1},\ldots,\] \[x_{2n-1}y_{2n-1}-x_{2n}y_{2n},x_{2n-1}y_{2n}+x_{2n}y_{2n-1})\] for \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{2n-1},x_{2n})\), \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{2n-1},y_{2n})\in\mathbb{R}^{2n}\). **Proposition 19**.: _Let \(\mathbf{\alpha}\), \(\mathbf{\beta}\) be \(n\)-dimensional complex measures on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively. Let \(\mu\), \(\nu\) be \(\sigma\)-finite positive measures on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively, such that \(\mu\) dominates \(\mathbf{\alpha}\), \(\nu\) dominates \(\mathbf{\beta}\). Let \((S\times T,\mathcal{H})=(S,\mathcal{F})\times(T,\mathcal{G})\). 
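The explicit coordinate formula for \(\boxtimes^{2n}\) is just the coordinate-wise complex multiplication read through \(\psi\); the quick numeric check below (ours; names illustrative, with \(n=3\)) confirms that the formula agrees with \(\psi\big{(}\odot^{n}(\psi^{-1}(\boldsymbol{x}),\psi^{-1}(\boldsymbol{y}))\big{)}\).

```python
import numpy as np

def psi(z):
    z = np.asarray(z, dtype=complex)
    return np.column_stack([z.real, z.imag]).ravel()

def psi_inv(x):
    x = np.asarray(x, dtype=float).reshape(-1, 2)
    return x[:, 0] + 1j * x[:, 1]

def boxtimes(x, y):
    # the explicit coordinate formula for the isomorphic product on R^{2n}
    x, y = np.asarray(x, float).reshape(-1, 2), np.asarray(y, float).reshape(-1, 2)
    out = np.empty_like(x)
    out[:, 0] = x[:, 0] * y[:, 0] - x[:, 1] * y[:, 1]
    out[:, 1] = x[:, 0] * y[:, 1] + x[:, 1] * y[:, 0]
    return out.ravel()

rng = np.random.default_rng(3)
x, y = rng.normal(size=6), rng.normal(size=6)      # two vectors in R^{2n}, n = 3
assert np.allclose(boxtimes(x, y), psi(psi_inv(x) * psi_inv(y)))
print("boxtimes agrees with psi(odot(psi^{-1}(x), psi^{-1}(y))).")
```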
Then_ \[\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle(E)=\int_{E}\boxtimes^{2n}(\mathbf{D}_{ \mu}^{\langle\mathbf{\alpha}\rangle}(s),\mathbf{D}_{\nu}^{\langle\mathbf{\beta}\rangle}(t ))\left(\mu\times\nu\right)(\mathrm{d}(s,t))\] _for all \(E\in\mathcal{H}\)._ Proof.: One has \[\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle(E) =\int_{E}\psi\circ\mathbf{D}_{\mu\times\nu}^{\mathbf{\alpha}\times\mathbf{ \beta}}\,\mathrm{d}(\mu\times\nu)\] \[=\int_{E}\psi\big{(}\odot^{n}(\mathbf{D}_{\mu}^{\mathbf{\alpha}}(s),\mathbf{D }_{\nu}^{\mathbf{\beta}}(t))\big{)}\left(\mu\times\nu\right)(\mathrm{d}(s,t))\] \[=\int_{E}\boxtimes^{2n}(\mathbf{D}_{\mu}^{\langle\mathbf{\alpha}\rangle}(s ),\mathbf{D}_{\nu}^{\langle\mathbf{\beta}\rangle}(t))\left(\mu\times\nu\right)( \mathrm{d}(s,t))\] for all \(E\in\mathcal{H}\), where the first and last equalities follow by (12), and where the second equality follows by the complex version of Proposition13. **Proposition 20**.: _Let \(\mathbf{\alpha}\), \(\mathbf{\alpha}^{\prime}\), \(\mathbf{\beta}\), \(\mathbf{\beta}^{\prime}\) be \(n\)-dimensional complex measures. If \(\mathsf{LH}(\mathbf{\alpha})\subseteq\mathsf{LH}(\mathbf{\alpha}^{\prime})\), \(\mathsf{LH}(\mathbf{\beta})\subseteq\mathsf{LH}(\mathbf{\beta}^{\prime})\), then \(\mathsf{LH}(\mathbf{\alpha}\times\mathbf{\beta})\subseteq\mathsf{LH}(\mathbf{\alpha}^{ \prime}\times\mathbf{\beta}^{\prime})\)._ Proof.: Similarly to the proof of Proposition14, we can assume \(\mathbf{\beta}=\mathbf{\beta}^{\prime}\). Let \(\mathbf{\alpha}\) be on \((S,\mathcal{F})\), \(\mathbf{\alpha}^{\prime}\) be on \((S^{\prime},\mathcal{F}^{\prime})\), \(\mathbf{\beta}\) be on \((T,\mathcal{G})\). Let \(\mu\), \(\mu^{\prime}\) and \(\nu\) be \(\sigma\)-finite positive measures dominating \(\mathbf{\alpha}\), \(\mathbf{\alpha}^{\prime}\) and \(\mathbf{\beta}\), respectively, on their respective spaces. Then \(\mu\times\nu\) dominates \(\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle\) and \[\mathbf{D}_{\mu\times\nu}^{\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle}=\boxtimes^ {2n}(\mathbf{D}_{\mu}^{\langle\mathbf{\alpha}\rangle},\mathbf{D}_{\nu}^{\langle\mathbf{\beta} \rangle}) \tag{13}\] by Proposition19. Noting that \[\mathbf{x}^{T}(\boxtimes^{2n}(\mathbf{y},\mathbf{z}))\] \[=\sum_{k=1}^{n}x_{2k-1}(y_{2k-1}z_{2k-1}-y_{2k}z_{2k})+x_{2k}(y_{2k -1}z_{2k}+y_{2k}z_{2k-1})\] \[=\sum_{k=1}^{n}(x_{2k-1}z_{2k-1}+x_{2k}z_{2k})y_{2k-1}+(-x_{2k-1}z _{2k}+x_{2k}z_{2k-1})y_{2k}\] \[=(\boxtimes^{2n}(\mathbf{x},\overline{\mathbf{z}}))^{T}\mathbf{y} \tag{14}\] for all \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{2n-1},x_{2n})\), \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{2n-1},y_{2n})\), \(\mathbf{z}=(z_{1},z_{2},\ldots,\allowbreak z_{2n-1},z_{2n})\in\mathbb{R}^{2n}\), where \(\overline{\mathbf{z}}\coloneqq\psi(\overline{\psi}^{-1}(\mathbf{z}))=(z_{1},-z_{2}, \ldots,z_{2n-1},-z_{2n})\), we define \[\mathbf{x}_{\overline{t}}^{*}\coloneqq\boxtimes^{2n}\Big{(}\mathbf{x}^{*},\overline{ \mathbf{D}_{\nu}^{(\mathbf{\beta})}(t)}\Big{)}\] for all \(t\in T\), \(\mathbf{x}^{*}\in\mathbb{R}^{2n}\), so that \[\mathbf{x}^{*T}\mathbf{D}_{\mu\times\nu}^{(\mathbf{\alpha}\times\mathbf{\beta})}(s,t)=\mathbf{x}_ {\overline{t}}^{*T}\mathbf{D}_{\mu}^{(\mathbf{\alpha})}(s) \tag{15}\] for all \((s,t)\in S\times T\), \(\mathbf{x}^{*}\in\mathbb{R}^{2n}\), by (13), (14). 
Since \(\mathsf{LH}(\langle\mathbf{\alpha}\rangle)\subseteq\mathsf{LH}(\langle\mathbf{\alpha}^{\prime}\rangle)\) by Proposition 18, one has \[\|\mathbf{x}^{*}\|_{\mathsf{LH}(\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle)} =\mathbf{x}^{*T}\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle([\![\mathbf{x}^{*T}\mathbf{D}_{\mu\times\nu}^{\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle}>0]\!])\] \[=\int_{[\![\mathbf{x}^{*T}\mathbf{D}_{\mu\times\nu}^{\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle}>0]\!]}\mathbf{x}^{*T}\mathbf{D}_{\mu\times\nu}^{\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle}(s,t)\,(\mu\times\nu)(\mathrm{d}(s,t))\] \[=\int_{T}\int_{[\![\mathbf{x}_{\overline{t}}^{*T}\mathbf{D}_{\mu}^{\langle\mathbf{\alpha}\rangle}>0]\!]}\mathbf{x}_{\overline{t}}^{*T}\mathbf{D}_{\mu}^{\langle\mathbf{\alpha}\rangle}(s)\,\mu(\mathrm{d}s)\,\nu(\mathrm{d}t)\] \[=\int_{T}\mathbf{x}_{\overline{t}}^{*T}\langle\mathbf{\alpha}\rangle([\![\mathbf{x}_{\overline{t}}^{*T}\mathbf{D}_{\mu}^{\langle\mathbf{\alpha}\rangle}>0]\!])\,\nu(\mathrm{d}t)\] \[=\int_{T}\|\mathbf{x}_{\overline{t}}^{*}\|_{\mathsf{LH}(\langle\mathbf{\alpha}\rangle)}\,\nu(\mathrm{d}t)\] \[\leq\int_{T}\|\mathbf{x}_{\overline{t}}^{*}\|_{\mathsf{LH}(\langle\mathbf{\alpha}^{\prime}\rangle)}\,\nu(\mathrm{d}t)\] \[=\|\mathbf{x}^{*}\|_{\mathsf{LH}(\langle\mathbf{\alpha}^{\prime}\times\mathbf{\beta}\rangle)}\] for all \(\mathbf{x}^{*}\in\mathsf{Sph}_{\mathbb{R}^{2n}}\), where the first and fifth equalities follow by Proposition 9, where the second and fourth equalities follow by linearity of the integral, where the third equality follows by (15) and by Fubini's theorem (Lemma 5), and where the inequality follows by Proposition 3. Thus, by Proposition 3 again, \(\mathsf{LH}(\langle\mathbf{\alpha}\times\mathbf{\beta}\rangle)\subseteq\mathsf{LH}(\langle\mathbf{\alpha}^{\prime}\times\mathbf{\beta}\rangle)\), so that \(\mathsf{LH}(\mathbf{\alpha}\times\mathbf{\beta})\subseteq\mathsf{LH}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta})\) by Proposition 18. It follows that Corollary 1 holds for \(n\)-dimensional complex measures \(\mathbf{\alpha}\), \(\mathbf{\alpha}^{\prime}\), \(\mathbf{\beta}\), \(\mathbf{\beta}^{\prime}\), so that the Lorenz product can also be defined for Lorenz hulls of complex vector measures, for which inclusion-preservation and Proposition 17 still hold. ## 9 The Lorenz Product of Lorenz Skeletons It is natural to ask whether the Lorenz skeleton \(\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta})\) remains invariant while the underlying \(n\)-dimensional finite signed measures \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) vary in a way that keeps \(\mathsf{LS}(\mathbf{\alpha})\) and \(\mathsf{LS}(\mathbf{\beta})\) unchanged. We give a positive answer to this question, starting by analysing a discrete case. **Definition 20**.: _Let \(p\) denote a chosen norm that is compatible with the Euclidean topology on \(\mathbb{R}^{n}\). Let_ \[d(\boldsymbol{x},A)=\inf_{\boldsymbol{y}\in A}p(\boldsymbol{x}-\boldsymbol{y})\] _for all \(\boldsymbol{x}\in\mathbb{R}^{n}\), \(A\subseteq\mathbb{R}^{n}\). The Hausdorff distance \(d_{\mathrm{H}}(A,B)\) induced by \(p\) between subsets \(A\) and \(B\) of \(\mathbb{R}^{n}\) is defined by_ \[d_{\mathrm{H}}(A,B)=\max\{\sup_{\boldsymbol{x}\in A}d(\boldsymbol{x},B),\sup_{\boldsymbol{y}\in B}d(\boldsymbol{y},A)\}\] _for all \(A\), \(B\subseteq\mathbb{R}^{n}\)._ Two closed sets \(A\), \(B\subseteq\mathbb{R}^{n}\) are equal if and only if \(d_{\mathrm{H}}(A,B)=0\). 
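For finite point sets the two suprema in Definition 20 are maxima, and the distance can be computed directly. The minimal sketch below (ours; `hausdorff_1norm` is an illustrative name) does so using the \(1\)-norm, which is the norm fixed for this section just below, as the underlying norm \(p\).

```python
import numpy as np

def hausdorff_1norm(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=2)   # pairwise 1-norm distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 0.5)]
print(hausdorff_1norm(A, B))          # 0.5 for these two point sets
assert hausdorff_1norm(A, A) == 0.0   # zero distance exactly when the closed sets agree
```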
Let \(\|\cdot\|_{1}\) denote the \(1\)-norm on \(\mathbb{R}^{n}\), i.e., \[\|\boldsymbol{x}\|_{1}=\sum_{i=1}^{n}|x_{i}|\] for all \(\boldsymbol{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\). All Hausdorff distances appearing in this section should be understood as induced by the \(1\)-norm. **Proposition 21**.: _Let \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\) be an \(n\)-dimensional finite signed measure on \((S,\mathcal{F})\). Then_ \[\|\boldsymbol{D}^{\boldsymbol{\alpha}}_{|\boldsymbol{\alpha}|}\|_{1}=1\] _and_ \[\left|\frac{\mathrm{d}\alpha_{i}}{\mathrm{d}|\boldsymbol{\alpha}|}\right|=\frac{\mathrm{d}|\alpha_{i}|}{\mathrm{d}|\boldsymbol{\alpha}|}\leq 1\] \(|\boldsymbol{\alpha}|\)_-almost everywhere._ Proof.: The fact \[\left|\frac{\mathrm{d}\alpha_{i}}{\mathrm{d}|\boldsymbol{\alpha}|}\right|=\frac{\mathrm{d}|\alpha_{i}|}{\mathrm{d}|\boldsymbol{\alpha}|}\] follows by Lemma 11 applied with \(|\boldsymbol{\alpha}|\) in place of \(\mu\) and \(\frac{\mathrm{d}\alpha_{i}}{\mathrm{d}|\boldsymbol{\alpha}|}\) in place of \(g\). That \[\frac{\mathrm{d}|\alpha_{i}|}{\mathrm{d}|\boldsymbol{\alpha}|}\leq 1\] follows by Lemma 4 applied with \([0,1]\) in place of \(E\) since \(0\leq|\alpha_{i}|(A)\leq|\mathbf{\alpha}|(A)\) for all \(A\in\mathcal{F}\). One then has \[\int_{A}(\|\mathbf{D}^{\mathbf{\alpha}}_{|\mathbf{\alpha}|}\|_{1}-1)\,\mathrm{d}|\mathbf{\alpha}|=\sum_{i=1}^{n}\int_{A}\left|\frac{\mathrm{d}\alpha_{i}}{\mathrm{d}|\mathbf{\alpha}|}\right|\mathrm{d}|\mathbf{\alpha}|-|\mathbf{\alpha}|(A)=0\] and it follows that \(\|\mathbf{D}^{\mathbf{\alpha}}_{|\mathbf{\alpha}|}\|_{1}=1\)\(|\mathbf{\alpha}|\)-almost everywhere by Lemma 4 applied with \(\{0\}\) in place of \(E\). Let \[\mathrm{U}^{n}_{\|\cdot\|_{1}}\coloneqq\{\mathbf{x}\in\mathbb{R}^{n}\,:\,\|\mathbf{x}\|_{1}=1\}\] be the unit sphere in \(\mathbb{R}^{n}\) with respect to the \(1\)-norm. It follows by Proposition 21 that \(\mathbf{D}^{\mathbf{\alpha}}_{|\mathbf{\alpha}|}(s)\in\mathrm{U}^{n}_{\|\cdot\|_{1}}\) for all \(s\in S\) except for \(s\in E\) for some \(E\in\mathcal{F}\) such that \(|\mathbf{\alpha}|(E)=0\). One notes the following basic fact: if \(z_{1}\), \(\ldots\), \(z_{N}\) are real numbers then there exists \(I\subseteq[N]\) such that \[\left|\sum_{k\in I}z_{k}\right|\geq\frac{1}{2}\sum_{k=1}^{N}|z_{k}|.\] Then for a finite signed measure \(\beta\) on \((T,\mathcal{G})\), \[|\beta|(T)\leq 2\sup_{B\in\mathcal{G}}|\beta(B)|\] by definition of the total variation \(|\beta|\) of \(\beta\). Considering an \(n\)-dimensional finite signed measure \(\mathbf{\beta}\) on \((T,\mathcal{G})\) and its total variation \(|\mathbf{\beta}|\) with respect to the \(1\)-norm, one has \[|\mathbf{\beta}|(T)\leq 2nM\] where \(M\) is any nonnegative number such that \(\mathsf{LS}(\mathbf{\beta})\subseteq\mathsf{Cube}(M)\), where \[\mathsf{Cube}(M)\coloneqq\{(z_{1},\ldots,z_{n})\in\mathbb{R}^{n}\,:\,|z_{i}|\leq M,1\leq i\leq n\}\] for \(M\geq 0\). 
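In the purely atomic case both conclusions of Proposition 21 can be read off directly: at a point \(s\) carrying mass \(\boldsymbol{a}(s)\), the total variation (with respect to the \(1\)-norm) has mass \(\|\boldsymbol{a}(s)\|_{1}\) and the derivative is \(\boldsymbol{a}(s)/\|\boldsymbol{a}(s)\|_{1}\). The sketch below (ours; names illustrative) verifies this on a random discrete example.

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.normal(size=(6, 3))                   # point masses of alpha; almost surely nonzero
tv = np.abs(a).sum(axis=1)                    # point masses of |alpha| (1-norm total variation)
D = a / tv[:, None]                           # D^alpha_{|alpha|} evaluated at each point
assert np.allclose(np.abs(D).sum(axis=1), 1)  # ||D^alpha_{|alpha|}||_1 = 1  |alpha|-a.e.
assert np.all(np.abs(D) <= 1 + 1e-12)         # |d alpha_i / d|alpha|| <= 1
print("Proposition 21 holds at every atom of this discrete example.")
```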
**Proposition 22**.: _Let \(\mathbf{\alpha}\), \(\mathbf{\alpha}^{\prime}\), \(\mathbf{\beta}\), \(\mathbf{\beta}^{\prime}\) be \(n\)-dimensional finite signed measures on \((S,\mathcal{F})\), \((S^{\prime},\mathcal{F}^{\prime})\), \((T,\mathcal{G})\), \((T^{\prime},\mathcal{G}^{\prime})\), respectively, where all four measurable spaces are countable sets with their discrete \(\sigma\)-algebras, and where \(\mathsf{LS}(\mathbf{\alpha})\), \(\mathsf{LS}(\mathbf{\alpha}^{\prime})\), \(\mathsf{LS}(\mathbf{\beta})\), \(\mathsf{LS}(\mathbf{\beta}^{\prime})\) are all included in \(\mathsf{Cube}(M)\) for a fixed \(M>0\). Let \(\varepsilon>0\); then_ \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta}),\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime})\big{)}<\varepsilon\] _whenever \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}),\mathsf{LS}(\mathbf{\alpha}^{\prime})\big{)}<\delta\), \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\beta}),\mathsf{LS}(\mathbf{\beta}^{\prime})\big{)}<\delta\) for \(\delta<\varepsilon/4nM\)._ Proof.: First note that all functions in this proof are measurable since the underlying spaces are discrete, and note that \(|\mathbf{\alpha}|(S)\), \(|\mathbf{\alpha}^{\prime}|(S^{\prime})\), \(|\mathbf{\beta}|(T)\), \(|\mathbf{\beta}^{\prime}|(T^{\prime})\leq 2nM\). Let \(\delta<\varepsilon/4nM\). If \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}),\mathsf{LS}(\mathbf{\alpha}^{\prime})\big{)}<\delta\), then in particular for any measurable \(f:S\to\{0,1\}\) there exists a measurable \(f^{\prime}:S^{\prime}\to\{0,1\}\) such that \[\bigg{\|}\int_{S}f\,\mathrm{d}\mathbf{\alpha}-\int_{S^{\prime}}f^{\prime}\,\mathrm{d}\mathbf{\alpha}^{\prime}\bigg{\|}_{1}<\delta.\] Now for any \(h:S\times T\to\{0,1\}\) one can construct \(h^{\prime}:S^{\prime}\times T\to\{0,1\}\) such that \[\bigg{\|}\int_{S}h_{t}\,\mathrm{d}\mathbf{\alpha}-\int_{S^{\prime}}h^{\prime}_{t}\,\mathrm{d}\mathbf{\alpha}^{\prime}\bigg{\|}_{1}<\delta\] for all \(t\in T\), where \(h_{t}:S\to\{0,1\}\), \(h^{\prime}_{t}:S^{\prime}\to\{0,1\}\) are defined by \[h_{t}(s)=h(s,t),\quad h^{\prime}_{t}(s^{\prime})=h^{\prime}(s^{\prime},t)\] for all \(s\in S\), \(s^{\prime}\in S^{\prime}\). 
Then \[\bigg{\|}\int_{S\times T}h\,\mathrm{d}(\mathbf{\alpha}\times\mathbf{\beta })-\int_{S^{\prime}\times T}h^{\prime}\,\mathrm{d}(\mathbf{\alpha}^{\prime}\times \mathbf{\beta})\bigg{\|}_{1}\] \[=\bigg{\|}\int_{T}\int_{S}h(s,t)\odot^{n}(\mathbf{D}^{\mathbf{\alpha}}_{| \mathbf{\alpha}|}(s),\mathbf{D}^{\mathbf{\beta}}_{|\mathbf{\beta}|}(t))\,|\mathbf{\alpha}|(\mathrm{ d}s)\,|\mathbf{\beta}|(\mathrm{d}t)\] \[\qquad\qquad\qquad-\int_{T}\int_{S}^{\prime}h(s^{\prime},t)\odot ^{n}(\mathbf{D}^{\mathbf{\alpha}^{\prime}}_{|\mathbf{\alpha}^{\prime}|}(s^{\prime}),\mathbf{D }^{\mathbf{\beta}}_{|\mathbf{\beta}|}(t))\,|\mathbf{\alpha}^{\prime}|(\mathrm{d}s^{\prime} )\,|\mathbf{\beta}|(\mathrm{d}t)\bigg{\|}_{1}\] \[=\bigg{\|}\int_{T}\odot^{n}\bigg{(}\bigg{(}\bigg{(}\int_{S}h_{t} \mathbf{D}^{\mathbf{\alpha}}_{|\mathbf{\alpha}|}\,\mathrm{d}|\mathbf{\alpha}|-\int_{S^{\prime }}h^{\prime}_{t}\mathbf{D}^{\mathbf{\alpha}^{\prime}}_{|\mathbf{\alpha}^{\prime}|}\, \mathrm{d}|\mathbf{\alpha}^{\prime}|\bigg{)},\mathbf{D}^{\mathbf{\beta}}_{|\mathbf{\beta}|}(t )\bigg{)}\,|\mathbf{\beta}|(\mathrm{d}t)\bigg{\|}_{1}\] \[\leq\int_{T}\bigg{\|}\odot^{n}\bigg{(}\bigg{(}\int_{S}h_{t}\, \mathrm{d}\mathbf{\alpha}-\int_{S^{\prime}}h^{\prime}_{t}\,\mathrm{d}\mathbf{\alpha} ^{\prime}\bigg{)},\mathbf{D}^{\mathbf{\beta}}_{|\mathbf{\beta}|}(t)\bigg{)}\bigg{\|}_{1} \,|\mathbf{\beta}|(\mathrm{d}t)\] \[\leq\int_{T}\bigg{\|}\bigg{(}\int_{S}h_{t}\,\mathrm{d}\mathbf{\alpha} -\int_{S^{\prime}}h^{\prime}_{t}\,\mathrm{d}\mathbf{\alpha}^{\prime}\bigg{)} \bigg{\|}_{1}\,|\mathbf{\beta}|(\mathrm{d}t)<2nM\delta\] where the first equality follows by Proposition13 and by Fubini's theorem Lemma5, where the second equality follows by linearity of integrations, and where the second inequality follows by Proposition21. This shows that for each \(\mathbf{x}\in\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta})\) there exists \(\mathbf{x}^{\prime}\in\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta})\) such that \(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}<2nM\delta\). A symmetric argument shows that for each \(\mathbf{x}^{\prime}\in\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta})\) there exists \(\mathbf{x}\in\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta})\) such that \(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}<2nM\delta\). Then \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta}),\mathsf{LS}(\mathbf{ \alpha}^{\prime}\times\mathbf{\beta})\big{)}\leq 2nM\delta\) by definition. Similarly, \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}), \mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime})\big{)}\leq 2nM\delta\). It follows that \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta}),\mathsf{LS}(\mathbf{ \alpha}^{\prime}\times\mathbf{\beta}^{\prime})\big{)}\leq 4nM\delta<\varepsilon\). In fact, Proposition22 holds when only \(T\) and \(S^{\prime}\) are known to be countable by the above proof. In particular, \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta}),\mathsf{LS}(\mathbf{ \alpha}^{\prime}\times\mathbf{\beta})\big{)}\leq 2nM\delta\) when \(T\) is countable, and \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}), \mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime})\big{)}\leq 2nM\delta\) when \(S^{\prime}\) is countable. **Proposition 23**.: _Let \(\mathbf{\alpha}\), \(\mathbf{\beta}\) be \(n\)-dimensional finite signed measures on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively. 
Let \(A\) be an atom of \(|\mathbf{\alpha}|\) and \(B\) be an atom of \(|\mathbf{\beta}|\). Let \(f:S\times T\to\mathbb{R}\) be bounded and measurable. Then_ \[\int_{S\times B}f\,\mathrm{d}(\boldsymbol{\alpha}\times\boldsymbol{\beta})= \odot^{n}\bigg{(}\int_{S}f(s,t_{0})\,\boldsymbol{\alpha}(\mathrm{d}s), \boldsymbol{\beta}(B)\bigg{)}\] _for some \(t_{0}\in B\),_ \[\int_{A\times T}f\,\mathrm{d}(\boldsymbol{\alpha}\times\boldsymbol{\beta})= \odot^{n}\bigg{(}\boldsymbol{\alpha}(A),\int_{T}f(s_{0},t)\,\boldsymbol{\beta }(\mathrm{d}t)\bigg{)}\] _for some \(s_{0}\in A\), and_ \[\int_{A\times B}f\,\mathrm{d}(\boldsymbol{\alpha}\times\boldsymbol{\beta})=f(s _{0},t_{0})\big{(}\odot^{n}(\boldsymbol{\alpha}(A),\boldsymbol{\beta}(B))\big{)}\] _for some \(s_{0}\in A\), \(t_{0}\in B\)._ Proof.: Let \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\), \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{n})\). Fubini's theorem Lemma 5 implies that the function \(h_{i}^{S}:T\to\mathbb{R}\) defined by \[h_{i}^{S}(t)=\int_{S}f(s,t)\frac{\mathrm{d}\alpha_{i}}{\mathrm{d}|\boldsymbol{ \alpha}|}(s)\frac{\mathrm{d}\beta_{i}}{\mathrm{d}|\boldsymbol{\beta}|}(t)\,| \boldsymbol{\alpha}|(\mathrm{d}s)\] for all \(t\in T\) is measurable for each \(i\), \(1\leq i\leq n\), so that the function \((\boldsymbol{h}^{S},\boldsymbol{D}^{\boldsymbol{\beta}}_{|\boldsymbol{\beta}| }):T\to\mathbb{R}^{2n}\) is \(|\boldsymbol{\beta}|\)-almost constant on \(B\) by (iv) in Proposition 7, where \(\boldsymbol{h}^{S}:T\to\mathbb{R}^{n}\) is defined by \[\boldsymbol{h}^{S}(t) =\int_{S}f(s,t)\big{(}\odot^{n}(\boldsymbol{D}^{\boldsymbol{ \alpha}}_{|\boldsymbol{\alpha}|}(s),\boldsymbol{D}^{\boldsymbol{\beta}}_{| \boldsymbol{\beta}|}(t))\big{)}\,|\boldsymbol{\alpha}|(\mathrm{d}s)\] \[=\odot^{n}\bigg{(}\int_{S}f(s,t)\,\boldsymbol{\alpha}(\mathrm{d} s),\boldsymbol{D}^{\boldsymbol{\beta}}_{|\boldsymbol{\beta}|}(t)\bigg{)}\] for all \(t\in T\), where the second equality follows by linearity of integrals. This implies \[\int_{S\times B}f\,\mathrm{d}(\boldsymbol{\alpha}\times\boldsymbol {\beta})\] \[=\int_{S\times B}f\big{(}\odot^{n}(\boldsymbol{D}^{\boldsymbol{ \alpha}}_{|\boldsymbol{\alpha}|}(s),\boldsymbol{D}^{\boldsymbol{\beta}}_{| \boldsymbol{\beta}|}(t))\big{)}\,(|\boldsymbol{\alpha}|\times|\boldsymbol{ \beta}|)(\mathrm{d}(s,t))\] \[=\int_{B}\int_{S}f\big{(}\odot^{n}(\boldsymbol{D}^{\boldsymbol{ \alpha}}_{|\boldsymbol{\alpha}|}(s),\boldsymbol{D}^{\boldsymbol{\beta}}_{| \boldsymbol{\beta}|}(t))\big{)}\,|\boldsymbol{\alpha}|(\mathrm{d}s)\,| \boldsymbol{\beta}|(\mathrm{d}t)\] \[=\int_{B}\boldsymbol{h}^{S}\,\mathrm{d}|\boldsymbol{\beta}|= \odot^{n}\bigg{(}\int_{S}f(s,t_{0})\,\boldsymbol{\alpha}(\mathrm{d}s), \boldsymbol{D}^{\boldsymbol{\beta}}_{|\boldsymbol{\beta}|}(t_{0})\bigg{)}| \boldsymbol{\beta}|(B)\] \[=\odot^{n}\bigg{(}\int_{S}f(s,t_{0})\,\boldsymbol{\alpha}( \mathrm{d}s),\boldsymbol{\beta}(B)\bigg{)}\] for some \(t_{0}\in B\) for which \((\boldsymbol{h}^{S},\boldsymbol{D}^{\boldsymbol{\beta}}_{|\boldsymbol{\beta}| })=(\boldsymbol{h}^{S},\boldsymbol{D}^{\boldsymbol{\beta}}_{|\boldsymbol{\beta} |})(t_{0})\)\(|\boldsymbol{\beta}|\)-almost everywhere, where the second equality follows by Fubini's theorem Lemma 5 and by linearity of integrals. The other equations follow by similar arguments. **Proposition 24**.: _Let \(\boldsymbol{\alpha}\), \(\boldsymbol{\beta}\) be \(n\)-dimensional finite signed measures on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively. 
For any \(\varepsilon>0\), there exist \(n\)-dimensional finite signed measures \(\boldsymbol{\alpha}^{\prime}\), \(\boldsymbol{\beta}^{\prime}\) on discrete \(\sigma\)-algebras \(\mathcal{F}^{\prime}\), \(\mathcal{G}^{\prime}\) of countable sets \(S^{\prime}\), \(T^{\prime}\), respectively, such that \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\boldsymbol{\alpha}),\mathsf{LS}(\boldsymbol{\alpha}^{\prime})\big{)}<\varepsilon\), \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\boldsymbol{\beta}),\mathsf{LS}(\boldsymbol{\beta}^{\prime})\big{)}<\varepsilon\), \(d_{\mathrm{H}}\big{(}\mathsf{LS}(\boldsymbol{\alpha}\times\boldsymbol{\beta}),\mathsf{LS}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\big{)}<\varepsilon\)._ Proof.: Without loss of generality we assume \(|\boldsymbol{\alpha}|(S)|\boldsymbol{\beta}|(T)>0\), for the other cases can be trivially solved once the general construction is clear. We first discuss the case where \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) are both non-atomic. Fix \(\varepsilon_{0}>0\) such that \(\varepsilon_{0}<\varepsilon\). Let \(0<\delta<\varepsilon_{0}/4|\boldsymbol{\alpha}|(S)|\boldsymbol{\beta}|(T)\). One can decompose \(\mathrm{U}^{n}_{\|\cdot\|_{1}}\) into finitely many pairwise disjoint measurable pieces \(U_{1},\ldots,U_{K}\), each of diameter at most \(\delta\) with respect to the \(1\)-norm, and fix \(\boldsymbol{u}_{p}\in U_{p}\) for each \(p\in[K]\). Let \[S_{p}=(\boldsymbol{D}^{\boldsymbol{\alpha}}_{|\boldsymbol{\alpha}|})^{-1}(U_{p}),\quad T_{q}=(\boldsymbol{D}^{\boldsymbol{\beta}}_{|\boldsymbol{\beta}|})^{-1}(U_{q})\] for \(p\), \(q\in[K]\); the sets \(S_{1},\ldots,S_{K}\) and \(T_{1},\ldots,T_{K}\) are then pairwise disjoint and cover \(S\) and \(T\) up to \(|\boldsymbol{\alpha}|\)-null and \(|\boldsymbol{\beta}|\)-null sets, respectively, by Proposition 21, and \[\|\odot^{n}(\boldsymbol{D}^{\boldsymbol{\alpha}}_{|\boldsymbol{\alpha}|}(s),\boldsymbol{D}^{\boldsymbol{\beta}}_{|\boldsymbol{\beta}|}(t))-\odot^{n}(\boldsymbol{u}_{p},\boldsymbol{u}_{q})\|_{1}\leq 2\delta \tag{16}\] for all \((s,t)\in S_{p}\times T_{q}\), \(p\), \(q\in[K]\). Fix an integer \(N>(2n|\boldsymbol{\alpha}|(S)|\boldsymbol{\beta}|(T)/\varepsilon_{0})^{\frac{1}{2}}\), let \(S^{\prime}=T^{\prime}=[K]\times[N]\) with their discrete \(\sigma\)-algebras, and let \(\boldsymbol{\alpha}^{\prime}\), \(\boldsymbol{\beta}^{\prime}\) be defined by \[\boldsymbol{\alpha}^{\prime}(\{(p,i)\})=\boldsymbol{a}_{p}\coloneqq\frac{|\boldsymbol{\alpha}|(S_{p})}{N}\boldsymbol{u}_{p},\quad\boldsymbol{\beta}^{\prime}(\{(q,j)\})=\boldsymbol{b}_{q}\coloneqq\frac{|\boldsymbol{\beta}|(T_{q})}{N}\boldsymbol{u}_{q}\] for all \(p\), \(q\in[K]\), \(i\), \(j\in[N]\). First consider an arbitrary measurable function \(h:S\times T\to\{0,1\}\), for which we claim that one can find \(h^{\prime}:S^{\prime}\times T^{\prime}\to\{0,1\}\) such that \[\bigg{\|}\int_{S\times T}h\,\mathrm{d}(\boldsymbol{\alpha}\times\boldsymbol{\beta})-\int_{S^{\prime}\times T^{\prime}}h^{\prime}\,\mathrm{d}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}<\varepsilon.\] For this, let \[r_{p,q}=\frac{\int_{S_{p}\times T_{q}}h\,\mathrm{d}(|\boldsymbol{\alpha}|\times|\boldsymbol{\beta}|)}{|\boldsymbol{\alpha}|(S_{p})|\boldsymbol{\beta}|(T_{q})}\in[0,1]\] and fix \(h^{\prime}\) to be such that \[\sum_{i,j\in[N]}h^{\prime}((p,i),(q,j))\leq r_{p,q}N^{2}<\sum_{i,j\in[N]}h^{\prime}((p,i),(q,j))+1\] for all \(p\), \(q\in[K]\). 
Then on one hand \[\bigg{\|}r_{p,q}N^{2}(\odot^{n}(\boldsymbol{a}_{p},\boldsymbol{b }_{q}))-\int_{(\{p\}\times[N])\times(\{q\}\times[N])}h^{\prime}\,\mathrm{d}( \boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}\] \[=\bigg{\|}\bigg{(}r_{p,q}N^{2}-\sum_{i,j\in[N]}h^{\prime}((p,i), (q,j))\bigg{)}(\odot^{n}(\boldsymbol{a}_{p},\boldsymbol{b}_{q}))\bigg{\|}_{1}\] \[\leq\|\odot^{n}(\boldsymbol{a}_{p},\boldsymbol{b}_{q})\|_{1}= \frac{|\boldsymbol{\alpha}|(S_{p})|\boldsymbol{\beta}|(T_{q})}{N^{2}}\|\odot^ {n}(\boldsymbol{u}_{p},\boldsymbol{u}_{q})\|_{1}\] \[\leq\frac{n}{N^{2}}|\boldsymbol{\alpha}|(S_{p})|\boldsymbol{ \beta}|(T_{q})\] for all \(p\), \(q\in K\). On the other hand \[\bigg{\|}\int_{S_{p}\times T_{q}}h\,\mathrm{d}(\boldsymbol{ \alpha}\times\boldsymbol{\beta})-r_{p,q}N^{2}(\odot^{n}(\boldsymbol{a}_{p}, \boldsymbol{b}_{q}))\bigg{\|}_{1}\] \[=\bigg{\|}\int_{S_{p}\times T_{q}}h(\odot^{n}(\boldsymbol{D}_{| \boldsymbol{\alpha}|}^{\boldsymbol{\alpha}}(s),\boldsymbol{D}_{|\boldsymbol{ \beta}|}^{\boldsymbol{\beta}}(t))-\odot^{n}(\boldsymbol{u}_{p},\boldsymbol{u} _{q}))\,(|\boldsymbol{\alpha}|\times|\boldsymbol{\beta}|)(\mathrm{d}(s,t)) \bigg{\|}_{1}\] \[\leq\int_{S_{p}\times T_{q}}|h|\|\odot^{n}(\boldsymbol{D}_{| \boldsymbol{\alpha}|}^{\boldsymbol{\alpha}}(s),\boldsymbol{D}_{|\boldsymbol{ \beta}|}^{\boldsymbol{\beta}}(t))-\odot^{n}(\boldsymbol{u}_{p},\boldsymbol{u} _{q})\|_{1}\,(|\boldsymbol{\alpha}|\times|\boldsymbol{\beta}|)(\mathrm{d}(s,t))\] \[\leq 2\delta|\boldsymbol{\alpha}|(S_{p})|\boldsymbol{\beta}|(T_{q})\] for all \(p\), \(q\in K\), where the second equality follows by Proposition13 and linearity of integrals, and where the second inequality follows by (16). It follows that \[\bigg{\|}\int_{S\times T}h\,\mathrm{d}(\boldsymbol{\alpha}\times \boldsymbol{\beta})-\int_{S^{\prime}\times T^{\prime}}h^{\prime}\,\mathrm{d}( \boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}\] \[\qquad\leq\sum_{p,q\in[K]}\Big{(}\frac{n}{N^{2}}+2\delta\Big{)}| \boldsymbol{\alpha}|(S_{p})|\boldsymbol{\beta}|(T_{q})=\Big{(}\frac{n}{N^{2} }+2\delta\Big{)}|\boldsymbol{\alpha}|(S)|\boldsymbol{\beta}|(T)<\varepsilon_{0}.\] This means for any \(\mathbf{x}\in\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta})\) there exists \(\mathbf{x}^{\prime}\in\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime})\) such that \(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}<\varepsilon_{0}\). To show the other direction, consider an arbitrary function \(h^{\prime}:S^{\prime}\times T^{\prime}\to\{0,1\}\), and let \[r^{\prime}_{p,q}=\frac{1}{N^{2}}\sum_{i,j\in[N]}h^{\prime}((p,i),(q,j))\in[0,1]\] for all \(p\), \(q\in[K]\). Since \(\mathbf{\alpha}\), \(\mathbf{\beta}\) are non-atomic, \(|\mathbf{\alpha}|\), \(|\mathbf{\beta}|\) are non-atomic by (ii) in Proposition7, and one can fix subsets \(S^{\prime\prime}_{p,q}\in\mathcal{F}\), \(T^{\prime\prime}_{p,q}\in\mathcal{G}\) of \(S_{p}\), \(T_{q}\), respectively, such that \[|\mathbf{\alpha}|(S^{\prime\prime}_{p,q})=(r^{\prime}_{p,q})^{\frac{1}{2}}|\mathbf{ \alpha}|(S_{p}),\quad|\mathbf{\beta}|(T^{\prime\prime}_{p,q})=(r^{\prime}_{p,q})^ {\frac{1}{2}}|\mathbf{\beta}|(T_{q})\] for each \(p\), \(q\in[K]\) by Lemma12. Let \(h:S\times T\to\{0,1\}\) be defined by \[h(s,t)=\mathbf{1}_{S^{\prime\prime}_{p,q}\times T^{\prime\prime}_{p,q}}(s,t)\] for all \((s,t)\in S_{p}\times T_{q}\), \(p\), \(q\in[K]\). 
Then \[\left\|\int_{S_{p}\times T_{q}}h\,\mathrm{d}(\mathbf{\alpha}\times \mathbf{\beta})-\int_{(\{p\}\times[N])\times(\{q\}\times[N])}h^{\prime}\,\mathrm{d }(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime})\right\|_{1}\] \[=\left\|\int_{S_{p}\times T_{q}}\mathbf{1}_{S^{\prime\prime}_{p,q} \times T^{\prime\prime}_{p,q}}\,\mathrm{d}(\mathbf{\alpha}\times\mathbf{\beta})-r^{ \prime}_{p,q}N^{2}(\odot^{n}(\mathbf{a}_{p},\mathbf{b}_{q}))\right\|_{1}\] \[=\left\|\int_{S^{\prime\prime}_{p,q}\times T^{\prime\prime}_{p,q }}\,\mathrm{d}(\mathbf{\alpha}\times\mathbf{\beta})-|\mathbf{\alpha}|(S^{\prime\prime}_{p, q})|\mathbf{\beta}|(T^{\prime\prime}_{p,q})(\odot^{n}(\mathbf{u}_{p},\mathbf{u}_{q})) \right\|_{1}\] \[=\left\|\int_{S^{\prime\prime}_{p,q}\times T^{\prime\prime}_{p,q }}(\odot^{n}(\mathbf{D}^{\mathbf{\alpha}}_{|\mathbf{\alpha}|}(s),\mathbf{D}^{\mathbf{\beta}}_{| \mathbf{\beta}|}(t))-\odot^{n}(\mathbf{u}_{p},\mathbf{u}_{q}))\,(|\mathbf{\alpha}|\times|\mathbf{ \beta}|)(\mathrm{d}(s,t))\right\|_{1}\] \[\leq\int_{S^{\prime\prime}_{p,q}\times T^{\prime\prime}_{p,q}}\| \odot^{n}(\mathbf{D}^{\mathbf{\alpha}}_{|\mathbf{\alpha}|}(s),\mathbf{D}^{\mathbf{\beta}}_{|\mathbf{ \beta}|}(t))-\odot^{n}(\mathbf{u}_{p},\mathbf{u}_{q})\|_{1}\,(|\mathbf{\alpha}|\times|\mathbf{ \beta}|)(\mathrm{d}(s,t))\] \[\leq 2\delta|\mathbf{\alpha}|(S^{\prime\prime}_{p,q})|\mathbf{\beta}|(T^{ \prime\prime}_{p,q})\leq 2\delta|\mathbf{\alpha}|(S_{p})|\mathbf{\beta}|(T_{q})\] for all \(p\), \(q\in K\), where the third equality follows by Proposition13 and linearity of integrals, and where the second inequality follows by (16). It follows that \[\left\|\int_{S\times T}h\,\mathrm{d}(\mathbf{\alpha}\times\mathbf{\beta})- \int_{S^{\prime}\times T^{\prime}}h^{\prime}\,\mathrm{d}(\mathbf{\alpha}^{ \prime}\times\mathbf{\beta}^{\prime})\right\|_{1}\] \[\leq\sum_{p,q\in[K]}2\delta|\mathbf{\alpha}|(S_{p})|\mathbf{\beta}|(T_{q}) =2\delta|\mathbf{\alpha}|(S)|\mathbf{\beta}|(T)<\varepsilon_{0}.\] This means for any \(\mathbf{x}^{\prime}\in\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime})\) there exists \(\mathbf{x}\in\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta})\) such that \(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}<\varepsilon_{0}\). It finally follows that \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta}),\mathsf{LS}(\mathbf{ \alpha}^{\prime}\times\mathbf{\beta}^{\prime})\big{)}\leq\varepsilon_{0}<\varepsilon\] for our construction of \(\boldsymbol{\alpha}^{\prime}\) and \(\boldsymbol{\beta}^{\prime}\). The arguments for showing \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\boldsymbol{\alpha}),\mathsf{LS}(\boldsymbol{ \alpha}^{\prime})\big{)}<\varepsilon,\quad d_{\mathrm{H}}\big{(}\mathsf{LS}( \boldsymbol{\beta}),\mathsf{LS}(\boldsymbol{\beta}^{\prime})\big{)}<\varepsilon\] are similar and simpler, in which we need to let \[0<\delta<\min\{\varepsilon_{0}/4|\boldsymbol{\alpha}|(S)|\boldsymbol{\beta}| (T),\varepsilon_{0}/4|\boldsymbol{\alpha}|(S),\varepsilon_{0}/4\boldsymbol {\beta}|(T)\}\] and \[N>\max\{(2n|\boldsymbol{\alpha}|(S)|\boldsymbol{\beta}|(T)/ \varepsilon_{0})^{\frac{1}{2}},2n|\boldsymbol{\alpha}|(S)/\varepsilon_{0},2n |\boldsymbol{\beta}|(T)/\varepsilon_{0}\}\] replace the existing requirement of \(\delta\) and \(N\). Calling the above the first part of our proof, we now start to consider the most general case. 
By the discussion below Proposition 7, \(S\) can be decomposed as union of disjoint subsets \(S_{\mathrm{con}}\), \(S_{\mathrm{atm}}\) where the restriction of \(\boldsymbol{\alpha}\) to \(S_{\mathrm{con}}\) is non-atomic and where \(S_{\mathrm{atm}}\) is union of at most countably many mutually disjoint atoms \(A_{1},A_{2},\ldots\) of \(|\boldsymbol{\alpha}|\). Similarly \(T\) can be decomposed as union of disjoint subsets \(T_{\mathrm{con}}\), \(T_{\mathrm{atm}}\) where the restriction of \(\boldsymbol{\beta}\) to \(T_{\mathrm{con}}\) is non-atomic and where \(T_{\mathrm{atm}}\) is union of at most countably many mutually disjoint atoms \(B_{1},B_{2},\ldots\) of \(|\boldsymbol{\beta}|\). Let \(A_{a}=\emptyset\) for \(a>N_{1}\) if \(S_{\mathrm{atm}}\) contains \(N_{1}<\infty\) atoms of \(|\boldsymbol{\alpha}|\) and let \(B_{b}=\emptyset\) for \(b>N_{2}\) if \(T_{\mathrm{atm}}\) contains \(N_{2}<\infty\) atoms of \(|\boldsymbol{\beta}|\). Let \(\boldsymbol{\alpha}_{1}\), \(\boldsymbol{\alpha}_{2}\) be the respective restrictions of \(\boldsymbol{\alpha}\) to \(S_{\mathrm{con}}\), \(S_{\mathrm{atm}}\), and let \(\boldsymbol{\beta}_{1}\), \(\boldsymbol{\beta}_{2}\) be the respective restrictions of \(\boldsymbol{\beta}\) to \(T_{\mathrm{con}}\), \(T_{\mathrm{atm}}\). Let \(\boldsymbol{\alpha}_{1}^{\prime}\), \(\boldsymbol{\beta}_{1}^{\prime}\) on the discrete \(\sigma\)-algebra on \(S_{\mathrm{con}}^{\prime}=T_{\mathrm{con}}^{\prime}=[K]\times[N]\) be defined respectively for \(\boldsymbol{\alpha}_{1}\), \(\boldsymbol{\beta}_{1}\) as \(\boldsymbol{\alpha}^{\prime}\), \(\boldsymbol{\beta}^{\prime}\) be defined respectively for \(\boldsymbol{\alpha}\), \(\boldsymbol{\beta}\) in the first part of our proof with additional requirements that \(\varepsilon_{0}<\varepsilon/3\), \(N>2n|\boldsymbol{\alpha}|(S)|\boldsymbol{\beta}|(T)/\varepsilon_{0}\). Let \(\boldsymbol{\alpha}_{2}^{\prime}\), \(\boldsymbol{\beta}_{2}^{\prime}\) on the discrete \(\sigma\)-algebra on \(S_{\mathrm{atm}}^{\prime}=T_{\mathrm{atm}}^{\prime}=\mathbb{N}\) be defined by \[\boldsymbol{\alpha}_{2}^{\prime}(\{a\})=\boldsymbol{\alpha}(A_{a}),\quad \boldsymbol{\beta}_{2}^{\prime}(\{b\})=\boldsymbol{\beta}(B_{b})\] for \(a\), \(b\in\mathbb{N}\). Finally, let \(\boldsymbol{\alpha}^{\prime}=\boldsymbol{\alpha}_{1}^{\prime}\oplus \boldsymbol{\alpha}_{2}^{\prime}\), \(\boldsymbol{\beta}^{\prime}=\boldsymbol{\beta}_{1}^{\prime}\oplus\boldsymbol{ \beta}_{2}^{\prime}\). As the usual, firstly consider an arbitrary measurable function \(h:S\times T\to\{0,1\}\), and we claim to find \(h^{\prime}:S^{\prime}\times T^{\prime}\to\{0,1\}\) such that \[\bigg{\|}\int_{S\times T}h\,\mathrm{d}(\boldsymbol{\alpha}\times\boldsymbol{ \beta})-\int_{S^{\prime}\times T^{\prime}}h^{\prime}\,\mathrm{d}(\boldsymbol{ \alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}<3\varepsilon_ {0}, \tag{17}\] where \(h^{\prime}\) is automatically measurable since \(S^{\prime}\times T^{\prime}\) is at most countable. The first part of our proof has already shown that one can make the restriction of \(h^{\prime}\) to \(S_{\mathrm{con}}^{\prime}\times T_{\mathrm{con}}^{\prime}\) to be such that \[\bigg{\|}\int_{S_{\mathrm{con}}\times T_{\mathrm{con}}}h\,\mathrm{d}(\boldsymbol {\alpha}\times\boldsymbol{\beta})-\int_{S_{\mathrm{con}}^{\prime}\times T_{ \mathrm{con}}^{\prime}}h^{\prime}\,\mathrm{d}(\boldsymbol{\alpha}^{\prime} \times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}<\varepsilon_{0}. 
\tag{18}\] By Proposition 23, there exists \(t_{p,b}\in B_{b}\) such that \[\int_{S_{p}\times B_{b}}h\,\mathrm{d}(\boldsymbol{\alpha}\times \boldsymbol{\beta})=\odot^{n}\bigg{(}\int_{S_{p}}h(s,t_{p,b})\,\boldsymbol{ \alpha}(\mathrm{d}s),\boldsymbol{\beta}(B_{b})\bigg{)}\] for all \(p\in[K]\), \(b\in\mathbb{N}\) such that \(B_{b}\neq\emptyset\). Let \[\ell_{p,b}=\frac{\int_{S_{p}}h(s,t_{p,b})\,|\boldsymbol{\alpha} |(\mathrm{d}s)}{|\boldsymbol{\alpha}|(S_{p})}\in[0,1]\] and let the restriction of \(h^{\prime}\) on \(S^{\prime}_{\mathrm{con}}\times T^{\prime}_{\mathrm{atm}}\) be such that \[\sum_{i\in[N]}h^{\prime}((p,i),b)\leq\ell_{p,b}N<\sum_{i\in[N]}h^{ \prime}((p,i),b)+1\] for all \(p\in[K]\), \(b\in\mathbb{N}\) such that \(B_{b}\neq\emptyset\), and such that \(h^{\prime}((p,i),b)=0\) for all \((p,i)\in[K]\times[N]\), \(b\in\mathbb{N}\) such that \(B_{b}=\emptyset\). Then \[\bigg{\|}\int_{S_{p}\times B_{b}}h\,\mathrm{d}(\boldsymbol{\alpha }\times\boldsymbol{\beta})-\int_{(\{p\}\times[N])\times\{b\}}h^{\prime}\, \mathrm{d}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime}) \bigg{\|}_{1}\] \[\leq\bigg{\|}\odot^{n}\bigg{(}\int_{S_{p}}h(s,t_{p,b})\, \boldsymbol{\alpha}(\mathrm{d}s),\boldsymbol{\beta}(B_{b})\bigg{)}-\ell_{p,b} N\odot^{n}(\boldsymbol{a}_{p},\boldsymbol{\beta}(B_{b}))\bigg{\|}_{1}\] \[\qquad\qquad+\bigg{\|}\ell_{p,b}N\odot^{n}(\boldsymbol{a}_{p}, \boldsymbol{\beta}(B_{b}))-\sum_{i\in[N]}h^{\prime}((p,i),b)\odot^{n}( \boldsymbol{a}_{p},\boldsymbol{\beta}^{\prime}(\{b\}))\bigg{\|}_{1}\] \[\leq\Big{(}\delta+\frac{n}{N}\Big{)}|\boldsymbol{\alpha}|(S_{p}) |\boldsymbol{\beta}|(B_{b})\] for all \(p\in[K]\), \(b\in\mathbb{N}\) such that \(B_{b}\neq\emptyset\) by calculations similar as in the first part of our proof, and \[\bigg{\|}\int_{S_{\mathrm{con}}\times T_{\mathrm{atm}}}h\,\mathrm{ d}(\boldsymbol{\alpha}\times\boldsymbol{\beta})-\int_{S^{\prime}_{\mathrm{ con}}\times T^{\prime}_{\mathrm{atm}}}h^{\prime}\,\mathrm{d}(\boldsymbol{ \alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}\] \[\leq\sum_{b=1}^{\infty}\sum_{p\in[K]}\Big{(}\delta+\frac{n}{N} \Big{)}|\boldsymbol{\alpha}|(S_{p})|\boldsymbol{\beta}|(B_{b})\] \[=\Big{(}\delta+\frac{n}{N}\Big{)}|\boldsymbol{\alpha}|(S_{\mathrm{ con}})|\boldsymbol{\beta}|(T_{\mathrm{atm}})<\varepsilon_{0}. \tag{19}\] A similar process shows that on can set the restriction of \(h^{\prime}\) on \(S^{\prime}_{\mathrm{atm}}\times T^{\prime}_{\mathrm{con}}\) to be such that \[\bigg{\|}\int_{S_{\mathrm{atm}}\times T_{\mathrm{con}}}h\,\mathrm{d}( \boldsymbol{\alpha}\times\boldsymbol{\beta})-\int_{S^{\prime}_{\mathrm{atm}} \times T^{\prime}_{\mathrm{con}}}h^{\prime}\,\mathrm{d}(\boldsymbol{\alpha}^{ \prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}<\varepsilon_{0}. \tag{20}\] At last, for each \(a\), \(b\in\mathbb{N}\) such that \(A_{a}\), \(B_{b}\neq\emptyset\), their exist \(s^{a,b}\in A_{a}\), \(t^{a,b}\in B_{b}\) such that \[\int_{A_{a}\times B_{b}}h\,\mathrm{d}(\boldsymbol{\alpha}\times \boldsymbol{\beta})=h(s^{a,b},t^{a,b})\big{(}\odot^{n}(\boldsymbol{\alpha}(A_{a }),\boldsymbol{\beta}(B_{b}))\big{)}\] by Proposition 23. Let the restriction of \(h^{\prime}\) on \(S^{\prime}_{\mathrm{atm}}\times T^{\prime}_{\mathrm{atm}}\) be such that \(h^{\prime}(a,b)=h(s^{a,b},t^{a,b})\). 
Then \[\int_{A_{a}\times B_{b}}h\,\mathrm{d}(\boldsymbol{\alpha} \times\boldsymbol{\beta})-\int_{\{a\}\times\{b\}}h^{\prime}\, \mathrm{d}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\] \[=h(s^{a,b},t^{a,b})\big{(}\odot^{n}(\boldsymbol{\alpha}(A_{a}), \boldsymbol{\beta}(B_{b}))\big{)}\] \[=0\] for all \(a\), \(b\in\mathbb{N}\) such that \(A_{a}\), \(B_{b}\neq\emptyset\), so that \[\bigg{\|}\int_{S_{\mathrm{atm}}\times T_{\mathrm{atm}}}h\,\mathrm{d}( \boldsymbol{\alpha}\times\boldsymbol{\beta})-\int_{S^{\prime}_{\mathrm{atm}} \times T^{\prime}_{\mathrm{atm}}}h^{\prime}\,\mathrm{d}(\boldsymbol{\alpha}^ {\prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}=0. \tag{21}\] The inequality (17) then follows by (18), (19), (20) and (21). The other direction is again easier. Consider an arbitrary \(h^{\prime}:S^{\prime}\times T^{\prime}\to\{0,1\}\). We claim to construct a measurable \(h:S\times T\to\{0,1\}\) such that \[\bigg{\|}\int_{S\times T}h\,\mathrm{d}(\boldsymbol{\alpha}\times\boldsymbol{ \beta})-\int_{S^{\prime}\times T^{\prime}}h^{\prime}\,\mathrm{d}(\boldsymbol{ \alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}<3\varepsilon_{ 0}. \tag{22}\] Again, the first part of our proof has shown that one can make the restriction of \(h\) to \(S_{\mathrm{con}}\times T_{\mathrm{con}}\) to be such that \[\bigg{\|}\int_{S_{\mathrm{con}}\times T_{\mathrm{con}}}h\,\mathrm{d}( \boldsymbol{\alpha}\times\boldsymbol{\beta})-\int_{S^{\prime}_{\mathrm{con}} \times T^{\prime}_{\mathrm{con}}}h^{\prime}\,\mathrm{d}(\boldsymbol{\alpha}^ {\prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}<\varepsilon_{0}. \tag{23}\] Let \[\ell^{\prime}_{p,b}=\frac{1}{N}\sum_{i\in[N]}h^{\prime}((p,i),b)\in[0,1]\] for all \(p\in[K]\), \(b\in\mathbb{N}\) such that \(B_{b}\neq\emptyset\). Since \(\boldsymbol{\alpha}\) is non-atomic, one can fix a subset \(S^{\prime\prime\prime}_{p,b}\in\mathcal{F}\) of \(S_{p}\) such that \[|\boldsymbol{\alpha}|(S^{\prime\prime\prime}_{p,b})=\ell^{\prime}_{p,b}| \boldsymbol{\alpha}|(S_{p})\] for each \(p\in[K]\), \(b\in\mathbb{N}\) such that \(B_{b}\neq\emptyset\) by Lemma 12. Let \(h:S_{\mathrm{con}}\times T_{\mathrm{atm}}\to\{0,1\}\) be defined by \[h(s,t)=\mathbf{1}_{S^{\prime\prime\prime}_{p,b}}(s)\] for all \((s,t)\in S_{p}\times B_{b}\), \(p\in[K]\), \(b\in\mathbb{N}\) such that \(B_{b}\neq\emptyset\). 
Then \[\left\|\int_{S_{p}\times B_{b}}h\,\mathrm{d}(\boldsymbol{\alpha} \times\boldsymbol{\beta})-\int_{(\{p\}\times[N])\times\{b\}}h^{\prime}\, \mathrm{d}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime}) \right\|_{1}\] \[=\left\|\int_{S_{p}\times B_{b}}\mathbf{1}_{S^{\prime\prime\prime \prime}_{p,b}\times B_{b}}\,\mathrm{d}(\boldsymbol{\alpha}\times\boldsymbol{ \beta})-\ell^{\prime}_{p,b}N\big{(}\odot^{n}(\boldsymbol{a}_{p},\boldsymbol{ \beta}^{\prime}(\{b\}))\big{)}\right\|_{1}\] \[=\left\|\odot^{n}(\boldsymbol{\alpha}(S^{\prime\prime\prime}_{p,b }),\boldsymbol{\beta}(B_{b}))-|\boldsymbol{\alpha}|(S^{\prime\prime\prime}_{p, b})\big{(}\odot^{n}(\boldsymbol{u}_{p},\boldsymbol{\beta}(B_{b}))\big{)} \right\|_{1}\] \[=|\boldsymbol{\beta}|(B_{b})\bigg{\|}\odot^{n}\bigg{(}\int_{S^{ \prime\prime\prime}_{p,b}}(\boldsymbol{D}^{\boldsymbol{\alpha}}_{| \boldsymbol{\alpha}|}-\boldsymbol{u}_{p})\,\mathrm{d}|\boldsymbol{\alpha}|, \boldsymbol{v}_{b}\bigg{)}\bigg{\|}_{1}\] \[\leq|\boldsymbol{\beta}|(B_{b})\int_{S^{\prime\prime\prime}_{p,b }}\|\boldsymbol{D}^{\boldsymbol{\alpha}}_{|\boldsymbol{\alpha}|}-\boldsymbol{ u}_{p}\|_{1}\,\mathrm{d}|\boldsymbol{\alpha}|\] \[\leq\delta|\boldsymbol{\alpha}|(S_{p})|\boldsymbol{\beta}|(B_{b})\] for all \(p\in K\), \(b\in\mathbb{N}\) such that \(B_{b}\neq\emptyset\), where \(\boldsymbol{v}_{b}\coloneqq\boldsymbol{\beta}(B_{b})/|\boldsymbol{\beta}|(B_{ b})\in\mathrm{U}^{n}_{\|\|_{1}}\). It follows that \[\bigg{\|}\int_{S_{\mathrm{con}}\times T_{\mathrm{atm}}}h\, \mathrm{d}(\boldsymbol{\alpha}\times\boldsymbol{\beta}) -\int_{S^{\prime}_{\mathrm{con}}\times T^{\prime}_{\mathrm{atm}} }h^{\prime}\,\mathrm{d}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^ {\prime})\bigg{\|}_{1}\] \[\leq\sum_{b=1}^{\infty}\sum_{p\in[K]}\delta|\boldsymbol{\alpha}| (S_{p})|\boldsymbol{\beta}|(B_{b})\] \[=\delta|\boldsymbol{\alpha}|(S_{\mathrm{con}})|\boldsymbol{\beta}| (T_{\mathrm{atm}})<\varepsilon_{0}. \tag{24}\] A similar process shows that on can set the restriction of \(h\) on \(S_{\mathrm{atm}}\times T_{\mathrm{con}}\) to be such that \[\bigg{\|}\int_{S_{\mathrm{atm}}\times T_{\mathrm{con}}}h\,\mathrm{d}( \boldsymbol{\alpha}\times\boldsymbol{\beta})-\int_{S^{\prime}_{\mathrm{atm}} \times T^{\prime}_{\mathrm{con}}}h^{\prime}\,\mathrm{d}(\boldsymbol{\alpha}^{ \prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}<\varepsilon_{0}. \tag{25}\] Lastly, for each \(a\), \(b\in\mathbb{N}\) such that \(A_{a}\), \(B_{b}\neq\emptyset\), let the restriction of \(h\) on \(S_{\mathrm{atm}}\times T_{\mathrm{atm}}\) be such that \[h(s,t)=h^{\prime}(a,b)\] for all \((s,t)\in A_{a}\times B_{b}\), \(a\), \(b\in\mathbb{N}\), \(A_{a}\), \(B_{b}\neq\emptyset\). Then it is easy to check that \[\bigg{\|}\int_{S_{\mathrm{atm}}\times T_{\mathrm{atm}}}h\,\mathrm{d}( \boldsymbol{\alpha}\times\boldsymbol{\beta})-\int_{S^{\prime}_{\mathrm{atm}} \times T^{\prime}_{\mathrm{atm}}}h^{\prime}\,\mathrm{d}(\boldsymbol{\alpha}^{ \prime}\times\boldsymbol{\beta}^{\prime})\bigg{\|}_{1}=0. \tag{26}\] The inequality (22) then follows by (23), (24), (25) and (26). Finally, by (17) and (22), \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\boldsymbol{\alpha}\times\boldsymbol{\beta}),\mathsf{LS}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime}) \big{)}\leq 3\varepsilon_{0}<\varepsilon\] for our construction of \(\mathbf{\alpha}^{\prime}\) and \(\mathbf{\beta}^{\prime}\). 
The arguments for showing \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}),\mathsf{LS}(\mathbf{\alpha}^{\prime})\big{)}<\varepsilon,\quad d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\beta}),\mathsf{LS}(\mathbf{\beta}^{\prime})\big{)}<\varepsilon\] are again similar and simpler. **Proposition 25**.: _Let \(\mathbf{\alpha}\), \(\mathbf{\beta}\) be \(n\)-dimensional finite signed measures on \((S,\mathcal{F})\), \((T,\mathcal{G})\), respectively. Then for any \(\varepsilon>0\) there exists \(\delta>0\) such that_ \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta}),\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime})\big{)}<\varepsilon\] _for any \(n\)-dimensional finite signed measures \(\mathbf{\alpha}^{\prime}\), \(\mathbf{\beta}^{\prime}\) for which_ \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}),\mathsf{LS}(\mathbf{\alpha}^{\prime})\big{)}<\delta,\quad d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\beta}),\mathsf{LS}(\mathbf{\beta}^{\prime})\big{)}<\delta.\] Proof.: Let \(M>1\) be large enough so that \(\mathsf{Cube}(M)\) includes all sets \(A\), \(B\subseteq\mathbb{R}^{n}\) such that \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}),A\big{)}<1,\quad d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\beta}),B\big{)}<1.\] Choose \(\delta<\varepsilon/36nM<\varepsilon/3\) for \(\delta\in(0,1)\) and let \(\mathbf{\alpha}^{\prime}\), \(\mathbf{\beta}^{\prime}\) be arbitrary \(n\)-dimensional finite signed measures such that \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}),\mathsf{LS}(\mathbf{\alpha}^{\prime})\big{)}<\delta,\quad d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\beta}),\mathsf{LS}(\mathbf{\beta}^{\prime})\big{)}<\delta.\] By Proposition 24 there exist \(n\)-dimensional finite signed measures \(\mathbf{\lambda}\), \(\mathbf{\omega}\), \(\mathbf{\lambda}^{\prime}\), \(\mathbf{\omega}^{\prime}\) on discrete \(\sigma\)-algebras on countable sets such that \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta}),\mathsf{LS}(\mathbf{\lambda}\times\mathbf{\omega})\big{)}<\delta,\] and such that \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime}),\mathsf{LS}(\mathbf{\lambda}^{\prime}\times\mathbf{\omega}^{\prime})\big{)}<\delta.\] In particular, \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\lambda}),\mathsf{LS}(\mathbf{\lambda}^{\prime})\big{)}<3\delta,\quad d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\omega}),\mathsf{LS}(\mathbf{\omega}^{\prime})\big{)}<3\delta\] so that \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\lambda}\times\mathbf{\omega}),\mathsf{LS}(\mathbf{\lambda}^{\prime}\times\mathbf{\omega}^{\prime})\big{)}\leq 12nM\delta<\varepsilon/3\] by Proposition 22 since \(\mathsf{LS}(\mathbf{\lambda})\), \(\mathsf{LS}(\mathbf{\omega})\), \(\mathsf{LS}(\mathbf{\lambda}^{\prime})\), \(\mathsf{LS}(\mathbf{\omega}^{\prime})\subseteq\mathsf{Cube}(M)\). It then follows that \[d_{\mathrm{H}}\big{(}\mathsf{LS}(\mathbf{\alpha}\times\mathbf{\beta}),\mathsf{LS}(\mathbf{\alpha}^{\prime}\times\mathbf{\beta}^{\prime})\big{)}<\varepsilon/3+\delta+\delta<\varepsilon\] and this concludes the proof. Taking \(\varepsilon\to 0\) in Proposition 25, one has \(\mathsf{LS}(\boldsymbol{\alpha}\times\boldsymbol{\beta})=\mathsf{LS}(\boldsymbol{\alpha}^{\prime}\times\boldsymbol{\beta}^{\prime})\) if \(\mathsf{LS}(\boldsymbol{\alpha})=\mathsf{LS}(\boldsymbol{\alpha}^{\prime})\), \(\mathsf{LS}(\boldsymbol{\beta})=\mathsf{LS}(\boldsymbol{\beta}^{\prime})\). This implies that we can define _the Lorenz product of Lorenz skeletons_ in the same way as we define the Lorenz product of Lorenz hulls. 
I.e., the Lorenz product \(K_{1}K_{2}\) of Lorenz skeletons \(K_{1}\), \(K_{2}\) is the Lorenz skeleton \(\mathsf{LS}(\boldsymbol{\alpha}\times\boldsymbol{\beta})\) where \(\mathsf{LS}(\boldsymbol{\alpha})=K_{1}\), \(\mathsf{LS}(\boldsymbol{\beta})=K_{2}\). Moreover, Proposition25 also implies that the Lorenz product of Lorenz skeletons is continuous with respect to the Hausdorff distance. ## 10 Appendix: An Equivalent Definition As a generalization of the first statement of Proposition8, we have the next proposition: **Proposition 26**.: _Let \(\boldsymbol{\alpha}\) be an \(n\)-dimensional finite signed measure on \((S,\mathcal{F})\), let \(\mu\) be a \(\sigma\)-finite positive measure dominating \(\boldsymbol{\alpha}\) on \((S,\mathcal{F})\), let \(f:S\to[0,\infty)\) be a measurable function such that \(\int_{S}f\,\mathrm{d}|\boldsymbol{\alpha}|<\infty\), and let \(A\in\mathcal{F}\). Then \(\boldsymbol{x}^{*T}\!\int_{A}f\,\mathrm{d}\boldsymbol{\alpha}\geq 0\), \(\boldsymbol{x}^{*T}\!\int_{A}f\,\mathrm{d}\boldsymbol{\alpha}\leq 0\) if \(A\subseteq[\boldsymbol{x}^{*T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}\geq 0]\), \(A\subseteq[\boldsymbol{x}^{*T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}\leq 0]\), respectively._ Proof.: Use the fact that \[f(s)\cdot\boldsymbol{x}^{*T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}(s)\geq 0,\quad f(s)\cdot\boldsymbol{x}^{*T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}( s)\leq 0\] for \(s\in[\boldsymbol{x}^{*T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}\geq 0]\), \(s\in[\boldsymbol{x}^{*T}\boldsymbol{D}_{\mu}^{\boldsymbol{\alpha}}\leq 0]\), respectively. The rest of the proof follows by definition of \(\int_{A}f\,\mathrm{d}\boldsymbol{\alpha}\) and by linearity of the ordinary Lebesgue integral. **Proposition 27**.: _Let \(\boldsymbol{\alpha}\) be an \(n\)-dimensional finite signed measure on \((S,\mathcal{F})\). Then_ \[\mathsf{LH}(\boldsymbol{\alpha})=\Big{\{}\int_{S}f\,\mathrm{d}\boldsymbol{ \alpha}\,:\,f:S\to[0,1]\text{ is measurable}\Big{\}}.\] Proof.: Let \[V=\Big{\{}\int_{S}f\,\mathrm{d}\boldsymbol{\alpha}\,:\,f:S\to[0,1]\text{ is measurable}\Big{\}}\] for short. It is easy to see that \(V\) is convex and that \(\mathsf{LS}(\boldsymbol{\alpha})\subseteq V\), so that \(\mathsf{LH}(\boldsymbol{\alpha})\subseteq V\). For the reverse containment, it suffices to show that \[\|\boldsymbol{x}^{*}\|_{\mathsf{LH}(\boldsymbol{\alpha})}\geq\|\boldsymbol{ x}^{*}\|_{V}\] for all \(\boldsymbol{x}^{*}\in\mathsf{Sph}_{\mathbb{R}^{n}}\), so that \(\overline{V}\subseteq\overline{\mathsf{LH}(\boldsymbol{\alpha})}\) by Proposition3, from which \(V\subseteq\overline{V}\subseteq\mathsf{LH}(\boldsymbol{\alpha})\) since \(\mathsf{LH}(\alpha)\) is closed. In turn, to establish the inequality, by Proposition 9 it suffices to show that \[\boldsymbol{x}^{*T}\boldsymbol{\alpha}([\boldsymbol{x}^{*T}\boldsymbol{D}^{ \boldsymbol{\alpha}}_{|\boldsymbol{\alpha}|}\geq 0])\geq\boldsymbol{x}^{*T} \int_{S}f\,\mathrm{d}\boldsymbol{\alpha}\] for all \(\boldsymbol{x}^{*}\in\mathsf{Sph}_{\mathbb{R}^{n}}\) and all measurable \(f:S\to[0,1]\). However, this is easily implied by Proposition 26 since \(f\geq 0\), \(1-f\geq 0\) on \(S\). **Corollary 2**.: _Let \(\boldsymbol{\alpha}\) be an \(n\)-dimensional finite signed measure on \((S,\mathcal{F})\) where \(\mathcal{F}=\sigma(\mathcal{A})\) for \(\mathcal{A}=\{A_{1},\ldots,A_{m}\}\) a disjoint partition of \(S\). 
Then_ \[\mathsf{LH}(\boldsymbol{\alpha})=\Big{\{}\sum_{i=1}^{m}\lambda_{i}\boldsymbol{ \alpha}(A_{i})\,:\,\lambda_{i}\in[0,1],1\leq i\leq m\Big{\}}.\] A _zonoid_ is the range of a non-atomic \(n\)-dimensional finite signed measure, and as discussed at the beginning of Section 5, it is also a Lorenz hull. For completeness, we show that the reverse statement also holds.7 Footnote 7: A very brief proof can be found in Bolker [2] (Theorem 1.6). We hereby present one with details enough to our own satisfaction. **Proposition 28**.: _For any \(n\)-dimensional finite signed measure \(\boldsymbol{\alpha}\) on \((S,\mathcal{F})\), there exists \((S^{\prime},\mathcal{F}^{\prime})\) and non-atomic \(n\)-dimensional finite signed measure \(\boldsymbol{\alpha}^{\prime}\) on \((S^{\prime},\mathcal{F}^{\prime})\) such that \(\mathsf{LS}(\boldsymbol{\alpha}^{\prime})=\mathsf{LH}(\boldsymbol{\alpha})\)._ Proof.: Recalling the discussion following Proposition 7, let \(\{A_{i}\}_{i\in\mathbb{N}}\) be a chosen collection of atoms with respect to the purely atomic part \(\boldsymbol{\alpha}_{|A}\) of \(\boldsymbol{\alpha}\), and let \(\boldsymbol{\alpha}_{|B}\) denote the non-atomic part of \(\boldsymbol{\alpha}\). Consider \(A^{\prime}=(0,\infty)\) and \(\mathcal{B}\) the Borel \(\sigma\)-algebra on \(A^{\prime}\). Let \(\boldsymbol{\beta}\) be the non-atomic finite vector measure on \((A^{\prime},\mathcal{B})\) such that \(\boldsymbol{\beta}(E)=\mu(E)\boldsymbol{\alpha}(A_{i})\) for all Borel subsets \(E\subseteq(i-1,i]\), where \(\mu\) is the Lebesgue measure on Borel subsets of \(\mathbb{R}\), for all \(i\in\mathbb{N}\). Then \[\mathsf{LS}(\boldsymbol{\beta}) =\{\boldsymbol{\beta}(E)\,:\,E\in\mathcal{B}\}\] \[=\Big{\{}\sum_{i=1}^{\infty}\boldsymbol{\beta}(E\cap(i-1,i])\,: \,E\in\mathcal{B}\Big{\}}\] \[=\Big{\{}\sum_{i=1}^{\infty}\lambda_{i}\boldsymbol{\alpha}(A_{i} )\,:\,\lambda_{i}\in[0,1],i\in\mathbb{N}\Big{\}}.\] On the other hand, \[\mathsf{LH}(\boldsymbol{\alpha}_{|A}) =\Big{\{}\int_{A}f\,\mathrm{d}\boldsymbol{\alpha}\,:\,f:S\to[0,1] \text{ is measurable}\Big{\}}\] \[=\Big{\{}\sum_{i=1}^{\infty}\int_{A_{i}}f\,\mathrm{d}\boldsymbol {\alpha}\,:\,f:S\to[0,1]\text{ is measurable}\Big{\}}\] \[=\Big{\{}\sum_{i=1}^{\infty}\lambda_{i}\boldsymbol{\alpha}(A_{i} )\,:\,\lambda_{i}\in[0,1],i\in\mathbb{N}\Big{\}},\] where the first equality follows by Proposition 27, where the second equality follows since \(|\mathbf{\alpha}|(\bigcup_{i=n}^{\infty}A_{i})\to 0\) as \(n\to\infty\), and where the third equality follows since \(f\) is \(|\mathbf{\alpha}|\)-almost constant on each atom \(A_{i}\). Thus \(\mathsf{LS}(\mathbf{\beta})=\mathsf{LH}(\mathbf{\alpha}_{|A})\). Let \(\mathbf{\alpha}^{\prime}=\mathbf{\beta}\oplus\mathbf{\alpha}_{|B}\). Then \(\mathbf{\alpha}^{\prime}\) is non-atomic and \[\mathsf{LS}(\mathbf{\alpha}^{\prime})=\mathsf{LS}(\mathbf{\beta})+\mathsf{LS}(\mathbf{ \alpha}_{|B})=\mathsf{LH}(\mathbf{\alpha}_{|A})+\mathsf{LH}(\mathbf{\alpha}_{|B})= \mathsf{LH}(\mathbf{\alpha}),\] where the first and last equality follow by Proposition 15.
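For a concrete toy illustration of Corollary 2 and of the zonotope picture behind Proposition 28 (an orientation example only, not needed for any of the proofs), take \(n=2\), \(S=\{1,2\}\) with the discrete \(\sigma\)-algebra and \(\boldsymbol{\alpha}(\{1\})=(1,0)\), \(\boldsymbol{\alpha}(\{2\})=(1,1)\). Then \[\mathsf{LS}(\boldsymbol{\alpha})=\{(0,0),(1,0),(1,1),(2,1)\},\qquad\mathsf{LH}(\boldsymbol{\alpha})=\{\lambda_{1}(1,0)+\lambda_{2}(1,1)\,:\,\lambda_{1},\lambda_{2}\in[0,1]\},\] i.e., the Lorenz hull is the parallelogram (a zonotope) spanned by the two atom values, while the Lorenz skeleton consists of its four vertices. The construction in Proposition 28 replaces each atom by a non-atomic piece of the same total value, so that the whole parallelogram is realized as the range of a non-atomic measure.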
2305.19656
Novel slow dynamics of phase transition in the partially ordered frustrated magnet DyRu2Si2
DyRu2Si2 is a frustrated magnet that exhibits multiple magnetic phase transitions in zero and finite magnetic fields. We investigated and characterized the phase transition between the partially ordered antiferromagnetic phases at zero field by ac susceptibility measurements. Detailed ac susceptibility measurements reveal the novel critical dynamics of the phase transition: extremely slow dynamics with relaxation times on the order of 10-100 msec, a speed-up of the dynamics on cooling indicating a non-thermally activated origin, and growth of ferromagnetic correlations towards the phase transition temperature. On the basis of these findings, we propose a novel phase transition process, namely, the spontaneous striped arrangement of precedently emergent "belt-like" ferromagnetic spin textures.
Subaru Yoshimoto, Yoshikazu Tabata, Takeshi Waki, Hiroyuki Nakamura
2023-05-31T08:41:03Z
http://arxiv.org/abs/2305.19656v2
Novel slow dynamics of phase transition in the partially ordered frustrated magnet DyRu\({}_{2}\)Si\({}_{2}\) ###### Abstract DyRu\({}_{2}\)Si\({}_{2}\) is a frustrated magnet that exhibits multiple magnetic phase transitions in zero and finite magnetic fields. We investigated and characterized the phase transition between the partially ordered antiferromagnetic phases at zero field by ac susceptibility measurements. Detailed ac susceptibility measurements reveal the novel critical dynamics of the phase transition: extremely slow dynamics with relaxation times on the order of 10-100 msec, a speed-up of the dynamics on cooling indicating a non-thermally activated origin, and growth of ferromagnetic correlations towards the phase transition temperature. On the basis of these findings, we propose a novel phase transition process, namely, the spontaneous striped arrangement of precedently emergent "belt-like" ferromagnetic spin textures. ## 1 Introduction Frustrated magnets have been widely investigated for their rich variety of properties. In a frustrated system, where interactions compete, a vast number of physical states having the same energy coexist and, therefore, the system is highly degenerate. Such frustrated systems, for example spin glasses and spin ices, are known to exhibit slow and complicated dynamics at low temperature because of their high degeneracy[1, 7, 2, 3, 4, 5, 6]. In the present report, we focus on the novel slow dynamics in the frustrated magnet DyRu\({}_{2}\)Si\({}_{2}\). It is a member of the series of intermetallic compounds _RT\({}_{2}\)X\({}_{2}\)_ (_R_ = rare earth, \(T\) = 4d or 5d metal, \(X\) = Si or Ge) with the tetragonal ThCr\({}_{2}\)Si\({}_{2}\)-type structure, which exhibits a diversity of magnetic properties such as ferromagnetism, antiferromagnetism and paramagnetic heavy-fermion behavior [10, 8, 9]. The magnetic properties of _RT\({}_{2}\)X\({}_{2}\)_ are controlled by the interaction between the conduction electrons and the f-electrons. In the heavy rare-earth compounds, the f-electrons have a strongly localized nature and complicated frustrated magnetism is observed[11, 10] due to the frustration effect of the oscillating long-range RKKY interaction. Among them, DyRu\({}_{2}\)Si\({}_{2}\) is a representative frustrated _RT\({}_{2}\)X\({}_{2}\)_, in which the magnetic Dy\({}^{3+}\) ions have a strong c-axis anisotropy[12]. It exhibits multistep phase transitions as functions of temperature and magnetic field and has a complicated \(H\)-\(T\) phase diagram, as shown in Fig. 1, which contains a paramagnetic phase and four antiferromagnetic phases. Kawano et al. have revealed the magnetic structure of each phase from the results of neutron scattering[13]. In zero magnetic field, it was indicated that phases I and II are partially antiferromagnetically ordered phases, where fluctuating spins still remain even below their transition temperatures. The magnetic structures of these phases, which are extensively investigated in this study, are schematically shown in Figs. 2 (a) and (b). The phase I has a stripe structure with a long period along the a-axis and the propagation vector \(Q=(2/9,0,0)\)[13]. It is noteworthy that a disordered paramagnetic a-plane (denoted as the D\({}_{\rm I}\) plane hereafter) appears every 9 planes. The phase II has the magnetic structure where the spins in the D\({}_{\rm I}\) planes are partly ordered along the b-axis. Such partially ordered states due to an insufficient lifting of the degeneracy are also found in other frustrated magnets [15, 14, 18, 17, 19, 16, 20, 21]. In these systems, slow spin dynamics owing to the paramagnetic, but strongly correlated, fragments is often observed[22]. Thus, from the magnetic structures of the partially ordered phases I and II, one can expect novel slow dynamics in DyRu\({}_{2}\)Si\({}_{2}\) due to the paramagnetically fluctuating spins. Figure 1: (Color online) \(H\)-\(T\) phase diagram of DyRu\({}_{2}\)Si\({}_{2}\). This phase diagram was reported in the literature[12] and confirmed in the present study. In the present study, we have performed detailed ac susceptibility measurements to reveal such dynamics and have found novel critical dynamics accompanying the phase transition between these partially ordered phases I and II. The striking features of the novel dynamics are the following three points. First, its relaxation time is extremely long (on the order of 10-100 msec). Second, the dynamics becomes faster on cooling, indicating a non-thermally activated origin. Third, ferromagnetic correlations grow towards the phase transition temperature in spite of the fact that both phases are antiferromagnetic. Of course, in spin glasses or spin ices, slow dynamics with relaxation times of sec-msec order is often observed[1, 7]. In such systems, it can be attributed to the glassy nature due to the high degeneracy at low temperature. However, since the I-II phase transition in DyRu\({}_{2}\)Si\({}_{2}\) is not such a case, it is an intriguing outcome that we observed such a long relaxation time over this phase transition. On the basis of these findings, we propose the mechanism of the phase transition, namely, the spontaneous striped arrangement of precedently emergent "belt-like" ferromagnetic spin textures. Figure 2: (Color online) Schematic views of the magnetic structures projected on the basal c-plane of (a) the phase I and (b) the phase II of DyRu\({}_{2}\)Si\({}_{2}\). A square at the bottom left of each figure represents the unit cell, and each circle or square represents a Dy ion at a corner or body-centered position. The unit of the axes is half a lattice constant, \(a/2\). A closed circle and an open circle indicate Dy ions with spins parallel and antiparallel to the c-axis, respectively. An orange square with a cross represents a Dy ion with a fluctuating spin. ## 2 Experimental Details We synthesized a polycrystalline sample in an arc furnace, followed by single crystal growth by the Czochralski pulling method with a tetra-arc furnace. The grown single crystal was cut, and finally, a cubic-like-shaped sample weighing 10.1 mg was obtained for dc and ac susceptibility measurements. We performed dc and ac susceptibility measurements using the SQUID magnetometer (MPMS, Quantum Design) installed at the Research Center for Low Temperature and Materials Sciences, Kyoto University. Firstly, the magnetic field dependences of the magnetization at several temperatures and the temperature dependences of the dc and ac susceptibilities at several magnetic fields were examined to confirm the \(H\)-\(T\) phase diagram of DyRu\({}_{2}\)Si\({}_{2}\) reported in the literature[12]. The result is shown in Fig. 1. We also measured detailed frequency dependences of the ac susceptibility in the vicinity of the I-II phase transition temperature at zero bias field to reveal the critical dynamics, and at \(H=5\) kOe for comparison. The ac susceptibility measurements were performed with an oscillating-field amplitude of 3 Oe in the frequency range of 0-1000 Hz. ## 3 Results The temperature dependences of the dc susceptibility over the I-II phase transition at 100 Oe in the cooling and heating processes are shown in Fig. 3. In this measurement, we first cooled the sample down to 10 K under the zero-field-cooled (ZFC) condition and applied a magnetic field of 100 Oe. Then we measured the magnetization on cooling down to 1.8 K and subsequently on heating. Let \(\chi^{\rm h}\), \(\chi^{\rm c}\) and \(T_{\rm N2}\) be the susceptibilities measured on heating and on cooling and the transition temperature of the I-II phase transition, respectively. The temperature dependences of \(\chi^{\rm h}\) and \(\chi^{\rm c}\) are roughly similar. The peak temperatures of \(\chi^{\rm h}\) and \(\chi^{\rm c}\), which are the signature of the phase transition, are the same. Although the phase transition temperature \(T_{\rm N2}\) cannot be accurately identified because of the hysteresis behavior, it is approximately evaluated as 3.6 K. This corresponds to the phase transition temperature reported in the earlier work[12]. As shown in the lower panel of Fig. 3, the magnitude of \(\chi^{\rm h}\) is greater than that of \(\chi^{\rm c}\) in the temperature region \(T\geq T_{\rm N2}\) and, vice versa, \(\chi^{\rm c}>\chi^{\rm h}\) for \(T<T_{\rm N2}\). This hysteresis behavior is, however, not observed over the para-I phase transition (not shown). The hysteresis of the dc susceptibility indicates the presence of slow dynamics accompanying the I-II phase transition on the time scale of the measurement or longer. The inset of Fig. 3 shows the lower-temperature susceptibility below \(T_{\rm N2}\). The measurement was performed on heating after ZFC down to 0.46 K and application of a magnetic field of 100 Oe. Here, it is found that the susceptibility increases down to the lowest measured temperature of 0.46 K, which indicates the presence of fluctuating Dy spins in the phase II (Fig. 2 (b)) and their persistence down to this temperature. We are not sure whether there is temperature hysteresis in this temperature region as well as above 1.8 K, or whether there is another phase transition, where the fluctuating spins order, at lower temperature. This will be investigated in the near future. Figures 4 (a) and (b) show the temperature dependences of the ac susceptibility, the real part \(\chi^{\prime}\) and the imaginary part \(\chi^{\prime\prime}\), at bias fields of 0 and 5 kOe. At zero field (Fig. 4 (a)), the two phase transitions at \(T_{\rm N1}=29.5\) K and at \(T_{\rm N2}\), corresponding to the para-I and I-II phase transitions respectively, are clearly seen. Here \(T_{\rm N1}\) is the phase transition temperature of the para-I phase transition. In the plot of the real part \(\chi^{\prime}\) at zero field, the peak temperature corresponding to the I-II phase transition is 3.8 K, which is slightly different from the peak temperature of 3.6 K in the dc susceptibility. This difference should come from the non-equilibrium effect in the vicinity of \(T_{\rm N2}\). The transition temperature should ideally be unique, and thus we have to say that the true transition temperature cannot be identified precisely but lies somewhere around 3.6 K. In zero field, there is no frequency dependence of \(\chi^{\prime}\) in the paramagnetic phase and in the vicinity of the para-I phase transition, whereas a strong frequency dependence of \(\chi^{\prime}\) and a corresponding substantial \(\chi^{\prime\prime}\) appear, especially above 100 Hz, in the phases I and II. It is remarkable that a striking frequency dependence of \(\chi^{\prime}\) in the lower frequency region is found in the vicinity of the I-II phase transition temperature: the peak at \(T_{\rm N2}\) attenuates with increasing frequency and disappears above 100 Hz. Correspondingly, a sharp peak of \(\chi^{\prime\prime}\) and its suppression with increasing frequency are observed at \(T_{\rm N2}\). At \(H=5\) kOe (Fig. 4 (b)), similar behavior of both \(\chi^{\prime}\) and \(\chi^{\prime\prime}\) is seen in the paramagnetic phase and the phase I, whereas the peak anomalies of \(\chi^{\prime}\) and \(\chi^{\prime\prime}\) attributed to the I-II phase transition are absent. These results indicate the presence of slow dynamics in the phases I and II with long relaxation times on the order of 10 msec. This feature should be attributed to the fluctuating spins in each partially ordered phase. The slower dynamics in the vicinity of \(T_{\rm N2}\) is more striking and should be associated with the characteristic hysteresis behavior of the dc susceptibility. In order to investigate the slow dynamics attributed to the I-II phase transition more deeply, we measured more detailed frequency dependences of the ac susceptibility between 3.0 K and 6.0 K at zero field. Since the I-II phase transition shows temperature hysteresis, we measured the frequency dependences on both cooling and heating. The frequency dependences measured on cooling are shown in Figs. 5 (a) and (b), which show the plots for the temperature ranges above 3.7 K and below 3.6 K, respectively. In Fig. 5 (a), the real part \(\chi^{\prime}\) at 6.0 K shows a one-step-like structure with a reduction at around 200 Hz, and it changes to a two-step-like structure, where another reduction at around 10 Hz appears, on approaching \(T_{\rm N2}\). In Fig. 5 (b), it changes back to a one-step-like structure with further decreasing temperature. The change of the dynamics can be seen more clearly in the imaginary part \(\chi^{\prime\prime}\). At 6.0 K in Fig. 5 (a), it shows a one-peak-like structure with a peak around 1000 Hz, or higher, corresponding to the one-step-like structure of \(\chi^{\prime}\) at 6.0 K. With decreasing temperature, an additional peak appears around 10 Hz. This peak shifts to higher frequency on approaching \(T_{\rm N2}\). In Fig. 5 (b), it merges into the higher frequency peak further below \(T_{\rm N2}\). These changes correspond to the development of the two-step-like structure and the retransformation to the one-step-like one in \(\chi^{\prime}\). These results indicate that the system has several relaxation components owing to the phases I and II themselves and to the I-II phase transition. It is noteworthy that the characteristic frequency attributed to the I-II phase transition, which gives the reduction of \(\chi^{\prime}\) and the peak of \(\chi^{\prime\prime}\) observed below 6.0 K, increases with decreasing temperature. The frequency dependence of the ac susceptibility measured on heating exhibits similar behavior, but with greater magnitude. ## 4 Analysis The I-II phase transition at \(T_{\rm N2}\) in DyRu\({}_{2}\)Si\({}_{2}\) is noteworthy in three respects. First, it has the characteristic temperature hysteresis. Second, it is accompanied by extremely slow dynamics, where the peak anomaly at \(T_{\rm N2}\) has a significant frequency dependence and attenuates with increasing frequency. Third, the characteristic frequency of the "critical dynamics" of the I-II phase transition increases with decreasing temperature, which implies a non-thermally activated origin. For further discussion of the dynamics of the I-II phase transition, we subtracted the "background" dynamics owing to the phase I, which is observed in the high frequency range. As seen in the \(H\)-\(T\) phase diagram in Fig. 1, the phase II appears only at low fields. We therefore assumed that the frequency dependence of the ac susceptibility at \(H=5\) kOe, which characterizes the dynamics of the phase I, is the background. Figure 6 shows the comparison of the frequency dependences of \(\chi^{\prime\prime}\) at these two magnetic fields at a representative temperature \(T=3.6\) K in the cooling process. It indicates that \(\chi^{\prime\prime}\) at \(H=5\) kOe is appropriate as the background, except for the low frequency region where a small broad peak around 1 Hz was observed. For the subtraction, we first neglected the small peak. This is because the peak is considered to originate from the dynamics of the process where the fluctuating spins in the D\({}_{\rm I}\) plane become aligned along the magnetic field direction as the temperature is lowered, and such dynamics should be absent at zero magnetic field. Thus, as depicted by the pink solid curve in the figure, we took the result of fitting \(\chi^{\prime\prime}\) in the high frequency region above 20 Hz at \(H=5\) kOe as the background over the full frequency range. The fitting function is \(\chi^{\prime\prime}_{\rm bg}(\omega)=\chi_{\rm bg}\,\omega\,\tau_{\rm bg}/\{1+(\omega\,\tau_{\rm bg})^{\alpha}\}\), where \(\omega\) is the frequency and \(\chi_{\rm bg}\), \(\tau_{\rm bg}\) and \(\alpha\) are fitting parameters. This function describes the high-frequency \(\chi^{\prime\prime}\) at \(H=5\) kOe well, even though it is an ad hoc function and lacks clear physical meaning. Figure 7 shows the frequency dependence of \(\chi^{\prime\prime}\) at \(T=3.6\) K after the background subtraction. Hereafter let this subtracted susceptibility be \(\chi^{\prime\prime}_{\rm sub}\). It has two peak structures, around 40 Hz and around 300 Hz. The frequency dependence of \(\chi^{\prime\prime}_{\rm sub}\) is well described by a double Debye relaxation, \[\chi^{\prime\prime}_{\rm sub}=\chi_{\rm s}\frac{\omega\tau_{\rm s}}{1+(\omega \tau_{\rm s})^{2}}+\chi_{\rm f}\frac{\omega\tau_{\rm f}}{1+(\omega\tau_{\rm f })^{2}}, \tag{1}\] where \(\chi_{\rm s,f}\) and \(\tau_{\rm s,f}\) are the isothermal susceptibilities and relaxation times, and the subscripts "s" and "f" denote the slower and faster terms with longer and shorter relaxation times, respectively. The temperature variation of the frequency dependence of \(\chi^{\prime\prime}_{\rm sub}\) in the cooling process is shown in the upper panel of Fig. 8. Both peaks emerge at 6.0 K and grow on cooling. The slower term in the lower frequency region increases toward \(T_{\rm N2}\) and exhibits a maximum while shifting slightly towards higher frequency. On the other hand, the faster one keeps growing on further cooling. \(\chi^{\prime\prime}_{\rm sub}\) in the heating process was also derived by the same procedure, as shown in the lower panel of Fig. 8.
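As a rough, self-contained illustration of the analysis procedure described above (fitting the ad hoc background form to the high-frequency \(H=5\) kOe data, subtracting it, and then fitting Eq. (1) to the remainder), a minimal Python sketch is shown below. The function names, numerical values and synthetic data arrays are placeholders of our own, standing in for the measured \(\chi^{\prime\prime}(\omega)\); they are not the actual experimental data or fit results.

```python
import numpy as np
from scipy.optimize import curve_fit

def chi2_bg(omega, chi_bg, tau_bg, alpha):
    """Ad hoc background form: chi''_bg = chi_bg * omega*tau_bg / (1 + (omega*tau_bg)**alpha)."""
    x = omega * tau_bg
    return chi_bg * x / (1.0 + x**alpha)

def chi2_double_debye(omega, chi_s, tau_s, chi_f, tau_f):
    """Double Debye relaxation of Eq. (1): slower ('s') and faster ('f') terms."""
    return (chi_s * omega * tau_s / (1.0 + (omega * tau_s) ** 2)
            + chi_f * omega * tau_f / (1.0 + (omega * tau_f) ** 2))

# Placeholder frequency grid (Hz) and synthetic stand-ins for the measured chi''(omega)
# at 0 Oe and 5 kOe; replace these arrays with the experimental data.
omega = np.logspace(-1, 3, 80)
chi2_5kOe = chi2_bg(omega, 0.030, 2e-4, 1.2)
chi2_0Oe = chi2_5kOe + chi2_double_debye(omega, 0.05, 4e-3, 0.08, 5e-4)

# Step 1: fit the background form to the 5 kOe data above 20 Hz only.
hi = omega > 20.0
p_bg, _ = curve_fit(chi2_bg, omega[hi], chi2_5kOe[hi], p0=[0.03, 1e-4, 1.0])

# Step 2: subtract the extrapolated background from the zero-field data
#         and fit the double Debye form of Eq. (1) to the remainder.
chi2_sub = chi2_0Oe - chi2_bg(omega, *p_bg)
p_dd, _ = curve_fit(chi2_double_debye, omega, chi2_sub,
                    p0=[0.05, 1e-2, 0.05, 1e-3])
chi_s, tau_s, chi_f, tau_f = p_dd
print(f"tau_s = {tau_s:.3g} s, tau_f = {tau_f:.3g} s")
```

In the actual analysis this two-step fit would be repeated at each temperature and for the cooling and heating runs separately, yielding the temperature dependences of \(\chi_{\rm s,f}\) and \(\tau_{\rm s,f}\) discussed below.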
The heating-process \(\chi^{\prime\prime}_{\rm sub}\) can also be fitted by the double Debye relaxation and shows a similar temperature variation. Figures 9 (a) and (b) show the temperature dependences of the fitting parameters, the isothermal susceptibilities and the relaxation times, respectively. The superscripts "h" and "c" denote the parameters in the heating and cooling processes, respectively. In Fig. 9 (a), the isothermal susceptibility of the slower term \(\chi_{\rm s}\) exhibits a clear peak and hysteresis around the I-II phase transition temperature \(T_{\rm N2}\). \(\chi_{\rm s}^{\rm c}\) and \(\chi_{\rm s}^{\rm h}\) show peaks at slightly different temperatures, \(T_{\rm peak}^{\rm c}=3.6\) K and \(T_{\rm peak}^{\rm h}=3.8\) K, and the peak in the heating process is more pronounced. \(\chi_{\rm s}^{\rm c}\) and \(\chi_{\rm s}^{\rm h}\) merge far above \(T_{\rm N2}\) (\(T>5.6\) K) and below \(T_{\rm N2}\) (\(T<3.2\) K). In the figure, we also plotted the real part of the ac susceptibility at the lowest frequency, 0.1 Hz, \(\chi_{0}^{\prime\,{\rm c}}\) and \(\chi_{0}^{\prime\,{\rm h}}\), measured in both the cooling and heating processes. They behave similarly to \(\chi_{\rm s}^{\rm c,h}\) around \(T_{\rm N2}\). Note that we did not subtract the background from these susceptibilities \(\chi_{0}^{\prime\,{\rm c},{\rm h}}\). Thus, it can be concluded that the characteristic features in \(\chi_{\rm s}^{\rm c}\) and \(\chi_{\rm s}^{\rm h}\) are not artifacts of the background subtraction or of the phenomenological fitting by the double Debye relaxation. In contrast to the slower term, the isothermal susceptibilities of the faster terms, \(\chi_{\rm f}^{\rm c}\) and \(\chi_{\rm f}^{\rm h}\), do not exhibit remarkable hysteresis behavior or any anomalies around \(T_{\rm N2}\). Both \(\chi_{\rm f}\) increase moderately down to temperatures lower than \(T_{\rm N2}\). It should be noted that this is consistent with the increase of the dc susceptibility on cooling down to 0.4 K, as shown in the inset of Fig. 3. The difference between the temperature dependences of \(\chi_{\rm s}\) and \(\chi_{\rm f}\) obviously indicates that the slower and faster terms of \(\chi_{\rm sub}^{\prime\prime}\) are attributed to different dynamics: the critical dynamics of the I-II phase transition and the dynamics of the disordered spins in the phase II, respectively. Figure 9 (b) shows the temperature dependences of the relaxation times of the slower and faster components, \(\tau_{\rm s}\) and \(\tau_{\rm f}\). The relaxation time attributed to the I-II phase transition dynamics, \(\tau_{\rm s}\), reveals two noteworthy facts about the phase transition. First, the relaxation time is extremely long. At around 5.8 K, it is on the order of 100 msec, and it declines almost linearly towards \(T_{\rm N2}\). Second, the dynamics of the I-II phase transition is indicated to be non-thermally activated, because the relaxation time decreases with decreasing temperature. This is consistent with the temperature variation of \(\chi_{\rm sub}^{\prime\prime}\) shown in Fig. 8 and is more apparent here. It should also be noted that critical slowing down towards \(T_{\rm N2}\) is absent. Like the isothermal susceptibility \(\chi_{\rm s}\), the relaxation time \(\tau_{\rm s}\) also shows hysteresis behavior, especially below \(T_{\rm N2}\). The relaxation time in the cooling process, \(\tau_{\rm s}^{\rm c}\), shows a hump at around 3.4 K. On the other hand, that in the heating process, \(\tau_{\rm s}^{\rm h}\), shows a minimum at around 3.2 K and is smaller than \(\tau_{\rm s}^{\rm c}\). Above \(T_{\rm N2}\), the size relationship between \(\tau_{\rm s}^{\rm c}\) and \(\tau_{\rm s}^{\rm h}\) is reversed, namely, \(\tau_{\rm s}^{\rm h}\) is slightly larger than \(\tau_{\rm s}^{\rm c}\). In contrast to \(\tau_{\rm s}\), the relaxation time of the faster term, \(\tau_{\rm f}\), does not show significant hysteresis behavior and increases moderately with decreasing temperature below \(T_{\rm peak}^{\rm c}\). This indicates that the faster dynamics is thermally activated. Again, these differences between \(\tau_{\rm s}\) and \(\tau_{\rm f}\) indicate the different origins of the two dynamics. Figure 9 (c) shows the temperature dependences of the product of the isothermal susceptibility and temperature, \(T\chi_{\rm s}\), for the two processes, which corresponds to the spin correlations: \[T\chi_{\rm s}\propto\sum_{i,j}\langle S_{i}\,S_{j}\rangle. \tag{2}\] This quantity increases when ferromagnetic (FM) spin correlations develop, whereas it decreases when antiferromagnetic (AFM) spin correlations develop. The I-II phase transition is the process where the spins in the disordered \({\rm D_{I}}\) plane form the striped AFM order along the b-axis. Nevertheless, \(T\chi_{\rm s}^{\rm c,h}\) increases towards \(T_{\rm N2}\), which indicates that growth of dynamic FM correlations is involved in the phase transition. This might look contradictory; however, as discussed in Sec. 5, it provides important information about this extraordinary phase transition. ## 5 Discussion In this section, we propose the mechanism of the I-II phase transition indicated by the above analysis. As a summary of the analysis, the features of the dynamics attributed to the I-II phase transition are the following: 1. Dynamic FM correlations with long relaxation times appear at around 6 K and grow towards the I-II phase transition temperature \(T_{\rm N2}\); 2. The dynamics is non-thermally activated; 3. The dynamics shows hysteresis behavior, as shown in the temperature dependences of \(\chi_{\rm s}\) and \(\tau_{\rm s}\) (Fig. 9). Feature (1) indicates that large dynamic FM spin textures appear prior to the phase transition. In general, the response time of a spin system is on the order of nsec-psec. Thus, the observed long relaxation time indicates that the spin textures should be considerably large. Since the I-II phase transition is the process where the fluctuating spins in the \({\rm D_{I}}\) planes form the striped order of phase II, it is reasonable to consider that the FM spin textures appear and grow in the \({\rm D_{I}}\) plane and are strongly involved in the development of the striped spin correlations. On the basis of these considerations, we propose a schematic picture of the I-II phase transition as shown in Fig. 10, where the development and shift of the spin correlations in the D\({}_{\rm I}\) plane with temperature variation are shown. The striped pattern of the phase II in Fig. 10 indicates that the nearest-neighbor (NN) interaction is FM along both the b- and c-axes, and that the AFM next-nearest-neighbor (NNN), and possibly further long-range, interactions compete with the NN FM interaction along the b-axis, whereas further interactions are too weak to compete with the NN FM interactions along the c-axis.
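As a back-of-the-envelope illustration of how Eq. (2) distinguishes these situations (an idealized estimate for clusters of \(N\) Ising-like spins of magnitude \(S\) in the \({\rm D_{I}}\) plane, not a quantitative model of the data), \[\sum_{i,j}\langle S_{i}\,S_{j}\rangle\simeq\begin{cases}N^{2}S^{2}&\text{fully FM-correlated cluster (one belt-like texture)},\\ NS^{2}&\text{uncorrelated paramagnetic spins},\\ 0&\text{rigid AFM (striped) arrangement of equal numbers of up and down spins}.\end{cases}\] In this crude picture \(T\chi_{\rm s}\) grows roughly with the size of the isolated FM textures and is suppressed once neighboring textures lock antiparallel into the stripe, consistent with the observed increase of \(T\chi_{\rm s}\) towards \(T_{\rm N2}\) and the loss of the slow FM response below it.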
Given this interaction scheme, we hypothesize that non-frustrated two-row FM spin textures can form at temperatures much higher than the phase transition temperature and that large "belt-like" FM correlated spin textures emerge around 6 K as precursors of the striped AFM structure of the phase II, as shown in the right panel of Fig. 10. Then, they grow larger and become denser with decreasing temperature, and finally spontaneously form the striped magnetic structure at \(T_{\rm N2}\), as shown in the left panel of Fig. 10. The non-thermally activated behavior of the relaxation time \(\tau_{\rm s}\) can be explained in this picture as follows. The key of the explanation is that the relaxation time reflects the size of the spin textures, namely, larger spin textures fluctuate more slowly and have longer lifetimes, while smaller ones have shorter lifetimes. As discussed above, with decreasing temperature, the precedently emergent belt-like spin textures grow and become denser, and then they are combined antiparallel to each other along the b-axis by the AFM NNN and further long-range interactions. This is schematically shown in the middle panel of Fig. 10 by the purple rectangular frame. Once they are combined, the spin textures are no longer FM but striped AFM, so they no longer contribute dominantly to the response to the uniform ac field. On the other hand, the remaining small fragments of FM spin textures predominantly contribute to the ac response. As a result, the relaxation time of the dynamics measured by the ac susceptibility becomes shorter on approaching \(T_{\rm N2}\) because the size of the FM spin textures predominantly responding to the ac field becomes smaller. That is to say, the non-thermally activated behavior of the ac response may indicate the shift of the spin correlations from the isolated FM spin textures to the striped AFM ones. The fact that there is temperature hysteresis over the I-II phase transition can also be explained. In this picture, the I-II phase transition is the spontaneous arrangement of the large belt-like FM spin textures. This ordering process should be extremely slow compared with conventional magnetic phase transitions because the time scale of the dynamics of the units of the ordering, the belt-like FM spin textures, is very long, being of the order of 10-100 msec. Thus, temperature hysteresis should be observed on the conventional experimental time scale (of the order of seconds or minutes). The details of the hysteresis behaviors observed in \(T\chi_{\rm s}\) and \(\tau_{\rm s}\) are discussed below. As shown in Figs. 9 (c) and (b), \(T\chi_{\rm s}\) in the heating process is larger than that in the cooling one, and \(\tau_{\rm s}\) in the heating process is shorter than that in the cooling one below \(T_{\rm N2}\) and longer above \(T_{\rm N2}\). In our scenario, these hystereses can be interpreted as the difference between the "solidification" of the belt-like FM spin textures and the "melting" of the striped order in the D\({}_{\rm I}\) plane, which correspond to the cooling and heating processes, respectively. The former is the process described so far, and the latter is the opposite process, where the complete striped structure of the phase II breaks into the belt-like FM textures. Above \(T_{\rm N2}\), it is naturally expected that larger FM spin textures persist more densely in the "melting" process and, vice versa, smaller spin textures should be frequently formed in the "solidification" process. Thus, the spin correlations \(T\chi_{\rm s}\) should be larger in the heating process than in the cooling one, and the relaxation time \(\tau_{\rm s}\) in the heating process should be longer than that in the cooling process above \(T_{\rm N2}\). On the other hand, since the stable state below \(T_{\rm N2}\) is the striped AFM, the size relationship of the fragments of FM spin textures is reversed, namely, larger spin textures persist in the solidification process and vice versa. Thus, \(\tau_{\rm s}\) below \(T_{\rm N2}\) should be longer in the cooling process than in the heating process. Finally, we discuss why the novel slow critical dynamics is present only in the I-II phase transition and absent in the para-I phase transition at \(T_{\rm N1}\). This probably comes from the difference in dimensionality between these two phase transitions. The I-II phase transition is the process where the fluctuating spins in the pseudo two-dimensional disordered \({\rm D_{I}}\) plane order into the striped structure. On the other hand, the para-I phase transition is the process where the fluctuating spins in the three-dimensional paramagnetic state order. In general, low dimensionality enhances the fluctuations and destabilizes the ordered state of the system. The extraordinary ordering process of the I-II phase transition, namely, the precedent emergence of the large spin textures and their spontaneous arrangement, can be a consequence of its low dimensionality. ## 6 Summary We performed ac susceptibility measurements of the frustrated magnet \({\rm DyRu_{2}Si_{2}}\), especially in the vicinity of the phase transition between the partially ordered phases I and II. Detailed analysis of the temperature and frequency dependences of the ac susceptibility reveals the novel critical dynamics of the I-II phase transition. The temperature dependences of the relaxation time and the isothermal susceptibility indicate the following three striking features. First, dynamic FM correlations with extremely long relaxation times appear precedently at around 6.0 K and grow towards the phase transition temperature \(T_{\rm N2}\). Second, the dynamic FM correlations exhibit non-thermally activated behavior. Third, the dynamics shows hysteresis behavior. On the basis of these features, we propose the mechanism of this phase transition, briefly described as follows. In the \({\rm D_{I}}\) plane, which is the emergent two-dimensional disordered system in the phase I, large and stable belt-like FM spin textures formed by the NN FM interactions appear as precursors around 6.0 K. With decreasing temperature towards \(T_{\rm N2}\), they become denser and more likely to come next to each other along the b-axis, and they are combined when they are antiparallel to each other due to the NNN AFM and further long-range interactions. Eventually, they spontaneously form the striped structure of phase II at \(T_{\rm N2}\). We will perform neutron scattering experiments in the near future to verify our hypothesis about the ordering process of the I-II phase transition. We expect to observe the development and shift of the spin correlations from broad FM correlations to the sharp stripe ones around \((2/9,2/9,0)\) in the reciprocal lattice space. **Acknowledgements** The authors acknowledge support from the JSPS Grant-in-Aid for Scientific Research (B) (No. 20H01852).